Front cover
Course Guide
IBM Storwize V7000 Implementation
Workshop
Course code SSE1G   ERC 3.1
August 2016 edition
Notices
This information was developed for products and services offered in the US.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative
for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not
intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or
service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate
and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this
document does not grant you any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive, MD-NC119
Armonk, NY 10504-1785
United States of America
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein;
these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s)
and/or the program(s) described in this publication at any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an
endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those
websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other
publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other
claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those
products.
This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible,
the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to
actual people or business enterprises is entirely coincidental.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many
jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM
trademarks is available on the web at “Copyright and trademark information” at www.ibm.com/legal/copytrade.shtml.
© Copyright International Business Machines Corporation 2012, 2016.
This document may not be reproduced in whole or in part without the prior written permission of IBM.
US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii

Course description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix

Agenda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi

Unit 0. Course introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-1


Course overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-2
Course prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-3
Course objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-4
Agenda: Day 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-5
Agenda: Day 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-6
Agenda: Day 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-7
Agenda: Day 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-8
Introductions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-9
Class logistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-10

Unit 1. IBM Storwize V7000 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1


Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
IBM System Storage product positioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
Storwize V7000 terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
Storwize V7000 Gen2 at a glance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-5
Storwize V7000 implementation, zoning, and management interfaces . . . . . . . . . . . . . . . . . . . . . . . 1-7
Storwize V7000 initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Storwize V7000 storage provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
Storwize V7000 host and volume administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
Spectrum Virtualize advanced features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-11
Storwize V7000 data migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12
IBM Spectrum Virtualize Advanced Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-13
IBM Storwize V7000 administration management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-14
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-15

Unit 2. Storwize V7000 hardware architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-1


Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Storwize V7000 hardware topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
IBM Storwize V7000 Gen2 basic components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4
Storwize V7000 terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5
Storwize V7000 hardware topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-6
Storwize V7000 control enclosure front view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-7
Storwize V7000 Gen2: Exploded view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-8
Storwize V7000 Gen2: Block diagram of node canister . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-9
Storwize V7000 node interior view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-11
Storwize V7000 Gen2 processor subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-12
Storwize V7000 Gen2: Rear view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-13
Storwize V7000 Gen2 node canister indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-14
Storwize V7000 Gen2 I/O connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-16
8 Gb FC host interface card . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-18
16 Gb FC host interface card . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-19
FC host port indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-20
10 Gb host interface card . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-21

FCoE support using 10 Gb card . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-22


10 Gb host port indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-23
Compression Accelerator card . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-24
Storwize V7000 I/O summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-25
Storwize V7000 Gen2 integrated battery pack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-26
Storwize V7000 Gen2 firehose dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-27
Storwize V7000 Gen2 battery pack reconditioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-28
Storwize V7000 Gen2 fan module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-29
Storwize V7000 Gen2 control enclosure power supply . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-30
Storwize V7000 hardware topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-31
Storwize V7000 Gen2 expansion enclosure option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-32
Storwize V7000 rear view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-33
Expansion canister LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-34
Storwize V7000 Gen2 12 Gb SAS interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-35
Storwize V7000 Gen2 drive options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-36
Storwize V7000 Gen2 cable options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-37
Storwize V7000 Gen2 expansion enclosure power supply . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-38
Storwize V7000 hardware topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-39
Storwize V7000 2076-524 scale-out implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-40
Storwize V7000 configuration node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-41
Storwize V7000 Gen2 SAS chain layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-42
Storwize V7000 enclosure configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-43
Clustered system example (1 of 4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-44
Clustered system example (2 of 4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-45
Clustered system example (3 of 4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-46
Clustered system example (4 of 4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-47
Storwize V7000 Gen2 migration and investment protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-48
IBM System Storage Storwize support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-49
Storwize V7000: Gen1 versus Gen2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-50
Hardware compatibility within the Storwize family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-51
Spectrum Virtualize licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-52
Benefits of Spectrum Virtualize Storwize V7000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-53
Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-54
Review questions (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-55
Review questions (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-57
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-59

Unit 3. Storwize V7000 planning and zoning requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1


Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
Storwize V7000 physical planning topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
Planning and implementation checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
Storwize V7000 node cable requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
Storwize V7000 upstream high availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
Storwize V7000 Gen2 cable options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-8
Cable management arm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-9
Storwize V7000 physical planning topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-10
Dual fabric for high availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-11
Storwize V7000 management IP addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12
Example of an IPv4 management and iSCSI shared subnet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-14
Service Assistant IP interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-15
Management GUI access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-16
System communication and management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-17
Storwize V7000 physical planning topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-19
SAN zoning best practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-20
Restrict access with zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-21
System switch port connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-22

Host communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-23


Fibre Channel network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-25
Internet SCSI network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-26
Fibre Channel over Ethernet support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-27
Native IP replication support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-28
Zone definitions by port number or WWPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-30
Name and addressing convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-31
WWN addressing scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-32
Storwize V7000 new WWNN/WWPN schema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-33
Storwize V7000 worldwide names schema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-34
Storwize V7000 port destination recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-35
Storwize V7000 and SVC switch zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-36
Storwize V7000 and FlashSystem 900 switch zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-38
Storwize V7000 and DS5K switch zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-39
Storwize V7000 and DS3500 switch zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-40
Storwize V7000 nodes and storage zones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-41
FC Zoning and multipathing LUN access control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-42
Maximum paths supported: No more than eight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-43
Storage infrastructure and access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-44
Host to SVC access: Supported multipath drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-45
Multipathing and host LUN access control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-46
Host zoning preferred paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-47
Zoning multi-HBA hosts for resiliency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-48
Example of host port zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-49
Changing the preferred node and paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-50
Host connection verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-51
AIX host object and ports example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-52
AIX host bus adapter (WWPNs) details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-53
CLI AIX host paths view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-54
Host zone worksheet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-55
Power on and off sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-56
Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-57
Review questions (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-58
Review questions (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-60
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-62

Unit 4. System initialization and user authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1


Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Storwize V7000 local planning topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Storwize V7000 management interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Storwize V7000 Gen2 Technician port initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-5
Welcome configuration wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-6
License agreement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-7
Storwize V7000 GUI user ID and password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-8
System Setup: Welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-9
System Setup: Licensed Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-10
System Setup: Date and Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-11
System Setup: Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-12
System Setup: System Location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-13
System Setup: Contact Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
System Setup: Email Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-15
System Setup: Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
Add storage enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-17
System Setup complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-18
Update might be available . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-19
Storwize V7000 logical planning topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-20

Storwize V7000 GUI dynamic system view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-21


Storwize V7000 GUI dynamic menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-22
GUI function Icon: Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-23
Monitoring: System events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-24
Monitoring: Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-25
Actions menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-26
Storwize V7000 system details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-27
Rename Storwize V7000 node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-28
Storwize V7000 I/O hardware information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-29
Modify Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-30
Storwize V7000 GUI status indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-31
Storwize V7000 system capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-32
Storwize V7000 GUI overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-33
Storwize V7000 GUI: Access menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-34
User authentication methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-35
Access menu: User Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-36
User group roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-37
Managing user authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-38
Create new user and assign user group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-39
Remote authentication configuration (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-40
Remote authentication configuration (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-41
Add remote user group (enable member logins) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-42
Configure remote user group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-43
Remote user1 login (GUI and CLI) examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-44
Remote users centralized management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-45
CLI SSH keys encrypted communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-46
PuTTYgen key generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-47
Save the generated keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-48
User with SSH key authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-49
Create CLI session with SSH key authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-50
Accessing CLI from Microsoft Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-51
Command-line interface commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-52
Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-53
Review questions (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-54
Review questions (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-56
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-58

Unit 5. Storwize V7000 storage provisioning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1


Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
Storage provisioning topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
Storwize V7000 terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-4
IBM System Storage Storwize V7000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
IBM Spectrum Virtualize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-7
Storwize V7000 block-level virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-8
Storwize V7000 I/O virtualization structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-9
Spectrum Virtualize with Storwize V7000: One complete solution . . . . . . . . . . . . . . . . . . . . . . . . . . 5-11
Storage provisioning topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-12
Storwize V7000 logical building blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-13
Storwize V7000 managed resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-15
Storage pools and extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-16
Pool extent size and cluster capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-17
Mapping of extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-18
Storage pool types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-19
Internal storage supported RAID levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-20
Best practices: RAID 5 compared to RAID 10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-21

Array and RAID levels: Drive counts and redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-22
Storwize V7000 balanced system (chain balanced) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-24
Enclosure 4 (chain 2) drives and array members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-25
Enclosure 3 (chain 1) drives and array members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-26
Array member goals and spare attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
Spare drive use attribute assignment by GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-28
Spare selection for array member replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-29
Traditional RAID 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-30
Traditional RAID 6 reads/writes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-31
Distributed RAID (DRAID) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-32
Distributed RAID 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-33
DRAID performance goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-34
Distributed RAID considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-35
Drive auto-manage/replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-36
Storwize V7000 Gen2 supports T10DIF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-37
Storage provisioning topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-38
Storwize V7000 overview block-level structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-39
Storwize V7000 internal storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-40
Internal drive attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-41
Internal drive properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-42
Change internal drive attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-43
Create a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-44
Modifying the default extent size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-45
Adding capacity to a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-46
MDisks by pools view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-47
Advanced custom array creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-48
Parent and child pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-49
Creating a child pool from an existing mdiskgrp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-50
Child pool attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-51
Benefit of a child pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-52
Child pool volumes and host view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-53
Child pool limitations and restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-54
Storage provisioning topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-55
Storwize V7000 to back-end storage system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-56
Backend storage partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-57
Storwize V7000 WWNNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-58
Backend storage system WWNNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-59
Storwize V7000 to DS3500 with more than one WWNN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-60
Logical unit number to managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-61
Disk storage management interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-63
DS3K Storage system WWNN and WWPNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-64
DS3K Storwize V7000 host group definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-65
DS3K LUNs assigned to Storwize V7000 host group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-66
External storage system (automatic discovery) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-67
Managed disks are SCSI LUNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-68
Renaming logical unit numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-70
Example of storage system LUN details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-71
Best practice: Rename a storage system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-72
Rename controller using chcontroller CLI command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-73
Best practice: Rename an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-74
Rename MDisks using chmdisk CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-75
MDisk properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-76
Storwize V7000 quorum index indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-77
MDisk properties: Quorum index indicator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-78
Distribute quorum disks across multiple controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-79
Best practice: Reassign the active quorum disk index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-80

System failure: Quorum auto relocates quorum disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-81


Modify storage capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-82
CLI commands addmdisk and rmmdisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-84
Access methods for MDisk multipathing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-85
Four multipathing methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-86
Example: Storage system path count DS3K in four-node Storwize V7000 cluster . . . . . . . . . . . . . . 5-87
Example: DS3K MDisk access path count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-88
Example: Storage system path count DS8K in four-node Storwize V7000 cluster . . . . . . . . . . . . . . 5-89
Example: DS8K MDisk access path count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-90
Example: Storage system path count FlashSystem in two-node Storwize V7000 cluster . . . . . . . . 5-91
Example: FlashSystem MDisk access path count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-92
Best practices: Pools and MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-93
Storage provisioning topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-94
Storwize V7000 Gen2 encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-95
Data at Rest encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-96
System storage V7.3 I/O stack with encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-97
Data-at-rest encryption key management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-98
System setup: Activate encryption licenses automatically . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-99
Encryption license activated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-100
Manual encryption activation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-101
Suggested task: Enable Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-102
Enable Encryption wizard (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-103
Enable Encryption wizard (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-104
Enable Encryption wizard (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-105
Software data encryption and decryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-106
Encryption is enabled at the pool level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-107
Supported storage systems (also known as controllers) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-108
Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-109
Review questions (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-110
Review questions (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-112
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-114

Unit 6. Storwize V7000 host to volume allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1


Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2
Storwize V7000 host topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
Host terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-4
Storwize V7000: Functional categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-5
Physical storage and logical presentations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-6
Storwize V7000 host objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-7
FC redundant configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-8
Preparation guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-9
Maximum generic host configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-10
Storwize V7000 host connection topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-11
FC hosts to the Storwize V7000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-12
Fibre Channel Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-13
Host multipath support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-14
Windows host FC configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
Identifying Windows HBAs (example for QLogic) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-16
Verifying host configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-17
Identifying Fibre Channel connectivity using GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-18
Defining FC host objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-19
Identifying AIX adapter attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-20
CLI mkhost command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-21
Host object details and I/O group counts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-22
Manage host object counts per I/O group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-23
Host connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-24

Storwize V7000 host connection topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-25


iSCSI architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-26
MPIO and iSCSI initiator support for iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-27
Configure iSCSI IP addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-28
iSCSI initiator and iSCSI target . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-29
iSCSI host discover target (NODE) portal (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-30
iSCSI host discover target (NODE) portal (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-31
Discovered targets properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-32
Defining iSCSI host object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-33
SDDDSM netstat command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-35
Example of an auto iSCSI IP addresses failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-36
Node failover: Advantage versus disadvantage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-37
Examples of an auto iSCSI IP addresses failback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-38
Best practice: IP connectivity failure protected by second IP port . . . . . . . . . . . . . . . . . . . . . . . . . . 6-39
iSCSI connectivity best practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-40
Managing host resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-41
Host properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-42
Storwize V7000 host connection topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-43
Volume allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-44
Volume mapping to host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-45
Three types of virtualized volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-46
Storwize V7000 volume caching I/O group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-48
I/O group and write I/O distributed cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-50
I/O group failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-52
I/O group node failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-53
Standard (volume) topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-54
HyperSwap (volume) topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-55
New volume commands (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-56
New volume commands (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-57
New volume commands (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-58
Create a basic (generic) volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-59
Create a mirrored volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-60
Create volume option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-61
Create and Map Volume to Host option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-62
Volume formatting: Quick initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-63
Create a custom volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-64
Thin-provision and compressed volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-65
General tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-66
Volumes and hosts view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-67
Modify host mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-68
Managing volume resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-69
Volume properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-70
View mapped hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-71
Volume properties using CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-72
View host mappings and pools details using CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-73
Expand volume capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-74
Shrink volume capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-75
Removing MDisk migrates volume extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-76
Protect volumes on delete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-77
Volume data protection with encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-78
Encrypting volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-79
Child pool encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-80
Storwize V7000 host connection topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-81
Discover volumes on the Windows (iSCSI) hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-82
Windows host paths view by using SDDDSM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-83
Windows hosts Device Manager view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-84
Discover volumes on the AIX host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-85
Mounting V7000 volume on AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-86
AIX host paths view using SDDPCM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-87
AIX path connection data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-88
Host paths to volume’s preferred node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-89
Storwize V7000 host connection topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-90
Moving volume between I/O groups: Host perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-91
IBM Support: Non-disruptive volume move . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-92
NDVM: Supported OS and multipath drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-93
Moving volume between I/O groups: Volume perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-94
Changing preferred node using GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-95
Changing preferred node using CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-97
Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-98
Review questions (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-99
Review questions (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-101
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-103

Unit 7. Spectrum Virtualize advanced features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1


Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-2
Storwize V7000 enhanced features topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Spectrum Virtualize software architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-4
Storwize V7000 enhanced features topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-5
IBM System Storage Easy Tier functional overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-6
Easy Tier configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-7
IBM System Storage Easy Tier function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-8
Easy Tier supports up to three tiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-9
Easy Tier modes of operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-10
Evaluation mode: I/O activity monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-11
Automatic data placement mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-12
Automated storage pool balancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-13
How does automated storage pool balancing work? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-14
Easy Tier advanced settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-15
Extent migration types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-17
How does Easy Tier migration work? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-19
Easy Tier Sub-LUN automated movement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-20
Automated data placement plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-21
Easy Tier analytic processing cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-22
Easy Tier settings summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-23
Easy Tier default settings: Summary notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-24
What is considered to be hot? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-25
Type of storage pools: Single or multi-tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-26
Example: Easy Tier single tier pool and volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-27
Example: Easy Tier two tier pool and volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-28
External storage with flash drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-29
Modifying MDisk tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-30
Easy Tier overload protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-31
Example of Hybrid storage pool properties view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-32
Example of Hybrid pool dependent volumes and volume extents . . . . . . . . . . . . . . . . . . . . . . . . . . 7-33
Example of Easy Tier volume level using CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-34
Example of Easy Tier status volume indicator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-35
Easy Tier volume interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-36
Disk tier drive types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-37
Creating a Hybrid pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-38
Disabling Easy Tier using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-39
Easy Tier limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-40
IBM Storage Tier Advisor Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-41
Download STAT.exe file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-42
STAT dpa heat data files for analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-43
Setting Iometer parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-44
Easy Tier STAT CSV files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-46
STAT System Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-47
STAT example: Storage pool performance and recommendation . . . . . . . . . . . . . . . . . . . . . . . . . . 7-48
STAT example: Drive configuration recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-50
STAT workload distribution across tiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-51
STAT example: Volume Heat Distribution report (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-52
STAT example: Volume Heat Distribution report (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-53
Best practices: Easy Tier free extents in pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-54
Best practices: Easy Tier implementation options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-55
Best practices: Easy Tier data relocation decision criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-56
Storwize V7000 enhanced features topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-57
Thin provisioning concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-58
Fully allocated versus thin-provisioned volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-59
Thin provision total transparency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-60
Managing thin-provisioned volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-61
Thin provision concept: Auto expand off . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-62
Thin provision concept: Auto expand on . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-63
Thin-provision: Metadata management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-64
Limitations of virtual capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-66
Best practices: Monitoring thin-provisioned capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-67
Create a thin provision volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-68
Thin Provisioning tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-69
Thin Provision summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-70
Thin-provisioned volume created . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-71
Thin-provisioned volume details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-72
Thin-provisioned threshold warning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-73
Convert fully allocated volume to thin-provisioned . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-74
Host view of volume attributes and usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-75
Storwize V7000 enhanced features topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-76
Real-time Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-77
Spectrum Virtualize RtC delivers efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-78
RtC embedded RACE architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-80
Traditional compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-81
RACE innovation: Temporal locality compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-82
Example of a compression using a sliding window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-83
RtC from host and copy services perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-84
Storwize V7000 compression enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-85
Spectrum Virtualize Dual RACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-86
Memory/CPU core allocation: RtC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-87
Storwize V7000 Gen1 versus Gen2: Max performance (one I/O group) . . . . . . . . . . . . . . . . . . . . . 7-88
Real-time Compression license . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-89
Mirroring to compressed disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-90
Configuring a compressed volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-91
Compressed volume: Advanced settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-92
Allocated compressed volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-93
Compressed volume copy details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-94
Compressed volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-95
Migrating to compressed data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-96
Compressed volume details at creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-97
Existing volume converted to compressed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-98
Storwize V7000 enhanced features topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-99
Compression implementation guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-100
Comprestimator: Compression benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-101
IBM Comprestimator utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-103
IBM Comprestimator utility examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-104
RtC recommendations for implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-105
Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-106
Review questions (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-107
Review questions (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-109
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-111

Unit 8. Spectrum Virtualize data migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-1


Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2
Data migration agenda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-3
Data migration concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-4
Image mode: Migrating existing data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-6
Facilitating migration using volume extent pointers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-7
Replace storage system or migrate to different storage tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-8
Import existing LUN data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-9
Reverse migration: Export from striped to image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-10
Removing MDisk from storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-11
Data migration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-12
Data migration topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-13
Migrate volume to another pool (another tier/box) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-14
Volume copy migration is transparent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-16
Data migration topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-17
Image mode volumes overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-18
Importing existing data from external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-19
Perform SAN device discovery to detect MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-21
Rename MDisk to correlate to application owner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-22
Import to image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-23
Import as image volume then migrate to striped . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-24
Generated commands to import (create) volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-25
Map image volume to host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-26
Migrate volume to new storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-27
Import migration completed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-28
Volume extent distribution after migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-29
Delete MigrationPool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-30
DS3K MDisk in unmanaged mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-31
Host apps oblivious to pool changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-32
Data migration topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-33
Export volume from striped to image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-34
Create temporary export pool to match extent size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-35
Export volume from striped to image mode (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-36
Export volume from striped to image mode (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-37
CLI migratetoimage command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-38
Volume Details: Extents being migrated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-39
MDisk image access mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-40
Volume copy migrated to image in new pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-41
Volume Details: New pool with image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-42
Delete volume copy to exit Storwize V7000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-43
Map LUN (unmanaged MDisk) directly to host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-44
Data migration topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-45
Storage system migration overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-46
System Migration Wizard tasking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-47
System migration verification restrictions and prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-48
Preparing for data migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-49
Preparing host and SAN environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-50
Remap host server’s LUNs to Storwize V7000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-51
System migration MDisk (LUN) discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-52
Select MDisks to import . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-53
Create image type volume for each MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-54
Verify host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-55
Review and rename image type volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-56
Map image volumes to host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-57
Select pool and create mirrored volume copies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-58
Data migration to virtualize has begun . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-59
System migration complete: Managed volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-60
Finalize system migration: Delete image volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-61
Delete MigrationPool_8192 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-62
Data migration topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-63
Volume mirror concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-64
Volume mirroring offers better performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-65
Volume mirroring nondisruptive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-66
Volume mirroring flexibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-68
Volume copies on different storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-69
Volume mirror I/O processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-70
Create a (simple) volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-71
Create a volume mirror using Mirrored or Custom presets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-72
Volume copy created: Sync the two copies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-73
Writes performed on both copies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-74
Mirrored volume properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-75
Volume mirroring high availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-76
Change volume primary copy (read/write) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-77
Delete volume mirror copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-78
Split into new volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-79
Transparent to host and user applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-80
Volume mirroring protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-81
Volume mirroring summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-82
Benefit of the add/remove MDisks options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-84
Data migration: Review summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-85
Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-86
Review questions (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-87
Review questions (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-89
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-91

Unit 9. Spectrum Virtualize Copy Services: FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-1


Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-2
Spectrum Copy Services: FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-3
Spectrum Virtualize software architecture: Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-4
FlashCopy point in time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-5
FlashCopy functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-6
FlashCopy implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-8
FlashCopy attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-10
FlashCopy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-11
FlashCopy: Background copy rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-13
FlashCopy reads/writes: Full background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-15
FlashCopy reads/writes: No background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-16
FlashCopy: Sequence of events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-17
FlashCopy mapping states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-19
Copy Services: FlashCopy options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-21
Fast path one-click preset selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-22
FlashCopy event notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-24
Spectrum Copy Services: FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-25
Create snapshot volume (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-26
Create snapshot volume (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-27
FlashCopy mapping properties for snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-28
FlashCopy mapping details using CLI (no writes) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-29
FlashCopy mapping details with writes using CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-30
Spectrum Copy Services: FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-31
Consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-32
Create clone as consistency group with multi-select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-33
Generated commands for both selected volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-34
FlashCopy mappings and consistency group details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-35
FlashCopy mappings and consistency group details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-36
Clone consistency copy completed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-37
Spectrum Copy Services: FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-38
Incremental FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-39
Create incremental FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-40
Backup preset (full + incremental) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-41
Start FlashCopy mapping manually . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-42
Background copy completed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-43
Host I/O operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-44
Example: Source and target content differences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-45
Example: Incremental copy to sync target . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-46
Issue: Data corruption occurs on source volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-47
Solution: Reverse FlashCopy to restore source from target . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-48
Create Clone FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-49
Create incremental FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-50
FC mappings and rename volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-51
Start FlashCopy PiT copy restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-52
Monitor FlashCopy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-53
Source restored: Rebooted host view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-54
Reverse multi-target FlashCopy operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-55
Benefits of backup as consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-57
Spectrum Copy Services: FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-58
FlashCopy internal cache operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-59
Example: Bitmap space defaults for data replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-61
Bitmap space and copy capacity (per I/O group) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-62
Example: Bitmap space configuration and usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-63
Spectrum Copy Services: FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-64
Tivoli Storage Manager for Advanced Copy Services (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-65
Tivoli Storage Manager for Advanced Copy Services (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-66
Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-67
Review questions (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-68
Review questions (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-70
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-72

Unit 10. Spectrum Virtualize Copy Services: Remote Copy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-1


Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-2
Spectrum Copy Services: Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-3
When disaster occurs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-4
Remote Copy replication types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-5
Remote Copy Services replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-6
Remote Copy within the I/O stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-7
Synchronous Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-8
Synchronous Metro Mirror communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-9
Asynchronous Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-10
Asynchronous Global Mirror operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-12
Global Mirror without cycling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-13
Global Mirror with cycling and change volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-14
Global Mirror cycling mode processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-16
Remote Copy advantages and disadvantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-18
Spectrum Copy Services: Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-19
Multi-cluster Remote Copy partnerships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-20
Multi-cluster Remote Copy topologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-21
Defining a Storwize V7000 to Storwize partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-22
Creating a partnership relationship using CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-24
Managing partnership relationship using CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-25
Creating a MM/GM relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-26
Creating an MM/GM consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-28
Partnerships: Supported code levels matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-29
Partnerships code compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-30
Remote Copy replication sequence of events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-31
Spectrum Copy Services: Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-33
Planning for remote copy services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-34
Connecting partnership using ISLs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-35
Metro and Global Mirror over longer distances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-36
Spectrum Copy Services: Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-37
Copy Services: Remote Copy configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-38
Remote Copy scenarios: Example environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-39
Define partnership between two Storwize V7000s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-40
Partnership established from NAVY to OLIVE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-41
Define partnership from OLIVE to NAVY Storwize V7000s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-42
Partnership established across both clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-43
Establish cluster partnerships using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-44
Updated partnerships: GUI views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-45
Metro Mirror relationship: Scenario steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-46
Define Metro Mirror relationship (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-47
Define Metro Mirror relationship (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-48
Metro Mirror relationship: View from each cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-49
Relationship rcrel0 details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-50
Current content of master volume WINE_M . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-51
Start relationship (mirroring) rcrel0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-52
Background copy and auxiliary volume offline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-53
Synchronization and mirror suspension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-54
Content of volumes: Identical at stop relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-55
Content of volumes: After independent writes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-56
Out of sync and start of copy relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-57
Auxiliary volume content mirrored to master . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-58
Restart relationship with force . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-59
Switch back to original copy direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-60
rcrel0 relationship: Original copy direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-61
Update GM relationship to cycling mode (GMCV) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-62
Create change volume for the master volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-63
Properties of GUI created master change volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-64
Rename master change volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-65
Create change volume for the auxiliary volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-66
Review GUI created auxiliary change volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-67
FlashCopy mappings generated by GMCV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-68
FlashCopy mappings for the master volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-69
FlashCopy mappings for the auxiliary volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-70
Change to cycling mode and update cycle period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-71
Updated relationship entries in both systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-72
GMCV relationship state and progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-73
Start GMCV relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-74
Consistent copying state and freeze time updated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-75
GMCV next cycle progress and freeze time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-76
Relationship view at auxiliary site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-77
GMCV relationship view: In between cycles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-78
Global Mirror relationship details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-79
Start Global Mirror relationship: PINK_SWV7K . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-80
Monitor copy progress from NAVY_SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-81
Performance: Reads and writes at 25 MBps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-82
Background copy completed: Stop relationship with write access . . . . . . . . . . . . . . . . . . . . . . . . . 10-83
Global Mirror link tolerance (without cycling) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-84
Change the copy type of a relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-85
Change between MM and GM requires v7 partners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-86
Impact of disconnected clusters on mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-87
Relationship state: Before link failures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-88
Partnership states: Before and after link failures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-89
Relationship state is disconnected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-90
Details after disconnect: Master view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-91
Details after disconnect: Auxiliary view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-92
Master after link restore: consistent_stopped . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-93
Auxiliary after link restore: consistent_stopped . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-94
Issue restart relationship after link restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-95
Relationship consistent and synchronized again . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-96
Auxiliary volume: After link restore and sync . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-97
Copy Services: Supported features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-98
Summary of Copy Services configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-99
Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-100
Review questions (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-101
Review questions (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-103
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-105

Unit 11. Storwize V7000 administrative management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-1


Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-2
Administrative management topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-3
System Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-4
Health Status and Status Alerts indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-5
System event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-6
System event log access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-8
Maintenance mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-9
Alerts with error code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-10
Directed Maintenance Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-11
Example (1 of 3): Scenario of a Directed Maintenance Procedure . . . . . . . . . . . . . . . . . . . . . . . . . 11-12
Example (2 of 3): Scenario of a Directed Maintenance Procedure . . . . . . . . . . . . . . . . . . . . . . . . . 11-13
Example (3 of 3): Scenario of a Directed Maintenance Procedure . . . . . . . . . . . . . . . . . . . . . . . . . 11-14
Run error code fix procedure solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-15
Alerts without error code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-16
Administrative management topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-17
System audit log entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-18
System audit log /dumps/audit directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-19
Administrative management topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-20
Download support data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-21
Additional debug information capture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-22
Backup Storwize V7000 system metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-23
Download config backup file from system using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-24
Example of CLI: PSCP Storwize V7000 config backup file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-25
Network: Managing system, service IP addresses, ports and connectivity . . . . . . . . . . . . . . . . . . . 11-26
Management IP address redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-28
Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-29
Notifications: Email . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-30
Notifications: SNMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-31
Notifications: Syslog messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-32
Security: Remote Authentication and encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-33
Update licensed capacities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-35
System and drives software upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-36
Software packages download . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-38
Software Upgrade Test Utility overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-40
Software upgrade: Automatic versus manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-41
Launch Update System wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-42
Update System: Running Update Test Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-43
Update System: Node is taken offline during upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-44
Update System: Host path discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-45
Update System: Host path discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-46
Upgrade event log entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-47
Update System: Complete/All nodes upgraded . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-48
Update drive firmware using GUI or CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-49
SSD drive conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-50
List drive details: SAS_SSD drive example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-51
GUI Preferences: Navigation and login message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-52
GUI preferences: General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-53
VMware virtual volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-54
Verify management GUI web browser settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-55
Clear web browser cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-56
Administrative management topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-57
When to use the Service Assistant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-58
Service Assistant interface overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-59
Service assistant basic management information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-60
SA service-related actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-61
Recover lost data using the SA T3 recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-62
Support and email notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-63
Best practices for proactive problem prevention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-64
Administrative management topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-65
IBM Spectrum Storage: Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-66
IBM Spectrum Storage family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-67
IBM Spectrum Storage solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-69
Help and technical assistant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-70
PDF publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-71
Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-73
Review questions (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-74
Review questions (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-76
Review questions (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-78
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-80

Trademarks
The reader should recognize that the following terms, which appear in the content of this training
document, are official trademarks of IBM or other companies:
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide.
The following are trademarks of International Business Machines Corporation, registered in many
jurisdictions worldwide:
AIX 5L™ AIX® DB2®
developerWorks® DS4000® DS5000™
DS8000® Easy Tier® Express®
FlashCopy® FlashSystem™ GPFS™
HyperSwap® IBM FlashSystem® IBM Flex System®
IBM Spectrum™ IBM Spectrum Accelerate™ IBM Spectrum Archive™
IBM Spectrum Control™ IBM Spectrum Protect™ IBM Spectrum Scale™
IBM Spectrum Storage™ IBM Spectrum Virtualize™ Linear Tape File System™
Notes® Power Systems™ Power®
Real-time Compression™ Redbooks® Redpaper™
Storwize® System Storage DS® System Storage®
Tivoli® XIV®
Intel, Intel Xeon and Xeon are trademarks or registered trademarks of Intel Corporation or its
subsidiaries in the United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other
countries, or both.
Java™ and all Java-based trademarks and logos are trademarks or registered trademarks of
Oracle and/or its affiliates.
UNIX is a registered trademark of The Open Group in the United States and other countries.
VMware and the VMware "boxes" logo and design, Virtual SMP and VMotion are registered
trademarks or trademarks (the "Marks") of VMware, Inc. in the United States and/or other
jurisdictions.
Other product and service names might be trademarks of IBM or other companies.

Course description
IBM Storwize V7000 Implementation Workshop

Duration: 4 days

Purpose
This course is designed to leverage SAN storage connectivity by integrating a layer of virtualization intelligence, the IBM Storwize V7000, to make storage application data access independent of storage management functions and requirements. The focus is on planning and implementation tasks associated with integrating the Storwize V7000 into the storage area network. It also explains how to:
• Centralize storage provisioning to host servers from common storage pools using internal storage and SAN-attached external heterogeneous storage.
• Improve storage utilization effectiveness using Thin Provisioning and Real-time Compression.
• Implement storage tiering and optimize the use of solid-state drives (SSDs) or flash systems with Easy Tier.
• Facilitate the coexistence of, and migration of data from, non-virtualized to virtualized environments.
• Utilize network-level, storage subsystem-independent data replication services to satisfy backup and disaster recovery requirements.
This course offering is at the Storwize V7000 V7.6 level.

Important

This course consists of several independent modules. The modules, including the lab exercises,
stand on their own and do not depend on any other content.

Audience
This lecture and exercise-based course is for individuals who are assessing and/or planning to
deploy IBM System Storage networked storage virtualization solutions.

Prerequisites
• Introduction to Storage (SS01G)
• Storage Area Networking Fundamentals (SN71) or equivalent experience
• An understanding of the basic concepts of open systems disk storage systems and I/O operations.

Objectives
After completing this course, you should be able to:
• Outline the benefits of implementing a Storwize V7000 storage virtualization solution.
• Differentiate between the Storwize V7000 2076-524 control enclosure and the 2076-12F/24F expansion enclosure models.
• Outline the physical and logical requirements to integrate the Storwize V7000 system solution.
• Implement the Storwize V7000 GUI and CLI system setup to configure the V7000 systems.
• Summarize the symmetric virtualization process to convert physical storage into virtual storage
resources.
• Implement volume allocations and map volumes to SAN attached host systems.
• Summarize the advanced system management strategies used to maintain storage efficiency and enhance storage performance and reliability.
• Employ data migration strategies to the virtualized Storwize V7000 system environment.
• Implement Copy Services strategies to perform data replication between two virtualized Storwize V7000 system environments.
• Employ administration operations to maintain system availability.

Contents
Introduction to IBM Storwize V7000
Storwize V7000 hardware architecture
Storwize V7000 planning and zoning requirements
Storwize V7000 system initialization and user authentication
Storwize V7000 storage provisioning
Storwize V7000 host to volume allocation
Spectrum Virtualize advanced features
Spectrum Virtualize data migration
Spectrum Virtualize Copy Services: FlashCopy
Spectrum Virtualize Copy Services: Remote Copy
Storwize V7000 administration management

Agenda

Note

The following unit and exercise durations are estimates, and might not reflect every class
experience.

Day 1
(00:30) Course Introduction
(00:25) Unit 1: Introduction to IBM Storwize V7000
(01:00) Unit 2: Storwize V7000 hardware architecture
(00:45) Unit 3: Storwize V7000 planning and zoning requirements
(00:25) Unit 4: Storwize V7000 system initialization and user authentication
(00:45) Unit 5: Storwize V7000 storage provisioning
(00:10) Exercise 0: Lab environment overview
(00:15) Exercise 1: Storwize V7000 system initialization
(00:45) Exercise 2: Storwize V7000 system configuration
(00:20) Exercise 3: System user authentication
(00:20) Exercise 4: Provision internal storage
(00:15) Exercise 5: Examine external storage resources

Day 2
(00:20) Review
(01:15) Unit 6: Storwize V7000 host and volume allocation
(01:15) Unit 7: Spectrum Virtualize advanced features
(00:45) Exercise 6: Managing external storage resources
(00:45) Exercise 7: Host definitions and volume allocations
(00:30) Exercise 8: Access storage from Windows and AIX
(01:00) Exercise 9: Hybrid pool and Easy Tier
(00:30) Exercise 10: Access Storwize V7000 through iSCSI host

Day 3
(00:20) Review
(01:30) Unit 8: Spectrum Virtualize data migration
(00:45) Unit 9: Spectrum Virtualize Copy Services: FlashCopy
(00:45) Unit 10: Spectrum Virtualize Copy Services: Remote Copy
(00:25) Exercise 11: Volume dependencies and tier migrations
(00:30) Exercise 12: Reconfigure internal storage: RAID options
(00:30) Exercise 13: Thin provision and volume copy
(00:30) Exercise 14: Real-time Compression
(01:00) Exercise 15: Import Data Migration

Day 4
(00:20) Review
(01:15) Unit 11: Storwize V7000 administration management
(01:00) Exercise 16: Copy Services: FlashCopy and consistency groups
(00:30) Exercise 17: User roles and access
(01:00) Exercise 18: Migrate existing data: Migration Wizard
(00:30) Exercise 19: Easy Tier and STAT analysis
Class Review and Evaluation

Unit 0. Course introduction


Estimated time
00:30

Overview
This unit provides a high-level overview of the course deliverables and overall course objectives
that will be discussed in detail in this course.

Course overview
This is a 4-day lecture and exercise-based course for individuals who are
assessing and/or planning to deploy IBM System Storage networked
storage virtualization solutions.


Figure 0-1. Course overview


Course prerequisites
• IBM Introduction to Storage (SS01G)
• IBM Storage Area Networking Fundamentals (SN71) or equivalent
experience
• A basic understanding of open systems disk storage systems and I/O operations


Figure 0-2. Course prerequisites


Course objectives
After completing this course, you should be able to:
• Outline the benefits of implementing a Storwize V7000 storage virtualization solution
• Differentiate between the Storwize V7000 2076-524 control enclosure model and the 2076-12F/24F expansion enclosure models
• Outline the physical and logical requirements to integrate the Storwize V7000 system
solution
• Implement the Storwize V7000 GUI and CLI system setup to configure the V7000
systems
• Summarize the symmetric virtualization process to convert physical storage into virtual
storage resources
• Implement volume allocations and map volumes to SAN attached host systems.
• Summarize the advanced system management strategies to maintain storage efficiency,
enhance storage performance and reliability
• Employ data migration strategies to the virtualized Storwize V7000 system environment
• Implement Copy Services strategies to perform data replication between two virtualized Storwize V7000 system environments
• Employ administrative operations to maintain system availability


Figure 0-3. Course objectives


Agenda: Day 1
• Course introduction
• Unit 1: Introduction to IBM Storwize V7000
• Unit 2: Storwize V7000 hardware architecture
• Unit 3: Storwize V7000 planning and zoning requirements
• Unit 4: Storwize V7000 system initialization and user authentication
• Unit 5: Storwize V7000 storage provisioning
▪ Exercise 1: Storwize V7000 system initialization
▪ Exercise 2: Storwize V7000 system configuration
▪ Exercise 3: Configure user authentication
▪ Exercise 4: Provision internal storage
▪ Exercise 5: Examine external storage resources


Figure 0-4. Agenda: Day 1


Agenda: Day 2
• Review
• Unit 6: Storwize V7000 host and volume allocation
• Unit 7: Spectrum Virtualize advanced features
▪ Exercise 6: Managing external storage resources
▪ Exercise 7: Host definitions and volume allocations
▪ Exercise 8: Access storage from Windows and AIX
▪ Exercise 9: Hybrid pools and Easy Tier
▪ Exercise 10: Access Storwize V7000 through iSCSI host


Figure 0-5. Agenda: Day 2


Agenda: Day 3
• Review
• Unit 8: Spectrum Virtualize data migration
• Unit 9: Spectrum Virtualize Copy Services: FlashCopy
• Unit 10: Spectrum Virtualize Copy Services: Remote Copy
▪ Exercise 11: Volume dependencies and tier migration
▪ Exercise 12: Reconfigure internal storage: RAID options
▪ Exercise 13: Thin provisioning and volume mirroring
▪ Exercise 14: Real-time Compression
▪ Exercise 15: Migrate existing data: Import Wizard


Figure 0-6. Agenda: Day 3


Agenda: Day 4
• Review
• Unit 11: Storwize V7000 administration management
▪ Exercise 16: Copy Services: FlashCopy and consistency groups
▪ Exercise 17: User roles and access
▪ Exercise 18: Migrate existing data: Migration Wizard
▪ Exercise 19: Easy Tier and STAT analysis
• Class review and evaluation


Figure 0-7. Agenda: Day 4


Introductions
• Name
• Company
• Where you live
• Your job role
• Your current experience with the products and technologies in this
course
• Do you meet the course prerequisites?
• What you expect from this class


Figure 0-8. Introductions


Class logistics
• Course environment
• Start and end times
• Lab exercise procedures
• Materials in your student packet
• Topics not on the agenda
• Evaluations
• Breaks and lunch
• Outside business
• For classroom courses:
▪ Lab room availability
▪ Food
▪ Restrooms
▪ Fire exits
▪ Local amenities


Figure 0-9. Class logistics


Unit 1. IBM Storwize V7000 Introduction


Estimated time
00:25

Overview
This unit provides an overview for each unit that will be discussed in detail in this course.

How you will check your progress


• Machine exercises


Unit objectives
• Summarize the units covered in this course


Figure 1-1. Unit objectives


IBM System Storage product positioning


The Storwize Family: a comprehensive range of fully virtualized storage systems
• One code base on all platforms
• One set of functions (selectively licensed)
• One experience
Family members: SAN Volume Controller, Storwize V7000 Unified, Storwize V7000, Storwize V5000, Storwize V3700, Storwize V3500 (Asia Pacific only), and FlashSystem 840, 900, and V9000*
* IBM FlashSystems are supported with the same enhanced functions and management tools.

Figure 1-2. IBM System Storage product positioning

IBM’s market-leading Software Defined Storage solutions offer smarter storage for smarter computing, with distinct characteristics and value for small and mid-size businesses up to major enterprises.
IBM System Storage Storwize V7000 systems are virtualizing RAID storage systems that are designed to store more data with fewer disk drives to reduce space, power, and cooling demands and to lower operational cost.
IBM SAN Volume Controller, IBM’s first storage virtualization appliance for large enterprises, offers high availability and a wide range of sophisticated functions. IBM took this platform software and shared it across this family of virtualized storage systems to fit businesses of all sizes. The Storwize family offers a common code base and an integrated set of advanced functions, such as Real-time Compression and Easy Tier, with an easy-to-use GUI.
Although not part of the Storwize family, IBM FlashSystems are supported with the same enhanced functions and management tools.


Storwize V7000 terminology


Control enclosure: A hardware unit that includes the chassis with a midplane for connection of node canisters, drives, and power supply units with batteries.
Node canister: A hardware unit that includes the node electronics, fabric, and service interfaces, serial-attached SCSI (SAS) expansion ports, and direct connections to internal drives in the enclosure.
Expansion enclosure: A hardware unit that includes the chassis with a midplane for connection of expansion canisters, drives, and power supply units without batteries.
Expansion canister: A hardware unit that includes the electronics to provide serial-attached SCSI (SAS) connections to the internal drives in the enclosure and SAS expansion ports for attachment of additional expansion enclosures.
System: Two node canisters in a control enclosure. Up to four control enclosures can be clustered to form one system.
Managed disk (MDisk): A SCSI logical unit (also known as a LUN) built from an internal or external RAID array.
Internal storage: Physical SAS drives within a Storwize V7000 control or expansion enclosure, used to create RAID arrays and managed disks.
External storage: Managed disks that are SCSI logical units (LUNs) presented by storage systems that are attached to the SAN and managed by the system.

Figure 1-3. Storwize V7000 terminology

This table lists storage terminologies that will be used in this unit.


Storwize V7000 Gen2 at a glance


• 19-inch rack-mounted 2U device
• Two node canisters, each with an 8-core processor and integrated hardware-assisted compression acceleration
• New cache architecture
  ▪ 64 GB cache per system (standard)
  ▪ Optional 128 GB cache per system for Real-time Compression workloads
• Host connectivity support:
  ▪ 8 Gb / 16 Gb Fibre Channel (FC)
  ▪ 10 GbE iSCSI / Fibre Channel over Ethernet (FCoE), and 1 Gb iSCSI
• 12 Gb SAS interface support for connecting Storwize V7000 expansion enclosures with twelve 3.5-inch or twenty-four 2.5-inch drives
  ▪ Scaling up to 504 drives per system with the attachment of 20 expansion enclosures
  ▪ Up to 1,056 drives in a four-system configuration
• Dual redundant power supplies
• Compatible with Gen1 models
(IBM Storwize V7000 2076-524 front view)

Figure 1-4. Storwize V7000 Gen2 at a glance

IBM Storwize V7000 is a virtualized, software-defined storage system designed to consolidate workloads into a single storage system for simplicity of management, reduced cost, highly scalable capacity, and high performance and availability. The Storwize V7000 Gen2 Model 524 is a 2U, 19-inch rack mount enclosure that delivers improved performance, scalability, and efficiency.
The Storwize V7000 SFF Control Enclosure Model 524 features:
• Two node canisters, each with an eight-core processor and 32 GB cache, for a system total of 64 GB cache.
  ▪ Model 524 comes standard with integrated, hardware-assisted compression acceleration. Real-time Compression workloads can further benefit from the optional cache upgrade, which increases the total system cache to 128 GB, and from the optional compression accelerator card feature for additional hardware-assisted compression acceleration.
• Host connectivity support using:
  ▪ 1 Gb Ethernet ports, standard, for 1 Gb iSCSI connectivity
  ▪ Up to two I/O adapter features for 8 Gb FC and 10 Gb iSCSI/FCoE connectivity
• Both 3.5-inch large form factor (LFF) and 2.5-inch small form factor (SFF) 12 Gb SAS expansion enclosure models.

• The Storwize V7000 LFF Expansion Enclosure Model 12F supports up to twelve 3.5-inch
drives, while the Storwize V7000 SFF Expansion Enclosure Model 24F supports up to
twenty-four 2.5-inch drives.
• High-performance disk drives, high-capacity nearline disk drives, and flash (solid state) drives
are supported. Drives of the same form factor can be intermixed within an enclosure and LFF
and SFF expansion enclosures can be intermixed within a Storwize V7000 system.
▪ A Storwize V7000 Model 524 system scales up to 504 drives with the attachment of 20
Storwize V7000 expansion enclosures. Storwize V7000 systems can be clustered to help
deliver greater performance, bandwidth, and scalability. A Storwize V7000 clustered system
can contain up to four Storwize V7000 systems and up to 1,056 drives. Storwize V7000
Model 524 systems can be added into existing clustered systems that include previous
generation Storwize V7000 systems.
Unit 2 will discuss in detail the architecture structure of the IBM Storwize V7000 Gen2 model.


Storwize V7000 implementation, zoning, and management interfaces

Physical planning
  ▪ Rack hardware configuration
  ▪ Cabling connection requirements
Logical planning
  ▪ Management IP addressing plan
  ▪ iSCSI IP addressing plan
  ▪ SAN zoning and SAN connections
  ▪ Backend storage subsystem configuration
  ▪ Storwize V7000 system configuration

Figure 1-5. Storwize V7000 implementation, zoning, and management interfaces

In Unit 3, we will review the Storwize V7000 infrastructure physical planning requirements for installing and cabling the hardware environment. We will also discuss the logical planning requirements for defining system management access and implementing dual SAN fabric zoning policies that include external storage devices and host systems, along with optional zoning requirements to support remote Copy Services. In addition, we will highlight best practices for achieving performance as well as non-disruptive scalability.


Storwize V7000 initialization


• Initialize Storwize V7000 system using
the Service Assistant tool by way of the
Technician Port (T-port)
• Complete system setup using the
Storwize V7000 management GUI


Figure 1-6. Storwize V7000 initialization

In Unit 4, we will discuss the procedures for initializing the Storwize V7000 Gen2 system using the technician port and for defining system information and configuration parameters using the Storwize V7000 management GUI System Setup wizard.


Storwize V7000 storage provisioning

(Diagram: external storage LUNs and internal distributed RAID arrays (RAID 5/6, with FLASH, SAS, and NL tiers) virtualized into a hybrid storage pool.)

Figure 1-7. Storwize V7000 storage provisioning

In Unit 5, we will discuss how the Storwize V7000 manages physical (internal) storage resources using different RAID levels and different optimization strategies, and how it can consolidate disk controllers from various vendors into pools of virtualized storage resources.
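As a brief preview of the CLI side of provisioning, the following sketch shows roughly how a pool and a volume are created. The pool, MDisk, and volume names and sizes here are illustrative assumptions, not values from the course labs:

  # Create a storage pool with 1 GiB (1024 MB) extents
  mkmdiskgrp -name Pool0 -ext 1024
  # Add a managed disk to the pool
  addmdisk -mdisk mdisk0 Pool0
  # Create a 100 GiB fully allocated volume from the pool
  mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -name APPVOL1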


Storwize V7000 host and volume administration


• Host configuration options
• Volume allocation
• Host storage access
• Non-disruptive volume move (NDVM)
(Rear view labels: FC 16 Gb / 8 Gb; iSCSI 10 Gb; FCoE 10 Gb)

Figure 1-8. Storwize V7000 host and volume administration

With the IBM Storwize V7000, clients can create various host objects to support specific configurations such as Fibre Channel, Fibre Channel over Ethernet (FCoE), and iSCSI.
In Unit 6, we will discuss each of the supported host interfaces, which include 8 gigabit (Gb) and 16 Gb FC, 10 Gb Fibre Channel over Ethernet (FCoE), and Internet Small Computer System Interface (iSCSI). In addition, we will discuss volume allocation: creating host-accessible storage provisioned from a storage pool.
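As a hedged sketch of what that host-to-volume flow looks like from the CLI (the host name and WWPN below are placeholders, not lab values):

  # Define a Fibre Channel host object by its WWPN
  mkhost -name WIN_HOST1 -fcwwpn 2100000E1E30B0A8
  # Map an existing volume to that host
  mkvdiskhostmap -host WIN_HOST1 APPVOL1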


Spectrum Virtualize advanced features

• Thin provisioning: dynamic growth and recycled waste. Without thin provisioning, pre-allocated space is reserved whether the application uses it or not; with thin provisioning, applications can grow dynamically but only consume the space they are actually using. Purchase only the storage you need when you need it.
• Real-time Compression (RtC): store less; reduce data storage ingestion.
• Flash optimized with IBM Easy Tier: perform economically; meet and exceed business service levels.
Continuing storage efficiency with data migration and virtualization.

Figure 1-9. Spectrum Virtualize advanced features

IBM Storwize V7000 offers advanced software features that are based on capabilities in IBM Spectrum Virtualize software, which has its origins in IBM SAN Volume Controller (SVC); these features are included in the base price.
Unit 7 introduces the basic concepts of the dynamic data relocation and storage optimization features and how each can be implemented in the IBM Storwize V7000 environment. A short CLI sketch follows the list below.
The following functions for storage efficiency will be discussed:
• Thin provisioning
• Real-time Compression
• Easy Tier
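To make the first two features concrete, here is a minimal CLI sketch of creating thin-provisioned and compressed volumes; the pool and volume names are assumptions for illustration only:

  # Thin-provisioned volume: 100 GiB virtual capacity, 2% real capacity, auto-expand
  mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -name THINVOL1
  # Compressed volume (Real-time Compression)
  mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -compressed -name COMPVOL1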


Storwize V7000 data migration


• Non-virtualized image mode
  ▪ Allows existing data to become Storwize V7000-managed without data conversion or movement
• Virtualized striped mode
  ▪ Allows the mapping between volume extents and MDisk extents to change without impacting host access
Moving workload (data extents) to:
  ▪ Balance usage distribution
  ▪ Move data to a lower-cost storage tier
  ▪ Expand or convert to new storage systems; decommission old systems
  ▪ Optimize flash with Easy Tier
(Diagram: data migration from NetApp N series, EMC, HPQ, HDS, Sun, and DS3000 systems to the Storwize family, XIV, DS8000, and Storwize V7000 with FlashSystem 900.)

Figure 1-10. Storwize V7000 data migration

Unit 8 of the IBM Storwize V7000 Implementation Workshop covers the data migration functionality that enables you to seamlessly integrate the Storwize V7000 into existing storage environments, including the ability to transfer data to and from other storage systems for consolidation or decommissioning.
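As a rough illustration of striped-mode migration from the CLI (the volume and pool names are hypothetical), moving a volume's extents to another pool is non-disruptive to the host:

  # Move all extents of a volume to another storage pool
  migratevdisk -vdisk APPVOL1 -mdiskgrp Pool1
  # Check the progress of running migrations
  lsmigrate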


IBM Spectrum Virtualize Advanced Copy Services


• Volume mirroring is an alternative method of migrating volumes.
• FlashCopy creates point-in-time copies.
  ▪ Increases the maximum number of FlashCopy consistency groups from 127 per system to 255
• Metro Mirror
  ▪ Up to 300 km between sites for business continuity
  ▪ As with any synchronous remote replication, performance requirements might limit the usable distance
  ▪ Host I/O is completed only when data is stored at both locations
• Global Mirror
  ▪ Supports up to 250 ms Global Mirror round-trip latency; distances of up to 20,000 km are supported
  ▪ Does not wait for secondary I/O before completing host I/O, which helps reduce the performance impact to applications
  ▪ Maintains a consistent secondary copy at all times
• Supports both intracluster and intercluster Metro Mirror and Global Mirror

Figure 1-11. IBM Spectrum Virtualize Advanced Copy Services

In Unit 9 and Unit 10 of the IBM Storwize V7000 Implementation Workshop, we will describe the Advanced Copy Services functions that are enabled by IBM Spectrum Virtualize software running inside IBM Storwize family products. These units include the following topics:
• Volume Mirroring: an alternative method of migrating volumes between storage pools, internally or externally (two close sites).
• FlashCopy: allows the administrator to create copies of data for backup, parallel processing, testing, and development.
• Metro Mirror: a synchronous mirror copy that ensures updates are committed at both the primary and the secondary before the application considers the update complete. Therefore, the secondary is fully up to date if it is needed in a failover.
• Global Mirror: an asynchronous mirror copy, which means the application acknowledges that the write is complete before the write is committed at the secondary. Therefore, on a failover, certain updates (data) might be missing at the secondary. Global Mirror supports a Recovery Point Objective (RPO), which defines the amount of acceptable data loss in the event of a disaster. Up to 250 ms of Global Mirror round-trip latency and distances of up to 20,000 km are supported.
This also includes native IP replication, which supports Remote Mirroring over IP communication on IBM Storwize family systems by using Ethernet communication links.
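For orientation, here is a hedged CLI sketch of the two replication styles; the volume, mapping, relationship, and remote-system names are placeholders:

  # FlashCopy: define and start a point-in-time copy of a volume
  mkfcmap -source APPVOL1 -target APPVOL1_COPY -name FCMAP1
  startfcmap -prep FCMAP1
  # Global Mirror: asynchronous relationship to a volume on a remote system
  mkrcrelationship -master APPVOL1 -aux APPVOL1_DR -cluster REMOTE_SYS -global -name GMREL1
  startrcrelationship GMREL1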


IBM Storwize V7000 administration management


• IBM Storwize V7000 GUI
  ▪ Event log monitoring
  ▪ Directed Maintenance Procedure (DMP)
  ▪ Firmware upgrade
  ▪ Configuration backup
  ▪ Call Home functions
• Command-line interface (CLI)
• IBM Storwize V7000 Service Assistant tool
• IBM Storage Mobile Dashboard

Figure 1-12. IBM Storwize V7000 administration management

In the final unit of this course (Unit 11), we will discuss the essentials of administration, maintenance, and serviceability for the IBM Storwize V7000 storage system. The unit reviews how management GUI events are reported by the system, highlights procedures for troubleshooting and handling components for service, such as using the Directed Maintenance Procedure (DMP), and reviews the implementation of concurrent firmware code updates, including management support from the CLI.
In addition, we will look at the system maintenance options that can be configured to perform system backup and Call Home notifications, and at how the Service Assistant tool can be used for troubleshooting when the node canisters are inaccessible or when an IBM Support engineer directs you to use it.
And for storage administrators who are mobile, we will also highlight the features of the IBM Storage Mobile Dashboard application, which provides basic monitoring of the health and performance status of IBM Storage systems.
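As a brief, hedged illustration of the configuration backup task covered in the unit (the system IP address and the Windows target directory are placeholders):

  # On the system, back up the configuration metadata (creates svc.config.backup.* files in /tmp)
  svcconfig backup
  # From a Windows workstation, copy the backup files off the system with PuTTY pscp
  pscp -unsafe superuser@system_ip:/tmp/svc.config.backup.* c:\v7000backup\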


Unit summary
• Summarize the units covered in this course


Figure 1-13. Unit summary


Unit 2. Storwize V7000 hardware architecture

Estimated time
01:00

Overview
This unit introduces the IBM Storwize V7000 2076 hardware architecture, detailing the 2076-524 control enclosure components and features, including the optional Storwize V7000 2076-12F/24F expansion enclosures. This unit also discusses the benefits of scaling the Storwize V7000 for both performance and capacity.

How you will check your progress


• Checkpoint questions
• Machine exercises

References
IBM Storwize V7000 Implementation Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html


Unit objectives
• Identify component features of the IBM Storwize V7000 2076-524
control enclosure model
• Distinguish between the IBM Storwize V7000 2076-12F and 2076-24F expansion enclosure models
• Characterize IBM Storwize V7000 Gen2 scalability requirements to
incrementally increase storage capacity and performance


Figure 2-1. Unit objectives


Storwize V7000 hardware topics


• Storwize V7000 hardware architecture
  ▪ Control enclosure subsystem
  ▪ Expansion enclosure
  ▪ Scalability

Figure 2-2. Storwize V7000 hardware topics

This topic introduces the next-generation hardware components of the IBM Storwize V7000 2076 disk system.


IBM Storwize V7000 Gen2 basic components


• IBM Storwize V7000 2076 is a virtualized, enterprise-class storage system that is available in the following models:
  ▪ Storwize V7000 SFF Control Enclosure Model 524
  ▪ Storwize V7000 LFF Expansion Enclosure Model 12F
  ▪ Storwize V7000 SFF Expansion Enclosure Model 24F
• All enclosures are 2U, 19-inch rack mount enclosures

Figure 2-3. IBM Storwize V7000 Gen2 basic components

Storwize V7000 is a virtualized, enterprise-class storage system that provides the foundation for implementing an effective storage infrastructure. It provides the latest storage technologies for unlocking the business value of stored data, including virtualization and Real-time Compression, and is designed to deliver outstanding efficiency, ease of use, and dependability for organizations of all sizes.
The IBM Storwize V7000 2076 is available in the following models:
• Storwize V7000 SFF Control Enclosure Model 524
• Storwize V7000 LFF Expansion Enclosure Model 12F
• Storwize V7000 SFF Expansion Enclosure Model 24F
All models are delivered in a 2U, 19-inch rack mount enclosure and include a three-year warranty
with customer replaceable unit (CRU) and on-site service. Optional warranty service upgrades are
available for enhanced levels of warranty service.
This unit will discuss the features and functions of each enclosure option.


Storwize V7000 terminology


Control enclosure: A hardware unit that includes the chassis with a midplane for connection of node canisters, drives, and power supply units with batteries.
Node canister: A hardware unit that includes the node electronics, fabric, and service interfaces, serial-attached SCSI (SAS) expansion ports, and direct connections to internal drives in the enclosure.
Expansion enclosure: A hardware unit that includes the chassis with a midplane for connection of expansion canisters, drives, and power supply units without batteries.
Expansion canister: A hardware unit that includes the electronics to provide serial-attached SCSI (SAS) connections to the internal drives in the enclosure and SAS expansion ports for attachment of additional expansion enclosures.
SAS chain: A cabling scheme for a string of expansion enclosures that provides redundant access to the drives inside the enclosures by both node canisters in the control enclosure.
Strand: One half of the cabling scheme for a SAS chain, providing redundant SAS connectivity to a set of drives within multiple enclosures. Also considered an independent SAS domain or network.
Phy: A phy contains a transceiver (transmitter and receiver), and each SAS port contains four phys. A physical link or lane interconnects two phys.
Physical link or lane: Within a single SAS cable are four physical links, each capable of 6 Gbps.
Encryption: Protects against the potential exposure of sensitive user data and user metadata that is stored on discarded, lost, or stolen storage devices.

Figure 2-4. Storwize V7000 terminology

This table lists terminologies that are used in this unit.


Storwize V7000 hardware topics


• Storwize V7000 hardware architecture
  ▪ Control enclosure subsystem
    - Processor subsystem
    - Interface connectivity
    - Battery modules
    - Power and cooling
  ▪ Expansion enclosure
  ▪ Scalability

Figure 2-5. Storwize V7000 hardware topics

This topic introduces the hardware architecture of the Storwize V7000 2076-524 control enclosure
and the Storwize V7000 2076-12F and 24F expansion enclosures.


Storwize V7000 control enclosure front view

• IBM Storwize V7000 2076-524 control enclosure features the same core components and machine type
  ▪ Replaces the 2076 Gen1 models (-112, -124, -312, -324); no 12-drive control enclosure is available
  ▪ Supports up to twenty-four SAS hot-swap 2.5-inch drive bays
(Front view: LED indicator panel; 2U; width 445 mm / 17.5 in, 19-inch rack standard)

Figure 2-6. Storwize V7000 control enclosure front view

The IBM Storwize V7000 2076-524 Gen2 control enclosure is a midrange virtualization RAID storage subsystem that employs the IBM Spectrum Virtualize software engine. The Storwize V7000 control enclosure is packaged in a 2U rack-mount enclosure that installs in a standard 19-inch equipment rack and contains twenty-four front-load 2.5-inch drive slots.
All components located in the front of the unit are redundant and hot-swappable.


Storwize V7000 Gen2: Exploded view

(Exploded view components: canisters, PSUs, fan cage, enclosure chassis, midplane, drive cage, and drives)

Figure 2-7. Storwize V7000 Gen2: Exploded view

This diagram illustrates the components of the Storwize V7000 Gen2 model. As you can see, the front of the chassis is basically unchanged. The major change applies to the rear of the chassis, which was redesigned to make room for the more powerful node canisters and power supply units. Also, notice that a separate fan assembly is now part of the structure and is no longer configured inside the power supply.


Storwize V7000 Gen2: Block diagram of node canister


(Block diagram: Ivy Bridge 1.9 GHz eight-core E5-2628L-v2 processor with four 16 GB DIMMs (two standard, two optional); PCIe3 connections to the HBAs (8 Gb/16 Gb FC or 10 GbE) and to high-speed cross-card communications (8 lanes, 1 GB full duplex); quad 1 GbE plus an optional second 1 GbE via the mezzanine connector; 128 GB SSD boot device; Coleto Creek chips with an optional compression acceleration card; TPM; USB; SAS chain 0 (4 phys, 12 Gb per phy) to the control enclosure drives, and SAS chains 1 and 2 to the expansion enclosure drives.)

Figure 2-8. Storwize V7000 Gen2: Block diagram of node canister

This visual provides a logical block diagram of the components and data flow of the Storwize V7000 Gen2 hardware (per node canister):
• Same form factor as the existing Storwize V7000 control enclosure. However, each canister in Gen2 is now full height inside the 2U chassis.
• Each controller has a modern 8-core, 64-bit Intel Ivy Bridge processor.
• 32 GB RAM, with the ability to scale up to 64 GB for Real-time Compression.
• An on-board hardware compression engine that is built into the system board via the Coleto Creek compression accelerator chipset, supporting the default pass-through adapter enablement and encryption. The Gen2 system board also supports the attachment of the optional compression acceleration card. Each Intel QuickAssist compression acceleration card is also based on the Coleto Creek chipset.
• Two 12 Gbps SAS drive expansion ports per node canister to support the attachment of the optional Storwize V7000 2076-12F large form factor (LFF) and 2076-24F small form factor (SFF) expansion enclosures.
• Three PCIe Gen3 slots (two for host interface cards, one for additional hardware compression).
• Two USB ports for debug and emergency access.

• One battery (moved from the PSU into the canister).
The SAS expander provides the drive attachment for the drives within the control enclosure. The Storwize V7000 Gen2 controller uses SAS chain 0 to manage the drives within the enclosure. The controller disk drives connect to the Gen2 chassis through the midplane interconnect for their power. In the control enclosure, the midplane interconnect is also used for the internal control and I/O paths to the chassis I/O bays.
All optionally attached expansion enclosures are connected using SAS chain 1 and SAS chain 2.


Storwize V7000 node interior view

User-serviceable parts:
1. HICs and compression card
2. Cover
3. Battery
4. DIMMs
(Interior view labels: I/O interface PCIe3 slots; DIMM slots 1-2 with 32 GB memory; lithium-ion cache battery; Ivy Bridge 8-core CPU; DIMM slots 3-4 with 32 GB memory (not shown); rear connectors)

Figure 2-9. Storwize V7000 node interior view

This diagram illustrates the interior view of a Storwize V7000 Gen2 node canister.


Storwize V7000 Gen2 processor subsystem


• Dual Socket-R (LGA2011) processor subsystem
  ▪ 8-core CPUs with 64 GB memory
    - 4 I/O per core
    - Intel Xeon Ivy Bridge processor
  ▪ PCIe Gen3 provides 8 lanes, equal to roughly 8 GB of data per second, per slot
• Intel QuickAssist technology
  ▪ Provides dedicated processing power and greater throughput for compression

Figure 2-10. Storwize V7000 Gen2 processor subsystem

The Storwize V7000 Gen2 controller adopts the processor subsystem of the IBM SAN Volume Controller DH8 node, with specific hardware modifications that match the needs of the Storwize V7000 Gen2.
The Storwize V7000 Gen2 processor subsystem incorporates dual Socket-R (LGA2011) sockets to support Intel Xeon Ivy Bridge eight-core processors, with up to an 8 GT/s QPI link between the two processors. PCIe Gen3 provides 8 lanes, which gives you about 8 GB of data per second, per slot.
The Storwize V7000 Gen2 control enclosure also combines Intel QuickAssist Technology with an Intel architecture core, supporting the industry's first hardware compression accelerator. This feature provides dedicated processing power and greater throughput for compression.


Storwize V7000 Gen2: Rear view


• Redesigned control enclosure with side-by-side canister layout (instead of upper and lower) to support up to six half-height PCIe Gen3 cards for I/O connectivity
(Rear view labels: dual node canisters, each with three 1 GbE ports (ports 1-3), two 12 Gbps SAS expansion ports, three PCIe3 slots, and a Technician port; dual ac PSUs)

Figure 2-11. Storwize V7000 Gen2: Rear view

The 2U rack-mount form factor of the 2076-524 allows the Storwize V7000 node to accommodate a mix of different supported network adapters and compression accelerators. The IBM Storwize V7000 Gen2 node canisters were redesigned with increased height, which now allows up to three half-height PCIe3 slots per canister to be installed for I/O connectivity.
Each Storwize V7000 node canister comes standard with:
• Four 1 Gb on-board Ethernet ports. Ethernet ports 1-3 are used for 1 Gb iSCSI connectivity and system management. Ports are numbered left to right, beginning with Ethernet port 1.
• The fourth Ethernet port is a dedicated Technician port (T-port) used to initialize the system or redirect to the Service Assistant (SA) tool.
• Two 12 Gb SAS ports for Storwize V7000 expansion enclosure attachment.
• Two USB ports (management ports; not in use).
  ▪ The USB ports and the Technician port are not used during normal operation. Connect a device to any of these ports only when you are directed to do so by a service procedure or by an IBM service representative.
• Two ac power supplies and cooling units.


Storwize V7000 Gen2 node canister indicators

(Rear view LED indicators: SAS port link (green) and SAS port fault (amber); battery status and battery fault; system status LEDs; canister status and canister fault)

Figure 2-12. Storwize V7000 Gen2 node canister indicators

Each Storwize V7000 2076-524 node canister has indicator LEDs that provide status information about the canister.
Each SAS port has a port link LED and a port fault LED:
• The link LED is off when there is no link connection on any phys (lanes); the connection is down.
• The link LED is on (green) when there is a connection on at least one phy, which indicates that at least one phy connection is up.
• A fault status of OFF indicates no fault: all four phys have a link connection. When the fault LED is on, it can indicate a number of different error conditions:
  ▪ One or more, but not all, of the four phys are connected.
  ▪ Not all four phys are at the same speed.
  ▪ One or more of the connected phys are attached to an address different from the others.
Each canister has a system status LED panel that mirrors the LED panel indicator located on the front of the chassis.
When the Power status LED is green:
• OFF: No power is available, or power is coming from the battery.

• SLOW BLINK: Power is available but the main CPU is not running; this is called standby mode.
• FAST BLINK: In self test.
• ON: Power is available and the system code is running.
When the Status indicator is green:
• OFF: The system code has not started. The system is off, in standby, or in self test.
• BLINK: The canister is in candidate or service state. It is not performing I/O, and it is safe to remove the node.
• FAST BLINK: The canister is active, able to perform I/O, or starting.
• ON: The canister is active, able to perform I/O, or starting; the node is part of a cluster.
When the Canister fault status is amber:
• OFF: The canister is able to function as an active member of the system. If there is a problem on the node canister, it is not severe enough to stop the node canister from performing I/O.
• BLINK: The canister is being identified. There might or might not be a fault condition.
• ON: The node is in service state, or an error exists that might be stopping the system code from starting. The node canister cannot become active in the system until the problem is resolved. You must determine the cause of the error before replacing the node canister. The error might be due to insufficient battery charge; in this event, resolving the error simply requires waiting for the battery to charge.
Each canister also has battery status LEDs that indicate the following:
• When the Battery status LED is green:
  ▪ OFF: The battery is not available for use (for example, the battery is missing or there is a fault in the battery).
  ▪ FAST BLINK: The battery has insufficient charge to perform a fire hose dump.
  ▪ BLINK: The battery has sufficient charge to perform a single fire hose dump.
  ▪ ON: The battery has sufficient charge to perform at least two fire hose dumps.
• When the Battery fault LED is amber:
  ▪ OFF: No fault. An exception would be where a battery has insufficient charge to complete a single fire hose dump; refer to the documentation for the Battery status LED.
  ▪ ON: There is a fault in the battery.
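These LED states have CLI counterparts; as a small, hedged sketch, the following commands report canister and battery status from a management session:

  # Show the status of the node canisters in the system
  lsnodecanister
  # Show the status and charge state of the enclosure batteries
  lsenclosurebattery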


Storwize V7000 Gen2 I/O connectivity


• Up to six I/O adapter cards can be supported by using two riser card slots (three PCIe Gen3 slots per canister):
  ▪ Up to two 2- or 4-port 16 Gbps Fibre Channel HICs (maximum 8 ports)
  ▪ Up to two 4-port 8 Gbps Fibre Channel HICs (maximum 8 ports)
  ▪ Up to two 4-port 10 Gb Ethernet iSCSI/FCoE HICs (optional)
    - Only one FCoE 10 GbE quad-port HIC is supported per node; HIC ports are numbered from the top down
  ▪ Three 1 Gb iSCSI ports
  ▪ A minimum of one I/O adapter feature is required
(Rear view labels: dedicated compression pass-through or accelerator slot (slot 1 per node); host interface card (HIC) slots 2 and 3 per node. The example identifies the installation locations for the I/O adapter cards.)

Figure 2-13. Storwize V7000 Gen2 I/O connectivity

IBM Storwize V7000 Gen2 offers enhanced I/O connectivity with the support of two riser card slots
(3 PCIe Gen3 slots). The Storwize V7000 2076-524 model does not ship with any I/O connectivity
cards. However, customers can select multiple add-on adapters for driving host I/O and offloading
compression workloads.
The Storwize V7000 node provides link speeds of 2, 4, 8 and 16 Gb with the following optional
support:
Slot 1 is used only for the on-board compression hardware engine, or you can replace it with the
Compression Accelerator card.
Slots 2 and 3 support the following:
• 16 Gb FC four port adapter pair for 16 Gb FC connectivity (two cards each with four 16 Gb FC
ports and shortwave SFP transceivers)
• 16 Gb FC two port adapter pair for 16 Gb FC connectivity (two cards each with two 16 Gb FC
ports and shortwave SFP transceivers)
▪ A maximum of two 16 Gb FC adapter features can be installed
• 8 Gb FC adapter pair for 8 Gb FC connectivity (two cards each with four 8 Gb FC ports and
shortwave SFP transceivers)

- A maximum of two 8 Gb FC adapter features can be installed
• 10 Gb Ethernet adapter pair for 10 Gb iSCSI and FCoE connectivity (two cards each with four
10 Gb Ethernet ports and SFP+ transceivers)
▪ The quantity of the 10 Gb Ethernet feature cannot exceed one.
A minimum of one I/O adapter feature is required. Effectively, with the optional cards in place,
customers get a 2 Gb pipe from the 1 Gb Ethernet ports, 32 Gb of pipe from the FC adapters (16 Gb
adapters), and an additional 20 Gb of pipe from the converged network adapter.

© Copyright IBM Corp. 2012, 2016 2-17


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 2. Storwize V7000 hardware architecture

Uempty

8 Gb FC host interface card


• Same adapter as used in the Gen1 Storwize V7000 models
ƒ PMC-Sierra Tachyon QE8
ƒ SW SFPs included
ƒ LW SFPs optional
• Up to two can be installed in each node canister for a total of 8 FC
ports in the control enclosure


Figure 2-14. 8 Gb FC host interface card

The 8 Gbps FC HIC is a high-performance 4-port adapter that features an 8-lane native
PCI-Express Gen-2 link, enabling full-duplex operation simultaneously on all ports. The Tachyon
QE8 is an integrated single chip solution ideal for a variety of high-performance I/O applications.
The 8 Gb FC 4-port HIC supports up to eight ports in a single system configuration, and up to 32
ports in a 4-node cluster.


16 Gb FC host interface card


• Dual and quad-port 16 Gb Fibre Channel HIC
ƒ Requires Spectrum Virtualize Family Software V7.4
ƒ Enables use of next generation Fibre Channel SANs
ƒ Connect to legacy 8 Gb servers or storage through switches
ƒ Overall system throughput largely unchanged
ƒ Up to double single-stream single port throughput (to 1.5 GB/s) can
benefit analytics workloads
ƒ SW (standard) and LW (optional) SFPs available


Figure 2-15. 16 Gb FC host interface card

The 16 Gb FC HIC supports up to eight ports in a single system configuration, and up to 32 ports in
a 4-node cluster. The 16 Gb node hardware requires the Spectrum Virtualize Family Software V7.4
to be installed. The 16 Gb HIC is supported only when connected to supported fabrics. Review the
System Storage Interoperation Center (SSIC) for supported 16 Gbps Fibre Channel configurations,
as the HIC can be supported only by using Brocade 8 Gb or 16 Gb fabrics and Cisco 16 Gb fabrics.
Direct connections to Brocade 2 and 4 Gbps or Cisco 2, 4, or 8 Gbps fabrics are currently not
supported. Other configured switches that are not directly connected to the 16 Gbps node hardware
can be any supported fabric switch as currently listed in SSIC.


FC host port indicators


1. Fibre Channel 8/16 Gbps ports (x4)
2. Link-state LED (x4 - one for each port)
3. Speed-state LED (x4 - one for each port)

Link-state LED    Speed-state LED    Link state
OFF               OFF                Inactive
ON or FLASHING    OFF                Active low speed (2 Gbps)
ON or FLASHING    FLASHING           Active medium speed (4 Gbps)
ON or FLASHING    ON                 Active high speed (8/16 Gbps)


Figure 2-16. FC host port indicators

Each FC port can have up to an 8 or 16 Gbps SW SFP transceiver installed. Each transceiver
connects to a host or Fibre Channel switch with an LC-to-LC Fibre Channel cable. Each Fibre
Channel port has two green LED indicators. The link-state LED [2] is above the speed-state LED
[3] for each port. Consider the LEDs as a pair to determine the overall link state, which is listed in
the table.
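As an illustrative sketch (trimmed output; the exact columns vary by software level), the lsportfc
CLI command reports the same information as the LEDs, showing the state and negotiated speed of
each Fibre Channel I/O port:
IBM_Storwize:V7000:superuser>lsportfc
id fc_io_port_id port_id type port_speed node_id node_name status attachment
0  1             1       fc   8Gb        1       node1     active switch
1  2             2       fc   8Gb        1       node1     active switch
A port that is reported as inactive typically corresponds to both LEDs being off.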


10 Gb host interface card


• Quad-port 10 Gb card
ƒ Supported in Storwize V7000 Gen2 nodes
ƒ Must have v7.3.0 or later to support 1 x 10 Gb adapter
ƒ Delivered with the SFPs fitted (unless it is a FRU)
í Only IBM supported 10 Gb SFPs should be used
ƒ Requires an extra IPv4 or IPv6 address for each of the 10 GbE
ports used on each node canister of the control enclosure
ƒ Each adapter port has amber and green LEDs to indicate port status
í Fault LED is not used in 7.3.0


Figure 2-17. 10 Gb host interface card

Storwize V7000 offers clients 10 Gb iSCSI/FCoE connectivity using the 4-port 10 Gb iSCSI/FCoE
host interface adapter, which enables Storwize V7000 connections to servers for host attachment
and to other Storwize V7000 systems, using fiber-optic cables to connect them to your 10 Gbps
Ethernet or FCoE SAN.
This type of configuration requires extra IPv4 or extra IPv6 addresses for each of the 10
GbE ports used on each node canister. These IP addresses are independent of the system
configuration IP addresses, which allows the IP-based hosts to access Storwize V7000 managed
Fibre Channel SAN-attached disk storage.
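As a minimal sketch (the node ID, port ID, and addresses are placeholders for your own
environment), the cfgportip CLI command assigns such an iSCSI IP address to an Ethernet port of a
node, and lsportip verifies it:
IBM_Storwize:V7000:superuser>cfgportip -node 1 -ip 10.10.10.121 -mask 255.255.255.0 -gw 10.10.10.1 3
IBM_Storwize:V7000:superuser>lsportip 3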


FCoE support using 10 Gb card


• Storwize V7000 supports FCoE for all the same functions for which FC is
supported:
ƒ Hosts can connect to Storwize V7000 using FC ports or FCoE ports
ƒ Storwize V7000 can connect to storage systems using FC ports or FCoE ports
ƒ In a Storwize V7000 Gen2 system, communication among I/O groups can use any
combination of FC and FCoE ports
ƒ Remote mirroring between systems can use any combination of FC and FCoE ports


Figure 2-18. FCoE support using 10 Gb card

Storwize V7000 Gen2 supports 10 Gb Ethernet Fibre Channel over Ethernet (FCoE) fabric
configuration only if the optional 10 Gb Ethernet host interface card is installed. This 4-port card can
be used simultaneously for both FCoE and iSCSI server attachment. It also supports migration from
Fibre Channel networks.
A Fibre Channel forwarder (FCF) switch has both 10 Gb ports and Fibre Channel (FC) ports. The
terms FCF and FCoE switch are used interchangeably. It provides both Ethernet switching
capability and FC switching capability in a single switch. A pure Fibre Channel over Ethernet
(FCoE) switch has 10 Gb ports and FCoE switching capability.
Storwize V7000 supports FCoE with 10 Gb Ethernet ports; Gen1 models can be upgraded to FCoE
support without disruption.


10 Gb host port indicators


• Each port can support simultaneous FCoE and iSCSI connections
• Small Form-factor Pluggable (SFP) transceivers that are installed on the card
support data transfer speeds of 10 Gbps

Green LED state  Amber LED state  Meaning
OFF              OFF              The port is not configured in flex hardware, or the port is not
                                  active in the current profile. For example, in the 2 x 16 Gbps
                                  profile, two ports are not active.
ON               ON               The port is configured, but is not connected or the negotiation
                                  of the link failed.
ON               OFF              The link is up and is running at the configured speed.
ON               ON               The link is up and is running at less than the configured speed.


Figure 2-19. 10 Gb host port indicators

The 10 Gbps host interface card has four Ethernet ports, none of which are used for system
management. The ports are numbered 1, 2, 3, and 4, from top to bottom when installed in a slot.
Each port has two LED indicators, one green and one amber.
The table lists the LED states and their meanings.


Compression Accelerator card


• Standard with integrated, hardware-assisted compression acceleration via
Compression Pass-through adapter
• Optional Compression Accelerator card can replace the compression pass-
through card
• Intel Quick Assist technology integrated into Compression Acceleration
cards
ƒ Use PCIe x16 slot 1 for the compression card
ƒ Used to offload the LZ compression and decompression processing
ƒ Each controller supports up to two Compression Acceleration cards
ƒ Storwize uses 4 parallel compression engines per card
ƒ Four compression accelerators support up to 512 compressed volumes
ƒ IBM Real-time Compression license required for external use

Figure 2-20. Compression Accelerator card

The IBM Storwize V7000 Gen2 2076-524 model comes standard with integrated, hardware-assisted
compression acceleration in the form of a Compression Pass-through adapter that is installed in
slot 1. This special compression pass-through adapter is standard on each node canister. With a
single on-board card, the maximum number of compressed volumes per I/O group is 200.
Enabling compression on the Storwize V7000 Gen2 does not affect non-compressed host to disk
I/O performance. You can replace the on-board compression card with an Intel based "Quick
Assist" Compression Accelerator card. With the addition of a second Quick Assist card, the
maximum number of compressed volumes per I/O group is 512. Real-time Compression workloads
can further benefit from compression with dual RACE instances and two acceleration cards for best
performance.
Compressed volumes are a special type of volume where data is compressed as it is written to
disk, saving additional space. To use the compression function beyond internal use, you must
obtain the IBM Real-time Compression license.
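As a hedged example (the pool name, size, and volume name are hypothetical), a compressed
volume is created from the CLI by combining the thin-provisioning parameters with the
-compressed flag:
IBM_Storwize:V7000:superuser>mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -compressed -name cvol01
Virtual Disk, id [12], successfully created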


Storwize V7000 I/O summary

Slot    Supported cards
1       Compression pass-through OR optional Compression Acceleration card+
2       None, Fibre Channel 4x8, 2x16, 4x16, or 10 Gbps Ethernet*
3       None, Fibre Channel 4x8, 2x16, 4x16, or 10 Gbps Ethernet*

* Can be installed in slots 2 and 3, if available; a minimum of one FC card is required
+ Dedicated for compression


Figure 2-21. Storwize V7000 I/O summary

This visual lists the requirements and limitations of I/O card combinations:
• Slot 1 is dedicated to support the compression pass-through or Compression Accelerator cards.
• Slot 2 and slot 3 can be used to support FC host connectivity using the 8 Gb FC HIC or 16 Gb FC
HIC, or the 10 Gb Ethernet card for both iSCSI and FCoE connectivity (only one 10 GbE card per
node).


Storwize V7000 Gen2 integrated battery pack


• Battery pack is located within each node
canister rather than the PSU of the Gen1
model.
ƒ Provides independent protection for each
node canister.
ƒ Each node canister caches critical data and
holds state information in volatile memory.
ƒ If power to a node canister fails, the node
canister uses battery power to write cache
and state data to its boot drive.
ƒ Battery pack continues to support power
supply unit in the event of a failure.
ƒ Customer replaceable unit.
• Expansion enclosures do not require a
battery pack as they do not cache
volume data or store state information in
volatile memory.
ƒ If ac power supply fails, expansion enclosure
goes offline and returns when power is
restored without operator intervention.

Figure 2-22. Storwize V7000 Gen2 integrated battery pack

Unlike the Storwize V7000 Gen1 battery pack, which was located in the power supply unit, the
Storwize V7000 Gen2 contains an integrated battery pack within each node canister. Its main task is
to allow the controllers to save the current configuration and the write cache to the internal flash
drive in case of a power failure. This means that the control enclosure now provides
battery backup to support a non-volatile write cache and protect persistent metadata.
If a control enclosure power supply unit fails, the battery integrated in the node canister continues
to supply power to the node.
Storwize V7000 Gen2 expansion canisters do not cache volume data or store state information in
volatile memory. Therefore, expansion canisters do not require battery power. If ac power to both
power supplies in an expansion enclosure fails, the enclosure powers off. When ac power is
restored to at least one power supply, the enclosure restarts without operator intervention.
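To check the condition of the batteries from the CLI, the lsenclosurebattery command lists the
battery in each node canister (a sketch only; the output columns vary by code level):
IBM_Storwize:V7000:superuser>lsenclosurebattery
enclosure_id battery_id status charging_status recondition_needed percent_charged end_of_life_warning
1            1          online idle            no                 100             no
1            2          online idle            no                 100             no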


Storwize V7000 Gen2 firehose dumps


• A fully charged battery is able to perform two firehose dumps.
• If power to a node is lost, data starts saving after a five-second AC
power loss ride-through.
ƒ After this period, if power is not restored, it initiates a graceful shutdown.
ƒ If power is restored during the ride-through period, the node reverts back
to main power and the battery reverts to the 'armed' state.
ƒ If power is restored during the graceful shutdown, the system reverts back
to main power and the node canisters shut down and automatically
reboot.
• A one second full-power test is performed at boot before the node
canister comes online.
• A periodic battery test (one battery at a time) is performed within the
node canister, only if both nodes are online and redundant, to check
whether the battery is functioning properly.


Figure 2-23. Storwize V7000 Gen2 firehose dumps

The battery is maintained in a fully charged state by the battery subsystem. At full charge, the
battery can save critical data and state information in two firehose dumps (back-to-back power
failures). If power to a node canister is lost, saving critical data starts after a five-second wait (If the
outage is shorter than five seconds, the battery continues to support the node and critical data is
not saved.). During this process, the battery pack powers the processor and memory for a few
minutes while the Storwize code copies the memory contents to the onboard SSD. The node
canister stops handling I/O requests from host applications. The saving of critical data runs to
completion, even if power is restored during this time. The loss of power might be because the input
power to the enclosure is lost, or because the node canister is removed from the enclosure.
When power is restored to the node canister, the system restarts without operator intervention. How
quickly it restarts depends on whether there is a history of previous power failures. The system
restarts only when the battery has sufficient charge for the node canister to save the cache and
state data again. A node canister with multiple power failures might not have sufficient charge to
save critical data. In such a case, the system starts in service state and waits to start I/O operations
until the battery has sufficient charge.


Storwize V7000 Gen2 battery pack reconditioning


• The design life of the battery in the Storwize V7000 Gen2 is five years
service after one year on the shelf.
• Each battery is automatically reconditioned every three months to
measure the battery capacity. Batteries in the same enclosure are not
reconditioned within two days of each other. If a battery has a lower
capacity than required (below the planned threshold), it is marked as
"End Of Life" and should be replaced.
• Each battery provides power only for the canister in which it is installed.
If a battery fails, the canister goes offline and reports a node error. The
single running canister destages its cache and runs the I/O group in
"write-through" mode until its partner canister is repaired and online.


Figure 2-24. Storwize V7000 Gen2 battery pack reconditioning

Reconditioning the battery ensures that the system can accurately determine the charge in the
battery.
As a battery ages, it loses capacity. When a battery no longer has capacity to protect against two
power loss events, it reports the battery end of life event and it should be replaced.
A reconditioning cycle is automatically scheduled to occur approximately once every three months,
but reconditioning is rescheduled or canceled if the system loses redundancy. In addition, a
two-day delay is imposed between the recondition cycles of the two batteries in one enclosure.


Storwize V7000 Gen2 fan module


• Each control enclosure contains two fan modules for cooling, one per
node canister.
• Each fan module contains eight individual fans in four banks of two.

• The fan module as a whole is a replaceable component, but the
individual fans are not.
New CLI command:
IBM_Storwize:V009B:superuser>svcinfo lsenclosurefanmodule
enclosure_id fan_module_id status
1            1             online
1            2             online

Figure 2-25. Storwize V7000 Gen2 fan module

The Storwize V7000 Gen2 control enclosure has a new component called a fan module.
The fan modules replace the fans that were previously housed inside the Gen1 power supplies.
Each Storwize V7000 Gen2 control enclosure contains two fan modules for cooling purposes. Each
fan module contains eight individual fans in four banks of two. The fan module case is strategically
placed between the node canisters and the midplane so that it continues to cool the drives.
The fan modules are designed for easy servicing, as they can be removed by using the two
cam levers. It is important that the fan module be reinserted into the Storwize V7000 Gen2 within 3
minutes of removal to maintain adequate system cooling.
The fan module as a whole is a replaceable component, but the individual fans are not.
You can also use the lsenclosurefanmodule command to view a concise or detailed status of
the fan modules that are installed in the Storwize V7000 Gen2.


Storwize V7000 Gen2 control enclosure power supply


• IBM Storwize V7000 control enclosure
contains two hot swappable 1200 watts
100 – 240V AC auto-sensing power
supply modules.
ƒ N+1 configuration
ƒ Contains NO battery
ƒ No fan unit
ƒ No power switch

• Power supply modules are accessed
through the rear of the unit.
ƒ Positioned left (PSU1) to right (PSU2)
ƒ Indicators: AC/DC LED and Fault LED
For best practice, connect each power
supply to a separate UPS.


Figure 2-26. Storwize V7000 Gen2 control enclosure power supply

The Storwize V7000 model 2076-524 control enclosure contains two redundant, 1200-watt, hot
swappable 100 - 240V AC auto-sensing high efficiency power supply modules that are located at
the rear of the unit.
A Gen2 power supply has no power switch. A power supply is active when the ac power cord is
connected to the power connector and to a power source. The Storwize V7000 Gen2
integration of a redundant battery backup system eliminates the need for an external rack-mount
uninterruptible power supply (UPS), optional power switch, and related cabling.
If a power failure should occur, the system can fully operate under one power supply. However, it is
highly recommended that you attach each of the two power supplies in the enclosure to separate
power circuits or to a separate uninterruptible power supply (UPS) battery-backed power source.
Remember to replace a failed power supply as soon as possible; it is never advisable to run the
system using only one power supply. The Storwize V7000 management GUI and alerting systems
(such as SNMP and event notifications) report a power supply fault. The failed power supply can be
replaced without software intervention by following the directed maintenance procedure as
instructed from the management GUI.
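The power supplies can also be checked from the CLI with the lsenclosurepsu command (a
trimmed, illustrative sketch; the columns vary by code level):
IBM_Storwize:V7000:superuser>lsenclosurepsu
enclosure_id PSU_id status
1            1      online
1            2      online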


Storwize V7000 hardware topics


• Storwize V7000 hardware architecture
ƒ Control enclosure subsystem

ƒ Expansion enclosure
í Expansion canisters
í Supported drive form factors
í Power and cooling
ƒ Scalability


Figure 2-27. Storwize V7000 hardware topics

This topic introduces the Storwize V7000 2076-12F and 2076-24F expansion enclosure hardware
components.


Storwize V7000 Gen2 expansion enclosure option


• 24 x Small Form Factor (SFF) drives

• 12 x Large Form Factor (LFF) drives


Figure 2-28. Storwize V7000 Gen2 expansion enclosure option

Storwize V7000 offers both large form factor (LFF) and small form factor (SFF) 12 Gb SAS
expansion enclosure models in a 2U, 19-inch rack mount enclosure. The Storwize V7000 LFF
Expansion Enclosure Model 2076-12F supports up to twelve 3.5-inch drives, while the Storwize
V7000 SFF Expansion Enclosure Model 2076-24F supports up to twenty-four 2.5-inch drives.
High-performance disk drives, high-capacity nearline disk drives, and flash (solid state) drives are
supported. Drives of the same form factor can be intermixed within an enclosure and LFF and SFF
expansion enclosures can be intermixed within a Storwize V7000 system.
The 2076-12F/24F is only supported with the Storwize V7000 2076-524 controller. The 2076 Gen2
expansion enclosures are presented within the GUI as internal drives just like the Storwize V7000
control enclosure.


Storwize V7000 rear view


• Dual side by side expansion canisters
• Attach to control enclosures and additional expansions using
12 Gbps SAS
Rear view labels: SAS port 1 and SAS port 2, each with port LEDs (Link, Fault); LED panel
(Power, Status, Fault); 2 x hot-swap redundant power supplies.


Figure 2-29. Storwize V7000 rear view

Both Storwize V7000 Gen2 expansion enclosures contain two vertical expansion canisters, two 12
Gb SAS port connectors, and an LED panel located at the rear of the enclosure.
Each expansion enclosure contains two hot-swap redundant 800-watt power supplies. These
redundant power supplies operate in parallel, with one continuing to power the canisters if the other
fails. Even though these are hot-swappable components, they are intended to be replaced only when
your system is not active (no I/O operations). Therefore, do not remove a power supply unit from an
active enclosure until a replacement power supply unit is ready to be installed. If a power supply
unit is not installed, airflow through the enclosure is reduced and the enclosure can overheat.
Install the replacement power supply within 5 minutes of removing the faulty unit.


Expansion canister LEDs

Canister LEDs: Phy LEDs (same as the node canister phy LEDs), Power, Status, and Fault.

SAS phy LED    Link state
Off            No link connected
Flashing       Link connected and activity
On             Link connected


Figure 2-30. Expansion canister LEDs

The two 12 Gb SAS ports on each canister are side by side and are numbered 1 on the left and 2
on the right. Port 1 is used to connect to a SAS expansion port on a node canister or port 2 of
another expansion canister.
The canister is ready with no critical errors when Power is illuminated, Status is illuminated, and
Fault is off.
When both ends of a SAS cable are inserted correctly, the green link LEDs next to the connected
SAS ports are lit.
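The state of both node and expansion canisters can also be confirmed from the CLI with the
lsenclosurecanister command (trimmed, illustrative output; columns vary by code level):
IBM_Storwize:V7000:superuser>lsenclosurecanister
enclosure_id canister_id status type      node_id node_name
1            1           online node      1       node1
1            2           online node      2       node2
2            1           online expansion
2            2           online expansion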


Storwize V7000 Gen2 12 Gb SAS interface

Diagram: two SAS ports connected by a SAS cable; each port contains four phys, and each
physical link connects a pair of phys. The phy LEDs are located above the port.

• A phy contains a transceiver (transmitter + receiver).
• A physical link connects two phys and contains a pair of wires for full duplex
transmission at 12 Gb.
• A SAS port contains four phys.
• A SAS cable contains four physical links or lanes with a total transmission
bandwidth of 48 Gb.

Figure 2-31. Storwize V7000 Gen2 12 Gb SAS interface

The purpose of the 12 Gb SAS card is to attach the Storwize V7000 expansion enclosures to
expand system capacity. The SAS ports of both units are connected by using SAS connectors.
The 12 Gb SAS port is a third-generation SAS interface that offers double the performance rate,
which allows the SAS infrastructure to deliver bandwidth that exceeds that of PCI Express 3.0. The
improved bandwidth, backed by I/O processing capabilities to maximize link utilization, supports
increased scaling of traditional HDDs as well as improved SSD performance.
Each 12 Gb SAS port, as well as the SAS cable, contains four physical (PHY) lanes. Each lane uses
multiple links (as in the 6 Gb SAS technology) for full duplex transmission to transmit and receive
higher data rates, up to 4800 MBps (48 Gb).
Above each port is a green LED that is associated with each PHY (eight LEDs in total). The LEDs
are numbered 1 - 4. The LED indicates activity on the PHY. For example, when traffic starts, it goes
over PHY 1. If the line is saturated, the next PHY starts working. If all four PHY LEDs are flashing,
the backend is fully saturated.
The 12 Gb SAS also provides investment protection with backward compatibility with 3 Gb and
6 Gb SAS.


Storwize V7000 Gen2 drive options


Storwize Family Drive Options

2.5" Small Form Factor (SFF):
  SSD 6Gb SAS      200/400GB
  SSD 12Gb SAS     200/400/800GB
  Flash 12Gb SAS   3.2TB
  10K 6Gb SAS      300/600/900GB, 1.2TB
  10K 12Gb SAS     1.8TB
  15K 6Gb SAS      146/300GB
  15K 12Gb SAS     300/600GB
  7.2K NL-SAS      1TB

3.5" Large Form Factor (LFF):
  7.2K NL-SAS      2TB / 3TB / 4TB
  12Gb NL-SAS      2TB / 6TB / 8TB

No restrictions on mixing of drive types with the same form factor within the same enclosure.
Components available at the time of publication.



Figure 2-32. Storwize V7000 Gen2 drive options

This table lists the available drive options for the Storwize V7000 2076-12F and 24F expansion
enclosures. The drives listed were available at the time of this publication.
Both the Gen2 control enclosure and the expansion enclosures support the same disk drives and
form factors as listed here. Drive options are SAS drives, Near-line SAS drives, and solid state and
flash drives. The Gen2 12 Gb SAS expansion enclosures support twelve 3.5-inch large form factor
(LFF) or twenty-four 2.5-inch small form factor (SFF) drives. All drives are dual ported and hot
swappable. Drives of the same form factor can be intermixed within an enclosure, and LFF and SFF
expansion enclosures can be intermixed behind the SFF control enclosure.
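From the CLI, the lsdrive command reports each drive's technology type, capacity, and location; a
trimmed, illustrative example follows (the actual output includes additional columns):
IBM_Storwize:V7000:superuser>lsdrive
id status use       tech_type        capacity enclosure_id slot_id
0  online member    sas_hdd          278.9GB  1            1
1  online member    sas_ssd          372.1GB  1            2
2  online candidate sas_nearline_hdd 1.8TB    2            1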


Storwize V7000 Gen2 cable options

Feature code Product name


ACUA 0.6 m 12 Gb SAS Cable (mSAS HD)
ACUB 1.5 m 12 Gb SAS Cable (mSAS HD)
ACUC 3 m 12 Gb SAS Cable (mSAS HD)
ACUD 6 m 12 Gb SAS Cable (mSAS HD)
Components available at the time of publication


Figure 2-33. Storwize V7000 Gen2 cable options

This table lists the available cable components for the Storwize V7000 2076-12F and 24F expansion
enclosures. The 2076 controller and expansions are connected using the IBM 0.6 m, 1.5 m, 3 m, or
6 m 12 Gb SAS cables (mSAS HD).
Check the Interoperability Guide for the latest supported options. Cable requirements are discussed
in the installation unit.


Storwize V7000 Gen2 expansion enclosure power supply


• Expansion enclosure PSU:
ƒ 800W power supplies
ƒ Does not contain a battery

• Two fans in each PSU:


ƒ If a PSU fails, fans operate using power from other PSU across the midplane.

Expansion Enclosure PSU

Note: Control and expansion PSUs are not interchangeable


Figure 2-34. Storwize V7000 Gen2 expansion enclosure power supply

The Storwize V7000 models 2076-24F and 2076-12F contain two 800-watt power supply units. The
power supply has no power switch. A power supply is active when a power cord is connected to the
power connector and to a power source. The Storwize V7000 Gen2 integration of a redundant
battery backup system eliminates the need for an external rack-mount uninterruptible power supply
(UPS), optional power switch, and related cabling. All expansion enclosures power on and off
automatically when AC power is applied or interrupted.


Storwize V7000 hardware topics


• Storwize V7000 hardware architecture
ƒ Control Enclosure subsystem

ƒ Expansion enclosure

ƒ Scalability


Figure 2-35. Storwize V7000 hardware topics

This topic discusses the scalability configuration to grow the Storwize V7000 system capacity for
greater performance.


Storwize V7000 2076-524 scale-out implementation


• Up to 20 expansion enclosures (single
controller fills a 42U rack)
ƒ 24 SFF drives or 12 LFF drives in 2U
ƒ Up to 504 drives per controller – more than
2x today's V7000 Gen1
ƒ Maximum configuration supports up to
1056 drives
í All SFF = 44 enclosures, just over 2 racks
í All LFF = 84 enclosures, 4 racks
• Storwize V7000 can be added into
existing clustered systems, including
Gen1 systems.

Diagram: a control enclosure expanding with Storwize V7000 2076-24F expansion enclosures.

Figure 2-36. Storwize V7000 2076-524 scale-out implementation

Storwize V7000 model 2076-524 offers scalable growth and the flexibility to start small and grow as
needed. You can use the Storwize V7000 Gen2 model as a stand-alone system with a choice of
12-bay and 24-bay enclosures. An IBM Storwize V7000 solution can scale up to 480 3.5-inch or up
to 504 2.5-inch serial-attached SCSI (SAS) drives with the attachment of twenty expansion
enclosures (12-bay and 24-bay expansions can be intermixed, and HDDs and SSDs can even be
intermixed within an enclosure). For an even greater capacity of drives, you can scale up to 1,056
drives with a four-way Storwize V7000 clustered system. In a clustered system, the Storwize V7000
can provide up to 8 PiB raw capacity (with 6 TB nearline SAS disks), delivering greater performance,
bandwidth, and scalability.
Storwize V7000 Model 524 systems can be added into existing clustered systems that include
previous generation Storwize V7000 systems.


Storwize V7000 configuration node


• Storwize V7000 nodes are logical nodes.
ƒ One node in the system serves as the configuration node, but any node can
assume the role of configuration node.
ƒ Node canisters are active/active storage controllers, where both node canisters
are processing I/O at any time and any volume can be accessed by either node
canister.

Diagram: In a Storwize V7000 clustered system, Node1 is the configuration node and Node2 is the
standby node. If Node1 fails and goes offline, Node2 assumes the role of configuration node for the
system. Once Node1 is back online, it becomes the standby node of the system.


Figure 2-37. Storwize V7000 configuration node

The IBM Storwize V7000 node canisters in a clustered system operate as a single system and
present a single point of control for system management and service. System management and
error reporting are provided through an Ethernet interface to one of the node canisters in the
system, which is called the configuration node canister.
The configuration node canister is a role that any node canister can take. If the configuration node
fails, the system chooses a new configuration node. This action is called configuration node
failover. The new configuration node takes over the management IP addresses. Thus, you can
access the system through the same IP addresses although the original configuration node has
failed. During the failover, there is a short period when you cannot use the command-line tools or
management GUI.
The Storwize V7000 node canisters are also active/active storage controllers, in that both node
canisters are processing I/O at any time and any volume can be accessed via either node
canister.
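A quick way to see which canister currently holds the configuration role is the lsnodecanister CLI
command, whose concise view includes a config_node column (trimmed, illustrative output):
IBM_Storwize:V7000:superuser>lsnodecanister
id name  status IO_group_id IO_group_name config_node
1  node1 online 0           io_grp0       yes
2  node2 online 0           io_grp0       no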


Storwize V7000 Gen 2 SAS chain layout


• Each control enclosure supports two
expansion chains, and each can connect
up to 10 enclosures.
• Unlike the previous Storwize V7000, the
control enclosure drives are not on either
of these two SAS chains.
ƒ Double-width high-speed link to the control
enclosure; SSDs should be installed in the
control enclosure.
ƒ Same SAS bandwidth dedicated to these
24 slots as there is to the other two chains
combined.
ƒ Control enclosure internal drives are shown
as being on 'port 0' where this matters.
• SSDs can also go in other enclosures if
more than 24 are required for capacity
reasons.
• HDDs can go in the control enclosure if
desired.
ƒ Mix of flash/SSDs and HDDs is supported.

Diagram: The node canister's SAS adapter drives internal SAS links ('SAS port 0', Chain 0) plus
SAS port 1 (Chain 1) and SAS port 2 (Chain 2), with each chain connecting up to 10 expansion
enclosures. Enclosure IDs are assigned dynamically by SAS fabric device discovery.

Figure 2-38. Storwize V7000 Gen 2 SAS chain layout

The Storwize V7000 2076-524 supports two independent SAS chains to connect the V7000 control
enclosure to the expansion enclosures. This provides a symmetrical, balanced distribution of the
expansion enclosures on both SAS chains for performance and availability. The internal disk drives
of the control enclosure belong to SAS chain 0. Each independent SAS chain (SAS port 1 and SAS
port 2) supports a maximum of 10 expansion enclosures.
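The enclosure IDs assigned through SAS discovery, and the I/O group each enclosure serves, can
be displayed with the lsenclosure CLI command (trimmed, illustrative output):
IBM_Storwize:V7000:superuser>lsenclosure
id status type      managed IO_group_id IO_group_name product_MTM drive_slots
1  online control   yes     0           io_grp0       2076-524    24
2  online expansion yes     0           io_grp0       2076-24F    24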


Storwize V7000 enclosure configuration


• Examples of valid maximum configurations:
ƒ Two control enclosures:
í 40x 24F expansions in four chains of 10 ->
1008 SFF drives
ƒ Four control enclosures:
í 40x 24F expansions in four chains of 10 ->
1056 SFF drives
ƒ Four control enclosures:
í 40x 24F expansions in eight chains of 5 ->
1056 SFF drives
ƒ Four empty control enclosures:
í 80x 12F expansions in eight chains of 10 ->
960 LFF drives
ƒ Four full control enclosures:
í 80x 12F expansions in eight chains of 10 ->
960 LFF, 96 SFF = 1056 Total


Figure 2-39. Storwize V7000 enclosure configuration

Listed are examples of the number of Storwize V7000 control enclosures and expansion enclosures
that can be configured in a clustered system.


Clustered system example (1 of 4)


2x IOGs and Max of 40 SFF expansion enclosures

I/O Group 0 I/O Group 1


Control Enclosure SAS Chain 0 Control Enclosure SAS Chain 0

Expansion Enclosure Expansion Enclosure Expansion Enclosure Expansion Enclosure

SAS Chain 1 SAS Chain 2 SAS Chain 1 SAS Chain 2

Supports miniSAS 0.6m, 1.5m, 3m, 6m cables


Figure 2-40. Clustered system example (1 of 4)

This is an example of a dual clustered system. Each control enclosure is chained to twenty 24-drive
SFF expansion enclosures (10 expansion enclosures per SAS chain) for a maximum total of 40
expansion enclosures and 1008 SFF drives.


Clustered system example (2 of 4)


4x IOGs and Max of 40 SFF expansion enclosures

I/O Group 0 I/O Group 1


Control Enclosure SAS Chain 0 Control Enclosure SAS Chain 0

Expansion Enclosure Expansion Enclosure Expansion Enclosure Expansion Enclosure

SAS Chain 1 SAS Chain 2 SAS Chain 1 SAS Chain 2

I/O Group 2 I/O Group 3


Control Enclosure SAS Chain 0 Control Enclosure SAS Chain 0


Figure 2-41. Clustered system example (2 of 4)

In this example of a four-way clustered system, two of the control enclosures are each chained to
twenty 24-drive SFF expansion enclosures (10 expansions per SAS chain) for a maximum total of
40 expansion enclosures and 1056 SFF drives.


Clustered system example (3 of 4)


4x IOGs and Max of 40 SFF expansion enclosures
I/O Group 0 I/O Group 1
Control Enclosure SAS Chain 0 Control Enclosure SAS Chain 0

Expansion Enclosure Expansion Enclosure Expansion Enclosure Expansion Enclosure

SAS Chain 1 SAS Chain 2 SAS Chain 1 SAS Chain 2

I/O Group 2 I/O Group 3


Control Enclosure SAS Chain 0 Control Enclosure SAS Chain 0

Expansion Enclosure Expansion Enclosure Expansion Enclosure Expansion Enclosure

SAS Chain 1 SAS Chain 2 SAS Chain 1 SAS Chain 2


Figure 2-42. Clustered system example (3 of 4)

In this example of a four-way clustered system, each control enclosure is chained to ten 24-drive
SFF expansion enclosures (5 expansions per SAS chain) for a maximum total of 40 expansion
enclosures and 1056 SFF drives.


Clustered system example (4 of 4)


4x IOGs and Max of 80 LFF expansion enclosures
I/O Group 0 I/O Group 1
Control Enclosure SAS Chain 0 Control Enclosure SAS Chain 0

Expansion Enclosure Expansion Enclosure Expansion Enclosure Expansion Enclosure

SAS Chain 1 SAS Chain 2 SAS Chain 1 SAS Chain 2

I/O Group 2 I/O Group 3


Control Enclosure SAS Chain 0 Control Enclosure SAS Chain 0

Expansion Enclosure Expansion Enclosure Expansion Enclosure Expansion Enclosure

SAS Chain 1 SAS Chain 2 SAS Chain 1 SAS Chain 2


Figure 2-43. Clustered system example (4 of 4)

In the final example of a four-way clustered system, each control enclosure is chained to twenty
12-drive LFF expansion enclosures (10 expansions per SAS chain) for a maximum total of 80
expansion enclosures and 960 LFF drives.


Storwize V7000 Gen2 migration and investment protection


• Can mix Storwize V7000 Gen2 and existing Storwize V7000 systems in a cluster
• Provides complete protection for existing Storwize V7000 investments
ƒ All existing Gen1 and Gen2 systems can be accessed by any host
• Migration from existing system with no downtime at all
ƒ For systems that support non-disruptive volume move (NDVM)
ƒ No competitive system can make a similar claim
• Storwize V7000 Gen2 can virtualize existing Storwize V7000 Gen1 systems
ƒ Provides conventional Storwize family migration using standard Image mode
virtualization capability

Diagram: A V7000 Gen2 system can cluster with a V7000 Gen1 system, virtualize a V7000 Gen1
system, or replicate to a V7000 Gen1 system.


Figure 2-44. Storwize V7000 Gen2 migration and investment protection

IBM Storwize V7000 is a virtualized, software-defined storage system designed to consolidate
workloads into a single storage system for simplicity of management, reduced cost, highly scalable
capacity, and high performance and availability. Storwize V7000 Model 524 systems can be added
into existing clustered systems that include previous generation Storwize V7000 systems.


IBM System Storage Storwize support


Host platforms: IBM z/VSE, IBM i, IBM AIX, Microsoft Windows, VMware, Sun Solaris, HP-UX,
Apple Mac OS, Linux, and more
Network attachment: SAN, FCoE, iSCSI (IBM switches, Brocade, Cisco, and more)
IBM System Storage Storwize: Thin Provisioning, Easy Tiering, SSD, Mirroring, FlashCopy,
internal HDDs, and more
Virtualized storage systems: DELL, IBM DS, IBM XIV, IBM Storwize, Hitachi, HP, EMC, NetApp,
NEC, Bull, and many more

Figure 2-45. IBM System Storage Storwize support

Storwize provides the same interoperability as the SAN Volume Controller, since both are based on
the same software.
It is recommended to check the current interoperability matrix, since it changes continuously.
For more information, see:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004946


Storwize V7000: Gen1 versus Gen2

Attribute (per node)    Gen1                        Gen2
CPU                     8 cores                     16 cores
Controller memory       16GB                        64 GB to 128 GB
Host I/O                4x 1 GbE                    6x 1 GbE
                        8x 8 Gb FC                  8x to 16x - 8 Gb FC
                        4x 10 GbE (some models)     8x to 16x - 16 Gb FC
                                                    4x to 8x - 10 GbE
                                                    (Six I/O cards max)
Compression resources   On-board, 6 cores           Comp Accel Engine
                                                    Optional 2nd Comp Accel Engine
Drive expansion         Up to 9 expansions          Up to 20 expansions
                        (240 drives per controller) (504 drives per controller)
SAS fabric              6 Gb SAS                    12 Gb SAS
Battery                 1U uninterrupted power      Integrated dual battery
                        supply                      backup units


Figure 2-46. Storwize V7000: Gen1 versus Gen2

The following is a comparison of the IBM Storwize V7000 Gen2 model to the Storwize V7000 Gen1
model, highlighting the functional differences:
• Dual CPU supporting the 8-core Ivy Bridge processor (up to 16 cores)
• Cache re-architecture, up to 128 GB cache, better performance
• Up to 16 FC I/O ports using six PCIe cards (4x 16 Gb FC or 8 Gb FC)
• Up to two Compression Accelerator cards supporting 512 compressed volumes
• Up to 1056 drives using 2076-12F/24F expansion enclosures (up to twenty per controller)
• Supports 12 Gb HD Mini SAS versus 6 Gb SAS connectors
• Integrated battery pack that resides inside the Storwize V7000 Gen2 control node versus inside
the Gen1 power supply


Hardware compatibility within the Storwize family


• Expansion enclosures
ƒ The V7000 Gen2 expansion enclosures can only be used with a V7000 Gen2
control enclosure.
ƒ The V7000 Gen1 expansion enclosures can only be used with a V7000 Gen1
control enclosure.
ƒ The V3x00/V5000 and SVC-DH8 expansion enclosures cannot be used with a
V7000 Gen2 control enclosure and drives cannot be swapped between models
either.
• Control enclosures
ƒ V7000 Gen2 control enclosures can cluster with V7000 Gen1 control enclosures.
ƒ Allows for non-disruptive migration from Gen1 to Gen2 or long-term system
growth.
ƒ No clustering between V7000 Gen2 and V3x00 and V5000.
• Remote Copy
ƒ No remote-copy restrictions as we can replicate amongst any of the
SVC/Storwize models.
• Virtualization
ƒ Fibre-channel and FCoE external storage virtualization with appropriate HBAs.
ƒ No SAS host support or SAS storage support with 2076-524.
• File modules
ƒ V7000 Unified will support V7000 Gen2 control enclosures with IFS 1.5 or later.

Figure 2-47. Hardware compatibility within the Storwize family

Listed are a few hardware compatibility guidelines to consider when integrating the Storwize V7000
Gen2 model with existing Storwize family systems.


Spectrum Virtualize licensing


License per enclosure (Controller, Expansion, External)

Option 1 - FLEXIBLE OPTIONS: Base software, plus individually selectable advanced functions
(Easy Tier, FlashCopy, Remote Mirror, Compression*)
Option 2 - FULL BUNDLE: Base software, plus all advanced functions
(Easy Tier, FlashCopy, Remote Mirror, Compression*)

Basic software:
• 5639-CB7 (Controller-Based software)
• 5639-XB7 (Expansion-Based software)
• 5639-EB7 (External virtualization)

* Storwize V7000 only

Figure 2-48. Spectrum Virtualize licensing

IBM has simplified the license structure for the IBM Storwize V7000 Gen2, which includes new
features. IBM Storwize V7000 offers two ways of license procurement: fully flexible and bundled
(license packages). The license model is based on the license-per-enclosure concept known from
the first generation of IBM Storwize V7000; however, the second generation offers more flexibility
to exactly match your needs.
The base module is represented by IBM Spectrum Virtualize software and is mandatory for every
controller, enclosure, or externally managed controller unit. For advanced functions, there is a
choice. The full bundle entitles the user to all advanced functions available on the system, and costs
less than the sum of those licenses. This full bundle is the default pre-selection, as the majority of
customers are expected to select it for the value for money it offers.
Almost all customers are expected to use Easy Tier and FlashCopy, and with the new assurance
and performance of Real-time Compression, compression is expected to be sold in all but the most
exceptional situations.
Additional licensed features can be purchased on demand, either as a full software bundle or each
feature separately.
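As a sketch of how licensing is administered from the CLI (the enclosure count shown is a
placeholder), the current settings are displayed with lslicense, and a licensed function such as
remote mirroring is enabled for a number of enclosures with chlicense:
IBM_Storwize:V7000:superuser>lslicense
IBM_Storwize:V7000:superuser>chlicense -remote 2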


Benefits of Spectrum Virtualize Storwize V7000


• Centralized point of control for storage provisioning
• Improve capacity utilization
• Disaster recovery
• Data migration
• Facilitates common platform for data replication
• Application testing
• Increases operational flexibility and administrator productivity
• High availability
• Resource sharing between heterogeneous servers

Freedom of Choice + Investment Protection



Figure 2-49. Benefits of Spectrum Virtualize Storwize V7000

Listed are the benefits of block-level virtualization that is provided by the Storwize V7000:
1. Central point of control: All advanced functions are implemented in the virtualization layer.
Therefore, it enables rapid, flexible provisioning and simple configuration changes thus
increasing storage administrator productivity.
2. Improve capacity utilization: By pooling storage, storage administrators can improve capacity
utilization rates.
3. Disaster recovery: Enables environments to replicate asymmetrically at the DR site.
4. Data migration: Enables non-disruptive movement of virtualized data among tiers of storage,
including Easy Tier.
5. Facilitates common platform for data replication: Improve network utilization for remote
mirroring with innovative replication technology.
6. Application testing: Instead of testing an application against actual production data, use
virtualization to create a replicated data set to safely test with an application.
7. Increases operational flexibility and administrator productivity: Increase application
throughput performance for most critical activities by migrating data from HDD to Flash (SSDs).
8. High availability: By separating an application's storage from the application, virtualization
insulates an application from an application's server failure.
9. Resource sharing between heterogeneous servers: Virtualization helps to ensure servers
running different operating systems can safely coexist on the same SAN.


Keywords
• IBM Storwize V7000 2076-524 Control Enclosure
• IBM Storwize V7000 2076-12F Expansion Enclosure
• IBM Storwize V7000 2076-24F Expansion Enclosure
• IBM Spectrum Virtualize
• Fibre Channel (FC)
• Fibre Channel over Ethernet (FCoE)
• Technician port
• Firehose Dump (FHD)
• Boot drives
• Battery modules
• Serial Attached SCSI (SAS)
• Compression Accelerator card
• Intel QuickAssist technology
• Real-time Compression (RtC)
• Real-time Compression Acceleration card


Figure 2-50. Keywords

Listed are keywords that were used in this unit.


Review questions (1 of 2)
1. Which of the following components are supported with the
Storwize V7000 Gen2 control enclosure?
a. Two processors with 64 GB of memory
b. Dual batteries backup
c. Up to six PCIe I/O slots
d. Up to two Compression Accelerator cards
e. All of the above

2. How many Storwize V7000 nodes can be supported in a
Storwize V7000 clustered system?
a. Two
b. Four
c. Six
d. Eight


Figure 2-51. Review questions (1 of 2)

Write your answers here:


1.
2.


Review answers (1 of 2)
1. Which of the following components are supported with the
Storwize V7000 Gen2 control enclosure?
a. Two processors with 64 GB of memory
b. Dual batteries backup
c. Up to six PCIe I/O slots
d. Up to two Compression Accelerator cards
e. All of the above
The answer is all of the above.

2. How many Storwize V7000 nodes can be supported in a
Storwize V7000 clustered system?
a. Two
b. Four
c. Six
d. Eight
The answer is eight. Up to four I/O groups with two nodes per I/O
group.



Review questions (2 of 2)
3. True or False: Storwize V7000 Gen2 node I/O slots 2 and 3
are used for both FC and iSCSI/FCoE connections.

4. True or False: To support iSCSI access, Fibre Channel
HICs are required to connect the iSCSI host to a Fibre
Channel SAN.

5. True or False: Different models of Storwize V7000 Gen2
node hardware can exist in the same Storwize V7000
clustered system.


Figure 2-52. Review questions (2 of 2)

Write your answers here:


3.
4.
5.


Review answers (2 of 2)
3. True or False: Storwize V7000 Gen2 node I/O slots 2 and 3
are used for both FC and iSCSI/FCoE connections.
The answer is true.

4. True or False: To support iSCSI access, Fibre Channel HICs
are required to connect the iSCSI host to a Fibre Channel
SAN.
The answer is false. iSCSI host access can be supported when a
10 GbE adapter is installed.

5. True or False: Different models of Storwize V7000 Gen2 node hardware can exist in the same Storwize V7000 clustered system.
The answer is true.



Unit summary
• Identify component features of the IBM Storwize V7000 2076-524
control enclosure model
• Distinguish between the IBM Storwize V7000 2076-12F and 2076-24F expansion enclosures
• Characterize IBM Storwize V7000 Gen2 scalability requirements to
incrementally increase storage capacity and performance


Figure 2-53. Unit summary


Unit 3. Storwize V7000 planning and zoning requirements
Estimated time
00:45

Overview
This unit examines the physical planning guidelines for installing and configuring a Storwize V7000 environment. This unit also provides best practices on how to logically configure the Storwize V7000 system management IP addresses and network connections, implement zoning fabrics, and set up host and storage attachments.

How you will check your progress


• Checkpoint questions
• Machine exercises

References
IBM Storwize V7000 Implementation Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html


Unit objectives
• Determine planning and implementation requirements that are
associated with the Storwize V7000
• Implement the physical hardware and cable requirements for the
Storwize V7000 Gen2
• Implement the logical configuration of IP addresses, network
connections, zoning fabrics, and storage attachment to the Storwize
V7000 Gen2 nodes
• Integrate the Storwize V7000 Gen2 into an existing SVC environment
• Verify zoned ports between a host and the Storwize V7000, and between the Storwize V7000 and a storage system


Figure 3-1. Unit objectives


Storwize V7000 physical planning topics


• Storwize V7000 physical planning
  - Rack hardware configuration
  - Cabling connection requirements

• Storwize V7000 logical planning


Figure 3-2. Storwize V7000 physical planning topics

This topic provides installation guidance for physically installing and cabling the IBM Storwize
V7000 control enclosures and Storwize V7000 storage enclosures in a rack environment.


Planning and implementation check list

✓ Physical planning
  ✓ Rack hardware configuration
  ✓ Cabling connection requirements

✓ Logical planning
  ✓ Management IP addressing plan
  ✓ iSCSI IP addressing plan
  ✓ SAN zoning and SAN connections
  ✓ Backend storage subsystem configuration
  ✓ System configuration
  ✓ Initial cluster system configuration


Figure 3-3. Planning and implementation check list

IBM Storwize V7000 planning can be categorized into two types: physical planning and logical
planning. To achieve the most benefit from the Storwize V7000, we recommend using a
pre-installation planning check list. The visual lists several important planning steps. These steps
ensure that the V7000 provides the best possible performance, reliability, and ease of management
for application needs. Proper configuration also helps minimize downtime by avoiding changes to
the V7000 and the storage area network (SAN) environment to meet future growth needs.
Logical planning is done in two parts. A portion of this topic focuses only on the logical planning of the management IP configuration, host connections, SAN, backend storage, and the initial configuration of the clustered system. Storage pools, volume creation, host mapping, advanced copy services, and data migration are covered in other units.
Before configuring the IBM Storwize V7000 environment, ensure that you have all required licenses, that IP addresses are established, and that the fabrics are zoned properly.


Storwize V7000 node cable requirements

(Diagram: node canisters 1 and 2 of a control enclosure cabled by SAS to expansion enclosure 1, up to 10 rack units below, and to expansion enclosure 2, up to 10 rack units above.)

Both ends of a SAS cable must be inserted correctly (green link LEDs next to the connected SAS ports are lit).

Figure 3-4. Storwize V7000 node cable requirements

This visual shows the rear view of a Storwize V7000 Gen2 2076-524 controller with the two PCIe
adapter slots identified and configured with six Storwize V7000 expansion enclosures.
To install the cables, complete the following steps.
1. Using the supplied SAS cables, connect the control enclosure to the expansion enclosure 1.
a. Connect SAS port 1 of the left node canister in the control enclosure to SAS port 1 of the left
expansion canister in the first expansion enclosure.
b. Connect SAS port 1 of the right node canister in the control enclosure to SAS port 1 of the
right expansion canister in the first expansion enclosure.
2. To add a second expansion enclosure to the control enclosure, use the supplied SAS cables to
connect the control enclosure to the expansion enclosure at rack position 2.
a. Connect SAS port 2 of the left node canister in the control enclosure to SAS port 1 of the left
expansion canister in the second expansion enclosure.
b. Connect SAS port 2 of the right node canister in the control enclosure to SAS port 1 of the
right expansion canister in the second expansion enclosure.
3. If additional expansion enclosures are installed, connect each one to the previous expansion
enclosure in a chain, using two Mini SAS HD to Mini SAS HD cables.

4. If additional control enclosures are installed, repeat this cabling procedure on each control
enclosure and its expansion enclosures.
When using the optional 2076-12F/24F expansion enclosures as part of your Storwize V7000 Gen2 cluster implementation, the distance that you can separate the 2076-524 nodes in the I/O group from their shared 2076-12F/24F enclosure is limited by the maximum length of the 6-meter serial-attached SCSI (miniSAS) cable used to attach the enclosure to the Storwize V7000 Gen2 units.
Ensure that cables are installed in an orderly way to reduce the risk of cable damage when
replaceable units are removed or inserted. The cables need to be arranged to provide clear access
to Ethernet ports, including the technician port.
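After the cables are in place, a quick way to confirm that every enclosure is detected and online is the system CLI. The following is a minimal sketch, assuming CLI access over SSH; the IDs and abbreviated output columns are illustrative only:

    IBM_Storwize:cluster:superuser> lsenclosure
    id  status  type       managed  IO_group_name  product_MTM
    1   online  control    yes      io_grp0        2076-524
    2   online  expansion  yes      io_grp0        2076-24F

Any enclosure that does not appear, or that reports a degraded status, commonly points back to a SAS cable that is not fully seated.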


Storwize V7000 upstream high availability


• Power cord requirement
  - Split power supplies between two independent power sources for greater upstream high availability.
  - One power cord from each rack component plugs into the left-side internal PDU and the other into the right-side internal PDU.
    - PDUs are fed by separate power sources.


Figure 3-5. Storwize V7000 upstream high availability

The Storwize V7000 control enclosure and the Storwize V7000 expansion enclosure can be
installed in almost any of the IBM rack offerings which are industry-standard 19-inch server
cabinets that are designed for high availability environments.
When cabling power, connect one power cable from each Storwize V7000 node to the left-side internal PDU and the other power supply cable to the right-side internal PDU. PDUs are fed by separate power sources. This splits the power supplies between two independent power sources for greater upstream high availability. When adding more V7000s to the solution, continue the same power cabling scheme for each additional enclosure.
Upstream redundancy of the power to your cabinet (power circuit panels and on-floor Power
Distribution Units (PDUs)) and within cabinet power redundancy (dual power strips or in-cabinet
PDUs) and also upstream high availability structures (uninterruptible power supply (UPS),
generators, and so on) influences your power cabling decisions.


Storwize V7000 Gen2 cable options

Feature code Product name

ACUA 0.6 m 12 Gb SAS cable (mSAS HD to mSAS HD)

ACUB 1.5 m 12 Gb SAS Cable (mSAS HD to mSAS HD)

ACUC 3 m 12 Gb SAS Cable (mSAS HD to mSAS HD)

ACUD 6 m 12 Gb SAS Cable (mSAS HD to mSAS HD)

*Components available at the time of publication

(Image: Mini SAS connector)

Figure 3-6. Storwize V7000 Gen2 cable options

The visual lists the available cable components for the Storwize V7000 2076-12F/24F expansion enclosures. The cables listed were available at the time of this publication. Both the Storwize V7000 2076 control enclosure and the expansion enclosures are connected using 12 Gb SAS cables (mSAS HD to mSAS HD). Check the Interoperability Guide for the latest supported options. Cable requirements are discussed in the installation unit.
The following SAS cables can be ordered with the Storwize V7000 expansion enclosure:
• 0.6 m 12 Gb SAS cable (mSAS HD to mSAS HD)
• 1.5 m 12 Gb SAS cable (mSAS HD to mSAS HD)
• 3 m 12 Gb SAS cable (mSAS HD to mSAS HD)
• 6 m 12 Gb SAS cable (mSAS HD to mSAS HD)


Cable management arm


• Reduce the complexity of cabling the Storwize V7000 rack environment



Figure 3-7. Cable management arm

The cable-management arm is an optional feature and is used to efficiently route cables so that you
have proper access to the rear of the system. Cables are routed through the arm channel and
secured with cable ties or hook-and-loop fasteners. Allow slack in the cables to avoid strain as the cable-management arm moves.


Storwize V7000 physical planning topics


• Storwize V7000 physical planning

• Storwize V7000 logical planning
  - Management IP requirements
  - SAN zoning and network requirements


Figure 3-8. Storwize V7000 physical planning topics

This topic discusses the Storwize V7000 management IP address requirements.


Dual fabric for high availability


• The following components are required to maintain performance and high availability:
  - Two SAN fabric switches (Fabric 1/Switch 1 and Fabric 2/Switch 2)
    - Four FC switch ports to maintain a dual fabric, with each Storwize V7000 node's adapter ports spread evenly across both fabrics
  - One or two (recommended) Ethernet switches
    - All V7000 control enclosures in the clustered system must be on the same LAN.
    - Eth port 1 must be on one LAN segment and Eth port 2 must be on the second LAN segment.
    - Each V7000 node can assume the clustered system management IP address.

(Diagram: nodes 1 and 2 attached to both fabrics and to the IP network. Storwize V7000 control nodes are deployed in pairs (I/O Groups).)

Figure 3-9. Dual fabric for high availability

To ensure proper performance and to maintain application high availability in the unlikely event of
an individual node canister failure, Storwize V7000 node canisters are deployed in pairs (I/O
Groups).
The visual illustrates network port connections of the Storwize V7000 node canister to dual SAN
fabrics and the local area network.
All V7000 control enclosures in a clustered system should be on dual SAN fabrics, with each Storwize V7000 node's adapter ports spread evenly across both fabrics. All V7000 control enclosures must also be on the same local area network (LAN) segment, which allows any node in the clustered system to assume the clustered system management IP address. For a dual LAN segment, port 1 of every node is connected to the first LAN segment, and port 2 of every node is connected to the second LAN segment. Therefore, if a node fails or is removed from the configuration, the remaining node operates in a degraded mode, but the configuration is still valid for the I/O Group.


Storwize V7000 management IP addresses


• At least three IPv4 or IPv6 addresses are required.
  - Ethernet port 1: One cluster management IP address
    - Owned by the configuration node
  - Ethernet port 2: Alternate cluster management IP (optional)
  - Service IP address for each node canister
    - Highly recommended
  - Supports both IPv4 and IPv6 address formats

(Diagram: Ethernet ports 1 and 2 (E1, E2) are used for 1 Gb system management; the 1 Gb technician (T) port is used for system initialization.)


Figure 3-10. Storwize V7000 management IP addresses

At least three management IP addresses are required to manage the Storwize V7000 storage
system through either a graphical user interface (GUI), command-line interface (CLI) accessed
using a Secure Shell connection (SSH), or using an embedded CIMOM that supports the Storage
Management Initiative Specification (SMI-S). The system IP address is also used to access remote
services like authentication servers, NTP, SNMP, SMTP, and syslog systems, if configured. Each node canister contains a default management IP address that can be changed to allow the device to be managed on a different address than the IP address assigned to the interface that is used for data traffic.
The Storwize V7000 system requires the following IP addresses:
• Cluster management IP address: Address used for all normal configuration and service access
to the cluster. There are two management IP ports on each control enclosure. Port 1 is required
to be configured as the port for cluster management. Both Internet Protocol Version 4 (IPv4)
and Internet Protocol Version 6 (IPv6) are supported.
• Service assistant IP address: One address per control enclosure. The cluster operates without
these control enclosure service IP addresses but it is highly recommended that each control
enclosure is assigned an IP address for service-related actions.
• A 10/100/1000 Mb Ethernet connection is required for each port.

For increased redundancy to the system management interface, connect Ethernet port 2 of each
node canister in the system to a second IP network. The second IP port of the control enclosure
can also be configured and used as an alternate address to manage the cluster.
Ports 1, 2, and 3 on the rear of each node canister can also provide iSCSI connectivity.
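As an illustration of assigning these addresses from the CLI, the following sketch uses hypothetical addresses on a 10.10.1.0/24 network; the node name is an example:

    chsystemip -clusterip 10.10.1.100 -gw 10.10.1.1 -mask 255.255.255.0 -port 1
    chserviceip -node node1 -ip 10.10.1.10 -gw 10.10.1.1 -mask 255.255.255.0

The first command sets the cluster management IP on Ethernet port 1; the second assigns the highly recommended service IP to an individual node canister.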


Example of an IPv4 management and iSCSI shared subnet

(Diagram: a redundant network with four nodes. Each node has two iSCSI IP addresses, one per Ethernet port: Ethernet port 1 addresses on the 10.10.1.x subnet (gateway 10.10.1.1) and Ethernet port 2 addresses on the 10.10.2.x subnet (gateway 10.10.2.1). The configuration node also carries the management IP addresses 10.10.1.100 and 10.10.2.100. iSCSI initiator hosts, an admin workstation, and services such as email and iSNS reach the system through the same IP network; a Fibre Channel host connects through the FC network.)

Figure 3-11. Example of an IPv4 management and iSCSI shared subnet

The visual illustrates the configuration of a redundant network for Storwize V7000 IPv4
management and iSCSI addresses that shares the same subnet. This same setup can be
configured by using the equivalent configuration with only IPv6 addresses.
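The node iSCSI addresses shown in a layout like this one can be assigned per node and per Ethernet port with cfgportip. A brief sketch reusing the hypothetical addresses from the visual; the trailing argument is the Ethernet port ID:

    cfgportip -node node1 -ip 10.10.1.10 -gw 10.10.1.1 -mask 255.255.255.0 1
    cfgportip -node node1 -ip 10.10.2.10 -gw 10.10.2.1 -mask 255.255.255.0 2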


Service Assistant IP interface


• Service Assistant is a browser-based GUI.
• Access using the Service IP address or an SSH session.
  - Requires the default superuser / passw0rd credentials
• Perform initial configuration of the system.
• Troubleshoot service-related issues.
http://<node service IP>/service


Figure 3-12. Service Assistant IP interface

IBM Storwize V7000 Service Assistant (SA) is a browser-based GUI designed to assist with service issues. Administrators can access a node's interface by using its Ethernet port 1 service IP address, through either a web browser or an SSH session. During the system initialization, the system automatically redirects you to the initial configuration of the system through a web browser.
The SA provides a default user ID (superuser) and password (passw0rd with a zero “0” instead of
the letter “o”). Only those with a superuser ID can access the Service Assistant interface. This ID
can be changed if required.
Administrators can use the Service Assistant IP address to access the SA GUI and perform
recovery tasks and other service-related issues.
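From an SSH session as superuser, the service CLI exposes similar information to the browser-based interface. A short sketch; the output is omitted:

    sainfo lsservicenodes      (lists the nodes visible to the Service Assistant and their states)
    sainfo lsservicestatus     (shows detailed service state for the current node)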


Management GUI access


• Must have a supported web browser to access the management GUI.
• Enable cookies in your web browser.
• Enable scripts to disable or replace context menus (Mozilla Firefox
only).


Figure 3-13. Management GUI access

To access the IBM Storwize V7000 management GUI, direct a web browser to the system management IP address after the system initialization of the IBM Storwize V7000. To ensure that you have the latest supported web browser and that the appropriate settings are enabled, visit the IBM Storwize V7000 Knowledge Center.


System communication and management


V7000 system time can be obtained from an NTP server.
System state data is shared across up to four I/O groups.

(Diagram: I/O Group0 through I/O Group3, each holding a copy of the cluster state, across nodes 1 through 8.)

Configuration Node
• Owns the cluster IP address (up to two addresses)
• Provides the configuration interface to the cluster

Boss Node
• Controls cluster state updates
• Propagates cluster state data to all nodes

Figure 3-14. System communication and management

An IBM Storwize V7000 system can contain up to four I/O groups, which is a four Storwize V7000 system configuration. When the initial node is used to create a cluster, it automatically becomes the configuration node for the Storwize V7000 system. The configuration node responds to the system IP address and provides the configuration interface to the cluster. All configuration management and services are performed at the system level. If the configuration node fails, another node is automatically chosen to be the configuration node, and this node takes over the cluster IP address. Thus, configuration access to the cluster remains unchanged.
The system state data holds all configuration and internal system data for the eight-node V7000 system. This system state information is held in non-volatile memory of each node. If the main power supply fails, the battery modules maintain power long enough for the cluster state information to be stored on the internal disk of each control enclosure. The read/write cache information is also held in non-volatile memory. If power to a node fails, the cached data is written to the internal disk.
A control enclosure in the cluster serves as the Boss node. The Boss node ensures
synchronization and controls the updating of the cluster state. When a request is made in a node
that results in a change being made to the cluster state data, that node notifies the boss node of the
change. The boss node then forwards the change to all nodes (including the requesting node) and
all the nodes make the state-change at the same point in time. This ensures that all nodes in the cluster have the same cluster state data. The Storwize V7000 cluster time can be obtained from an NTP (Network Time Protocol) server for time synchronization.
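Both the configuration node and the NTP setting can be inspected and changed from the CLI. A minimal sketch, assuming a hypothetical NTP server address:

    lsnode                     (the config_node column reads "yes" on the current configuration node)
    chsystem -ntpip 10.10.1.5  (points the clustered system at an NTP server)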


Storwize V7000 physical planning topics


• Storwize V7000 physical planning

• Storwize V7000 logical planning
  - Management IP requirements
  - SAN zoning and network requirements


Figure 3-15. Storwize V7000 physical planning topics

This topic discusses the SAN zoning and network requirements for the Storwize V7000 system
environment.


SAN zoning best practices


(Diagram: a dual-fabric SAN (Fabric 1 and Fabric 2) connecting two Storwize V7000 systems, hosts, a remote FlashSystem, and external storage (DS8800, DS3500). Four zone types are shown: host zones (hosts see only the volumes), remote copy zones, V7000 intra-cluster open zoning, and storage zones for external and local storage.)

Figure 3-16. SAN zoning best practices

SAN zoning configuration is implemented at the switch level. To meet business requirements for
high availability it is recommend to build a dual fabric network that uses two independent fabrics or
SANs (up to four fabrics are supported).
The switches can be configured into four distinct types of fabric zones:
• Storwize V7000 intracluster open zoning: This requires using internal switches to create up to
two zones per fabric and include a single port per node, which is designated for intra-cluster
traffic. No more than four ports per node should be allocated to intra-cluster traffic.
• Host zones: A host zone consists of the V7000 control enclosure and the host.
• Storage zones: A single storage system zone that consists of all the storage systems that are virtualized by the Storwize V7000 controller.
• Remote copy zones: An optional zone to support Copy Services features for Metro Mirroring
and Global Mirroring operations if the feature is licensed. This zone contains half of the system
ports of the system clusters in partnerships.
The SAN fabric zones allow the Storwize V7000 nodes to see each other and all of the storage that
is attached to the fabrics, and for all hosts to see only the Storwize V7000 controllers. The host
systems should not directly see or operate LUNs on the storage systems that are assigned to the
Storwize V7000 systems.
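As an illustration of how these zones might be defined on a Brocade-style fabric, the following sketch creates a single host zone from WWPN aliases; all alias names, the zone name, the configuration name, and the WWPN values are hypothetical:

    alicreate "V7K_N1P1", "50:05:07:68:0b:11:xx:xx"
    alicreate "HOST1_P1", "10:00:00:00:c9:2f:65:d6"
    zonecreate "Z_HOST1_V7K", "HOST1_P1; V7K_N1P1"
    cfgadd "FABRIC1_CFG", "Z_HOST1_V7K"
    cfgenable "FABRIC1_CFG"

The same pattern repeats on the second fabric, keeping each host's paths spread across both.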


Restrict access with zoning


• Host servers and the storage systems should not be configured in the same zone (except when split system configurations are in use).
  - Hosts see only the volumes presented.
  - A single host should not have more than eight paths to an I/O group.
  - For best practices, ensure that each host has at least two network adapters, with each on a separate network.

(Diagram: host zones 1 and 2 and a storage system zone within the system zone.)

Figure 3-17. Restrict access with zoning

Typically, the front-end host HBAs and the back-end storage systems are not in the same zone. The exception is where a split host and split storage system configuration is in use. You need to create a host zone for every host server that needs access to storage from the V7000 controller. A single host should not have more than eight paths to an I/O group.
Follow basic zoning recommendations to ensure that each host has at least two network adapters,
that each adapter is on a separate network (or at minimum in a separate zone), and is connected to
both canisters. This setup assures four paths for failover and failback purposes.
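The resulting paths can be verified from the system CLI. A sketch, assuming a hypothetical host object named host1 has already been defined; each output row represents one login between a node port and a host port:

    lsfabric -host host1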


System switch port connection


• Supports 2 Gb, 4 Gb, 8 Gb, or 16 Gb FC fabric speeds
• Recommend connecting the Storwize V7000 controller and the
disk subsystem to the switch operating at the highest speed

(Diagram: eight switch ports per fabric connect four V7000 nodes through fiber cables (LC) with 8 Gb/16 Gb FC SFP transceivers; the example shows one 4-port 8 Gb FC HIC per V7000 node, attached to an FC switch in Fabric 1 and another in Fabric 2.)

Figure 3-18. System switch port connection

The visual illustrates an example of a switch port connection. A dual fabric environment must be identical in concept from one fabric to the other. The eight ports on the switch are used to connect to a four-node Storwize V7000 system. Identical switch port numbers are used for the second fabric of the dual fabric SAN configuration. You can alternate the port attachments between the two fabrics.
The Storwize V7000 base configuration ships with no host adapters installed. Storwize V7000 Gen2 supports 2 gigabits per second (Gbps), 4 Gbps, 8 Gbps, or 16 Gbps FC fabric connections, depending on the hardware platform and on the switch where the Storwize V7000 Gen2 is connected. In an environment where you have a fabric with multiple-speed switches, the preferred practice is to connect the Storwize V7000 Gen2 and the disk subsystem to the switch operating at the highest speed. This SFP transceiver provides an auto-negotiating 2, 4, 8, or 16 Gb shortwave optical connection on the 2-port Fibre Channel adapter.
A shortwave small form-factor pluggable (SFP) transceiver is required for all FC adapters and must be of the same speed as the adapter to be installed. For example, if a 2-port 16 Gb HIC is installed, then a 16 Gb SFP transceiver must be installed.
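On a Brocade-style switch, the negotiated speed of each port can be confirmed before zoning begins. A hedged sketch; the exact output format varies by switch firmware:

    switchshow

Each online port reports its negotiated speed (for example, N8 for an 8 Gbps login), which makes speed mismatches between SFP transceivers and adapters easy to spot.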


Host communication
• No adapters are shipped with the Storwize V7000 control enclosures.
• The optional features of Storwize V7000 that can be configured for host attachment include:
  - 16 Gb FC four-port adapter pair for 16 Gb FC connectivity
  - 16 Gb FC two-port adapter pair for 16 Gb FC connectivity
  - 8 Gb FC adapter pair (four ports each) for 8 Gb FC connectivity
  - 10 Gb Ethernet adapter pair for 10 Gb iSCSI/FCoE connectivity
    - Requires extra IPv4 or IPv6 addresses for each 10 GbE port used on each node canister

(Diagram: rear view identifying the host interface card (HIC) installation locations in slots 2 and 3, alongside Ethernet ports E1, E2, and E3.)

Figure 3-19. Host communication

The node ports on each Storwize V7000 system must communicate with each other for the
partnership creation to be performed. Switch zoning is critical to facilitating intercluster
communication. One port for each node per I/O Group per fabric that is associated with the host is
the recommended zoning configuration for fabrics. IBM Storwize V7000 control enclosures are
shipped without any host I/O adapters. The following optional features are available for host attachment:
• 16 Gb FC four port adapter pair for 16 Gb FC connectivity (two cards each with four 16 Gb FC
ports and shortwave SFP transceivers)
• 16 Gb FC two port adapter pair for 16 Gb FC connectivity (two cards each with two 16 Gb FC
ports and shortwave SFP transceivers)
• 8 Gb FC adapter pair for 8 Gb FC connectivity (two cards each with four 8 Gb FC ports and
shortwave SFP transceivers)
• 10 Gb Ethernet adapter pair for 10 Gb iSCSI/FCoE connectivity (two cards each with four 10 Gb
Ethernet ports and SFP+ transceivers)
▪ This type of configuration uses fiber-optic cables to connect to your 10 Gbps Ethernet or
FCoE SAN. Connect each 10 Gbps port to the network that will provide connectivity to that
port. It would also require extra IPv4 or extra IPv6 addresses for each 10 GbE port used on

each node canister. These IP addresses are independent of the system configuration IP
addresses which allows the IP-based hosts to access Storwize V7000 managed Fibre
Channel SAN-attached disk storage.
Hosts can be connected to the Storwize V7000 Fibre Channel ports directly or through a SAN
fabric. To provide redundant connectivity, connect both node canisters in a control enclosure to the
same networks.
• Ports 1, 2, and 3 of each node canister can also be used to provide iSCSI connectivity.
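The ports that are actually installed and available for host attachment can be listed from the CLI. A brief sketch; the output is omitted:

    lsportfc    (FC and FCoE ports per node, with WWPN, type, speed, and status)
    lsportip    (Ethernet ports available for iSCSI, with any configured addresses)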


Fibre Channel network


• Fibre Channel Protocol (FCP) is a technology for transmitting data between computer devices at data rates of up to 16 Gbps.
  - It is especially suited for connecting computer servers to shared storage devices and for interconnecting storage controllers and drives.
• FC uses a worldwide name (WWN) as a unique identity for each Fibre Channel device.
  - End-points in FC communication (node ports) have a specific WWN called a worldwide port name (WWPN).

(Diagram: full-duplex FC links, with transmit (TX) and receive (RX) paths between a host port, a switch, and a disk.)


Figure 3-20. Fibre Channel network

Fibre Channel Protocol (FCP) is the prevalent technology standard in the storage area network
(SAN) data center environment. Fibre Channel (FC) offers a high speed serial interface for
connecting servers and peripheral devices together to consolidate dedicated SAN. Fibre Channel
was also designed to enable redundant and fault tolerant configurations, and especially appropriate
in SAN environments where high availability is an important requirement. Therefore FC technology
creates a multitude of FC-based solutions that have paved the way for high performance, high
availability, and the highly efficient transport and management of data.
Fibre Channel communicates at full duplex, allowing data to flow in opposite directions at the same time. Fibre Channel devices are attached together through the use of light as a carrier at data rates up to 8 Gbps, and up to 16 Gbps when used on supported switches. FC's massive bandwidth capability allows high-speed transfer of multiple protocols over the same physical interface.
Each device in the SAN is identified by a unique world wide name (WWN). The WWN also contains
a vendor identifier field and a vendor-specific information field, which is defined and maintained by
the IEEE.


Internet SCSI network


• Internet SCSI (iSCSI) is an IP-based network connecting servers and storage using Ethernet switches.
  - Encapsulates SCSI commands and transports them as TCP/IP packets.
• iSCSI carries block-level data over an IP network.
  - File systems are in the servers.
• It communicates at speeds of 1 Gbps and 10 Gbps:
  - TCP/IP offload engine (TOE) NIC card, iSCSI HBA, or standard NIC
• Hardware-based gateway to FC storage.
• Removes distance limitations.

iSCSI frame: Ethernet header | IP | TCP | iSCSI | Data | CRC
(TCP/IP and iSCSI require CPU processing.)


Figure 3-21. Internet SCSI network

Internet SCSI (iSCSI) is a storage protocol that transports SCSI over TCP/IP, allowing IP-based SANs to be created using the same networking technologies for both storage and data networks. iSCSI runs at speeds of 1 Gbps or 10 Gbps with the emergence of 10 Gigabit Ethernet adapters with TCP Offload Engines (TOE). This technology allows block-level storage data to be transported over widely used IP networks, enabling end users to access the storage network from anywhere in the enterprise. In addition, iSCSI can be used in conjunction with existing FC fabrics as a gateway medium between the FC initiators and targets, or as a migration from a Fibre Channel SAN to an IP SAN.
The advantage of an iSCSI SAN solution is that it uses the low-cost Ethernet IP environment for connectivity and greater distance than allowed when using traditional SCSI ribbon cables containing multiple copper wires. The disadvantage of an iSCSI SAN environment is that data is still managed at the volume level, performance is limited to the speed of the Ethernet IP network, and adding storage to an existing IP network may degrade performance for the systems that were using the network previously. When not implemented as part of a Fibre Channel configuration, it is widely recommended to build a separate Ethernet LAN exclusively to support iSCSI data traffic.
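On a Linux host using the standard open-iscsi initiator, discovering and logging in to a node's iSCSI target might look like the following sketch; the target IP address is hypothetical:

    iscsiadm -m discovery -t sendtargets -p 10.10.1.10
    iscsiadm -m node --login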


Fibre Channel over Ethernet support


• FC over Ethernet (FCoE) 10 Gb ports on the Storwize V7000 node provide the same target and initiator functions as the Fibre Channel protocol.
  - Host access to volumes (using FC or FCoE)
    - A Converged Network Adapter (CNA) is required
  - Access to external storage LUNs (using FC or FCoE)
  - Replication between systems for Remote Copy

(Diagram: hosts with 10 Gb CNAs attach through a Converged Enhanced Ethernet (CEE) network to the node's CEE ports, while FC ports attach to SAN Fabrics 1 and 2.)

Figure 3-22. Fibre Channel over Ethernet support

FCoE provides the same target and initiator functions as the Fibre Channel protocol because Fibre Channel frames are encapsulated over Ethernet networks. This allows Fibre Channel to use 10 Gigabit Ethernet networks (or higher speeds) while preserving the Fibre Channel protocol. FCoE maps Fibre Channel directly over Ethernet while being independent of the Ethernet forwarding scheme.
The Storwize V7000 Gen2 node supports 10 Gb FCoE port attachment to a converged Ethernet switch to support FCoE, Fibre Channel, Converged Enhanced Ethernet (CEE), and traditional Ethernet protocol connectivity for servers and storage.


Native IP replication support


• Storwize V7000 supports native IP replication as an alternative to FC.
• Creates an IP partnership between two systems.
  - Supports all remote copy modes (MM and GM)
    - GM with Change Volumes is the preferred mode
  - Up to three systems may be partnered (1x IP / 2x FC)
  - Enables use of 1 Gb or 10 Gb Ethernet connections using TCP/IP; no FC or FCIP routers are required
    - Use Ethernet ports 1 or 2 only
    - If 10 Gb is available, it uses both ports 2 and 3
  - Requires no additional licenses
    - Remote mirror is a chargeable option
  - Bridgeworks SANSlide IP network optimization technology is built into the code


Figure 3-23. Native IP replication support

If optional 4-port 10Gbps Ethernet host interface adapters are installed in the node canisters,
connect each port to the network that will provide connectivity to that port. To provide redundant
connectivity, connect both node canisters in a control enclosure to the same networks.
IBM Storwize V7000 systems supports remote copy over native Internet Protocol (IP)
communication using Ethernet communication links. Native IP replication enables the use of
lower-cost Ethernet connections for remote mirroring as an alternative to using Fibre Channel
configurations.
Native IP replication enables replication between any Storwize family member (running a supported software level) that uses the built-in networking ports of the cluster nodes. IP replication includes Bridgeworks SANSlide network optimization technology to bridge storage protocols and accelerate data transfer over long distances. SANSlide is available at no additional charge.
Native IP replication supports the Copy Services features Metro Mirror, Global Mirror, and Global Mirror with Change Volumes. These functions work in the same way as traditional FC-based mirroring; native IP replication is transparent to servers and applications.
IP replication requires 1 Gb or 10 Gb LAN connections. A system can have only one port configured in an IP partnership, either port 1 or port 2, not both. If the optional 10 Gb Ethernet card is installed in a system, ports 3 and 4 are also available. A system may be

partnered with up to three remote systems. A maximum of one of those can be IP, and the other two FC.
A straightforward setup is recommended:
▪ Two active Ethernet links with two port groups to provide link failover capabilities
▪ At least two I/O groups to provide full IP replication bandwidth if one component is offline
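Creating the IP partnership itself is a single CLI operation, run against each system with the other system's address. A sketch with hypothetical values; link bandwidth is specified in megabits per second:

    mkippartnership -type ipv4 -clusterip 10.20.1.100 -linkbandwidthmbits 100 -backgroundcopyrate 50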


Zone definitions by port number or WWPN

(Diagram: two zoning methods in a fabric. Zoning by port uses the switch domain ID and port number; zoning by WWPN uses the worldwide port names of the attached hosts (WinA, AIX). LUN masking is applied at the storage.)

Figure 3-24. Zone definitions by port number or WWPN

This visual provides a basic understanding of how zoning can be implemented by switch domain ID and port number. When a cable is moved to another switch or another port, the zoning definition needs to be updated. This is sometimes referred to as port zoning.
Zoning by WWPN provides granularity at the adapter port level. If the cable is moved to another port or to a different switch in the fabric, the zoning definition is not affected. However, if the HIC is replaced and the WWPN changes (this does not apply to the Storwize V7000 WWPNs), then the zoning definition needs to be updated accordingly.
When zoning by switch domain ID, ensure that all switch domain IDs are unique between both fabrics and that the switch name incorporates the domain ID. Having a unique domain ID makes troubleshooting much easier in situations where an error message contains the Fibre Channel ID of the port with a problem. For example, have all domain IDs in the first fabric start with 10 and all domain IDs in the second fabric start with 20.
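The two approaches translate directly into zone-member syntax on a Brocade-style switch, as the following sketch shows; the domain ID, port numbers, alias, and WWPN values are examples only:

    zonecreate "Z_BY_PORT", "10,1; 10,2"
    zonecreate "Z_BY_WWPN", "50:05:07:68:0b:11:xx:xx; 10:00:00:00:c9:2f:65:d6"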


Name and addressing convention

(Diagram: naming and addressing conventions. Each host or node HBA port has a 64-bit port WWN, a port name, a 24-bit port address, and an N_Port ID; the node itself has a 64-bit node WWN (node name).)

Figure 3-25. Name and addressing convention

IBM storage uses a methodology whereby each worldwide port name (WWPN) is a child of the worldwide node name (WWNN). The unique worldwide name (WWN) is used to identify the Fibre Channel storage device in a storage area network (SAN). This means that if you know the WWPN of a port, you can easily identify the vendor and match it to the WWNN of the storage device that owns that port.
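Both identifiers can be read directly from the system CLI, which makes it easy to confirm the parent/child relationship. A short sketch; output omitted:

    lsnode      (the concise view includes each node's WWNN)
    lsportfc    (lists the WWPN of every FC port, each derived from its owning node's WWNN)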


WWN addressing scheme


IEEE Standard format (Section 1 | Section 2: Company ID | Section 3: vendor-specific information):
  10:00 | 00:00:c9 | 2f:65:d6

IEEE Extended format (Section 1 | Section 2 | Section 3 | Section 4):
  2 | 0:00 | 00:0e:8b | 05:05:04

Registered format (Section 1 | Section 2: Company ID | Section 3: vendor-specific information):
  5 | 0:05:07:6 | 3:00:c7:01:99
  6 | 0:05:07:6 | 3:00:c7:01:99

Figure 3-26. WWN addressing scheme

Each N_port on a storage device contains a persistent World Wide Port Name (WWPN) of 16 hexadecimal digits (8 bytes).
The first table is an example of an Emulex HBA in IEEE Standard format (10). Section 1 identifies the WWN as a standard format WWN. Only one of the 4 digits is used; the other three must be zero filled. Section 2 is called the OUI or "company_id" and identifies the vendor (more on this later). Section 3 is a unique identifier created by the vendor.
Our next example is a QLogic HBA identifying an IEEE Extended format (20). Section 1 identifies the WWN as an extended format WWN. Section 2 is a vendor-specific code and can be used to identify specific ports on a node or to extend the serial number (section 4) of the WWN. Section 3 identifies the vendor. Section 4 is the unique vendor-supplied serial number for the device.
The last two tables identify the vendor IEEE Registered Name format of the WWN. This is referred to as Format 5, which enables vendors to create unique identifiers without having to maintain a database of serial number codes. IBM owns the 005076 company ID. Section 1 (5) identifies the registered name WWN. Section 2 (0:05:07:6) identifies the vendor. Section 3 (3:00:c7:01:99) is a vendor-specific generated code, usually based on the serial number of the device, such as a disk subsystem.
All vendors wishing to create WWNs must register for a company ID or OUI (Organizationally Unique Identifier). These are maintained and published by the IEEE.


Storwize V7000 new WWNN/WWPN schema


• Old schema for Storwize V7000 Gen1: 50 05 07 68 02
• New schema for Storwize V7000 Gen2: 50 05 07 68 0B
• New port outlining/numbering:
  - Past: 50 05 07 68 01 <10-80>…, not clearly aligned to physical ports
  - Now: 50 05 07 68 0B <slot><port>…, mapped to physical layout
• Supports up to 1024 WWNNs, allowing up to 1024 back-end storage subsystems to be virtualized

*Ports are numbered top to bottom

(Example WWPNs from the rear-view diagram: 50 05 07 68 0B 22 xx xx and 50 05 07 68 0B 34 xx xx)


Figure 3-27. Storwize V7000 new WWNN/WWPN schema

As a Fibre Channel SAN participant, each Storwize V7000 Gen2 has a unique worldwide node name (WWNN), and each Fibre Channel port on the HICs has a unique worldwide port name (WWPN). These ports are used to connect the V7000 node canister to the SAN. The Storwize V7000 nodes use the new 80c product ID, IBM's latest schema to generate WWNNs/WWPNs. The previous generation of Gen1 nodes' WWNN seed supports only 59 WWPN end points. One of the important considerations when upgrading the system to Gen2 nodes, or when installing an additional I/O group based on Gen2 nodes, is the use of the WWPN range. With the support of the 16 Gb FC HIC, Storwize V7000 Gen2 generates 6x WWPNs per port, far too many ports to use the WWPN range provided by the pre-existing Gen1 nodes.
The visual shows the rear of a Storwize V7000 Gen2 model with two 4-port FC HICs installed in slots 2 and 3. Ports are physically numbered from top to bottom. Each node port takes the form of the Gen2 WWN numbering scheme.
For high availability, the ports of a Gen2 node should be spread across the two fabrics in a dual fabric SAN configuration.
The maximum number of worldwide node names (WWNNs) increased to 1024, allowing up to 1024 back-end storage subsystems to be virtualized.


Storwize V7000 worldwide names schema


• Two classes of WWNs are used in V7000 Gen2 nodes
  - Public names
    - Used for the various fake switch components for FC direct-attach
    - Used for the SAS initiator in 2076-12F/24F expansion units
    - Public WWNs take the form 500507680b <slot number> <port number> xxxx
  - Private names
    - Used by hosts to identify storage
    - Used by backend controllers for LUN masking
    - Needed for fabric zoning

Slot  Port  WWPN
1     1     500507680b11xxxx
1     2     500507680b12xxxx
1     3     500507680b13xxxx
1     4     500507680b14xxxx
2     1     500507680b21xxxx
2     2     500507680b22xxxx
2     3     500507680b23xxxx
2     4     500507680b24xxxx
3     1     500507680b31xxxx
3     2     500507680b32xxxx
3     3     500507680b33xxxx
3     4     500507680b34xxxx
4     1     500507680b41xxxx
4     2     500507680b42xxxx
4     3     500507680b43xxxx
4     4     500507680b44xxxx

Figure 3-28. Storwize V7000 worldwide names schema

There are two classes of WWNs used in Storwize V7000 Gen2 nodes:


• Public names which need to be migrated during hardware upgrade. Public names are used for
the various 'fake switch' components for FC direct-attach and for SAS initiator in 2076-12F/24F
expansion units. Public WWNs take the form: 500507680b <slot number> <port number> xxxx.
With 4 bits for slot number and 4 bits for port number giving 16 public names per slot and 16 bits
for the serial number. Other slots follow the same pattern but with their own slot number in place
of the 1.
• Private names, which can change during hardware upgrade. Private names are used by hosts to identify storage, by backend controllers for LUN masking, and for fabric zoning. Private names for a
slot are first taken from its pool of unused public names and any extra needed can be generated
by adding 8 to the slot number (wrapping around to 0 for slot 8) and starting again from port 1.
This means the 17th port name for slot 1 would be 500507680b91xxxx, the 18th
500507680b92xxxx and so on, up to the 28th, which would be 500507680c9cxxxx. This allows
the Storwize V7000 Gen2 to support up to 8 slots with 7 addresses per physical port (assuming
4 ports per card) which is enough to cover FC direct-attach with a spare address per port.


Storwize V7000 port destination recommendations


Optimal configurations based on Storwize port assignment to function:

Port  SAN  4-port nodes             8-port nodes                 12-port nodes                12-port nodes, write data rate > 3 GBps per I/O group
C1P1  A    Host/Storage/Inter-node  Host/Storage                 Host/Storage                 Inter-node
C1P2  B    Host/Storage/Inter-node  Host/Storage                 Host/Storage                 Inter-node
C1P3  A    Host/Storage/Inter-node  Host/Storage                 Host/Storage                 Host/Storage
C1P4  B    Host/Storage/Inter-node  Host/Storage                 Host/Storage                 Host/Storage
C2P1  A    -                        Inter-node                   Inter-node                   Inter-node
C2P2  B    -                        Inter-node                   Inter-node                   Inter-node
C2P3  A    -                        Replication or Host/Storage  Host/Storage                 Host/Storage
C2P4  B    -                        Replication or Host/Storage  Host/Storage                 Host/Storage
C5P1  A    -                        -                            Host/Storage                 Host/Storage
C5P2  B    -                        -                            Host/Storage                 Host/Storage
C5P3  A    -                        -                            Replication or Host/Storage  Replication or Host/Storage
C5P4  B    -                        -                            Replication or Host/Storage  Replication or Host/Storage

• The SAN column assumes an odd/even SAN port configuration. Modifications must be made if other SAN connection schemes are used.
• Care needs to be taken when zoning so that inter-node ports are not used for Host/Storage in the 8-port and 12-port configurations.

Figure 3-29. Storwize V7000 port destination recommendations

The visual lists options that represent optimal configurations based on port assignment to function.
Using the same port assignment but different physical locations will not have any significant
performance impact in most client environments.
This recommendation provides the desired traffic isolation while also simplifying migration from existing configurations with only 4 ports, or later migrating from 8-port or 12-port configurations to configurations with additional ports. More complicated port mapping configurations that spread the port traffic across the adapters are supported and can be considered, but these approaches do not appreciably increase availability of the solution, since the mean time between failures (MTBF) of the adapter is not significantly less than that of the non-redundant node components.
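Where a software level with port masking is available, the inter-node assignment can be enforced rather than relying on zoning alone. A hedged sketch; the binary mask below is an example that restricts local node-to-node traffic to ports 5 and 6 (C2P1 and C2P2):

    chsystem -localfcportmask 0000000000110000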


Storwize V7000 and SVC switch zoning


(Diagram: an 8-node system across four I/O groups, with two 4-port FC HICs per node in slots 1 and 2. Connect the P1 ports to one fabric and the P2 ports to the other fabric. For 8 Gb, connect the P3 and P4 ports to the same fabrics as the P1 and P2 ports respectively. SVC 8-node, 4 FC ports per node (8 ports per I/O group), 1 processor and 32 GB memory.)

Figure 3-30. Storwize V7000 and SVC switch zoning

The visual illustrates connecting the Storwize V7000 Gen2 and the SVC DH8 8-node system to redundant fabrics that use two 4-port 8 Gb FC HICs per node for 8-port fabric connections. Each of the odd-numbered ports (1 and 3) is connected to the first SAN switch, and the even-numbered ports (2 and 4) are connected to the second SAN switch.
For this example, we are zoning the Storwize V7000 as a back-end storage controller of SAN Volume Controller. Therefore, every SAN Volume Controller node must have the same Storwize V7000 view as a minimum requirement, which must be at least one port per Storwize canister. For best performance and availability, it is recommended to zone all the Storwize Gen2 and SAN Volume Controller DH8 ports together in each fabric. If the SVC nodes see a different set of ports on the same storage system, then operation is degraded and logged as an error.
Cabling is done to facilitate zone definitions that are coded by using either switch domain ID and
port number, or WWPN values. When cabling the Storwize V7000 ports to the switch, adhere to the
following recommendations and objectives:
• Split the attachment of the ports of the V7000 node across both fabrics. This implies that ports
alternate between the two V7000 nodes as they are attached to the switch.
• Enable the paths from the host, with either four-paths or eight-paths to the V7000 I/O group to
be distributed across WWPNs of the V7000 node ports.

© Copyright IBM Corp. 2012, 2016 3-36


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 3. Storwize V7000 planning and zoning requirements

Uempty
• The ports of all nodes in a cluster (even from multiple I/O groups of the same cluster) need to be
zoned together in the system zone. This example shows how it works with the distribution of the
ports through two distinct fabrics. If an intersystem zone is required (that is, planned usage of the remote copy function with another Storwize or SVC device), it is required to
create an additional zone. This zone must contain all WWPNs of the nodes from both clusters
(any-to-any). Even if that technically implies that the system zone becomes obsolete, it is still a
support requirement (and best practice) to keep it.


Storwize V7000 and FlashSystem 900 switch zoning


(Diagram: an 8-node Storwize V7000 system across four I/O groups, with 4 FC ports per node (8 per I/O group). Connect the P1 ports to one fabric and the P2 ports to the other fabric. For 8 Gb, connect the P3 and P4 ports to the same fabrics as the P1 and P2 ports respectively.)
Storwize V7000 planning and zoning requirements © Copyright IBM Corporation 2012, 2016

Figure 3-31. Storwize V7000 and FlashSystem 900 switch zoning

The visual illustrates connecting a Storwize V7000 Gen2 8-node system and the FlashSystem 900 to redundant fabrics using 4-port 8 Gb FC connections. For a Storwize V7000 in a FlashSystem 900 environment, the switch zoning recommendations are the same as for the Storwize V7000 itself. To maximize the performance that can be achieved when deploying the FlashSystem 900 with the Storwize V7000, carefully consider the assignment and usage of the FC HBA ports on the Storwize V7000. Specifically, SAN switch zoning coupled with port masking can be used to isolate the traffic of the various Storwize V7000 functions, reducing congestion and improving latency.


Storwize V7000 and DS5K switch zoning

[Slide diagram: an SVC 8-node cluster (four FC ports per node, eight per I/O group) and a DS5000 cabled to SAN Fabric 1 and SAN Fabric 2. On the DS5000, Controller A channels 1 and 3 and Controller B channels 2 and 4 (ports C1-C4 on Controller 1 and Controller 2) are split across the two fabrics.]

Figure 3-32. Storwize V7000 and DS5K switch zoning

The visual illustrates connecting a Storwize V7000 Gen2 8-node system and the DS5000 to redundant fabrics using 4-port 8 Gb FC connections. Both parties have their ports evenly split between the two SAN fabrics: each DS5000 controller contributes channels to both fabrics, and the node ports are likewise divided between them.


Storwize V7000 and DS3500 switch zoning

[Slide diagram: a Storwize V7000 8-node cluster (four FC ports per node, eight per I/O group) and a DS3500 cabled to SAN Fabric 1 and SAN Fabric 2. On the DS3500, Controller A channels 1 and 3 and Controller B channels 2 and 4 (ports C1-C4 on Controller 1 and Controller 2) are split across the two fabrics.]

Figure 3-33. Storwize V7000 and DS3500 switch zoning

This visual illustrates how each Storwize V7000 Gen2 node, with four FC ports per node (eight per I/O group), and the two ports of each IBM System Storage DS3500 controller are connected to redundant fabrics using 8 Gb FC connections. Both systems' ports are evenly split between the two SAN fabrics.


Storwize V7000 nodes and storage zones

[Slide diagram: two fabrics (Fabric 1 with switch IDs 11 and 21, Fabric 2 with switch IDs 12 and 22) connecting the node ports of NODE1 and NODE2 with the ports of a VendorX storage system and a DSxK storage system. All storage ports and all node ports are spread across both fabrics, and the back-end zones are defined per fabric:
Nodes-VendorX zone: [(11,1) (11,2) (11,3) (11,4) (11,5)] in Fabric 1 and [(12,1) (12,2) (12,3) (12,4) (12,5)] in Fabric 2
Nodes-DSxK zone: [(11,1) (11,2) (11,3) (11,4) (11,6) (11,0)] in Fabric 1 and [(12,1) (12,2) (12,3) (12,4) (12,6) (12,0)] in Fabric 2]

Figure 3-34. Storwize V7000 nodes and storage zones

Multiple ports or connections from a given storage system can be defined to provide greater data bandwidth and more availability. To avoid interaction among storage ports of different storage system types, multiple back-end storage zones can be defined.
For example, one zone contains all the Storwize V7000 node ports and the VendorX ports, and another zone contains all the node ports and the DSxK ports. Storage system vendors might have additional best practice recommendations, such as not mixing ports from different controllers of the same storage system in the same zone. Storwize V7000 supports and follows the guidelines that are provided by the storage vendors.
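As a hedged sketch (all WWPN values below are hypothetical placeholders), the same back-end zones could instead be coded by WWPN on a Brocade-style switch:

zonecreate "NODES_DSXK_FAB1", "50:05:07:68:01:10:ee:17; 50:05:07:68:01:10:ee:18; 20:24:00:a0:b8:11:22:33"
zonecreate "NODES_VENDORX_FAB1", "50:05:07:68:01:10:ee:17; 50:05:07:68:01:10:ee:18; 21:00:00:e0:8b:44:55:66"

WWPN-based zoning survives recabling to different switch ports, whereas domain,port zoning survives the replacement of an HBA.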


FC Zoning and multipathing LUN access control

[Slide diagram: a Windows host (WinA, running SDDDSM, ports w1 and w2) and a Sun host (SunA, running MPIO, ports s1 and s2) attach through FC SwitchA in Fabric 1 and FC SwitchB in Fabric 2 to a storage system with LUNs Lw and Ls. Zones determine how many paths and how many LUNs each host sees; LUN masking restricts LUN access; and LUN sharing requires additional software.]

Figure 3-35. FC Zoning and multipathing LUN access control

A host system is generally equipped with two HBAs, one attached to each fabric. Each storage system also attaches to each fabric with one or more adapter ports. A dual fabric is also highly recommended when integrating the Storwize V7000 into the SAN infrastructure.
LUN masking is typically implemented in the storage system, and in an analogous manner in the Storwize V7000, to ensure data access integrity across multiple heterogeneous or homogeneous host servers. Zoning is often deployed to complement LUN masking in ensuring resource access integrity. Issues related to LUN or volume sharing across host servers are not changed by the Storwize V7000 implementation; additional shared access software, such as clustering software, is still required if sharing is desired.
Another aspect of zoning is to limit the number of paths among ports across the SAN, and thus reduce the number of times the same LUN is reported to a host operating system.


Maximum paths supported: No more than eight

[Slide diagram: a Windows host (WinA, running SDDDSM, two HBA ports w1 and w2) and an AIX host (AIXA, running SDDPCM, four HBA ports a1-a4) attach through Fabric 1 and Fabric 2 to volume V1 on NODE1 and NODE2. How many paths? Use zoning to manage the number of paths.]

Figure 3-36. Maximum paths supported: No more than eight

A Storwize V7000 cluster with multiple nodes might potentially introduce more paths than necessary between the host HBA ports and the Storwize V7000 FC ports. A given host should have two HBA ports for availability, and no more than four HBA ports. This allows for a minimum of four paths and a maximum of eight paths between the host and the I/O group.
From the perspective of the host, eight paths do not necessarily provide a performance improvement over a four-path environment. However, from a Storwize V7000 perspective, the host activity is balanced across the four ports of each node. Usually, multiple hosts are connected to a Storwize V7000 cluster. The eight-path configuration provides automatic load leveling of activity across the four Storwize V7000 node ports, as opposed to the manual load placement approach of the four-path configuration. With manual load placement, it is easy to introduce a skew of activity toward a subset of ports on a node. The skew manifests itself as higher utilization on some ports and therefore longer response times for I/O operations.


Storage infrastructure and access

[Slide diagram: Host Server1, Server2, and Server3 attach through SAN switch zoning to storage controllers with LUN masking, backed by arrays RAIDa, RAIDb, and RAIDc holding LUNs La through Lz. The callouts ask: RAIDa and RAIDb coexistence? Multipath access? LUN sharing?]

Figure 3-37. Storage infrastructure and access

Several data access issues and considerations need to be examined when heterogeneous servers and storage systems are interconnected in a SAN infrastructure. These include:
• Device driver coexistence on the same host, if that host is to access storage from different types or brands of storage systems.
• Multipath driver support when multiple paths are available to access the same LUN or set of LUNs. If multiple storage systems of differing types or brands are to be accessed, coexistence of the multipath drivers needs to be verified.
• LUN sharing among two or more hosts, which requires shared access capability software (such as clustering or serialization software) to be installed on each accessing host.
• Each storage system must be managed separately. Changes in the storage configuration might have a significant impact on application availability.
These storage systems, while intelligent, exist as separate entities. Storage under- or over-utilization is managed at the individual storage system level. Data replication capabilities of each storage system are typically unique to that system and, as a rule, generally require like systems as targets of remote mirroring operations. Also, to decommission older equipment, data movement (or migration) typically requires scheduled application outages.


Host to SVC access: Supported multipath drivers

[Slide diagram: Server1 (V7K and non-V7K storage, SDDDSM), Server2 (V7K only, SDDPCM), Server3 (V7K only, MPIO), and ServerX (non-V7K, another multipath driver) attach through the SAN to V7000 volumes V1-V5 in Pool1 and Pool2, which are backed by a wide array of supported storage systems. Additional multipath drivers supported: ATTO multipath, AIXPCM, Citrix Xen, Debian, IBM i, Linux, Novell NetWare, OpenVMS, ProtecTIER, PV Links (HP native), SGI, Sun MPxIO, Tru64, Veritas DMP/DMPDSM, VMware, and Windows MPIO.]

Figure 3-38. Host to SVC access: Supported multipath drivers

When MDisks are created, they are not visible to the host. The host sees only a number of logical disks, known as virtual disks or volumes, which are presented by the Storwize V7000 I/O groups through the SAN (FC/FCoE) or LAN (iSCSI) to the servers. The host system accesses virtual disks as SCSI targets (for example, SCSI disks in Windows, hdisks in AIX). Because host systems are zoned to access LUNs provided by the SVC, the only multipath driver needed on these host systems is the Subsystem Device Driver (SDD).
The Subsystem Device Driver (SDD, or SDDDSM for Windows MPIO environments and SDDPCM for AIX MPIO environments) is a standard function of the Storwize V7000 and provides multipathing support for host servers accessing Storwize V7000 provisioned volumes. These environments are often referred to as Multipath I/O or MPIO. For Windows and AIX, only a multipath driver that instructs the OS which path to pick for a given I/O is required.
In addition to SDD, a wealth of other multipath drivers is supported. Refer to the Storwize V7000 product support website for the latest support levels and platforms.


Multipathing and host LUN access control

[Slide diagram: a host running SDD multipath support (SDDDSM or SDDPCM on MPIO) reaches the same LUN over four paths through a switch. The multipath driver manages the multiple paths to the volume and exploits multipath device access with: dynamic load balancing based on policy, dynamic reconfiguration, automatic path failover, automatic path reclamation, and automatic path failback. It also supports the Storwize V7000 preferred node scheme for multipathing.]

Figure 3-39. Multipathing and host LUN access control

The SDD provides multipath support for certain OS environments that do not have native MPIO capability. SDD also enhances the functions of the Windows DSM and AIX PCM MPIO frameworks.
For availability, host systems generally have two HBA ports installed, and storage systems typically have multiple ports as well. The number of instances of the same LUN increases as more ports are added. In a SAN environment, a host system whose multiple Fibre Channel adapter ports connect through a switch to multiple storage ports is considered to have multiple paths. Because of these multiple paths, the same LUN is reported to the host system more than once.
For coexistence and gradual conversion to the Storwize V7000 environment, a storage system RAID controller might present LUNs both to the Storwize V7000 and to other hosts attached to the SAN. Depending on some restrictions, a host might access SCSI LUNs surfaced either directly from the storage system or indirectly as volumes from the Storwize V7000. Besides adhering to the support matrix for storage system type and model, HBA brand and firmware levels, device driver levels and multipath driver coexistence, and OS platform and software levels, the fabric zoning must be implemented to ensure resource access integrity as well as multipathing support for high availability.
Although such attached storage is supported, it is the Storwize V7000, not the individual host systems, that interacts with these storage systems, their device drivers, and multipath drivers.


Host zoning preferred paths

[Slide diagram: a host with dual paths per fabric per HBA attaches through the DIR 1 and DIR 2 SAN fabrics to I/O Group 0, whose Node 1 and Node 2 each present two 4-port HICs (physical ports mapped to logical ports). The preferred paths for vdisk1 (V1) are Gen2 V7000 Node 1 ports P2 and P3; the non-preferred paths for vdisk1 are Gen2 V7000 Node 2 ports P2 and P3. The SDD path selection algorithm drives I/O and load balances across the paths to the preferred node.]

Figure 3-40. Host zoning preferred paths

By default, the Storwize V7000 GUI assigns ownership of even-numbered volumes to one node of a caching pair and ownership of odd-numbered volumes to the other node. The node to which a volume is assigned at creation is known as the preferred node, through which the volume is normally accessed. The preferred node is responsible for I/Os to the volume and coordinates sending the I/Os to the alternative node.
This illustration shows a 2-node system with dual paths from both the fabric and the HBA to the Storwize V7000 I/O group. Each host HBA port is zoned with one port of each V7000 node of an I/O group in a four-path environment. When the first volume (vdisk1), whose preferred node is NODE 1, is accessed for I/O, the path selection algorithm load balances across the two preferred paths to NODE 1. The other two non-preferred paths defined in this zone are to NODE 2, which is the alternate node for vdisk1.
The reason for not assigning one HBA to each path is that one node solely serves as a backup node for any specific volume; that is, a preferred node scheme is used, so the load would never be balanced for that particular volume. It is therefore better to load balance by I/O group instead, so that volumes are assigned to nodes automatically.
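To see which node owns a given volume, the detailed lsvdisk view reports the preferred node. A minimal sketch, using the volume from this example (output abbreviated and illustrative):

IBM_Storwize:V7000:admin> lsvdisk vdisk1
id 0
name vdisk1
IO_group_id 0
preferred_node_id 1
...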


Zoning multi HBA hosts for resiliency

[Slide diagram: a host with two dual-port HBAs (HBA 1 and HBA 2, ports P1 and P2 each) attaches to SAN Fabric 1 and SAN Fabric 2, which connect to a Storwize V7000 4-node cluster (I/O Group 0 and I/O Group 1). The objective is to spread I/O across multiple node ports. Host HBA ports accessing only V7000-surfaced volumes should not be in the same zone as back-end storage ports.]

Figure 3-41. Zoning multi HBA hosts for resiliency

Multiple fabrics increase the redundancy and resilience of the SAN by duplicating the fabric infrastructure. With multiple fabrics, the hosts and the resources have simultaneous access to both fabrics, with zoning that allows multiple paths over each fabric.
In this example, the host has two HBAs installed, and each port of each HBA is connected to a separate SAN switch. This allows the host to have multiple paths to its resources. It also means that the zoning has to be done in each fabric separately. If there is a complete failure in one fabric, the host can still access its resources through the second fabric.


Example of host port zoning

[Slide diagram: AIX host AIX1 with HBA port A1 on switch ID 21 (Fabric 1) and HBA port A2 on switch ID 22 (Fabric 2) accesses volume V1 through NODE1 (preferred node) and NODE2 (alternate node) of the I/O group. The zones are:
AIX1-A1 zone: [(21,1) (11,1) (11,2)]
AIX1-A2 zone: [(22,1) (12,1) (12,2)]
The preferred paths lead to NODE1 and the alternate paths to NODE2.]

Figure 3-42. Example of host port zoning

This example shows how host access to the Storwize V7000 nodes is implemented in a dual SAN fabric. Each host HBA is connected to a separate SAN switch to allow multiple paths to resources. For example, the AIX HBA A1 is zoned with one port from NODE1 and one from NODE2. HBA A2 is also zoned with one port from each Storwize V7000 node. The volume vdisk1 is assigned to NODE 1 (preferred node) with two preferred paths, and has alternative paths that are zoned to NODE 2.
The numbers located on switch IDs 11 and 12 correlate to the Storwize V7000 nodes' HBA ports 1 and 3. Therefore, the AIX host HBA A1 port [(21,1)] is a member of a single zone whose other zone members [(11,1) and (11,2)] are the node ports listed for that fabric.


Changing the preferred node and paths

[Slide diagram: the same AIX1 host and zone definitions as in the previous figure, now accessing volume V2, whose preferred node is NODE2. The preferred paths now lead to NODE2 and the alternate paths to NODE1, while the zones are unchanged:
AIX1-A1 zone: [(21,1) (11,1) (11,2)]
AIX1-A2 zone: [(22,1) (12,1) (12,2)]]

Figure 3-43. Changing the preferred node and paths

The preferred node can also be specified by the administrator in situations where you need to assign the owner of a specific volume at creation. In this example, at volume creation, vdisk2 was assigned to NODE2 as its preferred node. SDDPCM path selection load balances I/Os for vdisk2 across the paths to its preferred node, NODE2. Access from AIX1 uses the paths assigned to the volume's preferred node by SDDPCM. All four paths are used to handle AIX1's I/O requests if the ownership of volumes is spread across both nodes in the I/O group. Note that the AIX host HBA A1 and A2 zone members are unchanged.


Host connection verification

• Verify the host HBA by using the vendor-supplied utility to display the Fibre Channel configuration.
• Verify host zoning by using the management GUI: Settings > Network > Fibre Channel.
í Change the view connectivity to Hosts and select the host name.
• Each host WWPN is zoned with one Storwize V7000 WWPN per node.

Figure 3-44. Host connection verification

Before the host can be presented to the Storwize V7000, the system needs to know the server's HBA WWPNs (whether you are using a Fibre Channel switch or plugging in directly). Once a host is created, you can verify host connectivity from the Storwize V7000 management GUI by selecting Settings > Network > Fibre Channel in the Network filter list. Change the view connectivity to Hosts and select the host name. Much like the storage system view, this visual displays the connections between the host and the Storwize V7000 ports.
The host shown is zoned for four-path access to its volumes. The guideline for four-path host zoning is to zone each host port with one Storwize V7000 port per node. The connectivity data displayed confirms that each host port (Remote WWPN) has four entries (one per node). The Local WWPN column lists the specific Storwize V7000 node ports zoned with a given host WWPN.
The lsfabric command can also be used to list the SAN connectivity data between the Storwize V7000 and its attaching ports; it is, in fact, the command invoked by the GUI Fibre Channel connectivity view.
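A minimal sketch of the equivalent CLI view, assuming the host object V009B1-AIX from the examples that follow (columns abbreviated and values illustrative):

IBM_Storwize:V7000:admin> lsfabric -host V009B1-AIX
remote_wwpn     remote_nportid local_wwpn       local_port state  name
C050760AFA3007C 010400         500507680110EE17 1          active V009B1-AIX
...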


AIX host object and ports example

IBM_Storwize:V009B1:admin> lshost V009B1-AIX
id 1
name V009B1-AIX
port_count 2
type generic
mask 1111111111111111111111111111111111111111111111111111111111111111
iogrp_count 0
status online
WWPN C050760AFA3007C
node_logged_in_count 2
state active
WWPN C050760AFA3007E
node_logged_in_count 2
state active

Figure 3-45. AIX host object and ports example

The visual shows details of an AIX host object as reported by the CLI (the management GUI presents the same host details). The WWPN lines list the values of the host's two ports. A host port is inactive if all the nodes with volume mappings have a login for the specified WWPN but no node has seen any Small Computer System Interface (SCSI) commands from the WWPN within the last five minutes. A host port becomes active once all nodes with VDisk (volume) mappings have a login for the specified worldwide port name (WWPN) and at least one node has received SCSI commands from the WWPN within the last five minutes.
From the CLI output, the AIX host has an object ID of 1. This AIX host is entitled to access only volumes owned by I/O group 0.
You can use the lshost command to generate a list with concise information about all the hosts visible to the clustered system, or detailed information about a single host.
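A hedged sketch of the concise form of the same command (columns abbreviated; values match the detailed output above):

IBM_Storwize:V009B1:admin> lshost
id name       port_count iogrp_count status
1  V009B1-AIX 2          0           online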


AIX host bus adapter (WWPNs) details

{V009B1-AIX} / # lscfg -vl fscsi0
fscsi0 U5796.001.0629517-P1-C4-T1 FC SCSI I/O Controller Protocol Device
{V009B1-AIX} / # lscfg -vl fcs0
fcs0 U5796.001.0629517-P1-C4-T1 FC Adapter
Part Number.................10N8620
Serial Number...............1F8100C060
Manufacturer................001F
EC Level....................A
Customer Card ID Number.....5759
FRU Number..................10N8620
Device Specific.(ZM)........3
Network Address.............C050760AFA3007C   <-- WWPN

{V009B1-AIX} / # lscfg -vl fscsi1
fscsi1 U5796.001.0629517-P1-C4-T2 FC SCSI I/O Controller Protocol Device
{V009B1-AIX} / # lscfg -vl fcs1
fcs1 U5796.001.0629517-P1-C4-T2 FC Adapter
Part Number.................10N8620
Serial Number...............1F8100C060
Manufacturer................001F
EC Level....................A
Customer Card ID Number.....5759
FRU Number..................10N8620
Device Specific.(ZM)........3
Network Address.............C050760AFA3007E   <-- WWPN

Figure 3-46. AIX host bus adapter (WWPNs) details

For an AIX host, you can check the availability of the FC host adapters using the AIX command lsdev -Cc adapter | grep fcs, which shows the names of the Fibre Channel adapters in the AIX system. The Network Address field of the lscfg -vl fcs0 and lscfg -vl fcs1 output identifies the WWPN of each HBA port. The fscsi0 and fscsi1 devices are protocol conversion devices in AIX; they are child devices of fcs0 and fcs1, respectively.
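A brief sketch of that adapter listing (the location codes shown are illustrative):

{V009B1-AIX} / # lsdev -Cc adapter | grep fcs
fcs0 Available 00-08 FC Adapter
fcs1 Available 00-09 FC Adapter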


CLI AIX host paths view

{V009B1-AIX} / # lsdev -Cc disk
hdisk0 Available Virtual SCSI Disk Drive
hdisk1 Available 00-09-02 MPIO FC 2145
hdisk2 Available 00-09-02 MPIO FC 2145
{V009B1-AIX} / # lscfg -vl hdisk2
hdisk2 U5796.001.0629517-P1-C4-T2-W500507680110EE17-L1000000000000
MPIO FC 2145
Manufacturer................IBM
Machine Type and Model......2145
ROS Level and ID............0000
Device Specific.(Z0)........0000063268181002
Device Specific.(Z1)........0200606
Serial Number...............6005076801818781F800000000000009
{V009B1-AIX} / # pcmpath query device 2
DEV#: 2 DEVICE NAME: hdisk2 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 6005076801818781F800000000000009
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0* fscsi0/path0 OPEN NORMAL 42 0
1 fscsi0/path1 OPEN NORMAL 1134 0
2* fscsi0/path2 OPEN NORMAL 42 0
3 fscsi0/path3 OPEN NORMAL 1202 0
4* fscsi1/path4 OPEN NORMAL 42 0
5 fscsi1/path5 OPEN NORMAL 1186 0
6* fscsi1/path6 OPEN NORMAL 42 0
7 fscsi1/path7 OPEN NORMAL 1186 0

Figure 3-47. CLI AIX host paths view

Storwize V7000 externalized volumes have a device type of 2145.
The AIX lscfg -vl command output for the hdisk identifies the Storwize V7000 port that reported this LUN to AIX as SCSI LUN 1 (L1).
The UID from the CLI lsvdisk output is displayed as the serial number in the SDDPCM pcmpath query device output, so hdisk2 is the SNUGGUS volume.
The SDDPCM output displays eight paths for this hdisk, which is an indication that eight-path zoning was implemented for this host. Half of the paths are flagged with an asterisk to indicate that the path is not to the preferred node. Therefore, the other four paths (1, 3, 5, and 7) are paths to node ID 1, which is the preferred node for this volume/hdisk.
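A minimal sketch of that correlation, assuming the volume is named SNUGGUS as stated above (output abbreviated; the vdisk_UID matches the SDDPCM serial number):

IBM_Storwize:V009B1:admin> lsvdisk SNUGGUS
name SNUGGUS
preferred_node_id 1
vdisk_UID 6005076801818781F800000000000009
...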


Host zone worksheet

Zones     Fabric 1            Fabric 2
Nodes     11 21 14 24         13 23 12 22
Stgbox1   11 21 14 24 F1      13 23 12 22 F2
Stgbox2   11 21 14 24 E1 E3   13 23 12 22 E2 E4
AIX1      11 21 A1            13 23 A2
W2K       14 24 W1            12 22 W2
Linux     14 24 L1            12 22 L2

(non-virtualized zone)

Figure 3-48. Host zone worksheet

Creating a worksheet that documents which host ports should be zoned to which Storwize V7000 ports can aid in spreading the workload across the Storwize V7000 HBA ports. This might be particularly helpful when host ports are set up with four paths to the Storwize V7000 I/O group.
Not all OS platforms recommend or support eight (or even four) paths between the host ports and the I/O group. Consult the Storwize V7000 Information Center for platform-specific host attachment details.


Power on and off sequence

[Slide diagram: the power-on sequence flows from the Fibre Channel switches, to the external enclosures, to the expansion enclosures, to the Storwize V7000 control enclosure, and finally to the host systems. Before powering off, stop all I/O from the servers accessing volumes on the Storwize V7000 system. The power off sequence is the reverse of the power-on arrow.]

Figure 3-49. Power on and off sequence

The sequence in which to turn on the Storwize V7000 system is important. Bringing up the enclosures initially for configuration, after installation in a rack, requires no particular power-on sequence. Once the system is operational, power on all expansion enclosures by connecting both power supply units of each enclosure to their power sources, using the supplied power cables. The enclosures do not have power switches. Repeat this step for each expansion enclosure in the system.
Wait for all expansion canisters to finish powering on. Then power on the control enclosure by connecting both power supply units of the enclosure to their power sources, using the supplied power cables. Verify that the system is operational, and then power on or restart servers and applications.
The power-off sequence is the reverse. However, you need to stop all I/O from the servers accessing volumes on the Storwize V7000 first. Stopping I/O operations on the external storage virtualized by the system is not required.
Depending on the storage system, powering up the disk enclosures and the storage system can be a single step. Ensure that all the devices to be powered on are in the off position before plugging in the power cables.
Shutting down the system while it is still connected to main power ensures that the nodes' batteries are fully charged when power is restored.


Keywords

• Physical planning
• Logical planning
• Management interface
• Clustered system
• I/O Group
• Configuration node
• Boss node
• SSH client
• SAN Zoning
• Management GUI
• Fabric zoning
• Host zoning
• Virtualization
• Worldwide node name (WWNN)
• Worldwide port name (WWPN)
• Lightweight Directory Access Protocol (LDAP)
• Service Assistant Tool

Figure 3-50. Keywords

Listed are keywords that were used in this unit.


Review questions (1 of 2)

1. To initialize the Storwize V7000 node canisters, a PC or workstation must be connected to (blank) on the rear of a node canister.

2. True or False: To access the Storwize V7000 GUI, a user name with a password must be defined. To access the Storwize V7000 CLI, a user name can be defined with a password or SSH key, or both.

3. Which of the following menu options allows you to create new users and delete, change, and remove passwords?
a. Settings
b. Monitoring
c. Access

Figure 3-51. Review questions (1 of 2)

Write your answers here:


1.
2.
3.


Review answers (1 of 2)

1. To initialize the Storwize V7000 node canisters, a PC or workstation must be connected to the Technician port (T-Port) on the rear of a node canister.
The answer is Technician port (T-Port).

2. True or False: To access the Storwize V7000 GUI, a user name with a password must be defined. To access the Storwize V7000 CLI, a user name can be defined with a password or SSH key, or both.
The answer is true.

3. Which of the following menu options allows you to create new users and delete, change, and remove passwords?
a. Settings
b. Monitoring
c. Access
The answer is Access.


Review questions (2 of 2)

4. Which IP address is not a function of Ethernet port 1 of each Storwize V7000 node?
a. Cluster management IP
b. Service Assistant IP
c. iSCSI IP
d. Cluster alternate management IP

5. True or False: Zoning is used to control the number of paths between host servers and the Storwize V7000.

6. True or False: A multipath driver is required when multiple paths exist between a host server and the system cluster.

Figure 3-52. Review questions (2 of 2)

Write your answers here:


4.
5.
6.


Review answers (2 of 2)

4. Which IP address is not a function of Ethernet port 1 of each Storwize V7000 node?
a. Cluster management IP
b. Service Assistant IP
c. iSCSI IP
d. Cluster alternate management IP
The answer is the cluster alternate management IP, which is configured on Ethernet port 2 of the Storwize V7000 node.

5. True or False: Zoning is used to control the number of paths between host servers and the Storwize V7000.
The answer is true.

6. True or False: A multipath driver is required when multiple paths exist between a host server and the system cluster.
The answer is true.


Unit summary

• Determine planning and implementation requirements that are associated with the Storwize V7000
• Implement the physical hardware and cable requirements for the Storwize V7000 Gen2
• Implement the logical configuration of IP addresses, network connections, zoning fabrics, and storage attachment to the Storwize V7000 Gen2 nodes
• Integrate the Storwize V7000 Gen2 into an existing SVC environment
• Verify zoned ports between a host and the Storwize V7000, and between the Storwize V7000 and a storage system

Figure 3-53. Unit summary

Unit 4. System initialization and user authentication
Estimated time
00:25

Overview
This unit highlights the procedures that are required to initialize a Storwize V7000 system using the Technician port (T-port) and the Service Assistant (SA) interface. Users also review the steps to set up system resources using the graphical user interface (GUI).
In addition, administrative operations to establish user authentication for local and remote users' management access to both the GUI and the CLI are introduced.

How you will check your progress


• Checkpoint questions
• Machine exercises

References
Implementing the IBM Storwize V7000 Gen2 (SG24-8244)
http://www.redbooks.ibm.com/abstracts/sg248244.html


Unit objectives

• Summarize the concept of using the Storwize V7000 Technician port and Service Assistant tool to initialize the system
• Identify the basic usage and functionality of the IBM Storwize V7000 management interfaces
• Recall administrative operations to create user authentication for local and remote users' access to the Storwize V7000 system

Figure 4-1. Unit objectives


Storwize V7000 local planning topics

• Storwize V7000 logical planning
ƒ System initialization
í Technician Port initialization
í System basic configuration
ƒ Storwize V7000 management interfaces
í Management graphical user interface (GUI)
í Command-line interface (CLI)

Figure 4-2. Storwize V7000 local planning topics

This topic starts by listing the management interfaces for the administration of the IBM Storwize V7000, and highlights the steps required to create a clustered system by initializing the Storwize V7000 node canisters using the Technician (service) port. We will also review the system basic configuration and its access mechanisms.


Storwize V7000 management interfaces

[Slide diagram: a V7000 cluster of 2-8 nodes exposes open industry-standard interfaces over Ethernet: the embedded GUI (web browser over https, with password, with best-practice presets), the CLI (over SSH, with key or password), and an embedded CIMOM (SMI-S CIM interface over https) that any CIM-compliant resource manager can use.]

Figure 4-3. Storwize V7000 management interfaces

The Storwize V7000 simplifies storage management by providing a single image for multiple controllers and a consistent user interface for provisioning heterogeneous storage.
The Storwize V7000 provides cluster management interfaces that include:
• An embedded graphical user interface (GUI) that supports a web browser connection for configuration management; it shares a common source code base with the IBM SAN Volume Controller GUI.
• A command line interface (CLI) accessed over a Secure Shell (SSH) connection, for example with PuTTY.
• An embedded CIMOM that supports SMI-S, which allows any CIM-compliant resource manager to communicate with and manage the system cluster.
To access the cluster for management, two user authentication methods are available (see the example after this list):
• Local authentication: Local users are those managed within the cluster, that is, without using a remote authentication service. Local users are created with a password to access the management GUI, and/or assigned an SSH key pair (public/private) to access the CLI.
• Remote authentication: Remote users are defined and authenticated by a remote authentication service. The remote authentication service enables integration of the Storwize V7000 with LDAP (or Microsoft Active Directory) to support single sign-on. We will take a closer look at the remote authentication method later in this unit.
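As a hedged illustration of creating a local user from the CLI (the user name and password are placeholders; depending on the release, an SSH public key can be associated with the user instead of, or in addition to, a password):

IBM_Storwize:V7000:admin> mkuser -name admin1 -usergrp Administrator -password Passw0rd1
User, id [1], successfully created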


Storwize V7000 Gen2 Technician port initialization

• Initialization must be directly configured using the Technician port (Ethernet port 4, marked with a T).
ƒ The port runs a dedicated DHCP server.
• Configure an Ethernet port on the personal computer to enable DHCP.
ƒ If DHCP cannot be used, configure the port statically:
í Static IPv4: 192.168.0.2
í Subnet mask: 255.255.255.0
í Gateway: 192.168.0.1
í DNS: 192.168.0.1
• Connect an Ethernet cable between the ports (personal computer port and technician port).
• Open a supported browser, which should automatically be directed to 192.168.0.1 for initial configuration of the cluster.

Figure 4-4. Storwize V7000 Gen2 Technician port initialization

In order to create a Storwize V7000 clustered system, you must initialize the system using the Technician service port. The technician port is designed to simplify and ease the initial basic configuration of the Storwize V7000 storage system. This process requires the administrator to be physically at the hardware site.
To initialize the V7000 Gen2 nodes, connect a personal computer (PC) to the Technician port (Ethernet port 4) on the rear of a node canister ─ only one node is required. This port can be identified by the letter “T”. The node canister uses DHCP to configure the IP and DNS settings of the personal computer. If your PC is not DHCP enabled, configure the IP addresses as follows:
• Static IPv4: 192.168.0.2
• Subnet Mask: 255.255.255.0
• Gateway: 192.168.0.1
• DNS: 192.168.0.1
After the Ethernet port of the PC is connected to the technician port, open a supported web browser. If the Storwize V7000 node has Candidate status, you are automatically redirected to the 192.168.0.1 initialization wizard. Otherwise, the service assistant interface is displayed.
The IBM Storwize V7000 Gen2 does not provide IPv6 IP addresses for the technician port.
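If the PC must be addressed manually, a hedged sketch of the static configuration (the interface and connection names below are assumptions that vary by workstation):

# Linux workstation (interface name eth0 is an assumption):
sudo ip addr add 192.168.0.2/24 dev eth0
sudo ip route add default via 192.168.0.1
# Windows workstation (connection name is an assumption):
netsh interface ip set address "Local Area Connection" static 192.168.0.2 255.255.255.0 192.168.0.1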


Welcome configuration wizard

• Specify as a new system or as expanding an existing system.
ƒ Accept certificate warnings as they appear.
í The warnings are not harmful to the system configuration.
• Specify the node cluster management IP address using IPv4 or IPv6.
• The wizard re-directs to the management IP address for system setup completion.

Figure 4-5. Welcome configuration wizard

The System Initialization wizard provides a few simple steps to initialize a new system or to expand an existing system using an IPv4 or IPv6 management address (you can use DHCP or statically assign one). The subnet mask and gateway are listed by default but can be changed, if required.
The most common cause of configuration errors is the inability to access and communicate because of an incorrect IP address, a wrong subnet mask, or an incorrect default gateway. Remember to validate the entries before you proceed.
The system generates the svctask mkcluster command to create the system cluster using the addresses as specified. The web server restarts to complete the initialization. This process might take several minutes. Once it is complete, disconnect the Ethernet cable from the Storwize V7000 Gen2 node's technician port and connect the same PC to the same network as the system. The system then redirects to the management IP address for completion of the system setup.


License agreement

• Open a supported browser (Mozilla Firefox, Microsoft Internet Explorer (IE), or Google Chrome).
ƒ Enable the browser with JavaScript support.
• Point the browser to the http://management_IP_address of the IBM Storwize V7000.
ƒ The system redirects http access automatically to https.
• Accept the License Agreement to continue.

Figure 4-6. License agreement

To complete the system basic configuration, open a supported browser to the management IP address of the IBM Storwize V7000 system. If the web browser fails to launch the GUI, you might need to enable JavaScript support in your web browser.
When launching the management GUI for the first time, a License Agreement is presented. This differs from previous code releases, where the license agreement was part of the System Setup wizard. You must accept the Storwize V7000 product license agreement to continue with the system configuration.


Storwize V7000 GUI user ID and password

• Storwize V7000 maintains factory-set default credentials.
ƒ Default user name: superuser
í Enter the default password for user superuser: passw0rd
í The system prompts you to change the default password.
• It is recommended to maintain IT security policies that enforce the use of password-protected user IDs.
ƒ Click Login.

Figure 4-7. Storwize V7000 GUI user ID and password

Once the user accepts the License Agreement, the login screen is displayed. Storwize V7000 maintains a factory-set default user name (superuser) and password (passw0rd, with a zero in place of the letter “o”). The superuser ID is displayed by default. Once you have entered the default password, you are immediately prompted to change it. It is highly recommended to maintain IT security policies that enforce the use of password-protected individual user IDs rather than generic, shared IDs, such as superuser, admin, or root.


System Setup: Welcome

[Slide screen capture: the System Setup wizard provides guided steps to set up the system configuration.]

Figure 4-8. System Setup: Welcome

The Welcome to System Setup page of the System Setup wizard displays a list of required components and content that needs to be available during the system setup configuration. If you do not have this information ready, or choose not to configure some of these settings during the installation process, you can configure them later through the management GUI.
The system name field displays the cluster name that was created during the system initialization using the Service Assistant interface. You may choose to change the system name, which applies the name change to the SA as well. In a data center environment, it might be best to define system names that reflect the system use or client.


System Setup: Licensed Functions

• Enter the total purchased capacity for your system as authorized by your license agreement.

[Slide screen capture: the wizard issues the svctask chlicense command to change the licensed functions. The system creates warning messages when the capacity used for licensed functions is above 90% of the license settings.]

Figure 4-9. System Setup: Licensed Functions

The Licensed Functions page is where you specify the additional licenses required to expand the base functions of the Storwize V7000. These include the encryption license (based on a size equal to the number of enclosures), external storage virtualization, FlashCopy, Global and Metro Mirror, and Real-time Compression. Each license option supports capacity-based licensing that is based on a number of terabytes (TB).
Administrators are responsible for managing use within the terms of the existing licenses. They are also responsible for purchasing extra licenses when existing license settings are no longer sufficient. In addition, the system creates warning messages if the capacity used for licensed functions is above 90% of the license settings that are specified on the system.
When the Apply and Next > button is clicked, the GUI generates the necessary CLI commands to update the license settings.
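A hedged sketch of the command the GUI generates (the capacity values, in TB, are placeholders, and the exact parameter set depends on the software release):

IBM_Storwize:V7000:admin> chlicense -virtualization 10
IBM_Storwize:V7000:admin> chlicense -flash 20 -remote 20 -compression 5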


System Setup: Date and Time

[Slide screen capture: the date and time can be entered manually or taken from an NTP server IP address. Using an NTP server is recommended to provide a common timestamp for troubleshooting issues. All executed tasks generate the commands required to complete the task.]

Figure 4-10. System Setup: Date and Time

The date and time can be set manually or with an NTP server IP address. It is highly recommended to configure an NTP server, which allows you to maintain a common time stamp for troubleshooting across all of your SAN and storage devices.
All executed tasks, such as the Apply and Next > option, generate the commands that are used to achieve the desired settings specified in the wizard panels. You can click the View more details hyperlink to view the specific commands issued.
At this time, you cannot choose to use the 24-hour clock. You can change to the 24-hour clock after you complete the initial configuration.
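A minimal sketch of the equivalent CLI step, assuming an NTP server at a placeholder address:

IBM_Storwize:V7000:admin> chsystem -ntpip 9.10.11.12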


System Setup: Encryption

• IBM Storwize V7000 supports hardware and software encryption (license required).
ƒ Can be activated automatically during System Setup, or manually at a later date.

Figure 4-11. System Setup: Encryption

IBM Storwize V7000 supports hardware and software encryption. There are no “trial” licenses for encryption, on the basis that when the trial ran out, access to the data would be lost. For either hardware or software encryption to be enabled at a system level, you must have purchased an encryption license before you activate the function.
Activation of the license can be performed in one of two ways, either automatically or manually, and can be done during System Setup or later.


System Setup: System Location

• Enter the address where the system is physically located.

Figure 4-12. System Setup: System Location

As part of the Call Home setup, you specify the address where the system is physically located. This information is used by IBM service personnel for troubleshooting on-site service requirements and for shipment of parts.


System Setup: Contact Details

• Enter the point-of-contact person to be notified in the event that Call Home service is required.

Figure 4-13. System Setup: Contact Details

Next, enter the contact details for the person who will be contacted in the event that Call Home service is required.


System Setup: Email Servers

• Enter the IP address of the SMTP email server.
ƒ Use Ping to verify that there is network access to the email server (SMTP server).

Figure 4-14. System Setup: Email Servers

Enter the SMTP server IP address through which Call Home and event notifications are routed. You can click the Ping button, which verifies that there is network access to the email server (SMTP server). Ensure that the email server accepts SMTP traffic, because some enterprises do not permit SMTP traffic, especially if the destination email address is outside the enterprise.
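A hedged sketch of the equivalent CLI configuration (the server address and recipient are placeholders, and the notification flags shown may vary by release):

IBM_Storwize:V7000:admin> mkemailserver -ip 9.20.30.40 -port 25
IBM_Storwize:V7000:admin> mkemailuser -address storage-admin@example.com -usertype local -error on -warning on -info on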


System Setup: Summary

• The Summary view can vary depending on the initial configuration options chosen.

Figure 4-15. System Setup: Summary

Review the setting values in the Summary panel. You can use the Back button to make any modifications. Once you click Finish, the system setup of the initial configuration is complete. The view depends on the initial configuration options chosen.


Add storage enclosure

[Slide screen captures: (1) click the empty box in the center of the System Overview screen; (2) select the control enclosure from the Add Enclosures panel; (3) review the summary; (4) finish.]

Figure 4-16. Add storage enclosure

The System Overview window appears, showing that your system configuration is complete; the next step is adding the storage enclosure. Click the empty box in the center of the screen, and the add storage enclosure wizard starts. From the Add Enclosures panel, select the available enclosures to be added to the system and click Next. Review the summary panel and click Finish to complete the add enclosure procedure. Click Close when the completion panel opens.


System Setup complete

• The system is now ready to configure and virtualize storage resources.

Figure 4-17. System Setup complete

The System Overview window appears, showing that your system configuration is complete. At this point, you are ready to configure and virtualize storage system resources such as pools, hosts, and volumes.


Update might be available


• The software on the Storwize V7000 is upgraded with concurrent code
upgrade (CCU).
ƒ The software package includes software updates for V7000 block storage.
í All firmware on the FRUs (excluding drives) is automatically managed to ensure that it is
at a supported level for use with a given release of V7000 software.
ƒ The system also initiates upgrades when the V7000 software is upgraded, or when an
FRU with incorrect firmware installed on it is inserted into the system.

Software upgrade
is reviewed later


Figure 4-18. Update might be available

Once the system setup is complete, the Storwize V7000 GUI might display a reminder that an
update is available. This notification underscores the importance of maintaining the latest
software code. The Settings > System > Update System link redirects you to check for the
latest version.


Storwize V7000 logical planning topics


• Storwize V7000 physical planning
• Storwize V7000 logical planning
ƒ Storwize V7000 management interfaces
í Management graphical user interface (GUI)
í Command-line interface (CLI)


Figure 4-19. Storwize V7000 logical planning topics

This topic explores the IBM Storwize V7000 management graphical user interface (GUI) and its
access mechanisms. This topic also describes the steps that are required to configure an SSH
(PuTTYgen) connection and create user authentication for access.


Storwize V7000 GUI dynamic system view


Figure 4-20. Storwize V7000 GUI dynamic system view

With the release of the Spectrum Virtualize V7.4 code, the IBM Storwize V7000 management GUI
welcome screen changed from what was formerly known as the Overview panel to a dynamic
system panel, with enhanced functions available directly from the welcome screen.
These panels group common configuration and administration objects and present individual
administrative objects to GUI users. They provide common, unified procedures to manage all
these systems in a similar way, allowing administrators to simplify their operational procedures
across all systems.


Storwize V7000 GUI dynamic menu

Hover the mouse cursor over the function icons to get the menu for each function.


Figure 4-21. Storwize V7000 GUI dynamic menu

The dynamic menu icons are grouped to display the associated menu options available. The
dynamic menu is fixed on the left side of the management GUI window and is accessible from
any page inside the GUI. As of the V7.4 release, the dynamic menu offers seven menu options;
there is no longer an Overview menu or a System Details option to choose. To browse by using
this menu, hover the mouse pointer over the various icons and choose a page that you want to
display. You can use the various function icons to examine available storage, create storage pools,
create host objects, create volumes, map volumes to host objects, create user access, update
system software, and configure network settings.


GUI function Icon: Monitoring


• The Monitoring System menu is the
default home page for the Storwize
V7000 GUI.
• It has three menu options:
ƒ System (home page)
í Hardware appears in its physical form.
ƒ Events
í View and manage storage events reported.
ƒ Performance
í View system performance statistics.
• Part of the home page is also the
Actions menu where system level
tasks are performed and
information can be obtained.


Figure 4-22. GUI function Icon: Monitoring

The System panel (the default panel) allows you to monitor the entire system capacity
as well as view details on control and expansion enclosures and various hardware components of
the system.
The hardware is represented in its physical form, with all component indicators providing a
dynamic view of your system.
For systems with multiple expansion enclosures, the number indicates the total of detected
expansion enclosures that are attached to the control enclosure. Components can be selected
individually to view their status and properties in detail. In addition, you can view individual
hardware components and monitor their operating state.


Monitoring: System events


• Select Monitoring > System > Events to review events that occur in
the system.
CSV file can be created
for export


Figure 4-23. Monitoring: System events

Select Monitoring > System > Events to track all informational, warning, and error messages that
occur in the system. You can apply various filters to sort them, or export the listed information to
an external comma-separated values (CSV) file.


Monitoring: Performance
• Select Monitoring > System > Performance to view system statistics
in MBps or IOPS.


Figure 4-24. Monitoring: Performance

Select Monitoring > System > Performance to capture various reports of general system
statistics with regard to processor (CPU) utilization, host and internal interfaces, volumes, and
MDisks. You can switch between MBps and IOPS.
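Comparable real-time counters are available from the CLI; a sketch assuming the lssystemstats
command (and lsnodecanisterstats for per-node values), with output omitted:

IBM_Storwize:V009B:superuser>lssystemstats

The command reports current values for statistics such as cpu_pc, fc_mb, fc_io, vdisk_mb, and
mdisk_io.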


Actions menu
• The Actions menu lists actions to rename the system, update the
system, or power off the entire system.
ƒ Located in the upper-left corner of the home screen of the Storwize
V7000 GUI.
ƒ The Actions menu can also be accessed by right-clicking anywhere in the
blank space of the home screen GUI.
• Each enclosure also has its own Actions menu, which can be
displayed by right-clicking on an enclosure.


Figure 4-25. Actions menu

The Actions menu provides a list of menu actions to:
• Rename System: Changes the host name and iSCSI IQN name.
• Update System: Provides a link to the firmware update wizard.
• Power Off: Powers off the entire system.
You can also view the Actions menu by right-clicking anywhere in the System view window.
Each enclosure also has its own Actions menu, which can be displayed by right-clicking on an
enclosure.
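For reference, renaming the system can also be done from the CLI; a minimal sketch assuming
the chsystem command and a hypothetical system name (keep in mind that the system name is
part of the iSCSI IQN):

IBM_Storwize:V009B:superuser>chsystem -name V7000_PROD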


Storwize V7000 system details


• Displays the system information: MTM, serial number, and FRU


Figure 4-26. Storwize V7000 system details

The dynamic system view in the middle of the System panel can be rotated by 180° to view the
front and rear of the nodes. When you click a specific component of a node, a pop-up window
indicates the details about the unit and the components installed.
You can right-click on the Storwize V7000 and select Properties to view general details such as the
product name, status, machine type and model number, serial number, and FRU part number.
This context menu also provides additional options to rename a node, power off the node (without
the option for remote start), remove the node or enclosure from the system, or list all volumes
associated with the system.


Rename Storwize V7000 node


• Object names serve as a quick reference.
ƒ Storwize V7000 processing is done by object IDs and not object names.
• Right-click on a node and select Rename to rename the node.


Figure 4-27. Rename Storwize V7000 node

Once a node has been added to the cluster, you can rename it by right-clicking on the node
and selecting Rename.
In general, changing an object name is not a concern, because Storwize V7000 processing is done
by object IDs and not object names. One exception is changing the name of a node when the
iSCSI protocol is being used for host data access to the node. Be aware that the Storwize V7000
node name is part of the node’s IQN name. Thus, changing the node name requires more planning,
as the iSCSI host connections need to be updated; otherwise, iSCSI-connected hosts might lose
access to their volumes.
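A minimal CLI sketch of the same rename, assuming the chnodecanister command used by the
Storwize family (SAN Volume Controller uses chnode) and hypothetical node names:

IBM_Storwize:V009B:superuser>chnodecanister -name node1_left node1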


Storwize V7000 I/O hardware information

Storwize V7000-specific logical WWPN scheme


Figure 4-28. Storwize V7000 I/O hardware information

For a quick view of a specific adapter, hover the mouse pointer over any component in the rear of
the node to see its status, WWPN, and speed. For more detailed information, right-click to open the
Properties view.
Select View and choose the option Fibre Channel Ports to see the list and status of available FC
ports with their WWPNs. The View option also allows you to display Storwize V7000 expansion
enclosures and configured Ethernet ports.
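The port inventory is also available from the CLI; a sketch assuming the lsportfc command, with
output omitted:

IBM_Storwize:V009B:superuser>lsportfc

The listing includes each FC I/O port with its WWPN, speed, and status, which is useful when
preparing zoning documentation.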


Modify Memory
• Displays the total amount of memory allocated to the I/O group for
certain service features


Figure 4-29. Modify Memory

The Modify Memory option displays the default memory that is available for Copy Services or VDisk
mirroring operations. This information can be modified to allocate additional memory to maintain
efficient memory for certain uses of routine and advanced services on the Storwize V7000 system.
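A minimal CLI sketch of the same change, assuming the chiogrp command with its -feature and
-size parameters (here raising the FlashCopy bitmap memory of io_grp0 to a hypothetical 40 MB):

IBM_Storwize:V009B:superuser>chiogrp -feature flash -size 40 io_grp0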


Storwize V7000 GUI status indicators


• Status indicators that appear at the bottom of the window provide
information about:
ƒ Capacity usage
ƒ Performance-based bandwidth in megabytes per second, I/O per second
(IOPS), and latency
ƒ Health status of the system
• Running tasks: Currently ongoing tasks and recently completed tasks
• Status indicators are visible from all windows in the GUI.


Figure 4-30. Storwize V7000 GUI status indicators

The status indicators provide information about capacity usage; performance in bandwidth, IOPS,
and latency; and the health status of the system. The status indicators are visible from all panels in
the GUI. They also show the running tasks, both currently ongoing and recently completed.


Storwize V7000 system capacity

Physical storage: initial amount of storage that was allocated (fixed capacity)
Virtual storage: allocated amount of storage that was used

Switch between physical and virtual capacity view


Figure 4-31. Storwize V7000 system capacity

The cylindrical shape that is located around the node displays the capacity utilization for the entire
system. This same information is also indicated from the left status indicator. You can switch
between views to display information about the overall physical capacity (the initial amount of
storage that was allocated) as well as the virtual capacity (with thin provisioned storage, volume
size is dynamically changed as data grows or shrinks, but you still see a fixed capacity).


Storwize V7000 GUI overview


• Overview panel illustrates a task flow of how storage is provisioned as
well as existing configuration.

Click the Overview hyperlink to view the modified Overview panel.


Figure 4-32. Storwize V7000 GUI overview

A modified Overview panel is accessible by clicking the Overview hyperlink in the top-right corner of
the System panel. The Overview panel offers a similar structure as in previous versions that
illustrates a task flow of how storage is provisioned, as well as existing configuration. Resources
managed by the cluster are itemized and updated dynamically. You can click on any option to be
redirected to the selected panel.


Storwize V7000 GUI: Access menu


• The Access function has two menu options:
ƒ Users option defines users, user groups, and user roles.
ƒ Audit log tracks action commands that are issued through an SSH session
or through the management GUI.


Figure 4-33. Storwize V7000 GUI: Access menu

The Storwize V7000 users and their access levels are defined and managed through the
Access menu. The Users panel allows you to specify the name and password of a user; delete
users; change and remove passwords; and add and remove Secure Shell (SSH) keys (if an SSH
key has been generated). An SSH key is not required for CLI access; you can choose to use either
an SSH key or a password for CLI authentication.
A Storwize V7000 clustered system maintains an audit log of successfully executed commands,
whether issued through the management GUI or the CLI. It also indicates which users performed
particular actions at certain times.
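The audit log can also be dumped from the CLI; a sketch assuming the catauditlog command with
its -first parameter (output omitted):

IBM_Storwize:V009B:superuser>catauditlog -first 5

Each entry records the timestamp, the user name, the source IP address, and the command that
was issued.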


User authentication methods


User authentication to control access to the web-based management
interface (GUI) and the CLI
• Local authentication
ƒ Performed within the Storwize
V7000 system
ƒ Requires user name and
password, SSH key
authentication, or both

• Remote authentication
ƒ Access performed from a remote server
ƒ Requires validation of the user’s permission to access

login as: superuser
superuser@10.208.2.50’s password:
IBM_Storwize:V009B:superuser>lscurrentuser
name superuser
role SecurityAdmin
IBM_Storwize:V009B:superuser>


Figure 4-34. User authentication methods

Part of an administrator’s role is managing the authentication of users. In addition to authenticating
with a private/public SSH key pair, the Storwize V7000 supports the following methods of user
authentication to control access to the web-based management interface (GUI) and the CLI:
• Local authentication is performed within the Storwize V7000 and uses local CLI
authentication methods: Secure Shell (SSH) key authentication and user name and password.
• Remote authentication requires validation of a user’s permission to access the Storwize
V7000’s management CLI or GUI. Access validation is performed at a remote authentication
server. That is, except for the superuser account, there is no need to administer local user
accounts on the Storwize V7000.


Access menu: User Groups


• The User Groups window represents access levels defined by roles.
ƒ Roles define a specific set of privileges on the system.
ƒ Local users must be part of a user group.
• Default user “Superuser” belongs to SecurityAdmin Group.
• The role is specified while creating a user group.

A local user can only belong to a single group.
A user must have administrator rights to create a new user.


Figure 4-35. Access menu: User Groups

Administrators can create role-based user groups where any users that are added to the group
adopt the role that is assigned to that group. Roles apply to both local and remote users on the
system and are based on the user group to which the user belongs. A local user can only belong to
a single group; therefore, the role of a local user is defined by the single group to which that user
belongs.
The User Groups navigation pane lists the user groups that are predefined in the system. To create
a user group, you must define its role. Once the group is created, you can determine the
authentication type and the number of users assigned to it.


User group roles

SecurityAdmin: Access to all the functions provided by both the management GUI
and CLI, including those related to managing users, user groups,
and authentication

Administrator: Same as SecurityAdmin, except those related to managing users,
user groups, and authentication

CopyOperator: Authority to start, modify, change direction, and stop FlashCopy
mappings and Remote Copy relationships, but cannot create or
delete definitions

Service: A limited command set related to servicing the cluster, plus
access to all the functions associated with the Monitor role

Monitor: User does not have the authority to change the state of the cluster
or cluster resources

The user group assigned to a user controls the role or the scope of
operational authority granted to that user.

Figure 4-36. User group roles

There are five default user groups and roles. When adding a new user to a group, the user must be
associated with one of the corresponding roles:
• Security Administrator: User has access to all the functions provided by both the management
GUI and CLI including those related to managing users, user groups, and authentication.
• Administrator: User has access to all the functions provided by both the management GUI and
CLI except those related to managing users, user groups, and authentication.
• Copy Operator: User has the authority to start, modify, change direction, and stop FlashCopy
mappings and Remote Copy relationships at the standalone or consistency group level, but
cannot create or delete definitions. The user has access to all the functions associated with
monitor role.
• Service: User has a limited command set related to servicing the cluster. It is designed primarily
for IBM service personnel. The user has access to all the functions associated with monitor
role.
• Monitor: User has access to all information-related panels and commands, can back up system
configuration metadata, manage their own password and SSH key, and issue commands related
to diagnostic data collection. The user does not have the authority to change the state of the
cluster or cluster resources.


Managing user authentication


• Select Access > Users to perform user administration.
ƒ Create new users; delete, change, and remove passwords; plus add and remove
SSH keys.
ƒ With a valid password and username, users are allowed to log in to both GUI and
CLI.

Password: GUI and CLI access; SSH key: CLI access
Superuser SSH key authentication required for access to the CLI


Figure 4-37. Managing user authentication

When a Storwize V7000 clustered system is created, the authentication settings default to local,
which means that the Storwize V7000 contains a local database of users and their privileges. The
Access > Users option can be used to perform user administration, such as creating new users,
applying password administration, plus adding and removing SSH keys. Authorized access to the
GUI is provided for the default superuser ID, which belongs to the SecurityAdmin user group; the
local superuser account can therefore be used to create the other user accounts on the system.
With a valid user name and password, users can log in to both the GUI and the CLI with the
defined access-level privileges. If a password is not configured, the user will not be able to log in
to the GUI.
SSH keys are not required for CLI access; you can choose either an SSH key or a password for
CLI authentication. The CLI can be accessed with a pair of public and private SSH keys. The public
key is stored in the cluster as part of the user creation process. In order for the superuser to have
key-based access to the CLI, the SSH public key must be uploaded. We will discuss authentication
using SSH keys later in the CLI topic.


Create new user and assign user group

Must be associated with only one user group
Create additional users with names of up to 256 characters


Figure 4-38. Create new user and assign user group

User roles are predefined and cannot be changed or added to. However, an administrator does
have authorization to create new user groups and assign a predefined role to each group, for
example a group similar to SecurityAdmin but with lesser authority.
To add an additional local user, select the Create User option:
• Enter the Name (user ID) that you want to create and then enter the password twice. A user
name can be up to 256 characters and cannot contain a colon, comma, percent sign, or
quotation marks (double or single).
• Select the user Authentication Mode. Select the access level that you want to assign to the
user.
• Select the User Group to which the user belongs. A local user must be associated with one
and only one of the user groups. The Security Administrator (SecurityAdmin) is the maximum
access level.
• If a local user requires access to the management GUI or CLI, then a password, an SSH key, or
both are required.
▪ Enter and verify the password.
▪ Select the location from which you want to upload the SSH public key file that you created
for this user.
• Click Create.
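A minimal CLI sketch of the same task, assuming the mkuser command and hypothetical user
details (the response line shown is illustrative of the success message format):

IBM_Storwize:V009B:superuser>mkuser -name TeamAdmin -usergrp Administrator -password passw0rd
User, id [1], successfully created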


Remote authentication configuration (1 of 2)


• Click Configure Remote
Authentication.
• Select the authentication type:
ƒ IBM Tivoli Integrated Portal
ƒ LDAP
• Specify the LDAP server IP address.
• Configure LDAP type:
ƒ Select Microsoft Active Directory.
ƒ For an OpenLDAP server, select Other.
• Select Security:
ƒ Choose None if your LDAP server does not
require a secure connection.
ƒ Choose Transport Layer Security if it does; the
LDAP server’s certificate is configured later.


Figure 4-39. Remote authentication configuration (1 of 2)

IBM Storwize V7000 remote authentication using LDAP is supported. This enables authentication
with a domain user name and password instead of a locally defined user name. If the enterprise
has multiple Storwize V7000 clusters, user names no longer need to be defined on each of these
systems; centralized user management is performed at the domain controller level instead of on
the individual Storwize V7000 clusters.
Before configuring authentication for a remote user, first verify that the remote authentication
service is configured for the SAN management application. You also need to configure remote
authentication before you can create a new remote user.
To configure the remote authentication service, navigate to the Directory Services panel and click
Configure Remote Authentication. The supported types of LDAP servers are IBM Tivoli Directory
Server, Microsoft Active Directory (MS AD), and OpenLDAP (running on a Linux system).
The user that is authenticated remotely by an LDAP server is granted permission on the Storwize
V7000 system according to the role that is assigned to the group of which the user is a member.
That is, the user group must exist with an identical name on the Storwize V7000 and on the LDAP
server for the remote authentication to succeed.
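For reference, the LDAP configuration can also be scripted; a sketch assuming the chldap and
mkldapserver commands of the Spectrum Virtualize CLI and the example server address used in
this unit:

IBM_Storwize:V009B:superuser>chldap -type ad -security tls
IBM_Storwize:V009B:superuser>mkldapserver -ip 10.6.5.30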


Remote authentication configuration (2 of 2)


• Select Advanced Settings:
ƒ For an MS AD server, enter the credentials of an existing user on the LDAP server
with permission to query the LDAP directory.
ƒ Leave the credential fields blank if your LDAP server supports anonymous bind.


Figure 4-40. Remote authentication configuration (2 of 2)

These advanced settings control how the system binds to the LDAP directory when querying for
users. For an MS Active Directory server, enter the credentials of an existing user on the LDAP
server with permission to query the LDAP directory. If your LDAP server supports anonymous bind,
leave the credential fields blank. (The general LDAP considerations described under Figure 4-39
apply here as well.)


Add remote user group (enable member logins)


Figure 4-41. Add remote user group (enable member logins)

In this example, an MS Active Directory server is located at 10.6.5.30. A user group by the name of
IBM_Storage_Administrators has been defined to contain two users: SpunkyAdmin and
WiskerAdmin. The domain name is reddom.com.


Configure remote user group


• Select Access > Users > Create User Group.

Ensure LDAP is
enabled to allow
remote user access


Figure 4-42. Configure remote user group

In this example, the IBM_Storage_Administrators user group is created, as defined in the MS
Active Directory. In the Create User Group panel, specify the group name, select a role, and
ensure that the Remote Authentication box is checked to enable the group for LDAP
authentication before creating the new user group.
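A minimal CLI sketch of the same step, assuming the mkusergrp command with its -remote flag:

IBM_Storwize:V009B:superuser>mkusergrp -name IBM_Storage_Administrators -role Administrator -remote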


Remote user1 login (GUI and CLI) examples

login as: SpunkyAdmin
SpunkyAdmin@10.208.2.50’s password:
Last login: Fri Apr 11 12:13:35 2015 from 10.208.2.1
IBM_Storwize:V009B:SpunkyAdmin>lsuser
id name password ssh_key remote usergrp_id usergrp_name
0 superuser yes yes no 0 SecurityAdmin
1 TeamAdmin yes yes no 1 Administrator

IBM_Storwize:V009B:SpunkyAdmin>lscurrentuser
name SpunkyAdmin
role Administrator
IBM_Storwize:V009B:SpunkyAdmin>

Figure 4-43. Remote user1 login (GUI and CLI) examples

The user group IBM_Storage_Administrators is now listed in the User Groups filter list, and
remote access is enabled. User SpunkyAdmin of the IBM_Storage_Administrators group, which is
defined on the MS Active Directory server, is able to log in to both the GUI and the CLI using the
network-defined user name and password. Because this user group is defined to support remote
authentication, the users of this group are not defined locally in the Storwize V7000 system, so
they do not appear in the lsuser output; use the lscurrentuser command to view the currently
logged-in remote user.


Remote users centralized management


• For multiple Storwize V7000s, define users at the domain controller
for centralized management of user IDs and passwords.

login as: WiskerAdmin@reddom.com
WiskerAdmin@10.208.2.50’s password:
Last login: Fri Apr 11 12:16:14 2015 from 10.208.2.1
IBM_Storwize:V009B:WiskerAdmin@reddom.com>lscurrentuser
name WiskerAdmin@reddom.com
role Administrator
IBM_Storwize:V009B:WiskerAdmin@reddom.com>lsusergrp
id name role remote
0 SecurityAdmin SecurityAdmin no
1 Administrator Administrator no
2 CopyOperator CopyOperator no
3 Service Service no
4 Monitor Monitor no
5 IBM_Storage_Administrators Administrator yes
IBM_Storwize:V009B:WiskerAdmin@reddom.com>


Figure 4-44. Remote users centralized management

A user can log in with either the short name or the fully qualified user name. Defining user
credentials at the domain controller enables centralized user management. More efficiency is
realized, as additions and removals of user credentials only need to be performed once, on the
LDAP server.


CLI SSH keys encrypted communications

1. Generate public/private keys for the CLI with PuTTYgen (the private key stays
with PuTTY on the workstation).
2. Install the public key in the Storwize V7000 cluster.
3. PuTTY uses the private key to establish secure communications with the
Storwize V7000, which holds the matching public key.


Figure 4-45. CLI SSH keys encrypted communications

To use the CLI, the PuTTY program (on any workstation with PuTTY installed) must be set up to
provide the SSH connection to the Storwize V7000 cluster. The command-line interface (CLI)
commands use the Secure Shell (SSH) connection between the SSH client software on the host
system and the SSH server on the system cluster. For Windows environments, the Windows SSH
client program PuTTY can be downloaded.
A configured PuTTY session using a generated Secure Shell (SSH) key pair (private and public) is
needed to use the CLI with key authentication. The key pair is associated with a given user; the
user and its key association are defined by the superuser. The public key is stored in the system
cluster as part of the user definition process. When the client (for example, a workstation) tries to
connect and use the CLI, the private key on the client is used to authenticate against its public key
stored in the system cluster.
The CLI can also be accessed using a password instead of SSH keys. However, when invoking
commands from scripts, using the SSH key interface is recommended as it is more secure.
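On platforms with OpenSSH (an alternative to PuTTYgen, not covered further in this course), an
equivalent key pair could be generated as follows; the file name is hypothetical:

# Generate a 2048-bit RSA key pair; storwize_key is the private key and
# storwize_key.pub is the public key to be uploaded to the cluster.
ssh-keygen -t rsa -b 2048 -C "username@hostname" -f storwize_key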


PuTTYgen key generation


• PuTTYgen SSH-2 RSA (default) offers the best key size and
security level.
ƒ Separated into modules and consists of three protocols
working together:
í SSH Transport Layer Protocol (SSH-TRANS)
í SSH Authentication Protocol (SSH-AUTH)
í SSH Connection Protocol (SSH-CONN)

Only choose SSH-1 if the server/client you want to connect
to does not support SSH-2. Keep the default values, but a
key size of 2048 bits is recommended.


Figure 4-46. PuTTYgen key generation

Since most desktop workstations are Windows-based, we are using PuTTY examples.
To generate a key pair on the local host, you need to specify the key type. PuTTYgen defaults
to SSH-2 RSA, which is recommended because it provides a better security level.
SSH-2 is separated into modules and consists of three protocols working together:
• SSH Transport Layer Protocol (SSH-TRANS)
• SSH Authentication Protocol (SSH-AUTH)
• SSH Connection Protocol (SSH-CONN)
The SSH-TRANS protocol is the fundamental building block, which provides the initial connection,
packet protocol, server authentication, basic encryption services, and integrity services. PuTTYgen
supports key sizes of up to 4096 bits and defaults to 1024; however, it is recommended to set this
to a minimum of 2048. Once you have chosen the type of key pair to generate, click Generate.
This procedure generates random characters used to create a unique key.
A helpful tip is to move the cursor over the blank area in the Key Generator window until the
progress bar reaches the far right. Movement of the cursor causes the keys to be generated faster;
the progress bar moves faster with more mouse movement.


Save the generated keys

Public key: \Keys\PUBLICKEY.PUB
Private key: \Keys\PRIVATEKEY.PPK
Key comment: username@hostname
SSH keys are to be unique for a user.


Figure 4-47. Save the generated keys

The result of the key generation shows the public key (in the box labeled Public key for pasting into
OpenSSH authorized_keys file).
The Key comment enables you to distinguish multiple keys; therefore, it is generally recommended
to set this to username@hostname for easy identification.
The Key passphrase is an additional way to protect the private key and is never transmitted over
the Internet. If you set a passphrase, you will be asked to enter it before any connection is made
via SSH. If you cannot remember the key passphrase, there is no way to recover it.
Save the generated keys using the Save private key and Save public key buttons respectively. You
will be prompted for the name and location of the file in which to place the key. The default location
is C:\Support Utils\PuTTY; if another location is chosen, make a record of it for later reference. The
public key can be saved in any format, such as *.PUB or *.txt; it is stored in the cluster as part of
user management. However, the private key uses the PuTTY format of *.PPK, which is required for
authentication.
authentication.

© Copyright IBM Corp. 2012, 2016 4-48


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 4. System initialization and user authentication

Uempty

User with SSH key authentication


• From the management GUI, navigate
to Users > Create User.
ƒ New user:
í Enter name and password of the user.
ƒ Modify user:
í Right-click on the user and select
Properties.
ƒ Click the Browse button to upload the
public key file.
ƒ Click Create.
ƒ The CLI mkuser command is generated to
define or add the user with SSH key
authentication for CLI login with no
password required.


Figure 4-48. User with SSH key authentication

The SSH-AUTH protocol defines three authentication methods: public key, host-based, and
password. Each SSH-AUTH method is used over the SSH-TRANS connection to authenticate itself
to the server.
Now that we have generated the SSH key pair, if a user requires CLI access for Storwize V7000
management through SSH, you must provide the user with a valid SSH public key file. The SSH
public key can also be configured later, after user creation; in this case a password for the user is
required.
To upload the SSH public key for an existing user, right-click on the user and select Properties.
From the Create User pane, click the Browse button, which opens Windows Explorer. Navigate to
the \Keys folder to upload the public key file, and click Create. The CLI mkuser command is
generated to define or add the user with SSH key authentication for CLI login with no password
required.
This is an optional feature for users and is not compulsory for Storwize V7000 management.
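A sketch of the generated command, assuming the mkuser command’s -keyfile parameter and
hypothetical names and paths; when issued from the CLI rather than the GUI, the public key file
must first be copied to a location the system can read:

IBM_Storwize:V009B:superuser>mkuser -name TeamAdmin2 -usergrp Administrator -keyfile /tmp/PUBLICKEY.PUB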


Create CLI session with SSH key authentication


• Open PuTTY.exe SSH client
ƒ Click Session (if required):
í Provide IP Address/DNS name.
í Accept default SSH Protocol port
22.
í Select SSH as Connection Type.
ƒ Navigate to Connection > SSH >
Auth.
í Click Browse next to the Private
key file for authentication field.
í Upload the Private.PPK file.
ƒ Return to the Session pane.
í Click Save > Open.
í If there is no mismatch, private key
authentication has been established
with no password requirement.

The .ppk file is not required once SSH
authentication has been established.


Figure 4-49. Create CLI session with SSH key authentication

Now that you have stored the public key in the Storwize V7000 cluster, you need to establish a
CLI SSH connection using the private key .PPK file. To do so, open the PuTTY client. From
the Category navigation tree, click Session, enter the management IP address or DNS host
name of the cluster, and accept the default port 22 that is used for the SSH protocol. Ensure that
SSH is selected as the connection type.
Next, select Connection > SSH > Auth. Since we are using a generated key pair, we will use the
private key that matches the corresponding public key. In the Private key file for authentication
field, use the Browse button to navigate to the location of the generated private .PPK file, or copy
and paste the file path into the field.
Once the session parameters are specified, return to the Session pane and provide a name to
associate with the new session environment definition in the Saved Sessions field. Click Save to
save the PuTTY session settings; the session then establishes SSH private key authentication over
the CLI SSH connection. PuTTY is a commonly used terminal client.
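For scripted use, the PuTTY suite’s plink command-line tool can reuse the same private key; a
sketch with a hypothetical key path and the management IP from the earlier examples:

rem Run a single CLI command non-interactively from a Windows batch file
plink -i C:\Keys\PRIVATEKEY.PPK superuser@10.208.2.50 lscurrentuser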


Accessing CLI from Microsoft Windows


• PuTTY client prompts for the user ID only.
ƒ The user ID must be established with a public key file.
ƒ SSH authentication validates access by matching the private key
with the public key.
• Install and set up the standard SSH client software on each system
that will be used to access the CLI.

login as: superuser
superuser@10.208.4.53’s password:
IBM_Storwize:V009B:superuser>lscurrentuser
name superuser
role SecurityAdmin
IBM_Storwize:V009B:superuser>

For Windows machines, use the portable, no-charge PuTTY software.
Download URL: http://www.putty.org/


Figure 4-50. Accessing CLI from Microsoft Windows

Provided that SSH authentication has been enabled, the PuTTY client prompts only for the
user ID at login. This is the same user ID that was authenticated with the public key file. Once the
login ID is entered, SSH authentication validates access in the form of the private key and
management IP address against the public key and user ID in the cluster.
In this example, we have issued the lscurrentuser command, which lists the user name by which
the current terminal is logged in.
The PuTTY SSH client software is available in portable form and requires no special setup. For
other operating systems, use the default or installed SSH clients.
Once SSH authentication has been established, at the next login using the PuTTY client you only
need to select the saved session name and click Load > Open to recall the saved
management IP address.


Command-line interface commands


• Complemented with logically consistent command-line syntax
ƒ action_argument parameters
• Listing information: Examples
ƒ lsvdisk
ƒ lshost AIXHOST
• Performing tasks: Examples
ƒ mkvdisk –size 10 –unit gb –name testvol1
ƒ chhost -name newname oldname
ƒ rmvdisk testvol1

Command grouping samples:
• ls => List information
• mk => Create object
• ch => Change object properties
• rm => Remove or delete object

svcinfo/svctask prefixes are no longer needed when you are
issuing a command. However, if they are still contained in scripts,
the scripts are still functional.


Figure 4-51. Command-line interface commands

The command-line interface (CLI) enables you to manage the Storwize V7000 by typing
commands. Based on the user’s privilege level, commands can be issued to list information and to
perform actions. The commands follow a logically consistent command-line syntax. The syntax of
a command is basically the rules for running the command, and it is important to understand how
to read syntax notation so that you can use a command properly.
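The naming convention is easiest to see in a short session; the following sketch uses hypothetical
pool and volume names, with illustrative output:

IBM_Storwize:V009B:superuser>mkvdisk -mdiskgrp Pool0 -size 10 -unit gb -name testvol1
Virtual Disk, id [12], successfully created
IBM_Storwize:V009B:superuser>chvdisk -name demovol1 testvol1
IBM_Storwize:V009B:superuser>rmvdisk demovol1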


Keywords
• System initialization
• Cluster system
• Service Assistant Tool
• Storwize V7000 GUI
• Secure Shell (SSH) key
• Stretched System
• Event notifications
• Directory Services
• Remote authentication
• Lightweight Directory Access Protocol (LDAP)
• Support package
• Upgrade test utility
• User group
• Remote user
• System audit log entry

Figure 4-52. Keywords

Listed are keywords that were used in this unit.


Review questions (1 of 2)
1. To initialize the Storwize V7000 node canisters a PC or
workstation must be connected to (blank) on the rear of a
node canister.

2. True or False: To access the Storwize V7000 GUI, a user
name with a password must be defined. To access the
Storwize V7000 CLI, a user name can be defined with a
password, an SSH key, or both.

3. Which of the following menu options will allow you to create
new users and delete, change, and remove passwords?
a. Settings
b. Monitoring
c. Access


Figure 4-53. Review questions (1 of 2)

Write your answers here:


1.
2.
3.


Review answers (1 of 2)
1. To initialize the Storwize V7000 node canisters, a PC or
workstation must be connected to the Technician port (T-Port) on the
rear of a node canister.
The answer is Technician port (T-Port).

2. True or False: To access the Storwize V7000 GUI, a user name
with a password must be defined. To access the Storwize V7000
CLI, a user name can be defined with a password, an SSH key, or
both.
The answer is true.

3. Which of the following menu options will allow you to create new
users and delete, change, and remove passwords?
a. Settings
b. Monitoring
c. Access
The answer is Access.



Review questions (2 of 2)
4. List the two administration management interface options
for IBM Storwize V7000.
5. List the two authentication mechanisms supported by IBM
Storwize V7000.
6. True or False: The CLI interface can only be accessed
using the Service Assistant IP address.


Figure 4-54. Review questions (2 of 2)

Write your answers here:


4.
5.
6.


Review answers (2 of 2)
4. List the two administration management interface options
for IBM Storwize V7000.
The answers are web browser-based GUI and SSH
protocol based command-line interface.

5. List the two authentication mechanisms supported by IBM
Storwize V7000.
The answers are local authentication and remote
authentication.

6. True or False: The CLI interface can only be accessed
using the Service Assistant IP address.
The answer is false. The CLI interface can be accessed
using the Storwize V7000 management IP.



Unit summary
• Summarize the concept of using the Storwize V7000 Technician port
and Service Assistant tool to initialize the system
• Identify the basic usage and functionality of IBM Storwize V7000
management interfaces
• Recall administrative operations to create user authentication for local
and remote users access to the Storwize V7000 system


Figure 4-55. Unit summary


Unit 5. Storwize V7000 storage provisioning
Estimated time
00:45

Overview
This unit identifies the provisioning and management of the Storwize V7000 internal storage
resources as well as external storage devices that are part of the SAN fabric.

How you will check your progress


• Checkpoint questions
• Machine exercises

References
IBM Storwize V7000 Implementation Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html


Unit objectives
• Summarize the infrastructure of Storwize V7000 block storage
virtualization
• Recall steps to define internal storage resources using GUI
• Identify the characteristic of external storage resources
• Summarize how external storage resources are virtualized for Storwize
V7000 management GUI and CLI operations
• Summarize the benefit of quorum disks allocation
• Recognize how external storage MDisk allocation facilitates I/O load
balancing across zoned paths
• Distinguish between Storwize V7000 hardware and software encryption


Figure 5-1. Unit objectives


Storage provisioning topics


• Storage virtualization infrastructure

• Storage logical building block

• Internal storage

• External storage

• Encryption


Figure 5-2. Storage provisioning topics

This topic examines the Storwize V7000 storage infrastructure and identifies its use of SAN block
aggregation to virtualize its resources.


Storwize V7000 terminology


System: Two node canisters in a control enclosure. Up to four control enclosures
can be clustered to form one system.
Managed disk (MDisk): A SCSI logical unit (also known as a LUN) built from an
internal or external RAID array.
Internal storage: Physical SAS drives within a Storwize V7000 control or expansion
enclosure used to create RAID arrays and managed disks.
External storage: Managed disks that are SCSI logical units (aka LUNs) presented by
storage systems that are attached to the SAN and managed by the system.
Virtualization: The act of creating a virtual (rather than actual) version of something,
including virtual computer hardware platforms, operating systems, storage devices,
and computer network resources.
Storage pool: A collection of MDisks providing real capacity for volumes.
SVC/CLI term: Managed disk group (MDG).
Volume: What the host operating system sees as a SCSI disk drive.
Storwize CLI term: Virtual disk (VDisk).
I/O group: A pair of nodes is known as an input/output (I/O) group.
Distributed RAID: Distributes data across a higher number of physical drives,
reducing the load on each individual drive from rebuild activity and increasing
performance, as data can be read from/written to more drives for a given I/O.
Quorum disk: The storage medium on which the configuration database is stored for
a cluster computing network.


Figure 5-3. Storwize V7000 terminology

This table defines terminologies that are used in this unit.


IBM System Storwize V7000


Storwize V7000 is implemented by using the symmetric virtualization approach

Diagram: On the left, in-band appliance (symmetric) virtualization: the host server
accesses a logical entity (volume), and the virtualization layer sits in the SAN data
path, acting as target toward the host and as initiator toward disk subsystems A and
C. On the right, out-of-band, controller-based (asymmetric) virtualization: a storage
controller virtualizes disk subsystem Z behind it. The asymmetric model will not be
covered in this course.

Figure 5-4. IBM System Storwize V7000

Two major approaches in use today for the implementation of block-level aggregation and
virtualization are Symmetric (In-band Appliance Virtualization) and Asymmetric (Out-of-band or
controller-based virtualization).
IBM Storwize V7000 is implemented by using symmetric virtualization: an in-band, SAN- or
fabric-based appliance approach. The Storwize V7000 control enclosure sits in the data path, and
all I/O flows through the device, which acts as both target (for I/O requests from the host) and
initiator (for I/O requests to the storage). The redirection is performed by issuing new I/O requests
to the storage.
The controller-based (asymmetric) approach offers high functionality, but it fails in terms of
scalability or upgradeability. The device is usually a storage controller that provides an internal
switch for external storage attachment. In this approach, the storage controller intercepts and
redirects I/O requests to the external storage as it does for internal storage. The actual I/O requests
are themselves redirected. Because of the nature of its design, there is no true decoupling with this
approach, which becomes an issue for the lifecycle of this solution, such as with a controller.
Only the fabric-based appliance solution offers an independent and scalable virtualization platform
that provides enterprise-class copy services. The fabric-based appliance is open for future
interfaces and protocols, which allows you to choose the disk subsystems that best fit your
requirements, and does not lock you into specific SAN hardware.

With the controller-based approach, there are data migration issues, such as how to reconnect the
servers to the new controller and how to reconnect them online without any effect on your
applications. When using this approach, if there is a need to replace a controller, it also indirectly
replaces the entire virtualization solution.


IBM Spectrum Virtualize


Adding intelligence to the storage network

(Diagram: an eight-node cluster, Node1 through Node8, presents volumes over the storage area network. Each node provides node-level cache of up to 128 GB and node-level flash of up to 4 SSDs per node, both model dependent. Managed disks (MDisks) are grouped into storage pools built from many vendors' systems: IBM, HDS, HP, EMC, Sun, NetApp, Fujitsu Eternus, Bull Storeway, NEC iStorage, Pillar Data, Texas Memory, Xiotech, Nexsan, Compellent, and more.)

Figure 5-5. IBM Spectrum Virtualize

IBM Storwize V7000 Gen2 provides appliance-based, in-band block virtualization, in which
intelligence, including Spectrum Virtualize advanced storage functions, is migrated from individual
(internal and external) storage devices to the storage network. Therefore, Storwize V7000 is a complete
virtualization solution offering flexibility, scalability, and redundancy.
IBM Spectrum Virtualize adds an abstraction layer to the existing SAN infrastructure, enabling
enterprises to centralize storage provisioning with a single point of control. The Spectrum Virtualize
approach is based on a scale-out cluster architecture that simplifies lifecycle management tasks.
Spectrum Virtualize allows for non-disruptive replacement of any part of the storage infrastructure,
including the Storwize V7000 devices themselves. It also simplifies the compatibility requirements that
are associated with heterogeneous server and storage environments. Because all advanced functions
are implemented in the virtualization layer, storage array vendors can be switched without
impact. This enables application server storage requirements to be articulated in terms of
performance, availability, or cost.
One of the significant benefits of this approach is that the Storwize V7000 control enclosure, a
virtualization engine, provides a common platform for IBM Spectrum Virtualize advanced functions.
The virtualization engine provides one place to perform, administer, and control functions like Copy
Services regardless of the underlying storage.


Storwize V7000 block-level virtualization


(Diagram: the storage model layers. The application layer (word processing, email, web, ERP, TSM) issues I/O requests (open, read, write, close) through databases (DBMS: DB2, Oracle) and file systems (UFS, NTFS, JFS) on the host. Block aggregation can occur at the host, network, or device sublayer; Storwize V7000 is implemented as a clustered appliance in the storage network layer, above the RAID storage devices.)


Figure 5-6. Storwize V7000 block-level virtualization

Virtualization at the disk layer is referred to as block-level virtualization, or the block aggregation
layer. The model splits the block aggregation layer into three sublayers. Block aggregation can be
realized within hosts (servers), in the storage network (storage routers and storage controllers), or
in storage devices (intelligent disk arrays). IBM’s implementation of a block aggregation solution is
the Storwize V7000, which is implemented as a clustered appliance in the storage network layer.
The key concept of virtualization is to decouple the storage from the storage functions that are
required in the storage area network (SAN) environment. This means abstracting the physical
location of data from the logical representation of the data. The virtualization engine presents
logical entities to the user and internally manages the process of mapping these entities to the
actual location of the physical storage.
Storwize V7000 block-level virtualization provides a layer of abstraction between the application
servers and the underlying physical storage systems. By having the virtualization layer reside
above the storage controller level, application servers can be configured to use virtual disks while
the physical disks (or disks surfaced by the RAID controllers) are hidden from the application
servers.


Storwize V7000 I/O virtualization structure


Volumes:
• Belong to one or more I/O groups
• Volume size 16 MB to 256 TB
• Maximum 8192 per system, 2048 per I/O group
• Dynamically expandable
• Thin-provisioned, compressed
Cluster:
• 1 to 4 node pairs (I/O Group 0 to I/O Group 3)
• Cache, Copy Services
Storage pools:
• Managed disks from disk systems (max. 128 MDisks per pool)
• Assign LUNs to storage pools
• Define extent size (16 MB to 8 GB)
*Quorum functionality is not supported on flash drives


Figure 5-7. Storwize V7000 I/O virtualization structure

The Storwize V7000 Gen2 hardware consists of control enclosures and expansion enclosures,
connected with wide SAS cables (four lanes of 6Gbit/s or 12Gbit/s). Each enclosure houses 2.5" or
3.5" drives. The control enclosure contains two independent control units (nodes) based on SAN
Volume Controller technology, which are clustered via an internal network.
The Storwize V7000 system can support up to four I/O groups (an 8-node system). Each I/O group
(node pair) manages its assigned volumes. The term I/O group denotes the group of volumes
managed by a specific node pair. A single I/O group can manage up to 2048 volumes, for a
maximum of 8192 in total with all four node pairs.
Each volume can be as small as 16 MB or as large as 256 TB, and can be dynamically resized
smaller or larger as needed.
The cluster manages a group of physical volumes called managed disks (MDisks). This is the
foundation of I/O virtualization, in which MDisks are grouped into storage pools. Managed disks
can also be LUNs presented by supported external disk subsystems.
The Storwize V7000 uses the storage from the storage pools to create virtual volumes. MDisks
can be segregated into managed disk groups for whatever reason makes sense. For example, you can
separate a RAID 5 group from a RAID 10 group, separate EMC disks from HDS disks, or separate
data that belongs to different customers or departments. Typically, like devices are placed into a group.

Therefore, when a volume is defined, you designate which node-pair handles the I/O (which I/O
group it belongs to) and which managed disk group you want as the physical storage of the data.
The mapping for which volumes are stored onto which managed disks is stored inside each node
and mirrored across all nodes so that all nodes know where all data is stored. You also can back up
this mapping externally to handle the unlikely event of losing all nodes.
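The configuration metadata, including this volume-to-MDisk mapping, can be saved with the svcconfig backup CLI command. A minimal sketch, reusing the Team50A cluster prompt from this unit's other examples (output abbreviated):

IBM_2076:Team50A:TeamAdmin>svcconfig backup
CMMVC6155I SVCCONFIG processing completed successfully

The resulting svc.config.backup.xml file on the configuration node can then be copied off the system for safekeeping.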


Spectrum Virtualize with Storwize V7000:


One complete solution
Hosts see thousands of disks:
• One device type
• One multipathing driver
• One management interface
• Server does not see the storage systems managed by the Storwize V7000
Disks from different vendors and tiers (NL-SAS, Enterprise, Flash):
• Different device types
• Different multipathing drivers
• Different management interfaces
• Adding a new storage system requires no additional maintenance to servers

Figure 5-8. Spectrum Virtualize with Storwize V7000: One complete solution

IBM Spectrum Virtualize supports different tiers of storage from different vendors, with different
interfaces and multipathing drivers. But the hosts see only one device type, one multipathing driver,
and one management interface, regardless of the number and types of storage controllers being
managed by the Storwize V7000.


Storage provisioning topics


• Storage infrastructure

• Storage logical building block


• Internal storage

• External storage

• Encryption


Figure 5-9. Storage provisioning topics

This topic examines the Storwize V7000 storage infrastructure and identifies its use of SAN block
aggregation to virtualize its resources.


Storwize V7000 logical building blocks


• Storwize V7000 uses basic storage units called managed disks (MDisks)
and collects them into one or more storage pools.
• Storage pools provide the physical capacity to create volumes (also known
as VDisks) for use by hosts.
• Volume space is then allocated as needed for data.
(Diagram: RAID-protected MDisks are grouped into storage pools, from which volumes are created.)

Figure 5-10. Storwize V7000 logical building blocks

A logical building block represents basic storage units called managed disks (MDisks), which are
added to storage pools to create virtualized storage resources that are presented to a host.


Storwize V7000 managed resources


• Storwize V7000 supports two different types:
ƒ Internal Array MDisk
í The internal RAID implementation inside the system takes drives and builds a
RAID array with protection against drive failures.
• 2076-524 supports twenty-four 2.5-inch drives
• 2076-12F/24F expansion enclosures (twelve 3.5-inch or twenty-four 2.5-inch drives)
ƒ External SAN-attached MDisk
í An external storage system provides the RAID function and presents a Logical
Unit (LU) to the Storwize V7000.
í Storage servers independent of the Storwize V7000 nodes


Figure 5-11. Storwize V7000 managed resources

A Storwize V7000 system can manage a combination of internal and supported external storage
systems. Internal storage is the RAID-protected storage that is directly attached to the system using
the drive slots in the front of the node or with the expansion enclosure. The Storwize V7000
automatically detects the drives that are attached to it and displays them within the GUI as internal
or external storage.
The external storage subsystems are independent back-end disk controllers that are discovered
on the same fabric and virtualized by the Storwize V7000 system.


Managed disks
• A managed disk must be protected by RAID to prevent loss of the entire
storage pool.
• An MDisk can be either a RAID array of internal storage or a logical unit
(LUN) from external storage.
• Each managed disk group can contain up to 128 managed disks and a
maximum of 4096 MDisks per system.
• MDisk is not visible to a host system on the SAN.

(Diagram: example arrays of member disks built from different drive classes: 3.2 TB 12 Gbps flash, 1.8 TB 10K RPM, and 2 TB 7.5K RPM drives.)

Figure 5-12. Managed disks

A managed disk (MDisk) refers to the unit of storage that the Storwize V7000 system virtualizes.
This unit might be a logical volume from an external storage array that is presented to the system, a
RAID array that is created on internal drives, or an external expansion that is managed by the
Storwize V7000 node. The node allocates these MDisks into various storage pools for different
usage or configuration needs.
If zoning has been configured correctly, MDisks are not visible to a host system on the storage
area network, because the storage should be zoned only to the Storwize V7000 system.
Managed disks are grouped by the storage administrator into one or more pools of storage that is
known as storage pools, or managed disk groups. The grouping is typically based on performance
and availability characteristics. While a storage pool can span multiple storage systems, for
availability and ease of management it is recommended that a storage pool be populated with
MDisks from the same storage system.


Storage pools and extents


• Storwize V7000 supports up to 1024 storage pools to be created per
system.
ƒ Up to 128 parent pools
ƒ Up to 1023 child pools
• Managed disks provide usable blocks or extents of physical storage.
• You must specify the extent size when you create a new storage pool.
(Diagram: a storage pool, Pool_IBMSAS, built from several RAID 5 MDisks; each MDisk is divided into extents, Extent 1 through Extent-n.)


Figure 5-13. Storage pools and extents

Once the MDisks are placed into a storage pool, they are automatically divided into a number of
extents. The system administrator must determine how many storage pools are to be defined and
the extent size to be used by each pool. Each clustered system can manage up to 1024 storage
pools (up to 128 parent pools and up to 1023 child pools).


Pool extent size and cluster capacity


• Determine extent size based on storage growth trends/forecasts and
correlate to Storwize V7000 cluster capacity.
• Use the same extent size value for all storage pools in the cluster.
ƒ Storage pools can have different extent sizes; however, this places restrictions
on the use of data migration.
Extent size | Cluster capacity
16 MB | 64 TB
32 MB | 128 TB
64 MB | 256 TB
128 MB | 512 TB
256 MB | 1 PB
512 MB | 2 PB
1024 MB | 4 PB (default)
2048 MB | 8 PB
4096 MB | 16 PB
8192 MB | 32 PB
Note: To change the default extent size, you must enable this feature using the GUI Settings > Preference option.


Figure 5-14. Pool extent size and cluster capacity

The Storwize V7000 management GUI provides a default extent size of 1024 MB. To change the extent
size, you must enable this feature using the management GUI preference option. Extent sizes
range from 16 MB to 8192 MB. The choice of extent size affects the total amount of storage that
can be managed by a cluster. Once set, the extent size stays constant for the life of the pool.
A 16 MB extent size supports a maximum capacity of 64 TB, and the 32 MB extent size supports up
to 128 TB. Increasing capacity by powers of 2, the 8192 MB extent size allows for 32 PB
of Storwize-managed storage.
For most systems, a capacity of 1 to 2 PB is sufficient. A preferred practice is to use 256 MB for
larger clustered systems. To avoid wasting storage capacity, the volume size should be allocated as
a multiple of the extent size. You can specify different extent sizes for different storage pools;
however, you cannot migrate volumes between storage pools with different extent sizes. If
possible, create all your storage pools with the same extent size to facilitate easy migration of
volume data from one storage pool to another.
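The table values in Figure 5-14 follow a simple linear relationship; assuming the system addresses a fixed maximum of 4,194,304 (2^22) extents, which is consistent with the table:

maximum managed capacity = extent size x 4,194,304 extents
16 MB x 4,194,304 = 64 TB
1024 MB (default) x 4,194,304 = 4 PB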


Mapping of extents
• Storage pool extents are used to create volumes (VDisk).
ƒ Whenever you create a new volume you must pick a single storage pool to
provide the physical capacity.
ƒ Extents taken from each MDisk (unallocated) in a storage pool (round robin) to
fulfill the required capacity of a specified volume.
ƒ By default the created volume will stripe all of its data across all the managed
disks in the storage pool.

(Diagram: a storage pool, defined with a name and extent size, contains three MDisks. A striped volume, the default, takes extents 1a, 2a, 3a, 1b, 2b, 3b, 1c, 2c, 3c in turn from the three MDisks.)

Figure 5-15. Mapping of extents

The extents from a given storage pool are used by the Storwize V7000 to create volumes which are
known as logical disks. A volume also represents the mapping of extents that are contained in one
or more MDisks of a storage pool. When an application server needs a disk capacity of a given
size, a volume of that capacity can be created from a storage pool that contains MDisks with free
space (unallocated extents). Storwize V7000 creates the volume by allocating extents from a given
storage pool. The number of extents that are required is based on the extent size attribute of the
storage pool and the capacity that is requested for the volume. By default, extents are taken from all
MDisks contained in the storage pool in round robin fashion until the capacity of the volume is
fulfilled.
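As a quick worked example with hypothetical sizes, the number of extents consumed is the requested capacity divided by the pool extent size, rounded up to a whole extent:

100 GB volume (102,400 MB) / 1024 MB extent size = 100 extents
100 GB volume (102,400 MB) / 256 MB extent size = 400 extents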
A volume is sourced by extents that are contained in only one storage pool. Traditionally, a storage
pool was referred to as an MDisk group and a volume as a virtual disk (VDisk). These terms are
still used, and the CLI command syntax is still based on the traditional terms of MDisk group and
VDisk.


Storage pool types


• Single-tiered storage pool (example: Pool_IBMSAS of RAID 5 arrays)
ƒ Must have the same hardware characteristics
í Same RAID type, RAID array size, disk type, and RPMs
• Multi-tiered storage pool (example: Pool_HPbox of RAID 10 arrays)
ƒ Mix of disk tier attributes
ƒ Mix of hardware characteristics
í Different drive types
• Hybrid storage pool (example: Pool_Hybrid)
ƒ HDD and flash/SSD array MDisks


Figure 5-16. Storage pool types

A storage pool provides the pool of storage from which volumes are created. You must ensure that
the MDisks that make up each tier of the storage pool have the same performance and reliability
characteristics to avoid causing performance problems and other issues.
MDisks that are used in a single-tiered storage pool must have the same hardware characteristics,
such as the same RAID type, RAID array size, disk type, and RPMs. Any disk subsystems that are
providing the MDisks must also have similar characteristics, such as maximum input/output
operations per second (IOPS), response time, cache, and throughput. Ideally, the MDisks that are used
are the same size, so that they provide the same number of extents. If that is not feasible,
check the distribution of the volumes' extents in that storage pool.
A multi-tiered storage pool contains a mix of MDisks with more than one type of disk tier attribute. A
multi-tiered storage pool that contains both generic_hdd and generic_ssd or flash MDisks is also
known as a hybrid storage pool. Therefore a multi-tiered storage pool contains MDisks with various
characteristics as opposed to a single-tiered storage pool. However, it is a preferred practice for
each tier to have MDisks of the same size and MDisks that provide the same number of extents.


Internal storage supported RAID levels


• Storwize V7000 supports RAID levels:
ƒ RAID 0: Striping with no redundancy (1 to 4, 1 to 8 drives)
ƒ RAID 1: Mirroring (2 drives, one on each node)
ƒ RAID 5: Striping with parity; can survive one drive fault (3 to 16 drives)
ƒ RAID 6: Striping with double parity; can survive two drive faults (5 to 16 drives)
ƒ RAID 10: RAID 0 on top of RAID 1 (2 to 8, 2 to 16 drives)
• Spectrum Virtualize V7.6 adds:
ƒ Distributed RAID 5 and Distributed RAID 6
(Diagram: RAID 0, RAID 1, and RAID 10 array MDisks grouped into storage pools.)

Figure 5-17. Internal storage supported RAID levels

RAID levels provide various degrees of redundancy and performance, and have various restrictions
regarding the number of members in the array. IBM Storwize V7000 supports RAID levels 0, 1, 5, 6,
and 10. With the release of the Spectrum Virtualize V7.6 code, IBM Storwize V7000 also supports
Distributed RAID 5 and Distributed RAID 6.


Best practices: RAID 5 compared to RAID 10


• RAID 10 offers higher throughput for random write workloads.
ƒ Requires two I/Os per logical write
ƒ Performance advantage comes at a higher cost
ƒ Recommend the use of Disk Magic to determine the difference in I/O service times
• RAID 5
ƒ Uses four I/Os per logical write
ƒ Best choice in performance and storage levels
(Diagram: a storage pool of RAID 10 array MDisks compared with a storage pool of RAID 5 array MDisks.)


Figure 5-18. Best practices: RAID 5 compared to RAID 10

In general, RAID 10 arrays are capable of higher throughput for random write workloads than RAID
5 because RAID 10 requires only two I/Os per logical write compared to four I/Os per logical write
for RAID 5. For random reads and sequential workloads, often no benefit is gained. With certain
workloads, such as sequential writes, RAID 5 often shows a performance advantage.
Selecting RAID 10 for its performance advantage comes at a high cost in usable capacity and in
most cases RAID 5 is the best overall choice.
If you are considering RAID 10, use Disk Magic to determine the difference in I/O service times
between RAID 5 and RAID 10. If the service times are similar then the lower-cost solution makes
the most sense. If RAID 10 shows a service time advantage over RAID 5 then the importance of
that advantage must be weighed against its additional cost.


Array and RAID levels: Drive counts and redundancy


• A drive undergoes I/O tests before it is added to an array.
• An array is instantly available for I/O operations once created.
ƒ Array initialization is performed as a background task.
• Array redundancy is determined by its RAID level.
• System cache management attempts to combine writes into full stride
writes for better performance.

Level | Drive count (DC) supported | Approximate array capacity | Redundancy
RAID 0 | 1 - 8 | DC * DS | None
RAID 1 | 2 | DS | 1
RAID 5 | 3 - 16 | (DC - 1) * DS | 1
RAID 6 | 5 - 16 | < ((DC - 2) * DS) | 2
RAID 10 | 2 - 16 (even only) | (DC/2) * DS | 1 - DC/2
DC = Drive count, DS = Drive size


Figure 5-19. Array and RAID levels: Drive counts and redundancy

An array can be created using the GUI or CLI. Both use the mkarray command. The Storwize
V7000 management GUI offers presets implemented based on best practices guidelines. After the
array is created, it can be used instantly and moved to a pool where volumes can be created.
Volumes can be written immediately after creation and mapping.
Redundancy depends on the RAID level that is selected at creation time. To reduce the
calculation of parity information and to improve performance, the cache attempts to combine writes
into full strides. The usable capacity for RAID 0 is the drive count times the drive size, which
is 100% usage but at the cost of no redundancy. A redundancy of 1 means that one drive can fail
without failing the array. For RAID 1, the usable capacity is only one drive size (50%), since the other
drive is just a mirror. The supported drive counts for RAID 5, 6, and 10 are higher, but a default size is used in the
GUI. Eight is the best-practice member count for a RAID 5 array, which is why it is used by the GUI.
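As a hedged sketch of that CLI path (the drive IDs and pool name below are hypothetical), an eight-drive RAID 5 array can be created and placed into a pool in one step:

IBM_2076:Team50A:TeamAdmin>svctask mkarray -level raid5 -drive 5:6:7:8:9:10:11:12 Pool_IBMSAS

The new array MDisk joins the pool immediately, while initialization continues as a background task.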
Avoid splitting arrays into multiple logical disks at the storage system level. Where possible, create
a single logical disk from the entire capacity of the array. You can intermix different block-size drives
within an array and a storage pool. Performance degradation can occur, however, if you intermix
512 block-size drives and 4096 block-size drives within an array. Depending on the redundancy
that is required, create RAID 5 arrays by using 5 - 8 data bits plus parity components (that is, 5 + P,
6 + P, 7 + P or 8 + P).
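Applying the capacity formula from Figure 5-19 to a hypothetical 7 + P (eight-drive) RAID 5 array of 1.8 TB drives:

usable capacity = (DC - 1) * DS = (8 - 1) * 1.8 TB = 12.6 TB
redundancy = 1 (one drive can fail without failing the array)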

Do not mix managed disks (MDisks) that greatly vary in performance in the same storage pool tier.
The overall storage pool performance in a tier is limited by the slowest MDisk. Because some
storage systems can sustain much higher I/O bandwidths than others, do not mix MDisks that are
provided by low-end storage systems with those that are provided by high-end storage systems in
the same tier.
Keep in mind that RAID data redundancy is not the same as data backup. You still need to
ensure data safety by backing up your data daily to offline or off-site storage.


Storwize V7000 balanced system (chain balanced)

• Balanced RAIDs are special groups where the members of an array are balanced across the two chains.
(Diagram: a Storwize V7000 controller with two SAS chains; enclosure 3 on chain 1 and enclosure 4 on chain 2 each contribute half of the drives of the RAID 5 arrays MDisk9 and MDisk10 in pool ID 0.)
Chain balance property for the Balanced RAID 10 preset:
• Exact: Take 50% of drives from chain 1 and 50% from chain 2
• Provides good performance and protects against at least one drive failure
í All data is mirrored on two array members.


Figure 5-20. Storwize V7000 balanced system (chain balanced)

A special group is the balanced RAID group, where the members of an array are balanced across the
two chains. For RAID 10, half the disk drives are selected from an enclosure on the first chain and the
other half from an enclosure on the second chain (for better availability). Although drive selection is not
a concern for RAID 5 or RAID 6, it is still best to ensure that the selected drives are in
enclosures that are part of the same SAS chain. The result is a balanced array that has 50% of its
drives on chain 1 and the other 50% of its drives on chain 2. The wizard performs the separation across
both chains so that they are balanced well. The arrays are therefore a set of up to
eight mirrored pairs with the data striped across the mirrors. They can tolerate the failure of one drive in
each mirror, and they allow reading from both drives in a mirror.


Enclosure 4 (chain 2) drives and array members


IBM_2076:Team50A:TeamAdmin>lsdrive -filtervalue enclosure_id=4 -delim ,
id,status,error_sequence_number,use,tech_type,capacity,mdisk_id,
mdisk_name,member_id,enclosure_id,slot_id,node_id,node_name
51,online,,candidate,sas_hdd,278.9GB,,,,4,19,,
52,online,,candidate,sas_hdd,278.9GB,,,,4,18,,
53,online,,member,sas_hdd,278.9GB,10,mdisk10,6,4,17,,
54,online,,member,sas_hdd,278.9GB,10,mdisk10,4,4,16,,
55,online,,member,sas_hdd,278.9GB,10,mdisk10,2,4,15,,
56,online,,member,sas_hdd,278.9GB,10,mdisk10,0,4,14,,
57,online,,member,sas_hdd,278.9GB,9,mdisk9,6,4,13,,
58,online,,member,sas_hdd,278.9GB,9,mdisk9,4,4,12,,
59,online,,member,sas_hdd,278.9GB,9,mdisk9,2,4,11,,
60,online,,member,sas_hdd,278.9GB,9,mdisk9,0,4,10,,
61,online,,member,sas_hdd,278.9GB,8,mdisk6,6,4,9,,
62,online,,member,sas_hdd,278.9GB,8,mdisk6,0,4,6,,
63,online,,member,sas_hdd,278.9GB,8,mdisk6,4,4,8,,
64,online,,member,sas_hdd,278.9GB,8,mdisk6,2,4,7,,
65,online,,spare,sas_hdd,278.9GB,,,,4,1,,
66,online,,member,sas_hdd,278.9GB,7,mdisk8,6,4,5,,
67,online,,member,sas_hdd,278.9GB,7,mdisk8,4,4,4,,
68,online,,member,sas_hdd,278.9GB,7,mdisk8,2,4,3,,
69,online,,member,sas_hdd,278.9GB,7,mdisk8,0,4,2,,


Figure 5-21. Enclosure 4 (chain 2) drives and array members

This output identifies four of the eight mirrored pairs that are part of chain 2 (enclosure 4), which
supplies half of the drives of each array. Chain 2 also contains a spare drive, which provides
protection for the array members balanced across the two enclosure chains.


Enclosure 3 (chain 1) drives and array members


IBM_2076:Team50A:TeamAdmin>lsdrive -filtervalue enclosure_id=3 -delim ,
id,status,error_sequence_number,use,tech_type,capacity,mdisk_id
,mdisk_name,member_id,enclosure_id,slot_id,node_id,node_name
25,online,,candidate,sas_hdd,278.9GB,,,,3,19,,
26,online,,candidate,sas_hdd,278.9GB,,,,3,18,,
27,online,,member,sas_hdd,278.9GB,10,mdisk10,7,3,17,,
28,online,,candidate,sas_hdd,136.2GB,,,,3,21,,
29,online,,member,sas_hdd,278.9GB,10,mdisk10,5,3,16,,
30,online,,member,sas_hdd,278.9GB,10,mdisk10,3,3,15,,
31,online,,candidate,sas_hdd,136.2GB,,,,3,20,,
32,online,,member,sas_hdd,278.9GB,10,mdisk10,1,3,14,,
33,online,,member,sas_hdd,278.9GB,9,mdisk9,5,3,12,,
34,online,,member,sas_hdd,278.9GB,9,mdisk9,7,3,13,,
35,online,,member,sas_hdd,278.9GB,9,mdisk9,3,3,11,,
37,online,,member,sas_hdd,278.9GB,9,mdisk9,1,3,10,,
39,online,,member,sas_hdd,278.9GB,8,mdisk6,7,3,9,,
40,online,,member,sas_hdd,278.9GB,8,mdisk6,5,3,8,,
41,online,,member,sas_hdd,278.9GB,8,mdisk6,3,3,7,,
42,online,,member,sas_hdd,278.9GB,8,mdisk6,1,3,6,,
43,online,,member,sas_hdd,278.9GB,7,mdisk8,7,3,5,,
44,online,,member,sas_hdd,278.9GB,7,mdisk8,5,3,4,,
45,online,,member,sas_hdd,278.9GB,7,mdisk8,3,3,3,,
46,online,,member,sas_hdd,278.9GB,7,mdisk8,1,3,2,,

Figure 5-22. Enclosure 3 (chain 1) drives and array members

This output identifies the remaining mirrored pairs that are part of chain 1 (enclosure 3).


Array member goals and spare attributes


• Array member goals are derived from the drives specified.
ƒ Capability goals: Technology type, RPM, and capacity
ƒ Location goals: Chain of enclosure and enclosure slot of drive

• Array member goals are used for hot spare selection and can be displayed with the
lsarraymembergoals command.

• Both the array and its member drives have a property called balanced (suitability in
the GUI) which indicates whether member goals are met.
ƒ Exact: All member goals have been met.
ƒ Yes: All member goals except location have been met.
ƒ No: One or more of the capability goals has not been met.

• Each array has a spare goal property that is specified at creation.


ƒ Indicates how many global spares must be available to protect the array.
í Alert is generated in system event log if available spares drop below goal.

• For an array and its members, the system dynamically maintains a spare protection
count of non-degrading spares.


Figure 5-23. Array member goals and spare attributes

When creating an array by using the GUI, several parameters are used to select the drives, based on
different goals. One goal is to have only drives of the same technology type in one array (for example,
flash/SSDs). The drive RPM is also a goal; all members should have the same speed. The same is
true for the capacity. Another goal is the location goal, which places the members on a specific
chain, enclosure, or slot ID.
Storwize V7000 supports hot-spare drives. To decide on a spare drive, the member goals are used.
They can be listed with the lsarraymembergoals command.
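For example, the member goals of one of the array MDisks shown in this unit's lab output could be listed as follows; the output reports the capacity, technology, RPM, and location goals for each member:

IBM_2076:Team50A:TeamAdmin>lsarraymembergoals -delim , mdisk8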


Spare drive use attribute assignment by GUI


• Internal storage spares are created automatically.
• Algorithm to change the drive use attribute from candidate to spare.
ƒ For every 23 array members of the same drive class on a single chain
(excluding RAID 0 members), a single spare is created.
í 24 identical drives on a single chain: one spare, 23 members
(Diagram: a storage pool of RAID 5 array MDisks plus one spare. When a failed drive is replaced, the system automatically reconfigures the replacement drive as a spare, and the replaced drive is removed from the configuration.)

IBM_2076:Team50A:TeamAdmin>lsdrive -filtervalue use=spare -delim ,


id,status,error_sequence_number,use,tech_type,capacity,mdisk_id,mdisk_n
ame,member_id,enclosure_id,slot_id,node_id,node_name
4,online,,spare,sdd,278.9GB,,,,1
IBM_2076:Team50A:TeamAdmin>

Figure 5-24. Spare drive use attribute assignment by GUI

When a RAID member drive fails, the system automatically replaces the failed member with a
hot-spare drive and resynchronizes the array to restore its redundancy. The management GUI
automatically creates drives that are marked as spare when the internal storage is configured by
the wizards. The rule is to create one spare for every 23 array members. For one enclosure with
24 identical disks, this results in 23 drives becoming array members while one is assigned the
use of Spare.
The selection of a spare drive to replace a failed disk is done by the system. A drive with a lit
fault LED indicates that the drive has been marked as failed and is no longer in use by the system.
When the system detects that such a failed drive has been replaced, it reconfigures the replacement drive
to be a spare, and the drive that was replaced is automatically removed from the configuration. The
new spare drive is then used to fulfill the array membership goals of the system. The process can
take a few minutes.
Slide contains animations.


Spare selection for array member replacement


• Spare drives are global spares.
ƒ Any spare with at least the same capacity as the array member being
replaced is eligible to be considered.

• When the system selects a spare for member replacement, the spare
that is the best possible match to the array member goals is chosen based on:
ƒ An exact match of member goal capacity, performance, and location
ƒ A performance match: The spare drive has a capacity that is the same or
larger and has the same or better performance

• If a better spare is introduced to the system, the better spare is


exchanged automatically to rebalance the array.


Figure 5-25. Spare selection for array member replacement

The goal is always to replace a failed member with a drive of the same type and properties as the
failed disk. If one is not available, the system searches for the best match. Spare drives are global
and can be used by any array; there are no limits on using the spare drives.


Traditional RAID 6
• Double parity improves data
availability by protecting against single
or double drive failure in an array.

• Disadvantage:
ƒ Spare drives are idle and cannot
contribute to performance.
í Particularly an issue with flash drives
ƒ Rebuild is limited by throughput of
single drive.
í Longer rebuild time with larger drives
í Potentially exposes data to risk of dual
failure


Figure 5-26. Traditional RAID 6

Traditional RAID 6 offers double parity, which improves data availability by protecting against single
or double drive failure in an array. With traditional RAID (TRAID), a rebuild reads from one or more
surviving drives but writes to a single spare drive, so the rebuild time is limited by that one spare
drive's throughput. In addition, the spares sit idle when not in use, wasting resources.


Traditional RAID 6 reads/writes


• Each stripe is made up of data strips and two parity strips (P and Q).
ƒ Supports multiple drive failures
• A stripe is either 128K or 256K (256K being the default).
• Extent size is irrelevant.
(Diagram: a stripe across the active drives; a rebuild reads from all drives but writes to the one spare drive.)


Figure 5-27. Traditional RAID 6 reads/writes

In RAID 6, each stripe is made up of data strips (represented by D1, D2, and D3) and two parity
strips (P and Q). Two parity strips provide the ability to cope with two simultaneous drive failures.
RAID 6 does not have a performance penalty for read operations, but it does have a performance
penalty on write operations because of the overhead associated with parity calculations.
Performance varies greatly depending on how RAID 6 is implemented.


Distributed RAID (DRAID)


• Improved RAID implementation
ƒ Faster drive rebuild improves availability
and enables use of lower cost larger
drives with confidence
ƒ All drives are active, which improves
performance especially with flash drives
• Spare capacity, not spare drives
• Rotating spare capacity position
distributes rebuild load across all drives
• More drives participate in rebuild
ƒ Bottleneck of one drive is removed
• More drives means faster rebuild
ƒ 5-10x faster than traditional RAID
ƒ Especially important when using large
drives
• No “idle” spare drives
ƒ All drives contribute to performance
ƒ Especially important when using flash
drives


Figure 5-28. Distributed RAID (DRAID)

Distributed RAID arrays are designed to improve the RAID implementation for better performance and
availability by offering faster drive rebuilds that use spare capacity reserved on each drive in the
array. There are therefore no "idle" drives; all drives contribute to performance.


Distributed RAID 6
• Distribute 3+P+Q over 10 drives with two distributed spares
ƒ Spare capacity is allocated depending on the pack number.
• The number of rows in a pack depends on the number of strips in a stripe,
which means the pack size is constant for an array.
• Extent size is irrelevant.
(Diagram: a grid of drives by rows; in this instance, five rows make up a pack.)


Figure 5-29. Distributed RAID 6

Distributed RAID 6 arrays stripe data over the member drives with two parity strips on every stripe.
These distributed arrays can support 6 - 128 drives. A RAID 6 distributed array can tolerate any two
concurrent member drive failures.
When a distributed array contains a failed drive, the data is recovered by reading from multiple
drives. The recovered data is then written to the rebuild areas, which are distributed across all of
the drives in the array.
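As a hedged CLI sketch of the layout described above (the drive class ID and pool name are assumptions for illustration), a 3+P+Q distributed array over 10 drives with two rebuild areas could be created with mkdistributedarray:

IBM_2076:Team50A:TeamAdmin>svctask mkdistributedarray -level raid6 -driveclass 0 -drivecount 10 -stripewidth 5 -rebuildareas 2 Pool_DRAID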


DRAID performance goals


• A 4TB drive can be rebuilt within 90 minutes for an array width of 128
drives with no host I/O.

• With host I/O, if drives are being utilized up to 50%, the rebuild time will
be 50% slower.
ƒ Approximately three hours, but still much faster than the TRAID time of 24
hours for a 4 TB drive.

• Main goal of DRAID is to significantly lower the probability of a second


drive failing during the rebuild process compared to traditional RAID.


Figure 5-30. DRAID performance goals

Because DRAID reads from every drive in the set, and rebuild data can be written to every drive,
typical rebuild times are under 2 hours (2 - 4 hours on average), rather than 36+ hours in worst-case
examples with a traditional single spare drive. You also gain performance in daily operations,
potentially adding 33% more performance to the array.


Distributed RAID considerations


• V7.6 supports Distributed RAID 5 and 6.
• Up to 10 arrays/MDisks in an I/O Group and a maximum of 32 arrays in
a system.
• Array/MDisk can only contain drives from the same or a superior drive
class.
ƒ For example, 400GB SSDs available to build array, so only superior drives
are SSDs > 400GB.
ƒ For example, 450GB 10K SAS available to build array, so only superior
drives are 10/15K/SSDs >450GB.
ƒ Recommendation is to use same drive class for array/MDisk.
• Traditional RAID is still supported.
ƒ New arrays/MDisks will inherit properties of existing pool you are trying to
add it to.
ƒ New array width default for RAID5 is 8+P.
í If existing MDisks in the pool are RAID5 7+P and/or 6+P, the GUI will propose 6+P to
match the lowest width in the pool.
• Conversion from traditional to distributed RAID is not supported.
• Ability to expand an array/MDisk is a 2016 roadmap item.


Figure 5-31. Distributed RAID considerations

DRAID arrays use a completely different RAID structure, but like traditional RAID they are created
by using candidate drives as the building blocks. You cannot convert from traditional RAID to
Distributed RAID, and expansion of a distributed array is not currently supported.
Best practice is to maintain the usual rules with storage pools and mix only same-capability
arrays in a single pool. You can use the CLI lsarrayrecommendation command to find suitable
candidates.
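An assumed invocation sketch (the drive class ID, drive count, and pool name are illustrative): the command lists recommended RAID geometries, in preference order, for a given set of drives:

IBM_2076:Team50A:TeamAdmin>lsarrayrecommendation -driveclass 0 -drivecount 10 Pool_DRAID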
When creating a new pool for your DRAID, add only the same type of DRAID to a single pool,
for example 8+P+Q with 2 rebuild areas. You can have up to 4 rebuild areas per DRAID, and up to
128 drives are supported in a single DRAID. Each I/O group can support up to 12 DRAIDs.
DRAID supports large sets of drives, typically around 60 drives per array, so if you are only
considering a small number of drives, DRAID may not be appropriate for you. Use
RAID 6 with NL-SAS drives for more redundancy, as well as for larger 10K SAS drives.
Keep in mind that I/O performance for a given number of drives improves with the number of arrays,
distributed or non-distributed, especially with SSD drives.


Drive-Auto Manage/Replacement
• DMP (directed maintenance procedure) step-by-step guidance no
longer required.

• Drive-Auto Manage/Replacement
ƒ Simply swap the old drive for new
í New drive in that slot takes over from the replaced drive

(Diagram: the old drive in slot 5 is swapped for a new drive in the same slot; the new drive in that slot takes over the replaced drive's place in the RAID 5 array MDisk.)


Figure 5-32. Drive-Auto Manage/Replacement

Replacing an old drive is much easier, since you no longer have to follow the guidance of a DMP
(directed maintenance procedure) to exchange an old drive for a new one. With Drive-Auto Manage,
you can simply swap the drives, and the new drive in that slot takes over from the replaced drive.
Wait at least 20 seconds before you remove the drive assembly from the enclosure to enable the
drive to spin down and avoid possible damage to the drive. Do not leave a drive slot empty for
extended periods. Do not remove a drive assembly or a blank filler without having a replacement
drive or a blank filler with which to replace it.


Storwize V7000 Gen2 supports T10DIF


• The T10DIF feature allows the RAID software to store a checksum
alongside every logical block address (LBA) (512 bytes of data).
• Every time the data is read, the T10DIF checksum is validated in
hardware to ensure that the data which has been read back has not
been damaged since it was written.
• The T10DIF data also includes the LBA address of the data, allowing
the system to detect data which is written to the wrong location on the
drive.
• If the T10DIF checksum indicates that the data is invalid, the RAID
software can rebuild the data which has been damaged using the other
drives from the array.
• T10DIF is automatically applied to any drive which is added to an array
on the V7000 Gen2 once the V7000 Gen2 is running V7.4.0 or later.
• A drive with T10DIF enabled will show protection_enabled = yes
in the drive properties.


Figure 5-33. Storwize V7000 Gen2 supports T10DIF

The release of the V7.4 code introduced this industry-standard extension at the RAID and SAS
level to provide an extra level of data integrity. T10DIF (data integrity field) was released in the V7.4
code and is available only on the Storwize V7000 Gen2 model.
T10DIF is a type 2 protection information (PI) implementation that sits between the internal RAID layer
and the SAS drives and appends 8 bytes of integrity metadata while the data is being transferred between
the controller and the PI-formatted disk drives. The 8-byte integrity field contains cyclic redundancy
check (CRC) data and more, providing validation data that can be used to ensure that data
written is valid and is not altered in the SAS network.
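To verify this on a given drive, the detailed drive view can be checked (the drive ID below is illustrative); per the slide above, a protected drive reports protection_enabled = yes among its properties:

IBM_2076:Team50A:TeamAdmin>lsdrive 4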


Storage provisioning topics


• Storage infrastructure

• Storage logical building block


• Internal storage

• External storage

• Encryption


Figure 5-34. Storage provisioning topics

This topic examines the internal structure by defining the components of the array.


Storwize V7000 overview block-level structure

(Diagram: the GUI Overview flow: internal storage configured for provisioning; external storage servers to be virtualized; MDisks, the logical disks that provide assignable storage capacity; storage pools, collections of MDisks that provide capacity for volumes; and volumes assignable to hosts for FC or iSCSI access.)


Figure 5-35. Storwize V7000 overview block-level structure

The Storwize V7000 GUI Overview diagram illustrates the path in which storage resources are
configured. The management GUI automatically detects the number of internal storage drives and
the external attached storage systems that are configured within the SAN fabric. These block-level
storage components are used for virtualization. The Overview panel provides a quick view of the
system configuration. Hover the mouse pointer over the icons to view each description. You can
click any of the resource options to be redirected to the selected panel.


Storwize V7000 internal storage


• Storwize V7000 Gen2 internal drive objects cannot be directly added to
storage pools.
• Drives must first be included in a RAID to provide protection against the
failure of individual drives.


Figure 5-36. Storwize V7000 internal storage

The Storwize V7000 internal drives are displayed within the management GUI Pools > Internal
Storage panel.
The installed Drive Class Filter column represents the type and size of the internal drives.
The Storwize V7000 GUI automatically detects each drive and its usage role, including the capacity,
speed, and drive technology. Various drive types and capacities are supported.
The 2076-12F/24F enclosures that are attached to the Storwize V7000 nodes are presented as
internal storage. All drives that are detected by the GUI are presented in the usage role of Unused,
as they have yet to be configured.


Internal drive attributes


A drive falls under five use categories:
• Unused (newly added)
• Candidate (ready for use)
• Member (part of an array)
• Spare (hot spare drive)
• Failed (service needed)


Figure 5-37. Internal drive attributes

The Storwize V7000 assigns several internal drive usage roles. The usage role identifies the status
of an installed drive:
• Unused: The drive is not a member of an MDisk. The GUI offers to change the drive use
attribute if it is selected as part of an array.
• Candidate: The drive is available for use in an array.
• Member: The drive is a member of an MDisk.
• Spare: The drive can be used as a hot spare if required.
• Failed: The drive was either intentionally taken offline or failed due to an error.


Internal drive properties


• Drive properties displays drive characteristics and vendor information.


Figure 5-38. Internal drive properties

You can right-click any physical drive to view specific details of the drive status, UID, and the
drive technology characteristics, including the vendor ID, part number, speed, and firmware level of
the drive.


Change internal drive attributes


• Right-click a drive to change the unused status of a new drive to
candidate.
ƒ Select Mark as… > Candidate.


Figure 5-39. Change internal drive attributes

All internal drives must have a use attribute of Candidate before they can be used as members of an
array. You can change a drive attribute by right-clicking one or more of the unused drives and then
selecting Mark as. The GUI issues a chdrive command for each selected drive to change the drive
usage from unused to candidate.
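The equivalent CLI call, issued once per drive (the drive ID here is illustrative), is:

IBM_2076:Team50A:TeamAdmin>chdrive -use candidate 25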


Create a storage pool


• Defining a storage pool is the first task to virtualizing storage.
ƒ Click the Create button from the Pools panel toolbar.
í GUI issues the svctask mkmdiskgrp command to create a storage pool.

ƒ Administrators can define all storage pools required before adding storage to
create array MDisks.


Figure 5-40. Create a storage pool

Configuring a storage array and assigning it to a pool is no longer done through the Internal
Storage > Configure Storage option. You must first define the storage pool to which the array is added
for virtualization.
All storage pools are created with a default extent size of 1024 MB. An easytier setting
defined as auto indicates that the Easy Tier function is automatically enabled if the
pool contains more than one tier of storage (flash and HDD technologies). The -guiid value
corresponds to the particular GUI icon selected for the pool. The pool is also defined with a -warning
value of 80%, which indicates that a warning message is generated when 80% of the pool
capacity has been allocated.
The system provides an Add Storage notification as a reminder until storage has been added to the
pool. This feature can be disabled by clicking the Got it button.
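A minimal sketch of the command behind this wizard (the pool name is hypothetical; the other values match the defaults described above):

IBM_2076:Team50A:TeamAdmin>svctask mkmdiskgrp -name Pool0 -ext 1024 -easytier auto -warning 80%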


Modifying the default extent size


• To change the default extent size, use Settings > GUI Preferences. Select
the General option and click the box next to enable Advanced pool
settings.
• It is recommended to create a storage pool with the same extent size as
the extent size of existing storage pools – especially for migration
purposes.
Click the Save
button to save
changes


Figure 5-41. Modifying the default extent size

The extent size is not really a performance factor; rather, it is a management factor. If you have
preexisting storage pools, it is recommended to create a storage pool with the same extent size as
the extent size of the existing storage pools. If you do not have any other storage pools, you can leave
the default extent size of 1 GB.
You can allow user changes to the default extent size by using Settings > GUI Preferences.
Select the General option and click the box next to enable Advanced pool settings. Click the
Save button to save the changes.
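Before creating a new pool, the extent size of the existing pools can be verified from the CLI; a trimmed, hypothetical example (output abbreviated):

IBM_Storwize:V009B:superuser>lsmdiskgrp -delim ,
id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity, . . .
0,Pool0,online,2,4,200.00GB,1024,150.00GB, . . .

The extent_size column is reported in MB, so 1024 corresponds to the 1 GB default.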

Adding capacity to a storage pool


• The add storage option defines the RAID array and adds member drives,
which become virtual storage capacity.
• Based on the location of the storage, you have three options to choose
from:
ƒ With the Quick Internal and External options, the management GUI displays the
recommended configuration based on drive class, RAID level, and the width of
the array.
ƒ The Advanced Internal Custom option allows you to assign storage that has been
added to a system to customize your storage configuration.


Figure 5-42. Adding capacity to a storage pool

After creating storage pools, you must assign storage to specific pools. An array can be created
using the GUI or CLI. Both use the mkarray command. Spectrum Virtualize software has redesigned
how storage is assigned, but still offers presets implemented based on best practice guidelines. The management
GUI provides three options to assign storage based on where the storage is located and its use.
The Quick Internal and External options assign storage based on drive class and RAID level. For
both of these options, the management GUI displays the recommended configuration based on
drive class, RAID level, and the width of the array. Use the Internal Custom option to assign storage
that has been added to a system to customize your storage configuration.
By default, the system recommends and creates distributed arrays for most new Quick option
configurations. However, there are some exceptions. If not enough drives are available on the
system (for example, in configurations with fewer than two flash drives), you cannot
configure a distributed array. In addition, you can continue to assign new storage to existing pools
in arrays that use previously configured RAID settings.
The Advanced Internal Custom option allows you to assign storage that has been added to a
system to customize your storage configuration.
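As a hypothetical CLI sketch of assigning internal storage, a traditional (non-distributed) RAID 5 array could be created from four candidate drives and placed directly into an existing pool (the drive IDs and pool name are assumptions):

IBM_Storwize:V009B:superuser>mkarray -level raid5 -drive 0:1:2:3 Pool0
MDisk, id [0], successfully created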

Mdisks by pools view


• An MDisk can only be a member of
one storage pool, except for image
mode volumes.
• Drives that are part of an array
become members of the array.
• Depending on the array defined,
spares are not always displayed in
the use column.
ƒ Columns can be modified for specific
viewing.


Figure 5-43. Mdisks by pools view

At any point in time, an MDisk can only be a member of one storage pool, except for image mode
volumes. Once a drive becomes part of an array, its Use attribute changes from Candidate to
Member, indicating it is now part of an array. If you recall, a distributed array provides reserved
capacity on each disk within the array to regenerate data if there is a drive failure. Therefore, a spare is
not indicated in the Use column.
After the array is created, it can be used instantly and moved to a pool where volumes can be
created. Volumes can be written to immediately after creation and mapping.

Advanced custom array creation


• The Custom option allows you
to create a RAID array based on
your environment's needs by
RAID type, spare requirement,
and array width.
• Custom provides the ability to
disable formatting.


Figure 5-44. Advanced custom array creation

The system supports non-distributed and distributed array configurations. In non-distributed arrays,
entire drives are defined as “hot-spares”. Hot-spare drives are idle and do not process I/O for the
system until a drive failure occurs. When a member drive fails, the system automatically replaces
the failed drive with a hot-spare drive. The system then resynchronizes the array to restore its
redundancy. However, all member drives within a distributed array have a rebuild area that is
reserved for drive failures. All the drives in an array can process I/O data and provide faster rebuild
times when a drive fails. The RAID level provides different degrees of redundancy and
performance; it also determines the number of members in the array.
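A minimal, hypothetical sketch of a custom non-distributed array with an explicit spare goal (the drive IDs, RAID level, and pool name are assumptions):

IBM_Storwize:V009B:superuser>mkarray -level raid6 -drive 0:1:2:3:4:5:6:7 -sparegoal 1 Pool0
MDisk, id [1], successfully created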

Parent and child pools


• Parent pools are created from MDisks.
• Child pools are fully allocated and created from existing capacity that is
allocated from a parent pool.
[Diagram: a 1 TB parent pool whose extents come from MDisk 1, MDisk 2, and MDisk 3; two child pools are carved from its allocated capacity: Child Pool 1 with 50 GB capacity and Child Pool 2 with 75 GB capacity.]


Figure 5-45. Parent and child pools

There are often cases where you want to subdivide a storage pool (or managed disk group) but
maintain a larger number of MDisks in that pool. A parent pool is a standard pool that
receives its capacity from MDisks, which are divided into extents of a defined size.
Child pools were introduced in the V7.4.0 code release. Instead of being created directly from MDisks,
child pools are created from existing capacity that is allocated to a parent pool.
Child pools are created with fully allocated physical capacity. The capacity of the child pool must be
smaller than the free capacity that is available to the parent pool. The allocated capacity of the child
pool is no longer reported as free space of its parent pool. Child pools are logically similar to
storage pools, but allow you to specify one or more subdivided child pools.

Creating a child pool from an existing mdiskgrp


• To view a child pool using the GUI, right-click the parent pool and select
Child Pools.

• To create a child pool using the CLI, issue the following command:
IBM Storwize:V009B:V009B1-admin>mkmdiskgrp -name ChildPool_1 -unit gb
-size 40 -parentmdiskgrp 0
MDisk Group, id [2], successfully created
IBM Storwize:V009B:V009B1-admin>


Figure 5-46. Creating a child pool from an existing mdiskgrp

The same mkmdiskgrp command that is used to create physical storage pools is also used to
create child pools. To view the child pool that is created, right-click the parent pool and select
Child Pools. This view provides only basic information about the child pools.
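From the CLI, the detailed lsmdiskgrp view also reports the parent relationship; a trimmed, hypothetical check of the child pool created above (output abbreviated, and the parent pool name Pool0 is an assumption):

IBM_Storwize:V009B:superuser>lsmdiskgrp ChildPool_1
id 2
name ChildPool_1
status online
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0
. . .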

Child pool attributes


• A child pool provides most of the same functions as any storage pool or
mdiskgrp.
ƒ Can be used to allocate capacity for special purpose volumes, such as
VMware
ƒ Maximum of 128 storage pools; each storage pool can have a maximum of
127 child pools

The table describes the mkmdiskgrp parameters that are used to create a given pool type:
Parameter Child pool usage Storage pool usage
-name Optional Optional
-mdisk Cannot be used with child pools Optional
-tier Cannot be used with child pools Optional
-easytier Cannot be used with child pools Optional
-size Mandatory Cannot be used with parent pools
-parentmdiskgrp Mandatory Cannot be used with parent pools
-ext Cannot be used with child pools Mandatory
-unit Optional Optional
-warning Optional Optional


Figure 5-47. Child pool attributes

Child pools have similar properties to parent pools and provide most of the functions
that MDisk groups have, such as creating volumes that specifically use the capacity that is allocated to
the child pool.
The maximum number of storage pools remains at 128, and each storage pool can have up to 127 child
pools. Child pools can be created using both the GUI and the CLI; the GUI displays them as child
pools with all of their differences from parent pools.

Benefit of a child pool


• Volumes can be allocated from the child pool much like from its parent pool.
ƒ Quotas and warnings can be set independently per child pool.
ƒ Child pool settings can be changed for volume management or to set a
warning threshold in the same way you change them for normal storage
pools.


Figure 5-48. Benefit of a child pool

Once the child pool has been defined, you can create a child pool volume by using the same
procedural steps listed within the Create Volumes wizard, as well as map the volume directly to a host.
Administrators can use child pools to control capacity allocation for volumes that are used for
specific purposes, such as assigning application or server administrators their own pool of storage to
manage, without allocating entire managed disks.
As with parent pools, you can specify a warning threshold that alerts you when the capacity of the
child pool is reaching its upper limit. Use this threshold to ensure that access is not lost when the
capacity of the child pool is close to its allocated capacity.

Child pool volumes and host view


• Child pool volumes are assigned unique identifier (UID) values in the same
format as the other volumes within the same Storwize V7000 cluster.
ƒ Same prefix; only the last couple of bytes vary from volume to
volume


Figure 5-49. Child pool volumes and host view

You can view child pool volumes collectively from the Volumes > Volumes view. Child pool
volumes follow the same Unique Identifier (UID) numbering scheme as the other volumes of the cluster.

Child pool limitations and restrictions


• Maximum capacity cannot exceed the parent pool's size.
• Capacity can be allocated at creation (thick) or flexibly (thin).
• The parent storage pool must always be specified; a child pool does not
own any MDisks.
• Child pools can also be created from the GUI (new in 7.6).
• The maximum number of child pools in one parent pool is 127.
• Migrating an image-mode volume to a child pool is restricted.
• Volume extents cannot be migrated out of the child pool.
• Shrinking capacity below its real capacity is forbidden.


Figure 5-50. Child pool limitations and restrictions

Listed are some child pool limitations and restrictions.


You need to ensure that any child pools that are associated with a parent pool have enough
capacity for the volumes that are in the child pool before removing MDisks from a parent pool. The
system automatically migrates all extents that are used by volumes to other MDisks in the parent
pool to ensure data is not lost.
The system also supports migrating a copy of a volume between child pools within the same parent
pool, or migrating a copy of a volume between a child pool and its parent pool. Migrations between
a source and target child pool with different parent pools are not supported. However, you can
migrate a copy of the volume from the source child pool to its parent pool. Then the volume copy
can be migrated from the parent pool to the parent pool of the target child pool. Finally, the volume
copy can be migrated from the target parent pool to the target child pool.
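A hedged CLI sketch of that three-step path using migratevdisk (the volume and pool names are hypothetical, and each migration must complete before the next starts):

IBM_Storwize:V009B:superuser>migratevdisk -vdisk vol1 -mdiskgrp ParentA
IBM_Storwize:V009B:superuser>migratevdisk -vdisk vol1 -mdiskgrp ParentB
IBM_Storwize:V009B:superuser>migratevdisk -vdisk vol1 -mdiskgrp ChildB_1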

Storage provisioning topics


• Storage infrastructure

• Storage logical building block


• Internal storage

• External storage
ƒ Examine external storage
ƒ Quorum disks allocation
ƒ MDisk multipathing methods

• Encryption


Figure 5-51. Storage provisioning topics

This topic examines the back-end storage and defines how external storage resources are
presented to the Storwize V7000 for management.

Storwize V7000 to back-end storage system


• Storage controllers that are used by the Storwize V7000 Gen2 clustered
system must be connected through SAN switches.
• In a storage controller environment, Storwize V7000 is defined as a SCSI
host.
ƒ An eight-node Storwize V7000 has a total of 32 WWPNs (max 1024 per four-
clustered system).

Best practice: define ALL V7000 ports to the storage system.
[Diagram: an eight-node Storwize V7000 cluster (each V7000 node has four WWPNs) attached through redundant SAN fabrics to a DS3500 storage system presenting LUN0, LUN1, LUN2, and LUN4.]


Figure 5-52. Storwize V7000 to back-end storage system

In the SAN, storage controllers that are used by the Storwize V7000 Gen2 clustered system must
be connected through SAN switches. Direct connection between the Storwize V7000 Gen2 and the
storage controller is not supported. All Storwize V7000 Gen2 nodes in a Storwize V7000 clustered
system must be able to see the same set of ports from each storage subsystem controller.
Inappropriate zoning and LUN masking can cause the paths to become degraded. You need to
follow the guidelines that apply to the supported disk subsystem as to which HBA WWPNs a
storage partition can be mapped.
From the perspective of the disk storage system, the Storwize V7000 is defined as a SCSI host. Since
each node canister has four Fibre Channel WWPNs, an eight-node Storwize V7000 has a total of 32
WWPNs. Disk storage systems tend to
have different mechanisms or conventions to define hosts. For example, a DS3500 or a DS5000
uses the construct of a host group to define the Storwize V7000 cluster with each node in the
cluster that is identified as a host with four host ports within the host group. For best practice, define
all of the cluster’s WWPNs to the storage system.

Backend storage partitioning


• Creates multiple virtual systems from a single disk system
ƒ Max 512 partitions
• Each of its disk partitions is a logical device that is identified by a
unique 128-bit (16-byte) GUID
• Storage-based implementation ensures data integrity
• Logical partitioning provides flexibility
ƒ No limit is imposed beyond the LUNs (MDisks) per system limit

[Diagram: Host A, Host B, and Host_V7K C are each mapped to their own storage partition (A, B, and C) within a single disk system; unmapped volumes remain outside the partitions.]


Figure 5-53. Backend-storage partitioning

Most backend storage systems support heterogeneous hosts, which enables consolidation in
multi-platform environments.
In a SAN fabric, LUN storage is essential to the configuration of the environment and its
performance. A storage device can be directly attached to the host group or connected via storage
networking protocols such as Fibre Channel and iSCSI. This allows LUNs to be mapped to a
defined host group.

Storwize V7000 WWNNs


• Each node canister has its own worldwide node name (WWNN), which is
based on 50:05:07:68:02:0z:zz:zz where z:zz:zz is unique for each
node canister.
ƒ Automatic failover of MDisks if issues with individual controller port.
• The WWPN of each Storwize V7000 Fibre Channel port is based on the
unique z:zz:zz for each node canister.
ƒ Limited to a maximum of 1024 WWPNs

[Diagram: two node canisters, each with four Fibre Channel ports numbered 1 through 4; the port number supplies the Y digit in the WWPN 50:05:07:68:02:Yz:zz:zz, so port 1 presents 50:05:07:68:02:1z:zz:zz.]

Figure 5-54. Storwize V7000 WWNNs

Each of the Storwize V7000 node canisters has its own WWNN, which is based
on 50:05:07:68:02:0z:zz:zz where z:zz:zz is unique for each node canister. It is unrelated to the
WWNN of the other node canister (they might be sequential numbers, or they might not).
The WWPN of each Storwize V7000 Fibre Channel port is based on: 50:05:07:68:02:Yz:zz:zz
where z:zz:zz is unique for each node canister and the Y value is taken from the port position.
The number in each black box (which represents a Fibre Channel port) is the Y value, which is also
the port number. Therefore, the Y value and the port number are the same number. In this
example, port 1, contains a 1, so a WWPN presented by this port would look like:
50:05:07:68:02:1z:zz:zz.
The environment of having multiple WWNNs used in certain disk storage systems is limited only by
the maximum of 1024 WWPNs and 1024 WWNNs.
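To see the WWPNs that each node actually presents, the lsportfc command lists the Fibre Channel I/O ports; a trimmed, hypothetical example (output abbreviated, with placeholder digits in the WWPN):

IBM_Storwize:V009B:superuser>lsportfc
id fc_io_port_id port_id type port_speed node_id node_name WWPN
0  1             1       fc   8Gb        1       node1     50050768021ZZZZZ
. . .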

Backend storage system WWNNs


• Up to 16 WWNNs of the storage system can be set up as a group.
• Each WWNN appears as one controller (system) to Storwize V7000 cluster.
• Each LUN must be mapped to Storwize V7000 ports using same LUN ID.
• Automatic failover of MDisks if issues with individual controller port.
[Diagram: a storage system presenting 16 WWNNs, each with its own WWPNs, with LUNs L0 through Lf masked across them. Examples: various EMC and HDS models.]

Figure 5-55. Backend storage system WWNNs

Some storage systems generate more than 16 WWNNs. In this case, up to 16 WWNNs of the
storage system can be set up as a group. The Storwize V7000 treats each group of 16 WWNNs as
a storage system. Deploy LUN masking so that each LUN is assigned to no more than 16 ports of
these storage systems. The environment of having multiple WWNNs used in certain disk storage
systems is limited only by the maximum of 1024 WWPNs and 1024 WWNNs.

Storwize V7000 to DS3500 with more than one WWNN


• Assign MDisks in multiples of storage ports zoned with Storwize
V7000 cluster (8 WWPNs - 8 MDisks/16 MDisks).

[Diagram: Controller1 (WWNN1) presents LUNs L0, L2, L4, and L6 with preferred node = Node1; Controller2 (WWNN2) presents LUNs L1, L3, L5, and L7 with preferred node = Node2. Example: IBM Storwize V7000 to DS3500 implementation.]


Figure 5-56. Storwize V7000 to DS3500 with more than one WWNN

As best practice, assign MDisks in multiples of storage ports zoned with Storwize V7000 cluster (8
WWPNs - 8 MDisks/16 MDisks).
For the latest information on Storwize V7000 product support, refer to the Storwize V7000
Information Center > Configuration > Configuring and servicing external storage systems for
details regarding storage system setup parameters. Maximum configuration limits can be found
on the web by searching with the keywords IBM Storwize V7000 maximum configuration
limits.

Logical unit number to managed disks


• LUN masking makes a logical unit number available to some hosts and is
mainly implemented at the host bus adapter (HBA) level.
ƒ Create large LUNs, similar in size
ƒ Assign each LUN (MDisk) to all V7000 ports
ƒ Allocate one LUN (MDisk) per array

[Diagram: LUN0, LUN1, LUN2, and LUN4, assigned to the Host_V7K C host group, surface through the SAN after system discovery as the unmanaged MDisks mdisk1 through mdisk4. Allocating large LUNs from storage systems alleviates micromanagement of storage provisioning at the individual storage system level and facilitates centralized storage management.]

Figure 5-57. Logical unit number to managed disks

Logical Unit Number (LUN) masking is an authorization process that makes a Logical Unit Number
available to some hosts and unavailable to other hosts. LUN masking is mainly implemented at the
host bus adapter (HBA) level.
A volume group is a named construct that defines a set of LUNs. The Storwize V7000 host
attachment can then be associated with a volume group to access its allowed or assigned LUNs.
The LUNs identified in the visual as LUN1 through LUN4 become unmanaged MDisks after
Storwize V7000 performs device discovery on the SAN. These LUNs should be large, similar in
size, and be assigned to all of the Storwize V7000 ports of the cluster. These LUNs must not be
accessible by other host ports or other Storwize V7000 clusters.
LUNs become MDisks to be grouped into storage pools. Create a storage pool by using MDisks
with similar performance and availability characteristics. For ease of management and availability,
do not span the storage pool across storage systems.
The recommendation is to allocate and assign LUNs with large capacities from the storage systems
to the Storwize V7000 ports. These SCSI LUNs or MDisks once under the control of the Storwize
V7000 provide extents from which volumes can be derived.
All storage systems use variations of these approaches to implement LUN masking. Refer to the
Storwize V7000 Information Center > Configuration > Configuring and servicing external
storage systems for more specific information about the numerous heterogeneous storage
systems that are supported by the Storwize V7000.
Within a given disk storage system, considerations for best practices of LUN placement have to be
adjusted to the given disk storage system. For example, an IBM XIV disk storage system does not
have an array-based architecture. For an XIV, large LUN sizes, even multiples of LUNs per path to
the Storwize V7000, and usage of XIV capacity need to be considered.
For an IBM DS8000, performance optimizing functions such as rotate extent space allocation
techniques negate a LUN per array practice, where in a DS3500 or DS5000 one LUN per array is
indeed best practice, within the context of even multiples of LUNs per path, and same-sized, large
LUNs.
The best practice of LUN/array placement needs to be qualified by the storage system used. There
are many documents that relate to LUN masking. Going into details for each storage system that
is supported by the Storwize V7000 is beyond the scope of this course.

Disk storage management interface


• The Storage Manager client uses
two main windows that give
control over the storage system:
ƒ The Enterprise Management
window:
í The client can manage multiple
storage subsystems in a storage
domain, enabling the user to add,
remove, and monitor each storage
subsystem.
ƒ The Subsystem Management
window:
í The client can manage individual
storage subsystems.


Figure 5-58. Disk storage management interface

Each storage device has a feature-rich management interface that gives control over the storage
device. The Subsystem Management window is used to configure and manage the logical and
hardware components within the storage subsystem. The latest upgrade provides a new look
and feel, with a more intuitive interface and redefined tabs, which eases the flow of
configuration and management. The Summary tab interface has been modified to depict a broader
“at-a-glance” summary view of all component activities within the storage subsystem, which
includes the latest firmware version and the installed Premium Features.

DS3K Storage system WWNN and WWPNs


Figure 5-59. DS3K Storage system WWNN and WWPNs

To illustrate the disk storage system management interface, this visual shows the WWPNs and
WWNN of an IBM DS3x00 disk storage system.
The profile of this storage system can be displayed by the DS Storage Manager GUI. Two different
controllers within this DS3400 are displayed (note the Controllers tab). Each controller has its own
unique WWPN value, but they share the same WWNN value.
Therefore, the DS3x00 storage system is identified by just one WWNN and each controller port
within the storage system has its own WWPN. This is also the case with other models of the
DS3000, DS4000, and DS5000 series of storage systems.

DS3K Storwize V7000 host group definition

All Storwize V7000 cluster WWPNs are defined to the storage system.

Figure 5-60. DS3K Storwize V7000 host group definition

This is an example of LUN masking using the host group construct on a disk storage system.
In the Configure Hosts view of the DS Storage Manager, the Storwize V7000 clustered system is
defined to the DS3K as a host group. Each host (node) is defined with four ports. This means that
the Storwize V7000 is defined as a SCSI initiator (host) to the external storage system.
The host ports are shown in detail in the Configured Hosts: box. The host type is IBM TS SAN
VCE (IBM TotalStorage SAN Volume Controller Engine). The IBM TS SAN VCE host type uses
Auto Logical Drive Transfer (ADT), which allows the Storwize V7000 to properly manage SCSI LUN
ownership between controllers using the paths specified by the host.

DS3K LUNs assigned to Storwize V7000 host group


Figure 5-61. DS3K LUNs assigned to Storwize V7000 host group

In the first example of the Configure Hosts view of the DS Storage Manager, the Storwize V7000
clustered system is defined to the DS3K as a host group. This means that the Storwize V7000 is
defined as a SCSI initiator (host) to the external storage system. The IBM TS SAN VCE host type
uses Auto Logical Drive Transfer (ADT), which allows the Storwize V7000 to properly manage
SCSI LUN ownership between controllers using the paths specified by the host.
From the Host-to-Logical Drive Mappings view of the DS Storage Manager, eight LUNs have
been mapped to the host group called Storwize V7000 with their respective LUN numbers which
become the MDisks in the Storwize V7000.
If the LUN numbers or mappings are changed, the MDisk has to be removed from the Storwize
V7000 first. If this is not done, then access to Storwize V7000-surfaced volumes are lost and the
risk of data loss is high (due to human error).


External storage system (automatic discovery)


• Select Pools > External Storage to view external storage.
ƒ Device type (17xx FastT) indicates the type of storage system.
ƒ Storwize V7000 performs device logins with the storage system ports to
discover LUNs.
ƒ The detectmdisk command must be run after the creation or modification (add or
remove MDisk) of storage pools for paths to be redistributed.
ƒ Display column can be modified to show other attributes.

(Click the + (plus) sign to expand the view.)


Figure 5-62. External storage system (automatic discovery)

External storage systems can be displayed by selecting the Pools > External Storage menu
option. The External Storage GUI panel display was captured in two images to provide a full view.
A controller entry is listed by a default name of controllerx (where x indicates the ID assigned to the
detected controller). Each controller is associated with an ID number and WWNN which was zoned
with the Storwize V7000 node ports. The Storwize V7000 GUI has performed device logins with the
storage system ports to discover LUNs that have been assigned to the Storwize V7000. The device
type of 17xx FastT indicates the type of storage system attached.
You can click the + (plus) sign of the controller entry to list the LUNs present. LUNs are displayed
as MDisk entries. The LUN number column represents the assigned LUN numbers of these
MDisks. To uniquely identify an MDisk in the Storwize V7000 inventory, it is correlated to a specific
storage system and the specific LUN number that is assigned by that storage system. The Storwize
V7000 assigns to the MDisk a default object name and object ID.
All newly discovered MDisks are always interpreted in an unmanaged mode. Each MDisk
represents a LUN that has been assigned to the Storwize V7000 host group from the DS3K storage
system. You must assign MDisks to the specific pool to be able to manage the allocated capacity.


Managed disks are SCSI LUNs


SCSI LUNs become managed disks (MDisks).
MDisk access modes:
• Unmanaged mode becomes managed mode (free space)
• Unmanaged mode becomes image mode (existing data)

[Diagram: SCSI LUNs surfaced from SAN-attached RAID storage systems (IBM, HP, and EMC systems with RAID 1, 5, 6, and 10 arrays) are assigned to the Storwize V7000 and become managed disks.]


Figure 5-63. Managed disks are SCSI LUNs

Traditionally SCSI LUNs surfaced by RAID controllers are presented to application host servers as
physical disks.
With the Storwize V7000 serving as the insulating layer, SCSI LUNs become the foundational
storage resource that is owned by the Storwize V7000. A one-to-one relationship exists between
the SCSI LUNs and the managed disks.
The Storwize V7000 takes advantage of the basic RAID controller features (such as RAID 0, 1, 5, 6,
or 10) but does not depend on large controller cache or host independent copy functions that are
associated with sophisticated storage systems. The Storwize V7000 has its own cache repository
and offers network-based Copy Services.
Managed disks have associated access modes. These modes, which govern how the Storwize
V7000 cluster uses MDisks, are:
• Unmanaged: The default access mode for LUNs discovered from the SAN fabric by the
Storwize V7000. These LUNs have not yet been assigned to a storage pool.
• Managed: The standard access mode for a managed disk that has been assigned to a storage
pool. The process of assigning a discovered SCSI LUN to a storage pool automatically changes
the access mode from unmanaged to managed mode. In managed mode, space from the
managed disk can be used to create virtual disks.

• Image: A special access mode that is reserved for SCSI LUNs that contain existing data. Image
mode preserves the existing data when control of this data is turned over to the Storwize
V7000. Image mode is specifically designed to enable existing data to become Storwize
V7000-managed. SCSI LUNs containing existing data must be added to the Storwize V7000 as
image mode.
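As a hedged illustration of the image mode path, an unmanaged MDisk that contains existing data is typically brought under Storwize V7000 control by creating an image mode volume from it; the pool, MDisk, and volume names below are hypothetical:

IBM_Storwize:V009B:superuser>mkvdisk -mdiskgrp ImagePool -iogrp 0 -vtype image -mdisk mdisk7 -name legacy_vol1
Virtual Disk, id [3], successfully created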


Renaming logical unit numbers


• Apply a naming convention so that the unique ID that is associated with an MDisk on the
system corresponds to the logical unit number (LUN) on the external storage
system.

[Diagram: LUNs from an IBM DS3K with UIDs 40124100 and 40124101 surface as mdisk1 and mdisk2 and are renamed DS3KP1_sas1 and DS3KP1_sas2 in storage pool DS3K_P1SAS15K_Pool; LUN0 and LUN1 from an IBM DS8K surface as mdisk3 and mdisk4 and are renamed DS8KP2dev_sata1 and DS8KP2dev_sata2 in storage pool DS8Kdev_SATA_Pool.]


Figure 5-64. Renaming logical unit numbers

The LUNs (volumes) surfaced by the disk storage systems become unmanaged MDisks. The
logical unit number (LUN) name spaces are local to the external storage systems; therefore, it is
not possible for the system to determine these names. However, you can use the external storage
system WWNNs and LUN IDs to identify each device. This unique ID can be used to associate
MDisks on the system with the corresponding LUNs on the external storage system.
The visual continues with the DS3500 and DS8000 examples where the MDisk entries of those
LUNs are renamed to enable easier identification of the storage system and the LUNs within the
storage system.
The storage pool names in this example reflect the storage system and disk device type making it
easier to identify relative performance and perhaps storage tier in an enterprise.
Names of Storwize V7000 objects can be changed without impacting Storwize V7000 processing. If
installation naming standards have been modified then names of Storwize V7000 objects can be
modified accordingly. All Storwize V7000 processing is predicated by object IDs and not object
names. Up to 63 characters can be used in an object name.


Example of storage system LUN details

Controller in Enclosure 85, Slot A


. . .
Host interface: Fibre
Channel: 1
Current ID: Not applicable/0xFFFFFFFF
Preferred ID: 0/0xEF
NL-Port ID: 0x010500
Maximum data rate: 4 Gbps
Current data rate: 4 Gbps
Data rate control: Auto
Link status: Up
Topology: Fabric Attach
World-wide port identifier: 20:24:00:a0:b8:75:ef:1f
World-wide node identifier: 20:04:00:a0:b8:75:ef:1f
Part type: HPFC-5700 revision 5

Figure 5-65. Example of storage system LUN details

The DS Storage Manager GUI can be used to display the details of the LUN represented by the
MDisk (LUN). The MDisk UID corresponds to the logical drive ID. The active WWPN from the
Storwize V7000 CLI output matches the Controller A WWPN in the storage system which is the
preferred and current owner of the LUN.


Best practice: Rename a storage system


• To change the controller default name:
ƒ Right-click the controller and select Rename.
ƒ Enter the desired name and click Rename.
í The GUI issues the svctask chcontroller -name command to rename the
controller.


Figure 5-66. Best practice: Rename a storage system

Since the storage system default names are automatically assigned by the system, it can be
difficult to properly identify a system under this naming convention. Therefore, adhering to naming
convention standards to create a more self-documenting environment often saves time for problem
determination.
To change the name of a system:
1. Right-click the entry and select the Rename option from the pop-up list.
2. Enter a new system name in the Rename Storage System pane, and click the Rename button.
When you initiate a task, the GUI generates a list of the commands that were used to complete the
task. In this example, a chcontroller -name command was used to rename the storage
system.
3. Once the task is completed, click the Close button.


Rename controller using chcontroller CLI command


IBM_Storwize:V009B:superuser>lscontroller -delim ,
id,controller_name,ctrl_s/n,vendor_id,product_id_low,product_id_high
0,controller0, ,IBM ,1746 , FAStT
IBM_Storwize:V009B:superuser>

IBM_Storwize:V009B:superuser>lscontroller 0
id 0
controller_name controller0
WWNN 20040080E537DE16
mdisk_link_count 7
max_mdisk_link_count 7
degraded no
vendor_id IBM
product_id_low 1746
product_id_high FAStT
product_revision 1070
ctrl_s/n
allow_quorum yes
fabric_type fc
. . .

(The CLI offers a straightforward, obvious, and simple way to view and configure the Storwize V7000 and the attached storage systems. Use the WWNN value to ascertain the storage system.)

IBM_Storwize:V009B:superuser>chcontroller -name V009-DS3K 0

IBM_Storwize:V009B:superuser>lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  V009-DS3K                IBM       1746           FAStT
IBM_Storwize:V009B:superuser>


Figure 5-67. Rename controller using chcontroller CLI command

Command-line interface (CLI) commands offer a straightforward, obvious, and simple way to view
and configure the Storwize V7000 and the attached storage subsystems. Before you issue a
command, it is best practice to first check the status of the system's current configuration.
The lscontroller command, appended with -delim, provides a comprehensive summary entry
for each storage system of the cluster. This output format is also referred to as the concise view, as
it provides high-level summary or concise information for each object, namely the object ID, name, and
device type.
The lscontroller x output (where x is the object ID of the controller) provides much more detail
about the specific object as a more verbose view. For example, an additional key field that is found
in the verbose view is the WWNN of the controller which enables the association of the controller
entry in the Storwize V7000 inventory with its physical entity counterpart.
Once the correct storage system has been pinpointed, a meaningful name can be assigned. To
rename the storage system, merely change the ls prefix of the object category to ch. Thus, the
chcontroller command with -name allows a new name to be assigned to the specified object
(identified by ID or name).
Again, adhering to the best practice of using meaningful names for objects (instead of staying with
the Storwize V7000 assigned default names) the storage systems (also known as controllers) of the
cluster are renamed.


Best practice: Rename an MDisk


• If not all MDisks appear, select Action > Discover storage.
• To rename an MDisk, right-click the MDisk and select Rename.
ƒ The chmdisk -name command is issued.
• The Storwize V7000 management GUI supports multi-selection using the Ctrl and Shift
keys.
(The MDisk list includes an Access mode column.)

Best practice: Rename storage systems and MDisks
for ease of identification and troubleshooting.

Figure 5-68. Best practice: Rename an MDisk

The disk drives that are discovered within the storage device are automatically assigned a default
name and sequentially numbered (MDisk#). The GUI automatically presents a list of all unmanaged
MDisks by object name and ID. However, each drive’s availability can be identified by an access
mode which determines how the cluster uses the MDisk. As a productivity aid the management GUI
offers multi-select for some functions so that the same action can be applied to multiple entries or
objects. The Ctrl or Shift keys can be used to select multiple entries. Storwize V7000 management
GUI supports cut and paste to minimize editing with the same character string.
Changing the name of an MDisk is a fairly simple task: right-click the MDisk and select the Rename option from
the pop-up list.
The GUI generates sequential chmdisk -name commands to change the name of each selected
MDisk.


Rename MDisks using chmdisk CLI


IBM_Storwize:V009B:superuser>detectmdisk
IBM_Storwize:V009B:superuser>lsmdisk -filtervalue
controller_name=*DS3K -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,
controller_name,UID,tier,encrypt
0,mdisk0,online,unmanaged,,,100.0GB,0000000000000000,V009-DS3K,
60080e500037f8d40000089c5448ecf900000000000000000000000000000000,
enterprise,no
1,mdisk1,online,unmanaged,,,100.0GB,0000000000000001,V009-DS3K,
60080e500037de16000007365448ee0500000000000000000000000000000000,
enterprise,no
2,mdisk2,online,unmanaged,,,100.0GB,0000000000000002,V009-DS3K,
60080e500037f8d40000089f5448ed2800000000000000000000000000000000,
enterprise,no
. . .

(Use the ctrl_LUN_# and UID values to ascertain the MDisk-to-LUN correlation.)

IBM_Storwize:V009B:superuser>lsmdisk -filtervalue name=mdisk* -nohdr | while read id name; do chmdisk -name VB1-mdisk$id $id; done
IBM_Storwize:V009B:superuser>


Figure 5-69. Rename MDisks using chmdisk CLI

If you are renaming MDisks using the CLI then use the detectmdisk command to scan the system
for any new LUNs that might have been assigned to the Storwize V7000 from storage systems.
This is analogous to cfgmgr in AIX or Rescan Disks in Windows. Any newly discovered LUNs
become MDisks with an access mode of unmanaged.
The lsmdisk command is filtered to list MDisks from a controller whose name ends with DS3K
(the leading asterisk is a wildcard). This output displays each MDisk by its ID, access mode, LUN number,
and UID.
To rename multiple MDisks, use the CLI and issue the commands as follows:
lsmdisk -filtervalue name=mdisk* -nohdr | while read id name; do chmdisk -name
VB1-mdisk$id $id; done
This command adds a do-loop that renames every MDisk matching the filter value name=mdisk*
to VB1-mdisk followed by its ID, such as VB1-mdisk0 and VB1-mdisk1.
The renaming of the MDisks correlates to the MDisk ID and the LUN number that is assigned by the
storage system.


MDisk properties
• Right-click an MDisk and select Properties.
ƒ Show Details provides technical parameters such as capacity, interface, rotation
speed, and the drive status (online or offline).

(Observe that the WWPNs correlate to the storage system.)


Figure 5-70. MDisk properties

Right-click a managed disk to view the properties of a specific drive. Check the View more details
link. IBM Storage uses a methodology whereby each WWPN is a child of the WWNN. This means
that if you know the WWPN of a port then you can easily match it to the WWNN of the storage
device that owns that port.


Storwize V7000 quorum index indicators


• Storwize V7000 automatically allocates three quorum disks to:
ƒ Resolve system tie-breaking node state issues
ƒ Store and maintain configuration metadata

IBM_2076:Team50A:TeamAdmin>lsquorum
quorum_index status id name  controller_id controller_name active object_type override
0            online 3  DS3K3 0             TeamA_DS3K      yes    mdisk       no
1            online 4  DS3K4 0             TeamA_DS3K      no     mdisk       no
2            online 5  DS3K5 0             TeamA_DS3K      no     mdisk       no
IBM_2076:Team50A:TeamAdmin>

(Quorum index 0 on DS3K3, with active = yes, is the active quorum; indexes 1 and 2 on DS3K4 and DS3K5 are standby quorums.)

IBM_2076:Team50A:TeamAdmin>svcinfo lsfreeextents DS3K0
id 0
number_of_extents 230
IBM_2076:Team50A:TeamAdmin>

(The quorum index is using 1 extent.)


Figure 5-71. Storwize V7000 quorum index indicators

The three quorum disks are used to resolve tie-breaking cluster state issues and track cluster
control information or metadata.
Use the lsquorum command to list the quorum disks. Quorums are identified by three quorum index
values - 0, 1, and 2. One quorum index is the active quorum and the others are in stand-by mode.
For this example, the active quorum is index 0 resident on MDisk ID 0.
The quorum size is affected by the number of objects in the cluster and the extent size of the pools.
For this example, the pool extent size is 1 GB and based on the number of free extents available a
quorum disk is deduced to be using one extent or 1 GB (the smallest unit of allocation). This might
help explain the missing capacity in the storage pool capacity value. The remaining extents of a
quorum MDisk are available to be assigned to volumes (VDisks).
A quorum disk is an MDisk or a managed drive that contains a reserved area that is used
exclusively for system management. A clustered system automatically assigns quorum disk
candidates.


MDisk properties: Quorum index indicator


IBM_2076:Team50A:TeamAdmin>lsmdisk DS3K0
id 0
name DS3K0
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name TeamA50_GRP1
capacity 115.0GB
quorum_index 0
block_size 512
controller_name TeamA_DS3K
ctrl_type 4
ctrl_WWNN 20040080E537DE16
controller_id 0
path_count 2
max_path_count 2
ctrl_LUN_# 0000000000000000
UID 60080e500037f8d40000089c5448ecf900000000000000000000000000000000
preferred_WWPN 20340080E537DE16
active_WWPN 20340080E537DE16
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier enterprise

(The UID corresponds to the storage system logical drive ID; the quorum_index entry of 0 marks this MDisk as holding the active quorum.)

Figure 5-72. MDisk properties: Quorum index indicator

As seen in the lsquorum output, MDisk ID 0 contains quorum index 0, which is the active quorum.
Only managed mode MDisks can be used as quorum disks. Observe that the detailed lsmdisk output
for this MDisk has a quorum_index entry of 0.
Each MDisk has a UID value, which is the serial number that is externalized by the owning storage
system for the LUN and appended with many bytes of zeros.


Distribute quorum disks across multiple controllers

IBM_2076:Team50A:TeamAdmin>lsquorum
quorum_index status id name  controller_id controller_name active object_type override
0            online 3  DS3K3 0             TeamA_DS3K      yes    mdisk       no
1            online 1  DS3K4 1             TeamA_DS3K      no     mdisk       no
2            online 10 DS8K1 2             TeamA_DS3K      no     mdisk       no
IBM_2076:Team50A:TeamAdmin>svcinfo lsfreeextents DS3K0
id 0
number_of_extents 500
IBM_2076:Team50A:TeamAdmin>chquorum -mdisk DS3K0 0
IBM_2076:Team50A:TeamAdmin>lsfreeextents DS3K0
id 0
number_of_extents 499
IBM_2076:Team50A:TeamAdmin>lsquorum
quorum_index status id name  controller_id controller_name active object_type override
0            online 0  DS3K0 0             TeamA_DS3K      no     mdisk       no
1            online 1  DS3K4 0             TeamA_DS3K      yes    mdisk       no
2            online 10 DS8K1 0             TeamA_DS3K      no     mdisk       no
IBM_2145:Team50A:TeamAdmin>

(Use the chquorum command to change the quorum association. Quorum index 0 is replaced, moving from DS3K3 to DS3K0 and consuming one extent. Best practice: distribute the quorum disks across multiple controllers.)


Figure 5-73. Distribute quorum disks across multiple controllers

Quorum disks can be assigned to drives in the control enclosure automatically or manually by using
the chquorum command. This command allows the administrator to select the MDisk for a quorum
index.
In this example, quorum index 0 is being placed on MDisk DS3K0. The three quorum disks have
been placed in three different storage pools that are backed by three different storage systems.
A quorum disk is automatically relocated if there are changes in the cluster configuration affecting
the quorum.


Best practice: Reassign the active quorum disk index


IBM_2145:Team50A:TeamAdmin>chquorum -active 0
IBM_2145:Team50A:TeamAdmin>lsquorum
quorum_index status id name  controller_id controller_name active object_type override
0            online 0  DS3K0 0             TeamA_DS3K      yes    mdisk       no
1            online 4  DS3K4 0             TeamA_DS3K      no     mdisk       no
2            online 5  DS3K5 0             TeamA_DS3K      no     mdisk       no
IBM_2145:Team50A:TeamAdmin>

(chquorum -active controls which quorum index is the active quorum.)


Figure 5-74. Best practice: Reassign the active quorum disk index

Not only is it best practice to spread the quorum disks across storage systems, it is also
recommended that the active quorum be placed in the storage system that is deemed to be the
most robust in the enterprise.
For example, the removal of MDisk ID 3 from a storage pool caused a configuration change that
impacted quorum index 0. Therefore, quorum index 0 was moved to another MDisk.
The -active keyword of the chquorum command allows the specification of the quorum index to be
the active quorum.


System failure: Quorum auto relocates quorum disks


• Quorum configuration is affected when an MDisk that contains a quorum disk goes
offline.
ƒ Storwize V7000 automatically reconfigures affected quorum disk to another eligible MDisk.
ƒ Running the system without a quorum disk can affect operations.
í Prevents storing of metadata for key operations like migrations

(Once the storage pool and MDisk are back online, the quorum indexes are restored.)


Figure 5-75. System failure: Quorum auto relocates quorum disks

A storage pool goes offline if an MDisk is unavailable, even if the MDisk has no data on it. If these
MDisks contain the quorum disks, then the Storwize V7000 quorum configuration is affected. For
disaster recovery purposes, running a Storwize V7000 system without a quorum disk can
seriously affect the operation. A lack of available quorum disks for storing metadata prevents any
migration operation (including a forced MDisk delete). Therefore, the Storwize V7000 automatically
reconfigures the affected quorum disks and moves them from the storage pool MDisks to another
eligible managed mode MDisk. As a result, the Health Status indicator turns red.
Once the storage pool MDisks have been restored to online, the Storwize V7000 automatically
reconfigures the quorum environment and moves the quorum indexes back to the storage pool MDisks.
There are special considerations concerning the placement of the active quorum disk for a
stretched (split) cluster and Split I/O Group configurations. For more information, refer to the
following website: http://www.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311


Modify storage capacity


• Increase or reduce storage capacity without disruption to system services.
ƒ Assign to Pool
í To increase the number of extents that are available for new volume creations or to expand existing volumes
í An MDisk can be added to a pool that contains existing data; volume extents are reallocated
í Data is lost when adding an MDisk that itself contains existing data
ƒ Unassign from Pool
í To reduce unused storage space
í MDisks with existing data can be removed from a pool
í Any affected quorum index is automatically moved
í The pool must have sufficient free space available to receive the reallocated extents
í Once an MDisk is removed, its access mode is reset to unmanaged

(Right-click a pool and select Add to Pool or Unassign from Pool.)

[Diagram: pool TeamA50_GRP1 shown before and after MDisk DS3K6 is added alongside DS3K0, DS3K1, and DS3K2.]

Figure 5-76. Modify storage capacity

A single system can manage up to 128 storage pools. The size of a pool can be changed to
maintain storage utilization. You can add MDisks to a pool at any time either to increase the number
of extents that are available for new volume creations or to expand existing volumes. If statistics
show storage utilization rates are low, the pool’s unused space can be reduced by unassigning
MDisks from the pool. Both procedures can be performed using the management GUI without
disruption to the storage pool or volumes being accessed.
You can add only MDisks that are in unmanaged mode. When MDisks are added to a storage pool,
their mode changes from unmanaged to managed. When an MDisk is added to a pool that contains
existing data, the system automatically balances volume extents between the MDisks to provide the
best performance to the volumes. However, if you add an MDisk that contains existing data to a
managed disk group, you lose the data that it contains. Image mode is the only mode that
preserves its data.
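As a hedged sketch of how existing data can be preserved (the pool, MDisk, and volume names here are illustrative), an image mode volume presents an unmanaged MDisk's data one-to-one instead of striping over it:

IBM_Storwize:V7000:superuser>mkvdisk -mdiskgrp ImagePool -iogrp 0 -vtype image -mdisk mdisk9 -name legacy_vol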
When an MDisk is removed from a storage pool, the system reallocates the removed MDisk’s
extents to other MDisks in the same pool. If the MDisk holds a quorum disk, the system
automatically relocates the quorum index to another eligible location in the system.
You can delete MDisks from a group under the following conditions:

• Volumes are not using any of the extents that are on the MDisk.
• Enough free extents are available elsewhere in the group to move any extents that are in use
from this MDisk.
More information about deleting data on MDisks is covered in the volume protection slide later in this unit.


CLI commands addmdisk and rmmdisk


Figure 5-77. CLI commands addmdisk and rmmdisk

The addmdisk command is generated by the GUI to add a new MDisk to an existing pool. If volumes
are using the MDisk that you are removing from the storage pool, you must select the option Remove
the MDisk from the storage pool even if it has data on it. The generated rmmdisk command then
includes the -force parameter, which enables the removal of the MDisk by redistributing its allocated
extents to other MDisks in the pool. However, if there is insufficient free space among the
remaining MDisks in the pool to receive the reallocated extents, the removal fails - no data is
transferred, and even a forced removal is not allowed. Once an MDisk is removed from a storage pool,
its access mode is reset to unmanaged to indicate that it is not being used by the Storwize V7000
cluster.
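A minimal sketch of the equivalent CLI invocations (the MDisk and pool names are illustrative):

IBM_Storwize:V7000:superuser>addmdisk -mdisk mdisk8 TeamA50_GRP1
IBM_Storwize:V7000:superuser>rmmdisk -mdisk mdisk8 -force TeamA50_GRP1

The -force parameter is honored only when enough free extents remain in the pool to receive the migrated data.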


Access methods for MDisk multipathing


Figure 5-78. Access methods for MDisk multipathing

Backend storage system MDisks are accessed using one of four multipathing methods, chosen upon
Storwize V7000’s discovery of the storage system model. The objective is to spread I/O activity to
MDisks across the available paths (zoned ports) to the storage system.
The access method for a given storage system model is documented at the Storwize V7000
support website under Controllers > Multipathing of the Supported Hardware page. For example,
MDisks presented by a DS3500 are accessed using the MDisk group balancing method while
MDisks presented by a Storwize V7000 are accessed using the round robin method.


Four multipathing methods


• Round robin: I/Os for the MDisk are distributed over multiple ports of
the storage system.
• MDisk group balanced: I/Os for the MDisk are sent to one target port
of the storage system. The assignment of ports to MDisks is chosen to
spread all the MDisks within the MDisk group (pool) across all of the
active ports as evenly as possible.
• Single port active: All I/Os are sent to a single port of the storage
system for all the MDisks of the system.
• Controller balanced: I/Os are sent to one target port of the storage
system for each MDisk. The assignment of ports to MDisks is chosen
to spread all the MDisks (of the given storage system) across all of the
active ports as evenly as possible.


Figure 5-79. Four multipathing methods

The four multipathing methods or options to access an MDisk of an external storage system are:
• Round robin: I/Os for the MDisk are distributed over multiple ports of the storage system.
• MDisk group balanced: I/Os for the MDisk are sent to one target port of the storage system.
The assignment of ports to MDisks is chosen to spread all the MDisks within the MDisk group
(pool) across all of the active ports as evenly as possible.
• Single port active: All I/Os are sent to a single port of the storage system for all the MDisks of
the system.
• Controller balanced: I/Os are sent to one target port of the storage system for each MDisk.
The assignment of ports to MDisks is chosen to spread all the MDisks (of the given storage
system) across all of the active ports as evenly as possible.


Example: Storage system path count DS3K in four-node Storwize V7000 cluster

IBM_Storwize:V009B:superuser>lscontroller NAVYDS3K
id 0
controller_name NAVYDS3K
WWNN 200400A0B875EF1F
mdisk_link_count 9
max_mdisk_link_count 10
degraded no
vendor_id IBM
product_id_low 1726-4xx
product_id_high FAStT
product_revision 0617
ctrl_s/n
allow_quorum yes
WWPN 203500A0B875EF1F
path_count 16
max_path_count 20
WWPN 202400A0B875EF1F
path_count 20
max_path_count 20

(Diagram: MDisks 0-9 on storage system V009-DS3K; Port A reports a path count of 20 and Port B a path count of 16.)

Best practice: Assign MDisks in multiples of the storage ports zoned with the Storwize V7000 cluster (2 WWPNs - 2, 4, 6, or 8 MDisks) so path counts are balanced across ports.

Figure 5-80. Example: Storage system path count DS3K in four-node Storwize V7000 cluster

For storage systems that implement preferred controllers (such as the DS3500 or DS3400), the
Storwize V7000 honors the MDisk’s preferred controller attribute when this information is discerned
from SCSI inquiry data. It is good practice to balance MDisk (LUN) assignments across storage
controllers in the storage system.
For the DS3K system, the Storwize V7000 implements the MDisk group balancing multipathing
access method. This means I/Os for a given DS3K MDisk are sent to one given target port that is
zoned for this storage system. The assignment of ports to MDisks is chosen with the aim to spread
all the MDisks within the storage pool across the zoned ports and with consideration to the
preferred controller of MDisks (LUNs).
A path count value is accounted for on a per MDisk basis. Each node of the Storwize V7000 cluster
has a path to a given MDisk and is counted as one path. In a four-node cluster, the path count
would be 4 per MDisk.
In the lscontroller output detail, the path count is reported at the storage port level and provides
a clue to how many MDisks are accessed through a given storage port (16 = 4 paths x 4 MDisks, 20
= 4 paths x 5 MDisks).
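As a worked check of those numbers, assuming the nine MDisks from the lscontroller output above: each MDisk contributes 4 paths (one per node), so five MDisks assigned to one storage port give 5 x 4 = 20 paths, and the remaining four MDisks on the other port give 4 x 4 = 16 paths, matching the per-WWPN path_count values reported.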


Example: DS3K MDisk access path count

IBM_Storwize:V009B:superuser>lsmdisk VB1-DS3K
id 0
name VB1-DS3K
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name DS3K_SASpool
capacity 30.0GB
quorum_index
block_size 512
controller_name V009-DS3K
ctrl_type 4
ctrl_WWNN 200400A0B875EF1F
controller_id 0
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000000
UID 600a0b800075ed34000002354dd223cc00000000000000000000000000000000
preferred_WWPN 202400A0B875EF1F
active_WWPN 202400A0B875EF1F
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier generic_hdd

(Diagram: the Storwize V7000 is comprised of four nodes; each of the 4 nodes designates one of its ports to access a given MDisk using the same V009-DS3K storage port. Storage system LUN to controller preferences are honored.)

Figure 5-81. Example: DS3K MDisk access path count

The path count value for a given MDisk is found in the lsmdisk verbose output for the MDisk. For a
four-node Storwize V7000 cluster, the path count to the VB1-DS3K MDisk would be 4. Since the
MDisk group balancing access method is used when accessing the MDisks, a given MDisk is
accessed only through one assigned storage port. This port is documented as the preferred
WWPN and it should also be the active WWPN.
Examine the output detail for the DS3KNAVY0 MDisk to obtain its preferred WWPN and active
WWPN value. Verify that this WWPN is one of the two WWPN values from the lscontroller output
from a prior example. All four nodes of the Storwize V7000 cluster use the active WWPN port for
I/O to this MDisk.
Likewise each of the MDisks in the DS3K is assigned one of the DS3K ports (two ports that are
zoned in this example) based on the preferred controller attribute of the DS3K MDisk.


Example: Storage system path count DS8K in four-node Storwize V7000 cluster

IBM_Storwize:V009B:superuser>lscontroller 1
id 1
controller_name V009-DS8K
WWNN 5005076306FFC534
mdisk_link_count 4
max_mdisk_link_count 4
degraded no
vendor_id IBM
product_id_low 2107900
product_id_high
product_revision 5.78
ctrl_s/n 75V9721FFFF
allow_quorum yes
WWPN 5005076306030534
path_count 16
max_path_count 16
WWPN 5005076306080534
path_count 16
max_path_count 16
WWPN 5005076306034534
path_count 16
max_path_count 16
WWPN 5005076306084534
path_count 16
max_path_count 16

(Diagram: MDisks 10-13 on storage system V009-DS8K; each of the four zoned ports reports a path count of 16. LUN to controller access is symmetric - no controller preference.)

Figure 5-82. Example: Storage system path count DS8K in four-node Storwize V7000 cluster

Contrasted with the DS3K, there is a key difference in pathing with the
DS8000. The DS8000 LUNs can be reached through any DS8000 port, as it is a symmetric device.
There is no preferred controller concept.
The Storwize V7000 uses the same round robin multipathing method (as used for Storwize V7000
back ends) to access the DS8000 MDisks. Round robin distributes I/Os of an MDisk over all zoned
ports of the DS8K.
The path count value is accounted for on a per-MDisk basis. Each node of the Storwize V7000 cluster
has a path to a given MDisk and is counted as one path. In a four-node cluster, the MDisk would
have a path count of 4. In the lscontroller output detail, the path count is reported by storage port
(16 = 4 paths x 4 MDisks). This output identifies that only four ports of the DS8K are zoned with the
Storwize V7000 cluster in this example.


Example: DS8K MDisk access path count

IBM_Storwize:V009B:superuser>lsmdisk 10
id 10
name VB1-DS8K1220
status online
mode managed
mdisk_grp_id 1
mdisk_grp_name DS8K_FC15Kpool
capacity 100.0GB
quorum_index 2
block_size 512
controller_name V009-DS8K
ctrl_type 4
ctrl_WWNN 5005076306FFC534
controller_id 1
path_count 16
max_path_count 16
ctrl_LUN_# 4012402000000000
UID 6005076306ffc534000000000000122000000000000000000000000000000000
preferred_WWPN
active_WWPN many
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier generic_hdd

(Diagram: MDisk 10 on NAVYDS8K; each of the 4 nodes designates one of its ports to access a given MDisk using any of the 4 ports of NAVYDS8K, for a per-port path count of 4.)

Figure 5-83. Example: DS8K MDisk access path count

The DS8K MDisks can be accessed through any of its ports without controller preference
considerations. There is no controller port allegiance; therefore, the preferred WWPN is blank.
Since the MDisk can be accessed through any of the zoned DS8K ports, the active WWPN has a
value of many. Remember that each of the four Storwize V7000 nodes designates one of its
Storwize V7000 ports to access a given MDisk for a total of 4 paths. Since the Storwize V7000
access method for the DS8000 is round robin, any of the 4 ports of the DS8K can be used. Thus, the
maximum path count to this MDisk is 16 (4 ports x 4 paths of the MDisk).


Example: Storage system path count FlashSystem in two-node Storwize V7000 cluster

IBM_Storwize:V009B:superuser>lscontroller 0
id 0
controller_name FLASHSYSTEM_B
WWNN 10000020C21224C0
mdisk_link_count 9
max_mdisk_link_count 9
degraded no
vendor_id IBM
product_id_low FlashSys
product_id_high tem
product_revision 6309
ctrl_s/n 1224c00000
allow_quorum yes
WWPN 20040020C21224C0
path_count 18
max_path_count 18
WWPN 21040020C21224C0
path_count 18
max_path_count 18
WWPN 20080020C21224C0
path_count 18
max_path_count 18
WWPN 21080020C21224C0
path_count 18
max_path_count 18

(Diagram: MDisks 0-8 on FLASHSYSTEM_B; each of the four zoned ports reports a path count of 18. LUN to controller access is symmetric - no controller preference.)

Figure 5-84. Example: Storage system path count FlashSystem in two-node Storwize V7000 cluster

Similar to the DS8000, the IBM FlashSystem is a symmetric device and there is no preferred
controller concept. LUNs are accessible from any of the four ports of the owning FlashSystem. The
Storwize V7000 also uses the round robin method to access FlashSystem MDisks. Round robin
distributes I/Os of an MDisk over all zoned ports of the FlashSystem.
The path count value is accounted for on a per-MDisk basis: each node of the Storwize
V7000 cluster has a path to a given MDisk and is counted as one path. In a two-node cluster, the
MDisk would have a path count of 2.
In the lscontroller output detail, the path count is reported by storage port (18 = 2 paths x 9
MDisks). This is a two-node Storwize V7000 cluster. All four ports of the FlashSystem are zoned
with the two-node Storwize V7000 cluster in this example.


Example: FlashSystem MDisk access path count

IBM_Storwize:V009B:superuser>lsmdisk 2
id 2
name FLASHSYSTEM_B_V7K_3
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name FLASHSYSTEM_B
capacity 2.3TB
quorum_index 0
block_size 512
controller_name FLASHSYSTEM_B
ctrl_type 4
ctrl_WWNN 10000020C21224C0
controller_id 0
path_count 8
max_path_count 8
ctrl_LUN_# 0000000000000002
UID 0020c240021224c0000000000000000000000000000000000000000000000000
preferred_WWPN
active_WWPN many
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier generic_hdd

(Diagram: MDisk 2 on FLASHSYSTEM_B; each of the 2 V7K nodes designates one of its ports to access a given MDisk using any of the 4 ports of FLASHSYSTEM_B, for a per-port path count of 2.)

Figure 5-85. Example: FlashSystem MDisk access path count

The FlashSystem MDisks can be accessed through any of its ports without controller preference
considerations. There is no controller port allegiance; therefore, the preferred WWPN is blank.
Since the MDisk can be accessed through any of the zoned FlashSystem ports, the active WWPN has
a value of many.
Each of the two Storwize V7000 nodes designates one of its Storwize V7000 ports to access a
given MDisk - for a total of 2 paths. Since the Storwize V7000 access method for the FlashSystem
is round robin, any of the 4 ports of the FlashSystem can be used. Thus, the maximum path count to
this MDisk is 8 (4 ports x 2 paths of the MDisk).


Best practices: Pools and MDisks


Place MDisks with same availability and performance
attributes from the same storage system in the same
storage pool (Easy Tier excepted).

(Diagram: a storage system with 4 ports, WWPN1-WWPN4, presents 15K SAS MDisks mdisk1-mdisk4 into one pool and NL SAS MDisks mdisk11-mdisk14 into another.)

Assign MDisks in multiples of storage ports
to balance utilization of all storage ports
zoned with Storwize V7000 cluster.

Figure 5-86. Best practices: Pools and MDisks

As a general practice, ensure that the number of MDisks presented from a given storage system is a
multiple of the number of its storage ports that are zoned with the Storwize V7000. This approach is
particularly useful for storage systems where the round robin method is not implemented for MDisk
access.


Storage provisioning topics


• Storage infrastructure

• Storage logical building block


• Internal storage

• External storage

• Encryption


Figure 5-87. Storage provisioning topics

This topic discusses the Storwize V7000 support for hardware and software encryption.


Storwize V7000 Gen2 encryption


• Storwize hardware encryption: SAS
hardware encryption is specific to the
Storwize V7000 Gen2 hardware - encryption
of data at rest. Once encryption is
enabled, all internal storage (array
objects) is created as hardware
encrypted by default.
ƒ Supported with Spectrum Virtualize V7.4
or later

• Storwize software encryption:
encryption of data at rest, at a storage
pool level, of external storage managed
in that encrypted pool.
ƒ Supported with Spectrum Virtualize V7.6
or later

(Diagram: internal MDisks are encrypted by the SAS hardware, while externally virtualized DS8x00, FlashSystem, and DS3x00 storage is encrypted in software.)


Figure 5-88. Storwize V7000 Gen2 encryption

The Storwize V7000 Gen2 supports two levels of encryption: hardware encryption of internal
storage and software encryption of external storage. Both methods of encryption protect against
the potential exposure of sensitive user data and user metadata that is stored on discarded, lost, or
stolen storage devices, and they can also facilitate the warranty return or disposal of hardware.
Distributed RAID (DRAID) is not supported by hardware encryption, and software encryption
cannot be used on internal storage.


Data at Rest encryption


• Data at Rest encryption is implemented using the AES-NI CPU instruction set
and engines: eight cores on one CPU performing AES 256-XTS encryption
(optional).
• Encryption is the process of encoding data so that only authorized parties can
read it.
• Uses secret keys to encode the data according to well known algorithms.
• Data at Rest means that the data is encrypted on the end device (drives).
• Algorithm being used is AES – US government standard from 2001.
• Complies with FIPS-140 standard, but is not certified.
• XTS-AES 256 for data keys.
• AES 256 for master keys.
• Algorithm is public. The only secrets
here are the keys.
• Symmetric key algorithm (Same key
used to encrypt and decrypt data).


Figure 5-89. Data at Rest encryption

The Storwize V7000 supports software data at rest encryption using the AES-NI CPU instruction set
and engines: eight cores on one CPU perform AES 256-XTS encryption, which is a FIPS 140-2
compliant algorithm.
Software encryption maps all I/O buffers into user space (this carries a risk of data scribblers);
reads are decrypted using the client-provided buffer, and writes are encrypted into a new pool of
buffers by software in the Storwize V7000 nodes.
Data-at-rest encryption is also instantly secure; it does not rely on human intervention, which is
open to user error and could leave the data vulnerable.
There is no performance penalty for data-at-rest encryption. Encryption of system data and
metadata is not required, so system data and metadata are not encrypted.
Data-at-Rest Encryption is an optional feature that requires a purchased license.


System storage V7.3 I/O stack with encryption

(Diagram: the V7.3 I/O stack - SCSI Target, Forwarding, Replication, Upper Cache, FlashCopy, Peer Mirroring, Thin Provisioning, Compression, Lower Cache, Virtualization, Easy Tier 3, RAID, Forwarding, and SCSI Initiator - flanked by the Communications, Interface Layer, Configuration, and Clustering components. Software encryption is performed in the interface layer, while SAS hardware encryption sits alongside the Fibre Channel, iSCSI, FCoE, SAS, and PCIe drivers at the bottom of the stack.)

Figure 5-90. System storage V7.3 I/O stack with encryption

Using software encryption in the interface layer of the Storwize V7000 node canister, rather than
performing encryption in the SAS hardware layer, allows for greater flexibility in how the Storwize
V7000 handles external MDisks and their encryption attributes. In this case, I/O goes into the
Platform interface (PLIF), and gets encrypted there before being passed to a driver.
If you are mixing storage pools with internal RAID encrypted drives/flash drives and externally
virtualized storage, apply the key to the pool and it will apply software encryption only to the
external storage, letting the SAS hardware separately encrypt the internal storage.


Data-at-rest encryption key management


• IBM Spectrum Virtualize has built-in key management.
• Two types of keys:
ƒ Master key (one per system/cluster)
ƒ Data encryption key (one per encrypted pool)
• Master key is created when encryption enabled.
ƒ Stored on USB devices
ƒ Required to use a system with encryption enabled
ƒ Required on boot or re-key process, stored in volatile memory on system
ƒ May be changed
• Data encryption key is used to encrypt data and is created automatically
when an encrypted pool is created.
ƒ Stored encrypted with the master key
ƒ No way to view data encryption key
ƒ Cannot be changed
ƒ Discarded when an array is deleted
(secure erase)


Figure 5-91. Data-at-rest encryption key management

IBM Storwize V7000 uses the Protection Enablement Process (PEP) to transform the system from
a state that is not protection-enabled to a state that is protection-enabled. This process establishes
what is called a master encryption access key to access the system and a data encryption key
to encrypt and decrypt data.
When a system is protection-enabled, the system is both encryption-enabled and
access-control-enabled, and an access key is required to unlock the Storwize V7000 so it can
transparently perform all required encryption-related functionality.
The encryption access key can be created during the system initialization process by inserting the USB
flash drives into the control enclosures. During this process, you need to add a check mark on
Encryption to start the encryption wizard.


System setup: Activate encryption licenses automatically


• Authorization code is required to obtain keys for each licensed function that
you purchased for your system.
• Generated activatefeature command sends system information to IBM
for authorization code verification to retrieve and apply license keys.

(Screen captures: the numbered steps of the automatic activation procedure, ending with a click on the Activate button.)


Figure 5-92. System setup: Activate encryption licenses automatically

Activation of the license can be performed in one of two ways, either automatically or manually, and
can be performed during System Setup or later.
To activate encryption automatically using the System Setup menu, the workstation being used to
activate the license must be connected to an external network. Select the Yes option. If the control
enclosure is not highlighted, select the control enclosure on which you wish to enable the encryption
feature. Click the Actions menu. From here, select Activate License Automatically. Click Next.
From the pop-up menu, enter the authorization code specific to the enclosure you have selected.
The authorization code can be found within the licensed function authorization documents. This
code is required to obtain keys for each licensed function that you purchased for your system. Once
the authorization code has been entered, click the Activate button.
The system generates the activatefeature command, which connects to IBM in order to verify
the authorization code, then retrieves and applies the license keys. This procedure can take a few
minutes to complete.
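As a hedged sketch of the generated command (the authorization code shown is a made-up placeholder, and the exact parameters can vary by code level):

IBM_Storwize:V7000:superuser>activatefeature -authcode A1B2-C3D4-E5F6-7890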


Encryption license activated


• Encryption keys are automatically copied on the USB flash drives.

(Callout: the system will time out after a few minutes if problems are detected.)


Figure 5-93. Encryption license activated

Once the encryption is successfully enabled, a green check mark appears under the Licensed
row.
If a problem occurs with the activation procedure, the Storwize V7000 will time out after a short time
(approximately 2.5 minutes).
In this case, check that you have a valid activation (not license) code and working Internet access,
and check for any other problems with the Storwize V7000 Gen2.


Manual encryption activation


• Activate encryption on previously initialized systems.
ƒ Select Settings > System. Click the Encryption License hyperlink.
ƒ Repeat the steps to select the control enclosure and enter the authorization code
for verification.

(Screen capture: the Encryption License hyperlink on the Settings > System page.)


Figure 5-94. Manual encryption activation

From the Settings > System menu, you can use the Encryption License hyperlink to manually
activate the encryption feature on a previously initialized system. This procedure displays the
same task steps used to complete the System Setup automatic encryption activation.


Suggested task: Enable Encryption


• Enables Data-at-Rest Encryption
• Re-directs to the Enable Encryption wizard
• Can be enabled later
ƒ Appears as a reminder


Figure 5-95. Suggested task: Enable Encryption

Once the encryption license feature has been successfully applied, the Storwize V7000
management GUI Suggested Tasks panel provides the option to enable data-at-rest encryption for the
system. The suggested task also serves as a reminder that encryption is not enabled. To perform
this task, click Enable Encryption, which re-directs you to the Enable Encryption wizard, or click
Cancel to continue and enable encryption at a later time.


Enable Encryption wizard (1 of 3)


• Three USB flash drives are required.
ƒ Insert two USB flash drives into one V7000 node canister and one into the second node
canister.
• USB flash drives are used to store copies of the encryption keys.



Figure 5-96. Enable Encryption wizard (1 of 3)

Before you can enable encryption for the Storwize V7000, you need 3 USB flash drives to
complete the process. The USB drives are used to store copies of the encryption keys. The wizard
prompts you to insert two USB flash drives into the USB ports of one V7000 node canister; the
remaining USB flash drive goes into the second V7000 node canister. The locations of the USB
ports are highlighted in this image.


Enable Encryption wizard (2 of 3)


• System detects the USB flash drives.
• Encryption keys are automatically copied on the USB flash drives.


Figure 5-97. Enable Encryption wizard (2 of 3)

When the system detects the USB flash drives, encryption keys are automatically copied to each
USB flash drive.


Enable Encryption wizard (3 of 3)


• Click Commit to enable encryption.
ƒ Encryption keys are required in both node canisters for system restart,
upgrades or to access data on an encrypted system.
• Secure USB keys.
ƒ Remove encryption keys and store in a secure location.
ƒ Create extra copies for backup.


Figure 5-98. Enable Encryption wizard (3 of 3)

Although the system has copied encryption keys to the three USB flash drives, encryption is not
enabled until you click the Commit button.
Once the system is encrypted, if you wish to access data, perform upgrades, or power on or
automatically restart a Storwize V7000 system, you must have an encryption key installed in
each control enclosure so that both canisters have access to the encryption key.
It is optional to leave one USB flash drive in each node if the environment is secure. However, the
standard practice is to always implement secure operations by making extra copies of the
encryption keys and locking all USB keys in a secure location to prevent unauthorized access to
system data.
You can verify the system encryption status from the management GUI by selecting Settings >
Security > Encryption. If encryption keys are still inserted, this view will also indicate that they are
accessible.
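Assuming the code level provides the lsencryption command (a hedged illustration; the exact output fields vary by release), the same status can be checked from the CLI:

IBM_Storwize:V7000:superuser>lsencryption
status enabled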


Software data encryption and decryption


• Data is encrypted/decrypted when it
is written to/read from storage.
ƒ Encryption/decryption is performed in
software using Intel AES-NI
instructions.
- I/O goes into the Platform interface
(PLIF), and gets encrypted there
before being passed to a driver.
ƒ Hardware encryption is performed
by the SAS hardware for internal
drives only.
• Data is stored encrypted in storage
systems.
• Data is encrypted when transferred
across the SAN between the IBM Spectrum
Virtualize system and external
storage (back end).

(Diagram: on writes, user data is encrypted as it passes through the platform interface to data on external storage; on reads, it is decrypted on the way back.)


Figure 5-99. Software data encryption and decryption

When encryption is enabled, an access key is provided to unlock the Storwize V7000 so that it can
perform encryption on user data writes and reads to and from the external storage systems.
This visual illustrates how software encryption encrypts and decrypts user data on an external
storage system. During read and write operations, data is automatically encrypted and decrypted
as it passes through the platform interface. Hardware encryption is performed by the SAS hardware;
therefore, it applies only to internal drives.
The encryption process is application-transparent, which means that applications are not
aware that encryption and protection are occurring; it is completely transparent to the users.
Data is not encrypted when transferred on SAN interfaces in other circumstances (front end, remote
system, inter-node):
• Intra-system communication for clustered systems
• Remote mirror
• Server connections


Encryption is enabled at the pool level


• All volumes created in an encrypted pool are automatically encrypted.
• MDisks now have an attribute indicating whether or not they are encrypted.
• Can mix external and internal encryption in same pool.
ƒ If an MDisk is self-encrypting (and identified), then per-pool encryption will
not encrypt any data to be sent to that MDisk.
• Child pools can also have keys, which are different to the parent pool.

(Diagram: an encrypted pool containing encrypted MDisk 1, MDisk 2, and MDisk 3, providing an encrypted volume.)


Figure 5-100. Encryption is enabled at the pool level

Any pools that are created after encryption is enabled are assigned a key that can be used to encrypt
and decrypt data. However, if encryption was configured after volumes were already assigned to
non-encrypted pools, you can migrate those volumes to an encrypted pool by using child pools.
When you create a child pool after encryption is enabled, an encryption key is created for the child
pool even when the parent pool is not encrypted. You can then use volume mirroring to migrate the
volumes from the non-encrypted parent pool to the encrypted child pool. You can use either the
management GUI or the command-line interface to migrate volumes to an encrypted pool.
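A minimal CLI sketch of this migration path (the pool, child pool, and volume names are illustrative, and the parameters assume a release with per-pool software encryption):

IBM_Storwize:V7000:superuser>mkmdiskgrp -name EncChild -parentmdiskgrp Pool0 -size 500 -unit gb -encrypt yes
IBM_Storwize:V7000:superuser>addvdiskcopy -mdiskgrp EncChild vdisk0

Once the mirrored copy in the encrypted child pool is synchronized, the original copy in the non-encrypted parent pool can be removed with rmvdiskcopy.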


Supported storage systems (also known as controllers)


http://www-01.ibm.com/support/docview.wss?uid=ssg1S1005418


Figure 5-101. Supported storage systems (also known as controllers)

Visit the Storwize V7000 product support website for the latest list of storage systems and their
corresponding supported software and firmware levels.
Refer to the Storwize V7000 Information Center > Configuration > Configuring and servicing
external storage systems, for detailed descriptions of each supported storage system.
Support for additional devices might be added periodically. The website would have more current
information than this handout.


Keywords
• Candidate disk
• Cluster initialization
• Cluster system
• Command-line interface (CLI)
• Back-end storage
• Encryption
• Extent
• External storage
• Internal storage
• I/O load balancing
• Managed Disk (MDisk)
• Member disk
• Redundant Array of Independent Disks (RAID)
• SCSI LUNs
• Spare disk
• Storage provisioning
• Storwize V7000 GUI
• Storage pool
• Quorum disks
• Virtualization


Figure 5-102. Keywords

Listed are keywords that were used in this unit.


Review questions (1 of 2)
1. True or False: Storwize V7000 2076-12F/24F expansion
enclosures are displayed in the Storwize V7000 GUI as
external storage resources.

2. What is the mode of the MDisk detected by Storwize V7000
from an external storage?
a. Array
b. Unmanaged
c. Managed
d. Image

3. True or False: The back-end storage system LUNs
discovered on the same fabric as the Storwize V7000 are
assigned to the Storwize V7000 system as MDisks.


Figure 5-103. Review questions (1 of 2)

Write your answers here:


Review answers (1 of 2)
1. True or False: Storwize V7000 2076-12F/24F expansion
enclosures are displayed in the Storwize V7000 GUI as
external storage resources.
The answer is false. IBM Storwize V7000 2076-12F/24F
enclosures are configured as internal storage within the
management GUI.

2. What is the mode of the MDisk detected by Storwize V7000
from an external storage?
a. Array
b. Unmanaged
c. Managed
d. Image
The answer is unmanaged.

3. True or False: The back-end storage system LUNs
discovered on the same fabric as the Storwize V7000 are
assigned to the Storwize V7000 system as MDisks.
The answer is true.


Review questions (2 of 2)
4. List at least three use attributes of drive objects.

5. True or False: The detect MDisks action (GUI/CLI) causes
the Storwize V7000 to discover MDisks with standard Fibre
Channel SAN device discovery, where the delete MDisk
action causes the Storwize V7000 to remove the MDisk
from Storwize V7000 management.

6. True or False: Three quorum disks are allocated by the
Storwize V7000 automatically and the chquorum command
can be used to manage their location and designate the
active quorum.

Figure 5-104. Review questions (2 of 2)

Write your answers here:


Review answers (2 of 2)
4. List at least three use attributes of drive objects.
The answers are unused, failed, candidate, member, and
spare.

5. True or False: The detect MDisks action (GUI/CLI) causes
the Storwize V7000 to discover MDisks with standard Fibre
Channel SAN device discovery where the delete MDisk
action causes the Storwize V7000 to remove the MDisk
from Storwize V7000 management.
The answer is false.

6. True or False: Three quorum disks are allocated by the
Storwize V7000 automatically and the chquorum command
can be used to manage their location and designate the
active quorum.
The answer is true.


Unit summary
• Summarize the infrastructure of Storwize V7000 block storage
virtualization
• Recall steps to define internal storage resources using GUI
• Identify the characteristic of external storage resources
• Summarize how external storage resources are virtualized for Storwize
V7000 management GUI and CLI operations
• Summarize the benefits of quorum disk allocation
• Recognize how external storage MDisk allocation facilitates I/O load
balancing across zoned storage ports
• Distinguish between Storwize V7000 hardware and software encryption


Figure 5-105. Unit summary


Unit 6. Storwize V7000 host to volume allocation
Estimated time
01:15

Overview
This unit provides an overview of the IBM Storwize V7000 FC and iSCSI host integration. This unit
also identifies striped, sequential, and image volume allocations to supported hosts, including the
benefits of I/O load balancing and non-disruptive volume movement between caching I/O groups.

How you will check your progress


• Checkpoint questions
• Machine exercises

References
IBM Storwize V7000 Implementation Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html


Unit objectives
• Summarize host system functions in a Storwize V7000 system
environment
• Differentiate the configuration procedures required to connect an FCP
host versus an iSCSI host
• Recall the configuration procedures required to define volumes to a host
• Differentiate between a volume’s caching I/O group and accessible I/O
groups
• Identify subsystem device driver (SDD) commands to monitor device
path configuration
• Perform non-disruptive volume movement from one caching I/O group
to another


Figure 6-1. Unit objectives


Storwize V7000 host topics


• Host system configuration
• Host administration
• Volume (VDisks) allocation
• Host storage access
• Non-disruptive volume move (NDVM)


Figure 6-2. Storwize V7000 host topics

This topic discusses the host server configuration in a Storwize V7000 environment.


Host terminology
V7000 terminology - Description

Caching I/O group - The I/O group in the system that performs the cache function for a volume.
Child pool - Child pools are used to control capacity allocation for volumes that are used for specific purposes.
Host ID - A host ID is a numeric identifier that is assigned to a group of host FC ports or iSCSI host names for LUN mapping.
Host mapping - Host mapping refers to the process of controlling which hosts have access to specific volumes within a cluster (it is equivalent to LUN masking).
iSCSI qualified name (IQN) - IQN refers to special names that identify both iSCSI initiators and targets.
I/O group - Each pair of V7000 cluster nodes is known as an input/output (I/O) group.
Parent pool - Parent pools receive their capacity from MDisks.
Storage pool (managed disk group) - A storage pool is a collection of storage capacity that is made up of MDisks, which provides the pool of storage capacity for a specific set of volumes.
VLAN - Virtual Local Area Network (VLAN) tagging separates network traffic at the layer 2 level for Ethernet transport.
Volume protection - Prevents active volumes or host mappings from being deleted inadvertently.
Write-through mode - A process in which data is written to a storage device at the same time as the data is cached.

Figure 6-3. Host terminology

Listed are a few host-related terms that are used throughout this unit.


Storwize V7000: Functional categories


Managing host systems:
• Host objects (FC, iSCSI)
• Volumes

Managing the V7K cluster:
• Nodes and I/O groups

Enabling data migration

Optimizing storage utilization

Facilitating data replication:
• FlashCopy: Intracluster
• Remote Mirroring: Intercluster

Managing storage systems:
• Storage systems
• MDisks
• Storage pools

(Diagram: volumes V1-V5 presented at the front end; the Storwize V7000 cluster with its array MDisks in the middle; back-end storage systems presenting array LUNs.)

Figure 6-4. Storwize V7000: Functional categories

The Storwize V7000 integrates intelligence into the SAN fabric by placing a layer of abstraction
between the host server’s logical view of storage (front-end) and the storage systems’ physical
presentation of storage resources (back-end).
By providing this insulation layer the host servers can be configured to use volumes and be
uncoupled from physical storage systems for data access. This uncoupling allows storage
administrators to make storage infrastructure changes and perform data migration to implement
tiered storage infrastructures transparently without the need to change host server configurations.
Additionally, the virtualization layer provides a central point for management of block storage
devices in the SAN through its provisioning of storage to host servers that spans multiple
storage systems. It also provides a platform for advanced functions such as data migration, Thin
Provisioning, and data replication services.


Physical storage and logical presentations


(Diagram: software aggregation/virtualization. On the host, a logical layer sits above the physical layer: JBOD disks pass through as-is - what you see is what you get (WYSIWYG) - while host software performs host-level virtualization, aggregating logical volumes La, Lb, and Lc. Below the HBA and SAN, a RAID controller performs storage-level virtualization, presenting logical volumes Lh, Li, and Lx built from RAIDa and RAIDb, alongside a pass-through JBOD.)

Figure 6-5. Physical storage and logical presentations

Physical storage can be presented to a host system on an as-is basis. This is the case with simple
storage devices packaged as just a bunch of disks (JBOD). There is a one-to-one correspondence
between the JBOD and the disks that are seen by the host, that is, “what you see is what you get”
(WYSIWYG).
As faster processors and microchips become commonplace, storage aggregation and virtualization
can conceivably be done at any layer of the I/O path and introduce negligible latency to the I/O
requests. This results in more abstraction, or separation, between the physical hardware and the
logical entity that is presented to the host operating system.
With RAID storage systems the physical disks are configured into logical volumes in the storage
controller and presented to the host as physical disks. The aggregation and virtualization are
implemented in the storage system, outboard from the host.
Aggregation and virtualization can also be done in the host. The logical volume manager might
group or aggregate multiple physical disks (as seen by the host physical layer) and manage those
disks as one logical volume. Or, a logical volume might comprise partitions striped across multiple
physical disks.


Storwize V7000 host objects


(Diagram: host servers with 10 GbE iSCSI NICs and Gigabit iSCSI NICs connect as iSCSI initiators through the LAN using iSCSI IQNs; Fibre-attached hosts connect through SAN zones using FC WWPNs. SAN storage zones connect the Storwize V7000 to optional external Fibre-attached storage.)



Figure 6-6. Storwize V7000 host objects

Hosts can be connected to the Storwize V7000 Fibre Channel ports directly or through a SAN
fabric. For a given host, the recommendation is to connect either through SAN-attached 8 Gbps or
16 Gbps Fibre Channel using Fibre Channel WWPNs, or through 1 Gbps iSCSI using its iSCSI
Qualified Names (IQNs), but generally not both at the same time.
Storwize V7000 supports native attachment for host systems using the optional 10 Gbps
iSCSI/Fibre Channel over Ethernet (FCoE) ports. This enables customers to
connect host systems to a Storwize V7000 using higher-performance, lower-cost IP networks,
supporting up to 7x per-port throughput over 1 Gb iSCSI. The 10 Gb port cannot be used for
system-to-system communication, nor can it be used to attach external storage systems.


FC redundant configuration
• Dual fabric is highly recommended.
• Hosts should be connected to all interface nodes.
• Number of paths through the SAN from V7000 nodes to a
host must not exceed eight.

(Diagram: hosts connected to both V7000 nodes through SAN Fabric 1 and SAN Fabric 2.)


Figure 6-7. FC redundant configuration

In a SAN, a host system can be connected to a storage device across the network. For Fibre
Channel host connections the V7000 must be connected to either SAN switches or directly
connected to a host port. The V7000 detects Fibre Channel host interface card (HIC) ports that are
connected to the SAN. Worldwide port names (WWPNs) associated with the HIC ports are used to
define host objects.
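As a hedged sketch (the host name and WWPNs are illustrative), a Fibre Channel host object is defined from the WWPNs of its HBA ports:

IBM_Storwize:V7000:superuser>mkhost -name WIN_HOST1 -fcwwpn 210000E08B05ADFC:210100E08B25ADFC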
In a manner analogous to the dual fabric (redundant fabric) Fibre Channel environment, a highly
available environment can be created for iSCSI attached hosts by using two Ethernet NICs and two
independent LANs. Both Ethernet ports in an V7000 node are connected to the two LANs and in
conjunction with the two host NICs a multipathing environment is created for access to volumes.
The number of paths through the SAN from V7000 nodes to a host must not exceed eight. For most
configurations, four paths to an I/O Group (four paths to each volume that is provided by this I/O
Group) are sufficient.
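As a worked illustration of the path arithmetic: a host with two HBA ports, each zoned to one port on each of the two nodes of an I/O group, sees 2 host ports x 2 node ports = 4 paths per volume; zoning each host port to two ports on each node would double this to 8, the supported maximum.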


Preparation guidelines
• List of general procedures that pertain to all hosts:

ƒ Determine the preferred operating system.

ƒ Ensure that your HBA is supported.

ƒ Check the LUN limitations for your host operating system.

ƒ Check the optimum number of paths that should be defined.

ƒ Install the latest supported HBA firmware and driver.

ƒ Install the latest supported multipath driver.

Always check compatibility at SSIC web page.


Figure 6-8. Preparation guidelines

When managing Storwize V7000 storage that is connected to any host, you must follow basic
configuration guidelines. These guidelines pertain to determining the preferred operating system,
driver, firmware, and supported host bus adapters (HBAs) to prevent unanticipated problems due to
untested levels.
Next, determine the number of paths through the fabric that are allocated to the host, the number of
host ports to use, and the approach for spreading the hosts across I/O groups. These guidelines also
apply to logical unit number (LUN) mapping and the correct size of virtual disks (volumes) to use.


Maximum generic host configurations


Host objects (IDs): 2048 - both FC ports and iSCSI names
Host objects (IDs) per I/O group: 512 - limitations might apply
Volume mappings per host: 2048 - host OS restrictions might apply; not all hosts are capable of accessing and managing this number of volumes
Total FC ports and iSCSI names per system: 8192
Total FC ports and iSCSI names per I/O group: 2048
Total FC ports and iSCSI names per host object: 32
iSCSI names per host object: 8
Volumes per I/O group (volumes per caching I/O group): 2048
Volumes accessible per I/O group: 8192

Visit the IBM Support website for the supported configurations.



Figure 6-9. Maximum generic host configurations

The maximum number of host objects in an 8-node V7000 system cluster is 2,048. A total of 512
distinct, configured host worldwide port names (WWPNs) is supported per I/O Group. If the same
host is connected to multiple I/O Groups of a cluster, it counts as a host in each of these groups.
The maximum number of volumes in an 8-node system is 8,192 (having a maximum of 2,048
volumes per I/O Group). The maximum storage capacity supported is 32 PB per system. The
maximum size of a single volume is 256 TB.
To configure more than 256 hosts, you must configure the host to I/O Group mappings on the
Storwize V7000 Gen2. Each I/O Group can contain a maximum of 256 hosts, so it is possible to
create 1024 host objects on an eight-node Storwize V7000 Gen2 clustered system. Volumes can
only be mapped to a host that is associated with the I/O Group to which the volume belongs.
For more information about the maximum configurations that are applicable to the system, I/O
Group, and nodes, visit the IBM Support website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1005423.


Storwize V7000 host connection topics


• Host system configuration
• Host administration
- Fibre Channel host connectivity
  - Windows
  - AIX
- iSCSI host connectivity
• Volume (VDisks) allocation
• Host storage access
• Non-disruptive volume move (NDVM)


Figure 6-10. Storwize V7000 host connection topics

This topic discusses the host connection in a Storwize V7000 system environment, beginning with
the FC host attachment.


FC hosts to the Storwize V7000


(Diagram: open-systems FC hosts — Windows, AIX, Sun, HP, VMware, Linux, Citrix Xen, NetWare,
Tru64, Apple, SGI, blade servers, and so on — act as SCSI initiators across dual fabrics, Fabric 1
and Fabric 2, to the Storwize V7000 AC2 node canisters, which present SCSI targets to the hosts
and in turn act as SCSI initiators to the back-end storage.)

Figure 6-11. FC hosts to the Storwize V7000

The Storwize V7000 supports IBM and non-IBM storage systems so that you can consolidate
storage capacity and workloads for open-systems hosts into a single storage pool. In environments
where the requirement is to maintain high performance and high availability, hosts are attached
through a storage area network (SAN) by using the Fibre Channel Protocol (FCP). From the
perspective of the SCSI protocol, the Storwize V7000 is no different from any other SCSI device: it
appears as a SCSI target to the host SCSI initiator. The Storwize V7000 nodes behave as SCSI
devices to the host objects they serve and, in turn, act as SCSI initiators that interface with the
back-end storage systems.
For high availability, the recommendation for attaching the Storwize V7000 system to a SAN is
consistent with the recommendations for designing a standard SAN network. That is, build a dual
fabric configuration in which, if any single component fails, connectivity between the devices
within the SAN is still maintained, although possibly with degraded performance.


Fibre Channel Protocol


• Fibre Channel (FC) is a technology for transmitting data between computer devices at data rates
of up to 16 Gbps.
- Suited for connecting computer servers to shared storage devices and for interconnecting
storage controllers and drives.
• FC uses a worldwide name (WWN) as a unique identity for each Fibre Channel device.
- End-points in FC communication (node ports) have a specific WWN, called a WWPN.
(Diagram: a host connected through an FC switch to a disk.)


Figure 6-12. Fibre Channel Protocol

Fibre Channel (FC) is the prevalent technology standard in the storage area network (SAN) data
center environment. This standard has created a multitude of FC-based solutions that have paved
the way for high performance, high availability, and the highly efficient transport and management
of data.
Each device in the SAN is identified by a unique worldwide name (WWN). The WWN also contains
a vendor identifier field and a vendor-specific information field, which is defined and maintained
by the IEEE.


Host multipath support


• SDDDSM (Windows) and SDDPCM (AIX) are loadable path control modules for supported
storage devices.
- The host MPIO device driver, along with SDDDSM or SDDPCM, enhances the data availability
and I/O load balancing of volumes:
  - High availability and load balancing of storage I/O
  - Automatic path-failover protection
  - Concurrent download of supported storage devices' licensed machine code
  - Prevention of a single point of failure
(Diagram: a host multipath driver managing multiple paths through a switch to a volume.)


Figure 6-13. Host multipath support

In order for the Storwize V7000 to present volumes to an attached host, the 2145 host attachment
support file set must be installed. The hosts must run a multipathing device driver so that the
multiple paths to a volume are presented back to the operating system as a single device. The
multipathing driver supported and delivered by the Storwize V7000 Gen2 is the Subsystem Device
Driver (SDD). Native multipath I/O (MPIO) drivers on selected hosts are supported, such as the
Subsystem Device Driver Device Specific Module (SDDDSM) for Windows hosts or the Subsystem
Device Driver Path Control Module (SDDPCM) for AIX hosts.
The Subsystem Device Driver (SDD) provides multipath support for certain OS environments that
do not have native MPIO capability. Both the SDDDSM and SDDPCM are loadable path control
modules for supported storage devices to supply path management functions and error recovery
algorithms. The host MPIO device driver along with SDD enhances the data availability and I/O
load balancing of Storwize V7000 volumes. The host MPIO device driver automatically discovers,
configures, and makes available all storage device paths. SDDDSM and SDDPCM then manage
these paths to provide the following functions:
• High availability and load balancing of storage I/O
• Automatic path-failover protection
• Concurrent download of supported storage devices’ licensed machine code
• Prevention of a single-point failure


Windows host FC configuration


• Multi-path I/O (MPIO) is an optional built-in feature of Windows Server.
- Installing the Windows Server 2008 MPIO feature:
  - Navigate to Server Manager.
  - Under Features Summary, select Add Features.
  - Select Multi-path I/O and click Install.
- A system reboot is required.


Figure 6-14. Windows host FC configuration

Multi-path I/O (MPIO) is an optional feature in Windows Server 2008 R2, and is not installed by
default. Installing requires a system reboot. After restarting the computer, the computer finalizes the
MPIO installation.
When MPIO is installed, the Microsoft device-specific module (DSM) is also installed, as well as an
MPIO control panel. The control panel can be used to do the following:
• Configure MPIO functionality
• Install additional storage DSMs
• Create MPIO configuration reports
Microsoft ended support for Windows Server 2003 on July 14, 2015. This change affects your
software updates and security options.
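As an alternative to the Server Manager GUI, the MPIO feature can also be added from the
command line; a minimal sketch for Windows Server 2008 R2 (assuming the ServerManager
PowerShell module is available on the host):
  Import-Module ServerManager
  Add-WindowsFeature Multipath-IO    # installs the MPIO feature; reboot afterwards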


Identifying Windows HBAs (example for QLogic)

(Screen capture: the QLogic HBA utility showing the WWPN of FC HBA port 0, port 1 in node 1 of
the V7000, and LUN 0 mapped from the V7000.)


Figure 6-15. Identifying Windows HBAs (example for QLogic)

Depending on the host adapter, you can use an HBA application such as QLogic's SANSurfer FC
HBA Manager, which provides a graphical user interface (GUI) that lets you easily install, configure,
and deploy QLogic Fibre Channel HBAs. It also provides diagnostic and troubleshooting
capabilities to help optimize SAN performance.


Verifying host configuration


• Server performance depends on the system I/O throughput capacity.
• Use system documentation to learn which slots use which PCI bus.
- Put the fastest adapters on the fastest buses.
• For best LAN backup-to-disk performance, put network adapters on a different bus than disk
adapters.
• Access: define multiple disk volumes, one volume per disk (LUN).
• Sequential access: use multiple directories in the device class, one directory per disk (LUN).
• Multi-path I/O (MPIO) is an optional built-in feature of Windows Servers.
- Installing the Windows Server 2008 MPIO feature: navigate to Server Manager, select Add
Features under Features Summary, select Multi-path I/O, and click Install.
- A system reboot is required.
(Screen capture: Device Manager showing the MPIO feature enabled and a properly recognized
FC adapter.)

Figure 6-16. Verifying host configuration

On an FC host system, you can verify the host HBA driver settings by using Device Manager. Device
Manager is an extension of the Microsoft Management Console that provides a central and
organized view of all the Microsoft Windows recognized hardware installed in a computer. Device
Manager can be used for changing hardware configuration options, managing drivers, disabling
and enabling hardware, identifying conflicts between hardware devices, and much more.


Identifying Fibre Channel connectivity using GUI


Gen2 WWPN format: 50:01:73:68:NN:NN:RR:MP (example: 50:05:07:68:02:20:2D:7C)
- Network address authority
- IEEE company ID
- IBM Storwize V7000 serial number (hex)
- Rack ID (01-ff); 0 for the WWNN
- Module ID (0-f); 0 for the WWNN
- Port ID (0-3); 0 for the WWNN


Figure 6-17. Identifying Fibre Channel connectivity using GUI

SAN zoning connectivity of a Storwize V7000 environment can be verified using the management
GUI by selecting Settings > Network and then selecting Fibre Channel Connectivity in the Network
filter list. The Fibre Channel Connectivity view displays the connectivity between nodes and
other storage systems and hosts that are attached through the Fibre Channel network.
The GUI zoning output conforms to the guideline that, for a given storage system, you zone its ports
with all the ports of the Storwize V7000 cluster on that fabric. The number of ports dedicated
determines the number of ports zoned.
In a dual fabric, Storwize V7000 storage system ports and the additional V7000 storage enclosure
ports, as well as the ports for external storage, are split between the two SAN fabrics. The WWPN
values are specific to the Storwize V7000 node ports of the same fabric.
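The same connectivity information is available from the CLI; a minimal sketch (the host name is
illustrative):
  lsfabric -host VB1_WIN    # one line per FC login: local and remote WWPNs, node, and port
  lsfabric -delim ,         # the full connectivity matrix in comma-delimited form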


Defining FC host objects


• A host object can be defined by using Hosts > Hosts > Add Host.
- Select Fibre Channel Host.
  - FCoE hosts are configured by using the FC host connection.
- Create a host name (maximum of 63 characters).
- Click Host Port (WWPN) to add the WWPNs that correspond to the host HBAs.
  - Click the (+) button to add a second FC port.
- Advanced Settings:
  - All I/O groups are selected by default, which entitles the host to access volumes owned by
any I/O group.
  - Generic is the default host type.
- Click the Add button to create the host.


Figure 6-18. Defining FC host objects

A host object can be created by using the GUI Hosts > Hosts menu option. Click the Add Host
button to create either a Fibre Channel or iSCSI host. Before you proceed, make sure you have
knowledge of the host WWPNs or the IQN to verify that it matches back to the selected host.
The Add Host window allows you to specify parameters in which to define an FC host name and
add the port definitions WWPNs that corresponds to your host HBAs.
By default, new hosts are created as generic host types and assigned to all four I/O groups from
which the host can access volumes. You can select the Advanced option to modify the host OS
type such as Hewlett-Packard UNIX (HP-UX) or Sun, select HP_UX (to have more than eight LUNs
supported for HP_UX machines) or TPGS for Sun hosts using MPxIO. Or you can restrict the I/O
groups access to volumes.


Identifying AIX adapter attributes


• Run the cfgmgr command to force the port to perform FC login and to display the WWPN
within the Storwize V7000 GUI.
• Discovering AIX WWPNs:
- Run the lsdev -Cc adapter | grep fcs command to display the installed adapters:
  fcs0 Available 97-T1 Virtual Fibre Channel Client Adapter
  fcs1 Available 97-T1 Virtual Fibre Channel Client Adapter
- Run the lscfg -vl fcs* | grep Network command with an asterisk (*) wildcard character to
display the WWPNs collectively:
  Network Address.............................C0507604AFA3007C
  Network Address.............................C0507604AFA3007E
  - You can run lscfg -vpl fcs# (per port).
• Once the cfgmgr command has been issued, the WWPNs are visible in the Host Port (WWPN)
panel.


Figure 6-19. Identifying AIX adapter attributes

When defining an AIX host object, the host WWPNs might not be displayed within the Host Port
(WWPN) panel for various reasons: the AIX FC ports might have logged out from the FC switch, or
a new AIX host might have been added to the SAN fabric. You first need to issue the cfgmgr
command to force the ports to perform a Fibre Channel login and sync the ports with the V7000
nodes. You can display the installed AIX host adapters by using the lsdev -Cc adapter |grep
fcs command. The maximum number of FC ports that are supported in a single host (or logical
partition) is four. These ports can be four single-port adapters, two dual-port adapters, or a
combination as long as the maximum number of ports that attach to V7000 does not exceed four.
The fscsi0 and fscsi1 devices are protocol conversion devices in AIX. They are child devices of fcs0
and fcs1 respectively.
Display the WWPN, along with other attributes including the firmware level, by using the lscfg
-vpl fcs* wildcard command or using the adapter number.
Once the AIX host ports are synced with the V7000 system, the WWPN should be available for
selection. Defining an AIX host object can be done in the same manner as the Windows host.


CLI mkhost command


• The GUI generates the mkhost command to correlate the selected WWPN values with the
defined host object.
• Hosts are visible from the Hosts > Hosts window.


Figure 6-20. CLI mkhost command

The V7000 GUI generates the mkhost command to correlate the selected WWPN values with the
host object defined and assigns a host ID. When a host object is defined the host count is
incremented by one for each I/O group specified.
If required, you can use the host properties option to modify a host's attributes, such as changing
the host name and host type, or to restrict host access to volumes in a particular I/O group. You can
also view the assigned volumes, and add or delete a WWPN.
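A minimal CLI sketch of the same operations (the host name and WWPNs are illustrative):
  mkhost -name VB1_WIN -fcwwpn 21000024FF298910:21000024FF298911   # define the FC host object
  chhost -type hpux VB1_WIN                                        # change the host type
  addhostport -fcwwpn 21000024FF298912 VB1_WIN                     # add another WWPN
  rmhostport -fcwwpn 21000024FF298912 VB1_WIN                      # remove a WWPN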


Host object details and I/O group counts


IBM Storwize:V009B:V009B1-admin>lshost 0
id 0
name VB1_WIN
port_count 2
type generic
mask 1111111111111111111111111111111111111111111111111111111111111111
iogrp_count 4
status online
WWPN 21000024FF298910
node_logged_in_count 2
state inactive
WWPN 21000024FF298911
node_logged_in_count 2
state inactive
IBM Storwize:V009B:V009B1-admin>

Each host, by default, is entitled to access volumes that are owned by all four I/O groups. Each
host WWPN (2 ports) is logged in to the V7000 nodes. Physical access to volumes is enabled by
SAN zoning for Fibre Channel hosts.

Figure 6-21. Host object details and I/O group counts

The lshost <hostname> (or ID) command returns the details that are associated with the specified
host object. It displays the values of all the WWPNs defined for the host object. The
node_logged_in_count is the number of V7000 nodes that the host WWPN has logged in to.


Manage host object counts per I/O group


IBM Storwize:V009B:V009B1-admin>rmhostiogrp -iogrp io_grp1 VB1-AIX
id name node_count vdisk_count host_count
0 io_grp0 2 4 2
1 io_grp1 2 5 1      (host count for io_grp1 decreased from 2 to 1)
2 io_grp2 0 0 1
3 io_grp3 0 0 1
4 recovery_io_grp 0 0 0
IBM Storwize:V009B:V009B1-admin>lshost 1
id 1
name VB1-AIX
port_count 2
type generic
mask 11111111...111111111
iogrp_count 1       (the AIX host is now entitled to access volumes owned by only one I/O group, io_grp0)
status online
WWPN C0507604AFA3007C
node_logged_in_count 4
state inactive
WWPN C0507604AFA3007E
node_logged_in_count 4
state inactive
IBM Storwize:V009B:V009B1-admin>lshostiogrp 1
id name
0 io_grp0

Removing a host's entitlement to access volumes owned by a specified I/O group is NOT the same
as removing its physical ability to access them.

Figure 6-22. Manage host object counts per I/O group

A host object can be defined to access fewer I/O groups in order to manage larger environments
where the host object count might exceed the maximum that is supported by an I/O group.
To support more than 256 host objects, the rmhostiogrp command is used to remove I/O group
eligibility from an existing host object.
The host object to I/O group associations only define a host object’s entitlement to access volumes
owned by the I/O groups. Physical access to the volumes requires proper SAN zoning for Fibre
Channel hosts and IP connectivity for iSCSI hosts.
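A minimal CLI sketch (the host and I/O group names are illustrative):
  rmhostiogrp -iogrp io_grp1 VB1-AIX   # remove the host's entitlement to io_grp1 volumes
  addhostiogrp -iogrp io_grp1 VB1-AIX  # restore the entitlement
  lshostiogrp VB1-AIX                  # list the I/O groups the host is entitled to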


Host connectivity connection


• To verify host connectivity configuration, you can select Settings >
Network > Fibre Channel Connectivity option.


Figure 6-23. Host connectivity connection

Use the Fibre Channel panel to display the Fibre Channel connectivity between nodes, storage
systems, and hosts. This example of the connectivity matrix shows FC hosts and system
information with a listing of the node and port number that they are connected to.


Storwize V7000 host connection topics


• Host system functions
• Host administration
- Fibre Channel host
- iSCSI host
• Volume (VDisks) allocation
• Host storage access
• Non-disruptive volume move (NDVM)


Figure 6-24. Storwize V7000 host connection topics

This topic discusses the iSCSI host connection in a Storwize V7000 environment.


iSCSI architecture
• Mapping of the SCSI architecture model to IP:
- Storage server (target)
- Storage client (initiator)
• Single communication path between the initiator and target devices
• Available on most operating systems
(Diagram: the host OS, SCSI layer, and iSCSI initiator drive an Ethernet NIC through a switch to the
target disk.)


Figure 6-25. iSCSI architecture

Internet Small Computer System Interface (iSCSI) is an alternative means of attaching hosts to the
Storwize V7000 nodes. All communications with external back-end storage subsystems or other
IBM virtual storage systems must be done through a Fibre Channel or FCoE connection. The
iSCSI function is a software function that is provided by the Storwize V7000 code and not the
hardware.
In the simplest terms, iSCSI allows the transport of SCSI commands and data over a TCP/IP
network that is based on IP routers and Ethernet switches. iSCSI is a block-level protocol that
encapsulates SCSI commands into TCP/IP packets and uses an existing IP network. A pure SCSI
architecture is based on the client/server model.


MPIO and iSCSI initiator support for iSCSI


• Start > Programs > Administrative Tools, and select iSCSI Initiator.
- Install the iSCSI initiator.
- Install the iSCSI host OS-specific MPIO.
An iSCSI initiator sends a SCSI command to the iSCSI target, which in turn provides the required
input/output data transfers.

Figure 6-26. MPIO and iSCSI initiator support for iSCSI

Before configuring iSCSI host access, you need to identify whether the target (storage node) you
plan to use supports MPIO and whether it supports active-active connections. If the target supports
MPIO but does not support active-active, you can still make an iSCSI MPIO connection, but the
only supported mode will be failover. Failover mode provides the network with redundancy, but it
does not provide the performance increase of the other MPIO modes.
Some target manufacturers have their own MPIO DSM (Device Specific Module); therefore, it might
be preferable to use the target-specified DSM mode. Consult the Storwize V7000 Support website
for supported iSCSI host platforms and to check whether multipathing support is available for the
host OS. If you are using Windows 2008, MPIO support should be implemented when more than
one path or connection is desired between the host and the Storwize V7000 system.
You also need to install the iSCSI initiator to initiate a SCSI session, which sends SCSI commands
to the iSCSI target. The iSCSI target waits for the initiator's commands and provides the required
input/output data transfers. The iSCSI initiator does not provide the LUN, as it cannot perform read
or write commands; it has to rely on the target to provide it with one or more LUNs.


Configure iSCSI IP addresses


• iSCSI hosts are connected to the Storwize V7000 through the Storwize V7000 node-port IP
address.
• Settings > Network > Ethernet Ports
- Configure both the primary (configuration) node and the secondary (failover) node for host
access.
- Right-click each node port and enter the assigned iSCSI IP address.
Run the lsportip -delim , -filtervalue node_id=# command to verify Ethernet port connectivity.


Figure 6-27. Configure iSCSI IP addresses

You can attach the Storwize V7000 to iSCSI hosts by using the Ethernet ports of the Storwize
V7000. The Storwize V7000 nodes have two or four Ethernet ports. These ports are either for 1 Gb
support or 10 Gb support, depending on the model. For each Ethernet port a maximum of one IPv4
address and one IPv6 address can be designated for iSCSI I/O.
An iSCSI host connects to the Storwize V7000 through the node-port IP address. If the node fails,
the address becomes unavailable and the host loses communication with the Storwize V7000.
Therefore, you want to ensure that both the primary (configuration) node and the secondary
(failover) node are configured for host access.
The cfgportip command is generated to set the iSCSI IP address for node ID 1 port 1 and node
ID 2 port 2.
The lsportip command output displays the iSCSI IP port configuration of the nodes of the cluster.
Use the -filtervalue node_id= keyword to filter the output by node.
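A minimal CLI sketch (the IP addresses, mask, and gateway are illustrative):
  cfgportip -node 1 -ip 10.6.9.200 -mask 255.255.255.0 -gw 10.6.9.1 1   # node 1, Ethernet port 1
  cfgportip -node 2 -ip 10.6.9.202 -mask 255.255.255.0 -gw 10.6.9.1 1   # node 2, Ethernet port 1
  lsportip -delim , -filtervalue node_id=1                              # verify the configuration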


iSCSI initiator and iSCSI target


• iSCSI qualified name (IQN)
- iSCSI initiator (host): iSCSI Initiator Properties > Configuration
- Target IQN (Storwize V7000 node): Settings > Network > iSCSI
(Diagram: the host initiator IP addresses, 10.6.9.151 and 10.6.9.152, and the NODE1/NODE2
iSCSI target IP addresses, 10.6.9.211 and 10.6.9.212, with the iSCSI target IQNs.)


Figure 6-28. iSCSI initiator and iSCSI target

Each iSCSI initiator and target must have a worldwide unique name which is typically implemented
as an iSCSI qualified name (IQN). In this example, the Windows IQN is shown on the
Configuration tab of the iSCSI Initiator Properties window. The iSCSI initiator host’s IQN is used
to define a host object.
The Storwize V7000 node IQN can be obtained by selecting Settings > Network > iSCSI
Configuration pane of the Storwize V7000 GUI. The verbose format of the lsnode command can
also be used to obtain the Storwize V7000 node IQN.
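For example, a minimal sketch from the CLI (the node ID is illustrative):
  lsnode 1    # the verbose single-node output includes the node's iscsi_name (IQN)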


iSCSI host discover target (NODE) portal (1 of 2)


• Discovery tab
- Enter the iSCSI target portal by using the iSCSI IP address that is configured on the Storwize
V7000.
- Port number 3260 (default)
- Select Discover Portal; each iSCSI target IP address is returned to the Target portals list.
• Repeat the process for each iSCSI IP address configured.


Figure 6-29. iSCSI host discover target (NODE) portal (1 of 2)

Once you have defined the iSCSI initiator IQN and iSCSI target IQN, you need to perform an iSCSI
initiator discovery of the target portal from the host server. This process must be completed using
the installed iSCSI MPIO. From the iSCSI Initiator Properties window Discovery tab, click the
Discover Target Portal button and enter the Storwize V7000 node iSCSI IP port address or DNS
name. Port number 3260 is the default (the official TCP/IP port for the iSCSI protocol). Once the
portal address has been entered, the available iSCSI targets are displayed.
It is recommended to use the Favorite Targets tab to remove any previously mounted targets; they
might obstruct an iSCSI host discovery of a new target if they try to reconnect.


iSCSI host discover target (NODE) portal (2 of 2)


• Targets tab
- Storwize V7000 node IQN automatically discovered by the iSCSI initiator
- Select Connect to connect the iSCSI target (Storwize V7000 node) to the iSCSI initiator (host)
- Enable persistent connection and multipath access
• Advanced Settings:
- Set Local adapter to Microsoft iSCSI Initiator
- Select the source IP address on the iSCSI network
- Set the destination address

Figure 6-30. iSCSI host discover target (NODE) portal (2 of 2)

The Targets tab lists each Storwize V7000 node IQN that is automatically discovered by the
iSCSI initiator (Windows in this example). The discovered targets have an initial status of inactive.
Use the Connect button to connect to the target (Storwize V7000 node). The Connect to Target
window provides options to tailor the behavior of the connection; check both the persistent
connection and the multipath access boxes. Use the Advanced button to configure the individual
connections by pairing initiator and target ports in the same subnet to the Storwize V7000 node.
Set Local adapter to Microsoft iSCSI Initiator, and select one of the two IP addresses on the
iSCSI network as the source IP. Then select the destination address. Once this process is
complete, the initiator is connected to the discovered target (node).


Discovered targets properties


• Establish additional session
between the host initiator and the
Storwize V7000 node that owns
the target address

• Disconnect individual sessions

• View session device information


Figure 6-31. Discovered targets properties

If the target supports multiple sessions then the Add Session option under the Target Properties
panel allows you to create an additional session. You can also disconnect individual sessions that
are listed. Use the Devices button to view more information about devices that are associated with
a selected session.
Multiple Connections per Session (MCS) support is defined in the iSCSI RFC to allow multiple
TCP/IP connections from the initiator to the target for the same iSCSI session. This is iSCSI
protocol specific. This allows I/O to be sent over either TCP/IP connection to the target. If one
connection fails then another connection can continue processing I/O without interrupting the
application. Not all iSCSI target support MCS. iSCSI targets that support MCS include but are not
limited to EMC Celerra, iStor, and Network Appliance.


Defining iSCSI host object


• An iSCSI host must be configured with the V7000 node iSCSI IP port address.
• iSCSI Ports:
- Enter the host initiator IQN (copy and paste it into the iSCSI Ports field).
- Verify that the IQN corresponds with the host IQN.
• CHAP authentication (optional):
- The Storwize V7000 acts as the authenticator; it sends a secret (passphrase) message to the
host before access is granted to volumes.
• Advanced Settings provides the same selection options as for all other hosts being created.
Run the lshost command to verify the host IQN.


Figure 6-32. Defining iSCSI host object

An iSCSI host is an alternative means of attaching hosts to the Storwize V7000. However,
communications with back-end storage subsystems and with other Storwize V7000 systems can
occur only through FC.
When you are setting up a host server for use as an iSCSI initiator with Storwize V7000 volumes,
the specific steps vary depending on the particular host type and operating system that you use. An
iSCSI host must first be configured with the Storwize V7000 node iSCSI IP port address for access,
assuming that you have performed the necessary host access requirements.
The Add Host procedure for creating an iSCSI host is comparable to setting up a Fibre Channel
host. Instead of entering Fibre Channel ports, it requires you to enter the iSCSI initiator host's IQN
that was used to discover and pair with the Storwize V7000 node IQN.
You can identify the host objects and the number of ports (IQN) by using the lshost command or
use the iSCSI Initiator to copy and paste the IQN into the iSCSI Ports field.
When the host is initially configured the default authentication method is set to no authentication
and no Challenge Handshake Authentication Protocol (CHAP) secret is set. You can choose to
enable CHAP authentication which involves sharing a CHAP secret passphrase between the
Storwize V7000 and the host before the Storwize V7000 allows access to volumes.

The Storwize V7000 GUI generates the mkhost command to create the iSCSI host object; the
command contains the -iscsiname parameter followed by the iSCSI host IQN. The maximum
number of iSCSI hosts per I/O group is 256 per Storwize V7000 due to the IQN limits.
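A minimal CLI sketch (the host name, IQN, and CHAP secret are illustrative):
  mkhost -name VB1_ISCSI -iscsiname iqn.1991-05.com.microsoft:vb1-win   # define the iSCSI host object
  chhost -chapsecret mysecret VB1_ISCSI                                 # optionally set a CHAP secret for the host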
If you are using Windows, it logs on to the target as soon as you click Connect. Other platforms,
such as Red Hat Linux, log on during device discovery.


SDDDSM netstat command


• The netstat -nt command displays active TCP connections.


Figure 6-33. SDDDSM netstat command

The netstat command can be used on an SDDDSM host to display the contents of various
network-related data structures for active connections. The -n option displays active TCP
connections; however, addresses and port numbers are expressed numerically, and no attempt is
made to determine names. When this flag is not specified, the netstat command interprets
addresses where possible and displays them symbolically. This flag can be used with any of the
display formats.


Example of an auto iSCSI IP addresses failover

Current state: NODE1 (the config node) presents iSCSI target addresses 10.6.9.200 and
10.6.9.201 and the management IP address 10.6.9.208; NODE2 (the partner node) presents
10.6.9.202 and 10.6.9.203.
Node1 offline: the config node role fails over from NODE1 to NODE2, which inherits the
management IP address (10.6.9.208) as well as NODE1's iSCSI IP addresses (10.6.9.200 and
10.6.9.201) in addition to its own.
Run the lsnode -delim , command to confirm that the NODE1 iSCSI IP addresses have been
transferred to NODE2.

Figure 6-34. Example of an auto iSCSI IP addresses failover

This visual illustrates an iSCSI IP address failover that might be the result of a node failure or code
upgrade. In either case, if a node is no longer available, its partner node inherits the iSCSI IP
addresses of the departed node. The partner node port responds to the inherited iSCSI IP address
as well as its original iSCSI IP address. However, if the failed node was the Storwize V7000 cluster
configuration node then the cluster designates another node as the new configuration node. The
cluster management IP addresses are moved automatically to the new configuration node (or
config node).
From the perspective of the iSCSI host, I/O operations proceed as normal. To allow hosts to
maintain access to their data, the node-port IP addresses for the failed node are transferred to the
partner node in the I/O group. The partner node handles requests for both its own node-port IP
addresses and also for node-port IP addresses on the failed node. This process is known as
node-port IP failover. Therefore, the Storwize V7000 node failover activity is totally transparent and
non-disruptive to the attaching hosts.


Node failover: Advantage versus disadvantage


• Navigating the Storwize V7000 management GUI:
- Select Settings > Network > Ethernet Ports.
• Advantage:
- Storwize V7000 node failover activity is transparent and non-disruptive.
- iSCSI host I/O operations proceed as normal.
• Disadvantages:
- Open CLI sessions are lost when a config node switch occurs.
- Open GUI sessions might survive the switch.


Figure 6-35. Node failover: Advantage versus disadvantage

The Storwize V7000 node failover activity is totally transparent and non-disruptive to the attaching
hosts.
If there is an open CLI session during the node failover, the session is lost when a config node
switch occurs. Depending on the timing, open GUI sessions might survive the switch.


Examples of an auto iSCSI IP addresses failback

Node1 return: the iSCSI IP addresses 10.6.9.200 and 10.6.9.201 fail back from NODE2 to NODE1.
The config node role does not move; it changes only if its hosting node is no longer available.
Run the lsnode -delim , command to confirm that the NODE1 iSCSI IP addresses have been
returned from NODE2.

Figure 6-36. Examples of an auto iSCSI IP addresses failback

Once the failed node has been repaired or the code upgrade has completed, it is brought back
online. The iSCSI IP addresses previously transferred to NODE2 automatically fail back to NODE1.
The configuration node remains intact and does not change nodes; a configuration node switch
occurs only if its hosting node is no longer available.
When a failed node re-establishes itself to rejoin the cluster, its attributes do not change (for
example, its object name is the same). However, a new node object ID is assigned. For example,
NODE1, whose object ID was 1, is assigned the next sequentially available object ID, such as 5.


Best practice: IP connectivity failure protected by second IP port

(Diagram: an iSCSI host with two ports, 10.6.9.51 and 10.6.9.52, attached to NODE1 and NODE2;
IP issues do not cause the Storwize V7000 iSCSI IP addresses to fail over. Configure port 2 for the
management IP, configure redundant host IP ports, and configure port 2 for iSCSI IP.)
A host port failure reduces the number of paths between the host and the Storwize V7000 cluster
(from 4 to 2), but host application I/O continues without issues due to host multipath support.

Figure 6-37. Best practice: IP connectivity failure protected by second IP port

Practice redundancy to protect against LAN failures, host port failures, or V7000 node port failures.
Configure dual subnets, two host interface ports, and the second IP port on each node. Implied with
defining multiple iSCSI target addresses to the initiator is the need for host multipathing support.
It is also highly recommended that a second cluster management IP address is defined at port 2 so
that a LAN failure does not prevent management access to the V7000 cluster.
If a host port failure occurs, it reduces the number of paths between the host and the V7000 cluster
from four to two. However, host application I/O continues without issues due to host multipath
support. When the failed host port returns, the original pathing infrastructure from the host to the
V7000 volumes is restored automatically.
The second or alternate cluster management IP address assignment is discussed in the last unit of
this course.


iSCSI connectivity best practice


(Diagram: the iSCSI initiator host, at 10.6.9.51 and 10.6.9.52, and an admin host on the 10.6.9.x
subnet reach the iSCSI target addresses of NODE1, the config node (10.6.9.200 and 10.6.9.201),
and NODE2, the partner node (10.6.9.202 and 10.6.9.203), as well as the management IP
addresses. Through the 10.6.9.1 gateway, the rest of the IP network is reachable, including a
second iSCSI host on the 10.6.6.x subnet, an iSNS server, and an email gateway.)
Redundancy enables a robust LAN configuration.


Figure 6-38. iSCSI connectivity best practice

Redundancy enables a robust LAN configuration both for iSCSI attached hosts as well as V7000
configuration management.


Managing host resources


• Host resources can be displayed by using the Actions button or by right-clicking a host object.
(Screen capture: the host Actions menu, with additional attributes that can be added to the column
display.)


Figure 6-39. Managing host resources

Determining the difference between a Fibre Channel host and an iSCSI host that are listed within
the V7000 GUI can be challenging if the hosts are defined with common names. This is one reason
why using a naming convention is important.
To manage the host resources such as modify host mappings, unmap hosts, rename hosts, or
create new hosts, right-click the respective host to view options available.


Host properties

Click Show Details to modify host information.


Figure 6-40. Host properties

The Properties option provides an overview of the selected host. From this view you have the ability
to modify the host name, change the host type or restrict host access to a particular I/O group.
The Mapped Volumes tab provides a view of the volumes that are mapped to the host.
The Port Definition tab provides a quick status update on the host port definitions. From this view,
administrators can also add or delete FC and iSCSI port definitions.


Storwize V7000 host connection topics


• Host system functions
• Host administration
• Volume (VDisk) allocation
- Caching I/O group
- Preferred node
- Virtualization types
- Create volumes
- Map volume to host
- Child pool
- Encrypted volumes
• Host storage access
• Non-disruptive volume move (NDVM)


Figure 6-41. Storwize V7000 host connection topics

This topic discusses the host to Storwize V7000 volume access infrastructure.


Volume allocation
• Volumes must be mapped to a particular host object.
• Volumes are accessed through host WWPNs or IQNs.
• Volumes are automatically assigned to an I/O group.
- Uses a round-robin algorithm
(Diagram: FC host WWPNs and iSCSI IQNs access volumes V1 and V2 in I/O group 0; the two
Storwize V7000 node canisters of the pair act as the preferred nodes for the volumes, which are
backed by MDisk1, MDisk2, and MDisk3 in a storage pool.)
A volume is also known as a VDisk.


Figure 6-42. Volume allocation

The system does not automatically present volumes to the host system. You must map each
volume to a particular host object to enable the volume to be accessed through the WWPNs or
iSCSI names that are associated with the host object. Volumes can be mapped only to a host that
is associated with the I/O Group to which the volume belongs.
An I/O group contains two Storwize V7000 node canisters and provides access to the volumes
defined for the I/O group. While the Storwize V7000 cluster can have multiple I/O groups, the I/O
traffic for a particular volume is handled exclusively by the nodes of a single I/O group. This
facilitates horizontal scalability of the Storwize V7000 cluster.
Upon creation, a volume is automatically assigned to one node within the I/O group. By default,
when you create a second volume, it is associated with the next node by using a round-robin
algorithm. The assigned node is known as the preferred node, which is the node through which the
volume is normally accessed. Instead of relying on the round-robin algorithm, you can specify the
preferred access node, which is the node through which you send I/O to the volume. Similar to
LUN masking provided by storage systems, host servers can access only those volumes that have
been assigned to them.
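A minimal sketch for checking a volume's preferred node from the CLI (the volume name is
illustrative):
  lsvdisk VW_WV1    # the verbose output includes IO_group_name and preferred_node_id
You can also set the preferred node at creation time by adding the -node parameter to the mkvdisk
command.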


Volume mapping to host


• Volumes are presented to the host by the Storwize V7000 over FC or Ethernet.
- A volume contains pointers to its assigned MDisk extents.
- Volumes are mapped to hosts in the same manner as SCSI LUNs.
• Best practice: the volume size should be a multiple of the extent size of the storage pool.
(Diagram: a 756 MB volume comprising three 256 MB extents drawn from the managed disks
(MDisks).)



Figure 6-43. Volume mapping to host

Volumes are presented by the Storwize V7000 system to a host connected over a Fibre Channel or
Ethernet network. A volume essentially contains pointers to its assigned extents of a given storage
pool. The advantage with storage virtualization is that the host is “decoupled” from the underlying
storage so the virtualization appliance can move the extents around without impacting the host
system.
Volumes are mapped to the application server hosts in the SAN conceptually in the same manner
as SCSI LUNs are mapped to host ports from storage systems or controllers (also known as LUN
masking).
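A minimal CLI sketch of mapping a volume to a host (the host name, volume name, and SCSI ID
are illustrative):
  mkvdiskhostmap -host VB1_WIN -scsi 0 WIN_VOL1   # present the volume to the host as SCSI LUN 0
  lshostvdiskmap VB1_WIN                          # list the volumes mapped to the host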


Three types of virtualized volume


1. Striped (default): the volume is allocated one extent in turn from the MDisks' free extents in the
storage pool.
- Consumes extents until the space required to create the volume is satisfied.
- (Diagram: volume VW_WV1, 10 GB, striped across MDisk1, MDisk2, and MDisk3, each 1.4 TB.)
2. Sequential: the volume's extents are allocated one after the other from a single MDisk.
- Allocates a small amount of volume space (the volume layout is unknown to the host).
- (Diagram: volume VA_WV2, 1 GB, allocated from consecutive extents of MDisk4, 300 GB.)
3. Image: a one-to-one mapping of existing data; the volume's extents can subsequently be moved
to other MDisks within the storage pool without losing access to the data.
- Creates a replica of the volume data (same size).
- (Diagram: volume VW_DATA, 800 MB, mapped one-to-one, including a partial extent, to an
MDisk with existing data.)

Figure 6-44. Three types of virtualized volume

When a volume is defined, one of the three virtualization types is specified:


• The first method is striped mode (default). A striped volume is allocated one extent in turn from
each managed disk that has free extents (or a subset of managed disks that are known as a
stripe set) in the storage pool. This process continues until the space required for the volume
has been satisfied. Striped volumes can improve performance and allow the creation of volumes
that are larger than any single managed disk. Extent allocation therefore proceeds to the first
managed disk, then the second, then the third, and back to the first, until the volume is complete.
• The second method is sequential mode. A sequential volume is one whose extents are
allocated one after the other from a single managed disk, provided enough consecutive free
extents are available on that managed disk. This is used for a large managed disk on which you
want to allocate only a small volume, such as 10 GB. Sequential mode therefore maps a volume
to a portion of a managed disk.
• Image mode volumes are special volumes that have a direct relationship with one managed
disk. Extents in an image volume can subsequently be moved to other managed disks within
the storage pool without losing access to data. At this point the volume becomes virtualized, its
type changes from image to striped, and the mode of this managed disk changes from image
mode to managed mode. This facilitates data migration into the virtualized environment without
impacting applications. Image mode allows an administrator to adopt in place existing LUNs

that contain data already. It is a one-to-one mapping. For example, an administrator can
virtualize a LUN containing Microsoft NTFS and gain the benefits of advanced functions and
improved performance immediately without any data movement.
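A minimal CLI sketch of creating each virtualization type (the pool, MDisk, and volume names are
illustrative):
  mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 10 -unit gb -vtype striped -name VW_WV1          # striped (default)
  mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 1 -unit gb -vtype seq -mdisk MDisk4 -name VA_WV2 # sequential
  mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -vtype image -mdisk MDisk5 -name VW_DATA               # image (size taken from the MDisk)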


Storwize V7000 volume caching I/O group


• The Storwize V7000 contains 32 GB of memory for cache, with an optional 32 GB upgrade to
support RtC.
• Two boot drives with a full installation of the code.
- Data is written from volatile memory across both drives to support the larger write cache.
(Diagram: I/O Group0 contains V7000 node1 and node2, and I/O Group1 contains node3 and
node4; each node has a boot disk and cache, and the I/O groups serve volumes V1-V4 from
MDisks in the storage pools.)

Figure 6-45. Storwize V7000 volume caching I/O group

In an IBM Storwize V7000 environment, using active/active architecture, the I/O handling for a
volume can be managed by both nodes of the I/O group. Therefore, it is mandatory for servers that
are connected through Fibre Channel connectors to use multipath device drivers to be able to
handle this capability.
Each node of the Storwize V7000 system has a copy of the system state data, MDisks and storage
pools are system-wide resources available to all I/O groups in the system. Volumes, on the other
hand, are owned and managed by the nodes within one I/O group. The I/O group is known as the
volume’s caching I/O group.
Each node canister in the control enclosure caches critical data and holds state information in
volatile memory.
If power to a node canister fails, the node canister uses battery power to write cache and state data
to its boot drive thus mirroring the fast write cache data. This method is known as the Fire Hose
Dump (FHD).
The V7000 allows for 32 GB memory for cache and an optional 32 GB upgrade for use with
Real-time Compression. To support larger write cache, the V7000 node writes data from volatile
memory across both the drives effectively doubling the rate at which data is written to disk. The dual
boot drives contain full installation of Storwize V7000 code. The boot drives can also help increase
reliability while doing a code upgrade on any V7000 node. During upgrade when a node shuts

down, the hardened data is written to both internal drives so that the node can survive even if
one of the internal drives fails.


I/O group and write I/O distributed cache

(Diagram: a write I/O request (1) for volume V1 arrives at I/O Group0; the preferred node, V7000
node1, caches the data and a copy is cached in the alternative node, V7000 node2 (2); the data is
later destaged to the MDisks (3).)


Figure 6-46. I/O group and write I/O distributed cache

A volume is accessible through the ports of its caching I/O group. These ports of access can be
Fibre Channel ports for Fibre Channel hosts and Ethernet ports for iSCSI host. By default each host
is entitled to access volumes that are owned by all four I/O groups of a clustered system.
It is the multipath drivers (such as SDDDSM and SDDPCM), or the SCSI specifications for TPGS
(Target Port Group Support) and ALUA (Asymmetric Logical Unit Access), that enable host I/Os to
be driven to the volume's preferred node.
The visual illustrates how a host sends write I/O to a volume that is assigned to a preferred node,
the node through which the volume is normally accessed. However, the distributed cache can be
managed by both nodes of the caching I/O group.
1. The write I/O request (1) from a host is accessing volume V1.
2. The preferred node for V1 is Storwize V7000 node1 and SDD drives the I/O to the preferred
node (2). The write data is cached in Storwize V7000 node1 and a copy of the data is cached in
Storwize V7000 node2. A write status completion is then returned to the requesting host.
3. Some time later, cache management in Storwize V7000 node1 (the preferred node) will cause
the cached data to be destaged to the storage system (3) and the other Storwize V7000 node is
notified that the data has been destaged.

The Storwize V7000 write cache is partitioned to ensure that, if a back-end storage pool
under-performs, no impact is introduced to the other storage pools managed by the Storwize
V7000.


I/O group failover

(Diagram: with V7000 node1 offline, a write I/O request (1) for volume V1 is driven over the
alternative node path (2) to V7000 node2, which destages modified cache data (0) and writes the
data through to the MDisks (3).)

Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016

Figure 6-47. I/O group failover

Both Storwize V7000 nodes can act as failover nodes for their respective partner node within the
I/O Group. If a node failure occurs within an I/O group then the other node in the I/O group takes
over the I/O responsibilities of the failed node. Since the write cache is mirrored in both nodes, data
loss is prevented. If only one node exists in an I/O group (due to failures or maintenance) then the
surviving node accelerates the destaging of all modified data in the cache to minimize the exposure
to failure (0).
1. All I/O writes are processed in write-through mode.
2. A write request to volume V1 (1) is driven automatically by SDD to the alternative Storwize
V7000 node in the I/O group using the alternative node path (2).
3. The changed data is written to the cache and the target storage system (3) before the write
request is acknowledged as having completed. Cache can also be used for read operations.
During this time, the V7000 node batteries maintain internal power until the cache and cluster state
information is striped across both boot disk drives of each node. Each drive has half of the cache
contents.


I/O group node failover

[Diagram: with V7000 node1 failed, the write I/O request (1) to volume V1 follows the alternative node path (2) to V7000 node2, and the data is written through (3) to the MDisks.]

Figure 6-48. I/O group node failover

The scenario is the same as described for Figure 6-47, shown here with the node itself failed. The
surviving node takes over the I/O responsibilities of its partner and, because the write cache is
mirrored in both nodes, no data is lost. All I/O writes are processed in write-through mode: a write
request to volume V1 (1) is driven automatically by SDD to the alternative Storwize V7000 node
using the alternative node path (2), and the changed data is written to the cache and the target
storage system (3) before the write request is acknowledged as complete. Cache can still be used
for read operations. During this time, the V7000 node batteries maintain internal power until the
cache and cluster state information is striped across both boot disk drives of each node, with each
drive holding half of the cache contents.


Standard (volume) topology


• Storwize V7000 GUI provides presets (templates) that incorporate best
practices for creating volumes:
ƒ A basic volume is a volume whose data is striped across all available managed disks
(MDisks) in one storage pool.
ƒ A mirrored volume is a volume with two physical copies, where each volume copy can
belong to a different storage pool.
ƒ A custom volume, in the context of this menu, is either a Basic or Mirrored volume
with customization from default parameters.
[Screenshot callouts: displays as Internal Custom or Custom; automatically enabled by default for quick initialization formatting; based on user-defined customization.]

Figure 6-49. Standard (volume) topology

The Storwize V7000 management GUI provides presets, which are templates with supplied default
values that incorporate best practices for creating volumes. The presets are designed to minimize
the necessity of specifying many parameters during object creation while providing the ability
to override the predefined attributes and values. Each volume preset relates to one or more of the
three types of virtualization modes.
Storage pools are predefined by the storage administrator or the system administrator; all
volumes are created from the unallocated extents that are available in the pool.
With the v7.6 release, the GUI includes a Quick Volume Creation option that fills all fully
allocated volumes with zeros as a background task while the volume is online. The Advanced
Custom option provides an alternative means of creating volumes based on user-defined
customization rather than taking the standard default settings for each of the options under Quick
Volume Creation.
This view shows the standard topology, a single-site configuration, in which you can create Basic
volumes or Mirrored volumes. These volumes are automatically enabled by default for quick
initialization formatting, which means that the fully allocated volumes are filled with zeros as a
background task while remaining online to hosts.


HyperSwap (volume) topology


• HyperSwap volumes create copies on separate sites using the Spectrum
Virtualize HyperSwap technology.
• HyperSwap can be implemented on Storwize V7000 Gen1 or Storwize
V7000 Gen2.

[Diagram: the HyperSwap Volume is a combination of the Master Volume and the Auxiliary Volume, all presenting the same UID (UID1).]

Figure 6-50. HyperSwap (volume) topology

You can create other forms of volumes, depending on the type of topology that is configured on
your system.
The HyperSwap topology, a three-site high availability configuration, can be used to create basic
volumes or HyperSwap volumes.
HyperSwap volumes create copies on separate sites for systems that are configured with
HyperSwap topology. Data that is written to a HyperSwap volume is automatically sent to both
copies so that either site can provide access to the volume if the other site becomes unavailable.
The Stretched topology, a three-site disaster-resilient configuration, creates basic volumes or
stretched volumes. This feature is not supported with the IBM Storwize V7000.


New volume commands (1 of 3)


• Five new CLI commands for administering volumes:
ƒ mkvolume
ƒ mkimagevolume
ƒ addvolumecopy
ƒ rmvolumecopy
ƒ rmvolume

• lsvdisk now includes volume_id, volume_name, and function
fields to easily identify the individual volumes that make up a
HyperSwap volume.


Figure 6-51. New volume commands (1 of 3)

IBM Spectrum Virtualize introduced five new CLI commands for administering volumes, but the
GUI also continues to use legacy commands for all volume administration.
The new volume commands:
• mkvolume
• mkimagevolume
• addvolumecopy
• rmvolumecopy
• rmvolume
The lsvdisk command has also been modified to include volume_id, volume_name, and
function fields to easily identify the individual volumes that make up a HyperSwap volume.


New volume commands (2 of 3)


• The mkvolume command:
ƒ Create a new empty volume using storage from existing storage pools.
ƒ Volume is always formatted (zeroed).
ƒ Can be used to create:
í Basic volume: Any topology
í Mirrored volume: Standard topology
í Stretched volume: Stretched topology
í HyperSwap volume: Hyperswap topology
ƒ The type of volume created is determined by the system topology and the
number of storage pools specified.
• The mkimagevolume command:
ƒ Creates a new image mode volume.
ƒ Can be used to import a volume, preserving existing data.
ƒ Is implemented as a separate command to provide greater differentiation
between the action of creating a new empty volume and creating a volume
by importing data on an existing MDisk.


Figure 6-52. New volume commands (2 of 3)

The mkvolume command, as opposed to the mkvdisk command, creates a new empty volume using
storage from existing storage pools. The type of volume created is determined by the system topology
and the number of storage pools specified. The volume is always formatted (zeroed), including when
creating HyperSwap volumes.
The mkimagevolume command creates a new image mode volume. This command can be used to
import a volume, preserving existing data. It is implemented as a separate command to provide
greater differentiation between the action of creating a new empty volume and creating a volume
by importing data on an existing MDisk.
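As a hedged sketch of the syntax (the pool name, MDisk name, and volume names here are hypothetical; verify against the CLI reference for your code level):
IBM_Storwize:V009B:V009B1-admin>mkvolume -pool Pool0 -size 10 -unit gb -name BasicVol0
IBM_Storwize:V009B:V009B1-admin>mkimagevolume -mdisk mdisk7 -pool Pool0 -name ImportVol0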


New volume commands (3 of 3)


• The addvolumecopy command:
ƒ Adds a new copy to an existing volume (synchronized from the existing
copy).
ƒ For stretched and hyperswap topology systems, this command creates a
highly available volume.
ƒ Can be used to create:
í Mirrored volume: Standard topology
í Stretched volume: Stretched topology
í HyperSwap volume: Hyperswap topology
• The rmvolumecopy and rmvolume commands:
ƒ rmvolumecopy removes a copy of a volume but leaves the volume intact; rmvolume deletes the volume.
ƒ rmvolumecopy converts a mirrored, stretched, or hyperswap volume into a basic volume.
ƒ For a hyperswap volume this includes deleting the active-active relationship
and the change volumes.
ƒ Allows a copy to be identified simply by its site.
ƒ The -force parameter from ‘rmvdiskcopy’ and ‘rmvdisk’ is replaced by
individual override parameters, making it clearer to the user exactly what
protection they are bypassing.

Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016

Figure 6-53. New volume commands (3 of 3)

The addvolumecopy command adds a new copy to an existing volume. The new copy is always
synchronized from the existing copy. For stretched and hyperswap topology systems, this creates a
highly available volume, and the command can be used across all volume topologies.
The rmvolumecopy command removes a copy of a volume, leaving the volume fully intact. It also
converts a Mirrored, Stretched, or HyperSwap volume into a basic volume. The rmvolume command
deletes the volume; for a HyperSwap volume this includes deleting the active-active relationship
and the change volumes.
These commands also allow a copy to be identified simply by its site.
The -force parameter of rmvdiskcopy is replaced by individual override parameters, making it
clearer to the user exactly what protection they are bypassing.
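A hedged sketch of the copy commands (pool and volume names are hypothetical; the parameter used to identify the copy to remove depends on code level):
IBM_Storwize:V009B:V009B1-admin>addvolumecopy -pool Pool1 BasicVol0
IBM_Storwize:V009B:V009B1-admin>rmvolumecopy -pool Pool1 BasicVol0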


Create a basic (generic) volume


• Select the Basic preset.
ƒ Select the location where volume(s) will be sourced by extents that are contained in only one storage pool.
ƒ Specify the parameters with which the volume must be created:
í Volume capacity unit of measurement (bytes, KB, MB, GB, or TB)
ƒ Summary
í Provides a quick view of volume details before creation
[Screenshot callouts: Thin-provision or Compressed capacity savings; create multiple volumes (appended with prefix); the system automatically balances the load between the nodes.]

Figure 6-54. Create a basic (generic) volume

A Basic volume is the simplest form of volume. It consists of a single volume copy, made up of
extents striped across all MDisks in a storage pool. It services I/O using read/write cache and is
classified as fully allocated; therefore, its reported real capacity and virtual capacity are equal.
To create a basic volume, click the Create Volumes option and follow the procedural steps that are
listed within the Create Volumes wizard. This simple wizard provides common options with which to
create any type of volume specified. Multiple volumes can be created at the same time by using
an automatic sequential numbering suffix. However, the wizard does not prompt you for a name for
each volume that is created. Instead, the name that you use here becomes the prefix, and a number
(starting at zero) is appended to this prefix as each volume is created.
We recommend using an appropriate volume naming convention to help you easily identify the
associated host or group of hosts. Once all the characteristics of the Basic volume have been
defined, it can be created, or created and mapped directly to a host.
The Quick Volume Creation menu also provides Capacity Savings features with the ability to
alter the provisioning of a Basic or Mirrored volume into Thin-provisioned or Compressed.
A volume is also accessible through its accessible I/O groups. By default, the system
automatically balances the load between the nodes. You can choose a preferred node to handle
the caching for the I/O group or leave the default values for Storwize V7000 auto-balance.


Create a mirrored volume


• Select the Mirrored preset.
ƒ Specify the parameters with which the volume must be created:
í Volume capacity unit of measurement (bytes, KB, MB, GB, or TB)
ƒ Select the locations where each mirrored volume copy will be sourced by extents that are contained in two separate storage pools.
ƒ Summary
í Provides a quick view of volume details before creation
[Screenshot callouts: Thin-provision or Compressed capacity savings; copies can be housed in the same pool or two separate pools.]

Figure 6-55. Create a mirrored volume

Mirrored volume creation is similar to Basic volume creation, except that with this option you create
two identical mirrored volume copies. Each volume copy can belong to a different pool, and each copy
has the same virtual capacity as the volume.
Mirrored volumes are discussed in detail in a later topic.


Create volume option


• The mkvdisk command is generated for all volume creation.
ƒ This command incorporates the volume parameters that are specified in
previous panels.
ƒ This command is generated for each volume to be created.
[Example diagram: volume 18, owned by I/O group0 with a capacity of 10 GB, takes its space from the assigned storage pool, striped across the MDisks: 6 extents from MDisk1 and 5 extents from MDisk2 (extent = 526 MB). Run lsvdiskextent <volume name> to view the extents assigned.]

Figure 6-56. Create volume option

If you choose the Create option, the GUI proceeds by generating the mkvdisk command,
incorporating the volume parameters specified in previous panels. The volume-to-host
mapping can be performed at a later date. Since a volume must be owned by an I/O group, the GUI
selects one by default. The other keywords and values are actually Storwize V7000 defaults
that do not need to be explicitly coded. These include using Storwize V7000 cache for read/write
operations, creating one copy (that is, not mirrored, so sync-rate is not relevant), and assigning a
virtualization type of striped. All volumes that are created are assigned an object ID based on
the order of the volume object category of this cluster.
The lsvdiskextent command requires a volume name or ID. It displays the extent distribution of
the volume across the MDisks providing extents.
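A minimal sketch of the equivalent CLI, reusing the pool and volume names shown earlier in this unit:
IBM_Storwize:V009B:V009B1-admin>mkvdisk -mdiskgrp TeamA50_GRP1 -iogrp io_grp0 -size 10 -unit gb -name Basic-WIN1
IBM_Storwize:V009B:V009B1-admin>lsvdiskextent Basic-WIN1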


Create and Map Volume to Host option


• In addition to the mkvdisk command, the GUI generates the
mkvdiskhostmap command for each volume being mapped to the
specified host.
• Host must be predefined on the system.
• An alternative: Right-click on volume(s) and select the Actions menu
Map to Host option.


Figure 6-57. Create and Map Volume to Host option

Before a volume can be mapped directly to a specified host system, a host must be predefined on
the system. This is also indicated by the activated Create and Map to Host button. The GUI
generates a mkvdiskhostmap command for each volume being mapped to the specified host. An
alternative way to map volumes to a host is to right-click on the volume(s) and select from the
Actions menu the option to Map to Host.
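For reference, a hedged sketch of the generated command (the host name, SCSI ID, and volume name are hypothetical):
IBM_Storwize:V009B:V009B1-admin>mkvdiskhostmap -host V009B1-WIN -scsi 0 Basic-WIN1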


Volume formatting: Quick initialization


• Basic and mirrored volumes are automatically formatted using quick
initialization process.
• This creates fully allocated volumes available for use immediately.
ƒ Task actions such as moving, expanding, shrinking, or adding a volume
copy are disabled during the initialization.
• This feature can be disabled using the Advanced Custom preset.


Figure 6-58. Volume formatting: Quick initialization

The Quick Volume Creation volume presets are automatically formatted through the quick
initialization process. This process makes fully allocated volumes available for use immediately.
Quick initialization requires a small amount of I/O to complete and limits the number of volumes that
can be initialized at the same time. Some volume actions such as moving, expanding, shrinking, or
adding a volume copy are disabled when the specified volume is initializing. Those actions are
available after the initialization process completes.
The quick initialization process can be disabled in circumstances where it is not necessary using
the Advanced Custom preset. For example, if the volume is the target of a Copy Services function,
the Copy Services operation formats the volume. The quick initialization process can also be
disabled for performance testing so that the measurements of the raw system capabilities can take
place without waiting for the process to complete.
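As a hedged sketch, on code levels where formatting is the default for mkvdisk, quick initialization can be suppressed with a -nofmtdisk parameter (verify this parameter against the CLI reference for your release; names are hypothetical):
IBM_Storwize:V009B:V009B1-admin>mkvdisk -mdiskgrp TeamA50_GRP1 -iogrp 0 -size 10 -unit gb -nofmtdisk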


Create a custom volume


• The Advanced Custom preset creates user-defined volumes, such as thin-provisioned volumes or compressed volumes.
ƒ The Advanced menu consists of a number of submenus.
ƒ A custom volume can be customized with respect to Mirror synch rate,
Cache mode and Fast-Format.

[Screenshot callout: Volume Details and Volume Location are common features for all volume creations.]

Figure 6-59. Create a custom volume

The Advanced Custom volume creation provides an alternative method of defining the Capacity
Savings options (that is, thin provisioning and/or compression), but it also expands on the base-level
default options available for Basic and Mirrored volumes. A Custom volume can be customized with
respect to Mirror sync rate, Cache mode, and Fast-Format.


Thin-provision and compressed volumes


• Custom allows volumes to be tailored to the specifics of the client's environment.


Figure 6-60. Thin-provision and compressed volumes

The Advanced Custom option can be used to define thin-provisioned volumes or compressed
volumes. These volumes are very similar in that both volume types behave as though they were fully
allocated. However, a thin-provisioned volume uses a grain size to grow its real capacity; this
attribute is not supported on compressed volumes. The Advanced Custom option allows volumes to
be tailored to the specifics of the client's environment.
Thin-provisioned and compressed volumes are discussed in detail in later topics.


General tab
• A custom volume is enabled by default to be formatted. This feature can be disabled by removing the check mark.
• Cache mode indicates if the cache contains changed data for the volume.
ƒ By default, the cache mode for all volumes is read and write I/O operations.
• OpenVMS UDID (Unit Device Identifier) is used by an OpenVMS host to identify the volume.

Figure 6-61. General tab

A custom volume is enabled by default to format the new volume before use (formatting writes zeros
to the volume, that is, to its MDisk extents, before it can be used). This feature can be disabled by
simply removing the check mark.
All read and write I/O operations that are performed on the volume are staged through cache. This
is the default cache mode for all volumes.
The OpenVMS user-defined identifier (UDID) requirement applies only to OpenVMS systems.


Volumes and hosts view


• All the volumes from the same Storwize V7000 cluster are assigned a
Unique Identifier (UID) value.
ƒ Same prefix and only the last couple of bytes vary from volume to volume


Figure 6-62. Volumes and hosts view

You can view volumes collectively from the Volumes > Volumes view. Observe the entry for the
newly created volume with an object ID of 0. Each volume is assigned a Unique Identifier (UID)
value. All the volumes from the same Storwize V7000 cluster have the same prefix and only the last
couple of bytes vary from volume to volume. When a volume is mapped to a host, the Host
Mappings column confirms the mapping.
You can also view the entry that describes the mapping between the host and the volume using the
Hosts > Host Mappings menu option. Observe that the selected host displays the mapped
volumes, its UID, and caching I/O group. You can also view assigned volumes by using the
Volumes by Host menu to select an individual host in the Host Filter list.


Modify host mappings


• From the Hosts > Hosts menu, right-click on a specific host and select
Modify Volume mappings.


Figure 6-63. Modify host mappings

The Modify Host Mappings window allows you to map the newly created volumes to the selected
host or to unmap preexisting volumes. The GUI generates a rmvdiskhostmap command to unmap a
volume from a host.


Managing volume resources


• Volume resources can be displayed by using the Actions button or by right-clicking on the selected volume.
Add to column
display


Figure 6-64. Managing volume resources

The volume Actions menu provides similar options with which you can manage volume resources,
such as modifying volume mappings, unmapping volumes, renaming volumes, or creating new
volumes. In addition, it offers resources to reduce the complexity of moving data in a way that is
transparent to the host.


Volume properties
• Volume UID is the
equivalent of a hardware
volume serial number.
• Caching I/O Group:
ƒ Specifies the I/O group to
which the volume belongs
• Accessible I/O Group:
ƒ Specifies the I/O groups the
volume can be moved to
• Preferred Node:
ƒ Specifies the ID of the
preferred node for the volume


Figure 6-65. Volume properties

The volume properties option provides an overview of the selected volume as seen within the GUI.
You can expand the view by selecting View more details.
All volumes are created with a Volume ID, which is assigned by the Storwize V7000 at volume
creation. The Volume UID is the equivalent of a hardware volume serial number. This UID is
transmitted to the host OS, and on some platforms it can be displayed by host-based commands.


View mapped hosts


• From Volumes > Volumes, right-click on a particular volume and
select View Mapped Hosts.

Column display
can be modified
for viewing


Figure 6-66. View mapped hosts

The Host Maps tab shows host mapping information including the LUN number or SCSI ID as seen
by the host.
The Member MDisks tab shows the MDisks supplying extents to this volume. The extents are
spread across all the MDisks of the pool.


Volume properties using CLI


IBM_Storwize:V009B:V009B1-admin>lsvdisk VB1-WIN0
id 14
name Basic-WIN1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name TeamA50_GRP1
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801810787D800000000000014
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 0
parent_mdisk_grp_name V009B1-RAID10
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name TeamA50_GRP1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 10.00GB
tier nearline
tier_capacity 0.00MB
compressed_copy no

Figure 6-67. Volume properties using CLI

You can also use the CLI lsvdisk command to view the volume details. In this example, the
lsvdisk command is specified with an object name or object ID, which provides much of the
detailed information that is displayed for the volume by the GUI.
The formatted option indicates that at creation, its entire volume capacity is written with zeros (so
that residual data is overridden). Volume formatting is not invoked by default.
The throttling option limits the amount of I/O that is accepted for the volume either with IOPS or
MB/s. Throttling is not set by default.
Both the format and throttling options are not typically used. This is why the GUI does not provide
an interface for these options.
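As a hedged illustration of the throttling option (legacy chvdisk syntax; the rate is in IOPS unless -unitmb is specified, and the value and volume name are hypothetical):
IBM_Storwize:V009B:V009B1-admin>chvdisk -rate 1000 Basic-WIN1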


View host mappings and pools details using CLI


IBM_Storwize:V009B:V009B1-admin>lshostvdiskmap -delim , 0
id,name,SCSI_id,vdisk_id,vdisk_name,vdisk_UID,IO_group_id,IO_group_name
0,V009B1-WIN,0,0,VB1-WIN0,6005076802B00109FC00000000000014,0,io_grp0
0,V009B1-WIN,1,1,VB1-WIN1,6005076802B00109FC00000000000015,0,io_grp0
0,V009B1-WIN,7,10,APPVOL3-VB1,6005076802B00109FC00000000000020,0,io_grp0
0,V009B1-WIN,13,18,Basic-WIN1,6005076802B00109FC00000000000033,0,io_grp0
0,V009B1-WIN,14,21,Mirrored-WIN,6005076802B00109FC00000000000034,0,io_grp0
IBM_Storwize:V009B:V009B1-admin>

IBM_Storwize:V009B:V009B1-admin>lsmdisk -delim , -filtervalue mdisk_grp_name=V009B1-RAID10
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID,tier,encrypt,site_id,site_name,distributed
1,MDisk_01,online,array,0,V009B1-RAID10,1.1TB,,,,enterprise,no,,,no
2,MDisk_02,online,array,0,V009B1-RAID10,1.1TB,,,,enterprise,no,,,no
IBM_Storwize:V009B:V009B1-admin>


Figure 6-68. View host mappings and pools details using CLI

To view the host details, the lshostvdiskmap command output displays the host objects and
volumes that are mapped to these hosts objects. It can be filtered to a specific host by specifying
the host name or ID.
The CLI tends to favor object IDs instead of object names. The GUI provides both to be more user
friendly.
Extensive filtering (-filtervalue) is available with the CLI as the number of objects within some
categories will grow larger over time. A common usage of the lsmdisk command is to filter by the
pool name (mdisk_grp_name) to obtain a list of MDisks within a given pool.
The -delim parameter reduces the width of the resulting output by replacing blank spaces
between columns with a delimiter (a comma, for example). When the CLI displays a summary list of
objects, each entry generally begins with the object ID followed by the object name.


Expand volume capacity


• Expand volume capacity without interruptions to user availability.
ƒ Right-click a volume and select Expand.
ƒ Volume can be mapped to a host:
í Host interfaces must be available to see the increased capacity:
- Windows: use the diskpart utility
- AIX: chvg -g vgname
ƒ Extents become striped regardless of the virtualization type with which the volume was created.
ƒ Image type volumes cannot be expanded.

IBM_Storwize:V009B:V009B1-admin>lsvdiskextent 0
id number_extents
0 10
1 9
2 8
3 3
[Diagram: the expanded volume's extents span MDisks DS3K0 (10 exts), DS3K1 (9 exts), and DS3K2 (8 exts).]

Figure 6-69. Expand volume capacity

The size of a volume can be expanded to present a larger-capacity disk to the host operating
system. You can increase a volume's size with a few clicks using the management GUI. Increasing the
size of the volume is done without interruption to the user availability of the system. However, you
must ensure that the host operating system provides support to recognize that a volume has
increased in size.
For example:
• AIX 5L V5.2 and higher: issue chvg -g vgname
• Windows Server 2008 and Windows Server 2012: basic and dynamic disks
• Windows Server 2003: basic disks, and dynamic disks with Microsoft hot fix (Q327020)
The command that is generated by the GUI is expandvdisksize, with the capacity amount to be
added to the identified existing volume. When a volume is expanded, its virtualization type becomes
striped even if it was previously defined as sequential. Image type volumes cannot be expanded.
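A minimal sketch of the underlying command (the size increment and volume name are hypothetical):
IBM_Storwize:V009B:V009B1-admin>expandvdisksize -size 5 -unit gb Basic-WIN1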


Shrink volume capacity


• Shrinking a volume can be data destructive.
ƒ Recommend that the volume NOT be in use by a host.
• Extents are removed from the end of the volume.
• Best practice:
ƒ Always have a consistent backup before attempting to shrink a volume. Do not do this if the volume contains useful data.
• Typical usage:
ƒ FlashCopy target or Mirroring secondary of a source volume with virtualization type = image.

IBM_Storwize:V009B:V009B1-admin>lsvdiskextent 0
id number_extents
0 7
1 7
2 6
[Diagram: before and after the shrink, the volume's extents span MDisks DS3K0 (7 exts), DS3K1 (7 exts), and DS3K2 (6 exts).]

Figure 6-70. Shrink volume capacity

The method that Storwize V7000 uses to shrink a volume is to remove the required number of
extents from the end of the volume. Depending on where the data is on the volume, this action can
be data destructive. Therefore, the recommendation is that the volume should not be in use by a
host. Shrinking a volume using the Storwize V7000 is similar to expanding volume capacity.
Ensure that the operating system supports shrinking (natively or by using third-party tools) before
you use this function. In addition, it is best practice to always have a consistent backup before you
attempt to shrink a volume.
The shrinkvdisksize command that is generated by the GUI decreases the size of the volume by
the specified amount. This interface for reducing the size of a volume is not intended for in-use
volumes that are mapped to a host. It is used for volumes whose content will be overlaid after the
size reduction, such as a FlashCopy target volume where the source volume has an esoteric size
that needs to be matched.
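A hedged sketch of the command (the size and volume name are hypothetical):
IBM_Storwize:V009B:V009B1-admin>shrinkvdisksize -size 2 -unit gb FCTARGET0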


Removing MDisk migrates volume extents

IBM_Storwize:V009B:V009B1-admin>lsvdiskextent VW_GEN
id number_extents
0 7
1 7
2 6
IBM_Storwize:V009B:V009B1-admin>lsvdiskextent VW_GEN
id number_extents
0 10
1 10
[Diagram callout: data is migrated from DS3K2 to the other MDisks in the same storage pool; afterward, DS3K0 and DS3K1 each hold 10 extents.]

Figure 6-71. Removing MDisk migrates volume extents

If an MDisk is removed from a storage pool, then all of the allocated extents from the removed
MDisk are redistributed to the other MDisks in the pool.
The rmmdisk command that is generated by the GUI contains the -force parameter to remove the
MDisk from its current pool. The -force specification enables the removal of the MDisk by
redistributing the allocated extents of this MDisk to other MDisks in the pool.
Examine the output of the two progressively issued lsvdiskextent commands to view the extent
distribution for a certain volume. The extents in use by the volume are now distributed across fewer
MDisks: in order to remove the MDisk, all the extents of this volume had to be migrated from that
MDisk to the remaining MDisks in the pool.
The 22s value in the Running Tasks bubble indicates that the background migration task started
22 seconds ago.
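A hedged sketch of the generated command (the MDisk and pool names are hypothetical):
IBM_Storwize:V009B:V009B1-admin>rmmdisk -mdisk DS3K2 -force V009B1-RAID10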


Protect volumes on delete


• Storwize V7000 supports a global setting that prevents volume and host mapping
from being deleted inadvertently.
ƒ Enabled vdiskprotection protects from unintentional deletions.
í Minimum time is 15 minutes and maximum time is 1440 minutes.
• System detects recent I/O activity:
ƒ If the volume is part of a host mapping, FlashCopy mapping, or remote-copy relationship, then
the system fails to delete the volume.
í The -force flag can override the failure.

If enabled, the bottom fields will display the following:
chsystem -vdiskprotectionenabled yes
vdisk_protection_time 30
vdisk_protection_enabled yes
product_name IBM Storwize V7000


Figure 6-72. Protect volumes on delete

To prevent active volumes or host mappings from being deleted inadvertently, the system supports
a global setting that prevents these objects from being deleted if the system detects that they have
had recent I/O activity. When you delete a volume, the system checks to verify whether it is part of a
host mapping, FlashCopy mapping, or remote-copy relationship. In this example, the threshold is
set to 30 minutes; therefore, if the volume you want to delete has received any I/O within the
preceding 30 minutes, you cannot delete the volume or unmap it from the host.
In these cases, the system fails to delete the volume and the user has to use the -force flag to
override that failure. Using the -force parameter can lead to an unintentional deletion of volumes
that are still active. Active means that the system has detected recent I/O activity to the volume
from any host.
When vdisk protection is enabled, you are protected from unintentionally deleting a volume, even
with the -force parameter added, within whatever time period you have decided on. You can
configure the idle time from 15 to 1440 minutes. If the last I/O was within the specified time period,
then the rmvdisk command fails and the user has to either wait until the volume really is
considered idle or disable the system setting, delete or unmap the volume, and re-enable the setting.
That is, the volume has to have been idle for that long before you are allowed to delete it. You can,
of course, force the deletion by disabling the feature.
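A hedged sketch of enabling the protection and setting the idle window, following the chsystem convention shown on the slide (the time value is hypothetical):
IBM_Storwize:V009B:V009B1-admin>chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 30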


Volume data protection with encryption


• Storwize V7000 software encryption provides encryption of data at rest
for volumes presented for virtualization on a Storwize V7000 Gen2
systems.
ƒ Volumes are automatically encrypted when they are created from an encrypted storage pool.

[Diagram: encrypted MDisk 1, MDisk 2, and MDisk 3 in an encrypted storage pool provide extents for an encrypted volume.]

Figure 6-73. Volume data protection with encryption

IBM Data Protection with Encryption enables software encryption for all volumes created on
Storwize V7000 Gen2 systems using encrypted internal storage pools or external storage systems,
including downstream Storwize systems.
For new volumes, the simplest way to create an encrypted volume is to create it in an encrypted
pool. All volumes created in an encrypted pool take on an encrypted attribute. The encryption
attribute is independent of the volume class created.


Encrypting volumes
• Unencrypted volumes can be converted to encrypted volumes using the
Volume Action of Migrate to Another Pool.
ƒ The target pool must be a software-encrypted parent pool.
í It is not possible to use the migrate option between parent and child pools.
[Screenshot callouts: encryption status column; migrate to the target encrypted pool; the migration task issues the migratevdisk command.]

Figure 6-74. Encrypting volumes

With encryption enabled, you can display the encryption status of a volume from the GUI Volumes
> Volumes view. To do this, you must customize the column view to display the encryption status.
An unencrypted volume can be encrypted by using the GUI's Migrate to Another Pool option. This
procedure executes the migratevdisk command if the target pool is encrypted.
Software-encrypted pools have a unique per-pool key, and so may their child pools. These keys may
differ from the parent pool's; for example, if a parent pool has no encryption enabled, a child pool can
still enable encryption. If a parent and a child pool both have a key, then the child pool key is used for
child pool volumes. Child pools are allocated extents from their parents, but it is possible for the
extents in the child pool to take on different encryption attributes.
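A minimal sketch of the generated command (the pool and volume names are hypothetical):
IBM_Storwize:V009B:V009B1-admin>migratevdisk -mdiskgrp EncryptedPool0 -vdisk Basic-WIN1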


Child pool encryption


• It is possible to create encrypted volumes from an encrypted child pool
in a parent pool that is unencrypted.

[Diagram: an encrypted child pool created within a parent pool (MDisk 1, MDisk 2, MDisk 3) takes on different encryption attributes; volumes created from the encrypted child pool are encrypted.]

Figure 6-75. Child pool encryption

Software-encrypted pools have a unique per-pool key, and so may their child pools. These keys may
differ from the parent pool's:
• If a parent pool has no encryption enabled, a child pool can still enable encryption.
• If a parent and a child pool both have a key, then the child pool key is used for child pool
volumes.
Child pools are allocated extents from their parents, but it is possible for the extents in the child pool
to take on different encryption attributes.


Storwize V7000 host connection topics


• Host system functions
• Host administration
• Volume (VDisk) allocation
• Host storage access
• Non-disruptive volume move (NDVM)


Figure 6-76. Storwize V7000 host connection topics

This topic discusses the process by which a host system accesses its assigned volumes.


Discover volumes on the Windows (iSCSI) hosts


• Disk management
ƒ Rescan to detect disks (extended partitions) presented to Windows.
ƒ All volumes that are mapped to a Windows (VW) host or an iSCSI (VI) host are listed collectively by a disk number.
ƒ Storwize V7000 volumes are seen as standard SCSI disks.
ƒ Disk unallocated space must be partitioned, initialized, and formatted for use.
[Screenshot callout: child pool volume.]

Figure 6-77. Discover volumes on the Windows (iSCSI) hosts

From the Windows host perspective, the Storwize V7000 volumes are standard SCSI disks. All
volumes that are mapped to either a Windows host or an iSCSI host are discovered and displayed
collectively within the Windows Disk Management interface. Windows presents the volumes as
unallocated disks that must be initialized and partitioned with a logical drive of the size you
designate using the new volume feature.


Windows host paths view by using SDDDSM


• SDDDSM datapath query device
ƒ Disk# correlates to the disk presented in the Windows Disk Management interface.
ƒ Serial # correlates to the Storwize V7000 UID # for volumes.
í Recommend issuing this command before formatting Windows disks.
[Screenshot callouts: paths can be used to validate zoning; the non-asterisk (*) paths indicate the active device (preferred path) to which the current I/Os are being sent.]

Figure 6-78. Windows host paths view by using SDDDSM

The SDDDSM datapath query device command can be used to correlate Storwize V7000
volumes based on the serial number that is shown for the disk (which is the Storwize V7000 UID for
the volume).
Depending on the host type, path numbers are displayed with each disk or Storwize V7000 volume,
which validates the four-path zoning objective. Even though these volumes are owned by different
I/O groups, this information is not obvious from the SDD output.
The -l parameter is appended to the datapath query device command and causes SDDDSM to
flag paths to the alternate (non-preferred) node with an asterisk. The 0 and 1 values in the
command identify the SDD device number range of the Windows disks to be displayed.
After rescanning disks, writing a signature on the disk, and creating a partition, the Storwize V7000
mapped volume is used like any other drive in Windows.
For example, the SDDDSM output for the datapath query device command identifies a host
device name (Disk4). Based on the serial number that is displayed, disk4 can be correlated to the
Storwize V7000 VW_CPVOL (Child Pool volume) volume UID value.
The paths that are displayed by this output can be used to validate zoning (zoned for four paths
between the host and Storwize V7000 ports). SDDDSM manages path selection for I/Os to the
volume and drives I/O to the paths of the volume’s preferred node.
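For reference, a hedged sketch of issuing the command from the SDDDSM console (the device number range depends on the host):
C:\Program Files\IBM\SDDDSM>datapath query device 0 1 -l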


Windows hosts Device Manager view


• Device Manager
ƒ SDDDSM determines that each of the 2145 SCSI Disk Device instances
correlates to one Storwize V7000 volume and thus creates a 2145 Multi-
Path Disk Device to represent that volume.
í Each volume gets reported four times during host HBA SAN device discovery.
ƒ Microsoft MPIO manages one 2145 Multi-Path Disk Device per disk.
í Eight paths for each disk that is discovered by the Windows host.
í Four paths to each disk discovered by the iSCSI host.

[Screenshot labels: Windows volumes; iSCSI volumes.]

Figure 6-79. Windows hosts Device Manager view

The configured paths or sessions between the host and the Storwize V7000 nodes are reflected in
the Windows Device Manager for the device. Based on the four-path zoning, each volume has
been reported four times during host HBA SAN device discovery. This is seen in the Windows
Device Manager view for Disk drives.
The SDDDSM (Windows MPIO multipath driver) recognizes and supports the 2145 device type.
SDDDSM determines that each of the 2145 SCSI Disk Device instances correlates to one Storwize
V7000 volume and thus creates a 2145 Multi-Path Disk Device to represent that volume. SDDDSM
also manages the path selection to the four paths of each volume.
For iSCSI host volumes, Windows Device Manager lists one 2145 SCSI disk device reported by
the four paths between the host and the Storwize V7000 node. The MPIO support on Windows
recognizes that these instances of 2145 SCSI disks actually represent one 2145 LUN and manages
these as one 2145 multi-path device.
Four instances of the Storwize V7000 volume are reported to the host through the four configured
paths. Windows MPIO manages the reported instances as one disk with four paths.

© Copyright IBM Corp. 2012, 2016 6-84


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 6. Storwize V7000 host to volume allocation

Uempty

Discover volumes on the AIX host


# lsdev -Cc disk
# cfgmgr
# lsdev -Cc disk
hdisk0 Available Virtual SCSI Disk Drive
hdisk1 Available 02-00-02 MPIO FC 2145
hdisk2 Available 02-00-02 MPIO FC 2145
[Diagram: host VB1-AIX with hdisk0 as rootvg; io_grp0 presents VB1-AIX1 (ID 2, UID…03, SCSI ID 0) and VB1-AIX2 (ID 6, UID…07, SCSI ID 1).]

Figure 6-80. Discover volumes on the AIX host

For AIX hosts, historically, SDD configured vpath devices to provide MPIO to hdisk devices.
Therefore, you run lsdev -Cc disk to query the Object Data Manager (ODM) to present
the LUNs as hdisks. When a new LUN has been assigned to the AIX host, the cfgmgr command
is executed to pick up the new V7000 disk. The LUNs are represented as hdisk0, hdisk1, and hdisk2;
note that hdisk0 is the root volume group (rootvg) for the AIX OS.


Mounting V7000 volume on AIX


# mkvg -y VB1-AIXVG hdisk1
0516-1254 mkvg: Changing the PVID in the ODM
A50_AIXVG
# mkvg -y VB1-AIXVG hdisk2
0516-1254 mkvg: Changing the PVID in the ODM
A50_AIXVG
# lspv
hdisk0 00f66aa5063da6c1 rootvg active
hdisk1 00f66aa5fcda0802 A50_AIXVG active
hdisk2 00f66aa5fce32766 A50_AIXVG active
# lsvg -l VB1-AIXVG
A50_AIXVG:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
[Callouts: the mkvg command is issued for each hdisk discovered; a new VG label appears on each hdisk included in the VG; lspv and lsvg identify existing VGs.]

Figure 6-81. Mounting V7000 volume on AIX

Once the hdisks are discovered, the mkvg command is used to create a Volume Group (VG) with
the newly configured hdisks. The lspv output shows the existing Physical Volume (PV) hdisks with
the new VG label on each of the hdisks that were included in the VGs. The lsvg output shows the
existing volume group (VG).


AIX host paths view using SDDPCM


# pcmpath query device
Total Dual Active and Active/Asymmetric Devices : 2

DEV#: 1 DEVICE NAME: hdisk1 TYPE: 2145 ALGORITHM: Load Balance


SERIAL: 6005076801810787D800000000000003
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi0/path0 OPEN NORMAL 100 0
1 fscsi0/path1 OPEN NORMAL 87 0
2* fscsi0/path2 OPEN NORMAL 63 0
3* fscsi0/path3 OPEN NORMAL 63 0
4 fscsi1/path4 OPEN NORMAL 92 0
5 fscsi1/path5 OPEN NORMAL 90 0
6* fscsi1/path6 OPEN NORMAL 63 0
7* fscsi1/path7 OPEN NORMAL 63 0

DEV#: 2 DEVICE NAME: hdisk2 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 6005076801810787D800000000000007
[Callout: the non-asterisk (*) paths indicate the active device (preferred path) to which the current I/Os are being sent.]
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi0/path0 OPEN NORMAL 100 0
1 fscsi0/path1 OPEN NORMAL 89 0
2* fscsi0/path2 OPEN NORMAL 63 0
3* fscsi0/path3 OPEN NORMAL 63 0
4 fscsi1/path4 OPEN NORMAL 88 0
5 fscsi1/path5 OPEN NORMAL 92 0
6* fscsi1/path6 OPEN NORMAL 63 0
7* fscsi1/path7 OPEN NORMAL 63 0

Figure 6-82. AIX host paths view using SDDPCM

To confirm that the new disks are discovered and that the paths have been configured correctly, the
SDDPCM pcmpath query device command is used. The output of this command has the same
structure as the SDDDSM output. The pcmpath query device command validates the I/O
distribution across the paths of the preferred node of the volume (or hdisk). SDDPCM identifies
eight paths for each hdisk because this host is zoned for eight-path access in the example; all eight
paths show a state of OPEN.
An asterisk on a path indicates it is an alternate path.
The SERIAL: number of the AIX hdisk correlates to the V7000 UID value of the volume.
© Copyright IBM Corp. 2012, 2016 6-87


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 6. Storwize V7000 host to volume allocation

Uempty

AIX path connection data


# odmget -q "name=hdiskx" CuPath
CuPath:
name = "hdisk1"
parent = "fscsi0"
connection = "500507680140f072,0"
alias = ""
path_status = 1
path_id = 0
CuPath:
name = "hdisk1"
parent = "fscsi0"
connection = "500507680110f072,0"
alias = ""
path_status = 1
path_id = 1
CuPath:
name = "hdisk1"
parent = "fscsi0"
connection = "500507680130f0fb,0"
alias = ""
path_status = 1
path_id = 2
CuPath:
name = "hdisk1"
parent = "fscsi0"
connection = "500507680120f0fb,0"
alias = ""
path_status = 1
path_id = 3
CuPath:
name = "hdisk1"
parent = "fscsi1"
connection = "500507680130f072,0"
alias = ""
path_status = 1
path_id = 4
CuPath:
name = "hdisk1"
parent = "fscsi1"
connection = "500507680120f072,0"
alias = ""
path_status = 1
path_id = 5
CuPath:
name = "hdisk1"
parent = "fscsi1"
connection = "500507680140f0fb,0"
alias = ""
path_status = 1
path_id = 6
CuPath:
name = "hdisk1"
parent = "fscsi1"
connection = "500507680110f0fb,0"
alias = ""
path_status = 1
path_id = 7
[Diagram labels: ports 11, 12, 13, and 14 on V7000 node 1 (F072).]

Figure 6-83. AIX path connection data

The AIX Object Data Manager (ODM) contains pathing and connectivity data for the hdisk. The AIX
odmget command can be used to obtain detailed information regarding the paths of a given hdisk.
The WWPN is shown as the connection (remember the Q value in the WWPN); thus, the connection
identifies the V7000 node and port that is represented by this path. The path_id values of
0, 1, 4, and 5 represent the four ports of V7000 node ID 1 (the volume's preferred node).


Host paths to volume’s preferred node


# pcmpath query device 1

DEV#: 1 DEVICE NAME: hdisk1 TYPE: 2145 ALGORITHM: Load Balance


SERIAL: 6005076801810787D800000000000003
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi0/path0 OPEN NORMAL 100 0
1 fscsi0/path1 OPEN NORMAL 87 0
2* fscsi0/path2 OPEN NORMAL 63 0
3* fscsi0/path3 OPEN NORMAL 63 0
4 fscsi1/path4 OPEN NORMAL 92 0
5 fscsi1/path5 OPEN NORMAL 90 0
6* fscsi1/path6 OPEN NORMAL 63 0
7* fscsi1/path7 OPEN NORMAL 63 0

[Diagram: host VB1-AIX adapters fscsi0 (D20A) and fscsi1 (D20B) provide paths 0, 1, 4, and 5 to ports 11, 12, 13, and 14 on NODE1 (F072), the preferred node for volume VB1-AIX1 (ID 2, UID…03, SCSI ID 0) in io_grp0.]

Figure 6-84. Host paths to volume’s preferred node

From the previous two command output sets, an understanding of the path configuration can be
obtained and host zoning can be validated. Under normal circumstances the SDD distributes I/O
requests across these four paths to the volume’s preferred node.


Storwize V7000 host connection topics


• Host system functions
• Host administration
• Volume (VDisk) allocation
• Host storage access
• Non-disruptive volume move (NDVM)


Figure 6-85. Storwize V7000 host connection topics

This topic discusses the non-disruptive movement of volumes between the I/O groups.


Moving volume between I/O groups: Host perspective


• Only applicable to multiple I/O group systems
• Host mapped to volumes must support non disruptive volume
movement
ƒ Can be done concurrently with I/O operations
ƒ Might require a rescan at the host level to ensure the multipathing driver is notified of
changes in the preferred node
ƒ Host must be a member of the target I/O group


Figure 6-86. Moving volume between I/O groups: Host perspective

Moving a volume between I/O groups is considered a migration task. Hosts mapped to the volume
must support non-disruptive volume movement (NDVM). Modifying the I/O group that services the
volume can be done concurrently with I/O operations if the host supports non-disruptive volume
move. However, the cached data that is held within the system must first be written to the system
disk before the allocation of the volume can be changed. Since paths to the new I/O group need to
be discovered and managed, multipath driver support is critical for non-disruptive volume move
between I/O groups. Rescanning at the host level ensures that the multipathing driver is notified
that the allocation of the preferred node has changed and that the ports by which the volume is
accessed have changed. This move can be useful when one pair of nodes has become
overutilized.
If there are any host mappings for the volume, the hosts must be members of the target I/O group
or the migration fails. Keep in mind that the commands and actions on the host vary depending on
the type of host and the connection method used. These steps must be completed on all hosts to
which the selected volumes are currently mapped; typical rescan commands are sketched below.
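The rescan itself depends on the host operating system; a sketch under common setups (the adapter name fscsi0 is illustrative; cfgmgr is the standard AIX method, rescan-scsi-bus.sh ships with the Linux sg3_utils package, and Windows rescans through diskpart):

# cfgmgr -l fscsi0        (AIX: rediscover devices and paths on one FC adapter)
# rescan-scsi-bus.sh      (Linux: rescan the SCSI buses for new paths)
DISKPART> rescan          (Windows: rescan for disk configuration changes)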


IBM Support: Non-disruptive volume move


Figure 6-87. IBM Support: Non-disruptive volume move

Support information for the Storwize V7000 is based on code level. One easy way to locate its web
page is to perform a web search using the keywords IBM Storwize V7000 V7.6 supported hardware
list.
Within the V7.6 support page, locate and click the link to Non-Disruptive Volume Move (NDVM).
Host system multipath driver support information is also found on this web page.


NDVM: Supported OS and multipath drivers

A volume in a Metro/Global
Mirror relationship cannot
change its caching I/O group


Figure 6-88. NDVM: Supported OS and multipath drivers

The NDVM link provides a list of host environments that support nondisruptive movement of a
volume between I/O groups. The Multipathing column identifies the required multipath driver. After
the move, paths to the prior I/O group might not be deleted until a host reboot occurs.


Moving volume between I/O groups: Volume perspective


• A volume can be configured so that it can be accessed through all I/O
groups at all times.
ƒ I/O data for the volume is only cached in the volume’s caching I/O group.
ƒ The maximum number of paths that are supported between the volume and all its
I/O groups is eight paths.
• The caching I/O group of a volume must be online to be able to access
the volume.
ƒ Even if the volume is accessible through other I/O groups.
• A volume that is mapped to a host through multiple I/O groups might not
have the same SCSI ID across these I/O groups.
ƒ This might cause problems with some operating systems.
• A volume in a Metro or Global Mirror relationship cannot change its
caching I/O group.
• If a volume in a FlashCopy mapping is moved, its bitmap is left in the
original I/O group.
ƒ This causes additional inter-node messaging during FlashCopy operations.


Figure 6-89. Moving volume between I/O groups: Volume perspective

The visual shows additional notes and a summary of Non-Disruptive Volume Move (NDVM) from the volume perspective.


Changing preferred node using GUI


• Select Volumes > Volumes.
ƒ Right-click on volume to be moved and select Modify I/O Group or select
Actions > Modify I/O Group.
ƒ The wizard provides guided steps to move a volume from one I/O group to another
I/O group.
í The GUI generates the following svctask commands to move volume ID 3 to a new
caching I/O group:
• The movevdisk -iogrp command enables the caching I/O group of the volume
to be changed.
• The addvdiskaccess -iogrp command adds the specified I/O group to the
volume’s access list.
• The rmvdiskaccess -iogrp command removes the access to the volume from
the ports of the specified I/O group.


Figure 6-90. Changing preferred node using GUI

You can also use the management GUI to move volumes between I/O groups non-disruptively. In
the management GUI, select Volumes > Volumes. On the Volumes panel, select the volume that
you want to move and select Modify I/O Group or select Actions > Modify I/O Group. The wizard
guides you through all the steps that are necessary for moving a volume to another I/O group,
including any changes to hosts that are required.
A volume is owned by a caching I/O group, as active I/O data of the volume is cached in the nodes
of this I/O group. If a volume is not assigned to a host, changing its I/O group is simple, as none of
its data is cached yet.
Make sure paths to the new I/O group are created on the host system. After the system has
successfully added the new I/O group to the volume's access set and you have moved the selected
volumes to another I/O group, detect the new paths to the volumes on the host.
The GUI generates the following commands to move the volume to a new caching I/O group:
• The movevdisk -iogrp command enables the caching I/O group of the volume to be changed.
The -node parameter allows the preferred node of the volume to be explicitly specified.
Otherwise, the system load balances between the two nodes of the specified I/O group.

• The addvdiskaccess -iogrp command adds the specified I/O group to the volume’s access
list. The volume is accessible from the ports of both I/O groups. However, the volume’s data is
only cached in its new caching I/O group.
• The rmvdiskaccess -iogrp command removes the access to the volume from the ports of the
specified I/O group. The volume is now only accessible through the ports of its newly assigned
caching I/O group.
The chvdisk -iogrp option is no longer available beginning with v6.4.0.

© Copyright IBM Corp. 2012, 2016 6-96


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 6. Storwize V7000 host to volume allocation

Uempty

Changing preferred node using CLI


To move a volume between I/O groups using the CLI, complete the
following steps:
1. Issue the following command:
   addvdiskaccess -iogrp iogrp_id/name volume_id/name
2. Issue the following command:
   movevdisk -iogrp destination_iogrp -node new_preferred_node volume_id/name
3. Issue the appropriate commands on the hosts mapped to the volume to
detect the new paths to the volume in the destination I/O group.
4. Once you confirm the new paths are online, remove access from the
old I/O group:
   rmvdiskaccess -iogrp iogrp_id/name volume_id/name
5. Issue the appropriate commands on the hosts mapped to the volume to
remove the paths to the old I/O group.
If no new I/O group is specified, the volume stays in the same I/O
group but its preferred node changes to the node specified.


Figure 6-91. Changing preferred node using CLI

With the release of the v7.3 code, changing a preferred node in the I/O group can be a simple task.
Previously, this could only be done using Non-Disruptive Volume Move (NDVM) between I/O
groups. Now you can perform this same task by using the CLI movevdisk command to move the
preferred node of a volume either within the same caching I/O group or to another caching I/O
group, as sketched below.
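As a worked sketch of the five steps above (the object names io_grp0, io_grp1, node3, and the volume name VB1-AIX1 are illustrative only):

# addvdiskaccess -iogrp io_grp1 VB1-AIX1
# movevdisk -iogrp io_grp1 -node node3 VB1-AIX1
(on each mapped host: rescan and confirm the new paths are online)
# rmvdiskaccess -iogrp io_grp0 VB1-AIX1
(on each mapped host: remove the stale paths to io_grp0)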


Keywords
• Storwize V7000 GUI
• Command-line interface (CLI)
• I/O load balancing
• Fabric zoning
• Host object
• FCP host
• iSCSI host
• SCSI LUNs
• Extents
• Thin-provisioning
• External storage
• Subsystem Device Driver Device Specific Module (SDDDSM)
• Subsystem Device Driver Path Control Module (SDDPCM)
• Disk Management
• Device Manager
• Virtualization
• Cluster system
• Storage pool
• MDisks
• Internal disks
• Volume mirroring
• Worldwide node name (WWNN)
• Worldwide port name (WWPN)

Figure 6-92. Keywords

Listed are keywords that were used in this unit.


Review questions (1 of 2)
1. True or False: Zoning is used to control the number of paths
between host servers and the Storwize V7000.

2. For a host to access volumes that are provisioned by the
Storwize V7000, which of the following must be true:
a. The host WWPNs or IQN must have been defined and
mapped to the volume's owning I/O group.
b. Fibre Channel zoning or iSCSI IP port configuration must
have been set up to allow the appropriate ports to establish
connectivity.
c. The volumes must have been created and mapped to the
given host object.
d. All of the above.


Figure 6-93. Review questions (1 of 2)

Write your answers here:


Review answers (1 of 2)
1. True or False: Zoning is used to control the number of paths
between host servers and the Storwize V7000.
The answer is true.

2. For a host to access volumes that are provisioned by the
Storwize V7000, which of the following must be true:
a. The host WWPNs or IQN must have been defined and
mapped to the volume's owning I/O group.
b. Fibre Channel zoning or iSCSI IP port configuration must
have been set up to allow the appropriate ports to establish
connectivity.
c. The volumes must have been created and mapped to the
given host object.
d. All of the above.
The answer is all of the above.



Review questions (2 of 2)
3. True or False: A multipath driver is needed when multiple
paths exist between a host server and the Storwize V7000
system.

4. True or False: iSCSI storage systems can be accessed by
iSCSI initiators using the Storwize V7000 system.

5. True or False: If an IP network connectivity failure occurs
between the iSCSI initiator and the Storwize V7000 system
iSCSI target port, the cluster will automatically failover the
iSCSI target port address to the other node's IP port.


Figure 6-94. Review questions (2 of 2)

Write your answers here:


Review answers (2 of 2)
3. True or False: A multipath driver is needed when multiple
paths exist between a host server and the Storwize V7000
system.
The answer is true.

4. True or False: iSCSI storage systems can be accessed by
iSCSI initiators using the Storwize V7000 system.
The answer is false.

5. True or False: If an IP network connectivity failure occurs
between the iSCSI initiator and the Storwize V7000 system
iSCSI target port, the cluster will automatically failover the
iSCSI target port address to the other node's IP port.
The answer is false.



Unit summary
• Summarize host system functions in a Storwize V7000 system
environment
• Differentiate the configuration procedures required to connect an FCP
host versus an iSCSI host
• Recall the configuration procedures required to define volumes to a host
• Differentiate between a volume's caching I/O group and accessible I/O
groups
• Identify subsystem device driver (SDD) commands to monitor device
path configuration
• Perform non-disruptive volume movement from one caching I/O group
to another


Figure 6-95. Unit summary


Unit 7. Spectrum Virtualize advanced features
Estimated time
01:15

Overview
This unit discusses the IBM Spectrum Virtualize advanced software functions designed to deliver
storage efficiency and optimize storage asset investments. The topics include Easy Tier
optimization of flash storage, volume capacity savings using thin-provisioned virtualization, and
improved storage capacity utilization with Real-time Compression (RtC).

How you will check your progress


• Checkpoint questions
• Machine exercises

References
IBM Storwize V7000 Implementation Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html


Unit objectives
• Recognize IBM Storage System Easy Tier settings and statuses at the
storage pool and volume levels
• Differentiate among fully allocated, thin-provisioned, and compressed
volumes in terms of storage capacity allocation and consumption
• Recall steps to create thin-provisioned volumes and monitor volume
capacity utilization of auto expand volumes
• Categorize Storwize V7000 hardware resources required for Real-time
Compression (RtC)


Figure 7-1. Unit objectives


Storwize V7000 enhanced features topics


• Spectrum Virtualize I/O Stack

• Easy Tier 3rd Generation

• Thin provisioning

• Real-time Compression (RtC)

• Comprestimator utility


Figure 7-2. Storwize V7000 enhanced features topics

This topic introduces the IBM Spectrum Virtualize I/O architecture.


Spectrum Virtualize software architecture


I/Os from host
  SCSI Target
  Forwarding
  Upper Cache
  Replication
  FlashCopy
  Mirroring
  Thin Provisioning / Compression
  Lower Cache
  Virtualization / Easy Tier 3
  Forwarding
  RAID (virtual array)
  SCSI Initiator / Drive
I/Os to storage controllers

Annotations from the visual:
• Thin provisioning, mirroring, and compression are implemented below the Upper Cache;
neither the host nor Copy Services is aware of these special volumes.
• Compression is implemented within the Thin Provisioning layer of the node I/O stack.
• Easy Tier monitors I/O performance from the device end (after cache).

Figure 7-3. Spectrum Virtualize software architecture

IBM Storwize V7000 utilizes the same software architecture as the IBM SAN Volume Controller to
implement Volume Mirroring, Thin Provisioning, and Easy Tier.
Both thin-provisioned and mirrored volumes are implemented in the I/O stack below the upper
cache and Copy Services. Neither the host application servers nor the Copy Services functions are
aware of these types of special volumes; they are seen by hosts as normally created volumes.
For compressed volumes, the host servers and Copy Services operate with uncompressed data.
Compression occurs on the fly in the Thin Provisioning layer, so that physical storage is only
consumed by compressed data.
Easy Tier is designed to reduce the I/O latency for hot spots; it does not, however, replace storage
cache. Both methods solve a similar access-latency workload problem, but each weighs locality of
reference, recency, and frequency differently in its algorithms. Therefore, Easy Tier monitors I/O
performance from the device end (after cache), where it picks up the performance issues that
cache cannot solve. This placement balances the overall storage system performance.


Storwize V7000 enhanced features topics


• Spectrum Virtualize I/O Stack

• Easy Tier 3rd Generation

• Thin provisioning

• Real-time Compression (RtC)

• Comprestimator utility


Figure 7-4. Storwize V7000 enhanced features topics

This topic introduces the enhanced features of Easy Tier 3rd Generation.


IBM System Storage Easy Tier functional overview


• Easy Tier V3 supports online dynamic relocation of data
  ƒ Monitors real-time performance of each 1 GiB extent (sub-volume) to determine the
data "temperature"
  ƒ Easy Tier modes of operation:
    í Automatic Mode
      • Provides automated extent-level relocation granularity
      • Requires use of a merged extent pool
    í Manual Mode
      • Provides online dynamic volume relocation capability
  ƒ CLI/GUI setup and management
  ƒ Storage Tier Advisor Tool (advisor tool) for I/O analysis and projected benefit
(Slide tagline: Simplicity, flexibility, economy)


Figure 7-5. IBM System Storage Easy Tier functional overview

IBM System Storage Easy Tier is third-generation software with a built-in dynamic data relocation
feature. Easy Tier automatically and non-disruptively performs automated subvolume data
placement across different storage tiers, or within the same tier, to intelligently align the system
with current workload requirements and to optimize the usage of SSDs or flash arrays.
The IBM Storage Tier Advisor Tool (STAT) is a Windows console application that analyzes heat
data files that are produced by Easy Tier and produces a graphical display of the amount of "hot"
data per volume (with predictions about how additional flash or SSD capacity could benefit the
performance of the system) and per storage pool. STAT is available at no additional cost.


Easy Tier configuration


• Storwize products now let you configure a single storage pool with
one, two, or three different tiers of storage in the same pool.
  ƒ Easy Tier is a no-charge feature.
  ƒ It is supported by all server platforms with no additional software.
• Each of these three types of storage pool utilizes Easy Tier to optimize
storage performance:
  ƒ Pools with a single tier of storage and with multiple managed disks are
optimized so that each managed disk is equally loaded.
  ƒ Pools with two or more tiers of storage also ensure that the data is stored
on the most appropriate tier of storage.
• The configuration is as simple as adding more than one managed disk
to a storage pool.
  ƒ If it is a Storwize array, the tier and capability of the array are
automatically detected.
  ƒ If it is a SAN-attached managed disk, the user must manually
configure the tier and the capability (Easy Tier load) of the managed disk.


Figure 7-6. Easy Tier configuration

IBM Storwize V7000 implements Easy Tier enterprise storage functions, which were originally
available only on the IBM DS8000 and IBM XIV enterprise-class storage systems. IBM Easy Tier
on the Storwize V7000 allows host-transparent movement of data among the internal and external
storage subsystem resources. Easy Tier is a no-charge feature that automates the placement of
data among different storage tiers, and the tier of an external MDisk can be set from the CLI as
sketched below.
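A minimal sketch of designating the tier of an externally virtualized MDisk (the MDisk name is illustrative; the tier keywords accepted by chmdisk vary by code level, with releases of this vintage using values such as ssd, enterprise, and nearline):

# chmdisk -tier ssd mdisk5
# lsmdisk mdisk5        (the detailed view shows the updated tier attribute)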


IBM System Storage Easy Tier function


Automatic storage hierarchy
• Eliminates manual intervention to assign highly active data on volumes to faster
responding storage
• System automatically and nondisruptively moves frequently accessed data to a faster
tier of storage
• Transparent to host applications
• Volumes belong to a single pool
  ƒ Each volume is a collection of extents in the storage pool
  ƒ An extent is either on Flash, Enterprise, or Nearline disk
(Visual: host volumes and extents mapped onto a hybrid storage pool of Flash/SSD, Enterprise,
and Nearline MDisks, with automatic extent migration between tiers.)


Figure 7-7. IBM System Storage Easy Tier function

IBM Easy Tier is a function that responds to the presence of flash drives in a storage pool that also
contains hard disk drives (HDDs). The system automatically and non-disruptively moves frequently
accessed data from HDD MDisks to flash drive MDisks, thus placing such data in a faster tier of
storage. With the Easy Tier technology, clients can improve performance at lower cost through
more efficient use of flash.
The concept of Easy Tier is to move data up and down transparently, unnoticed from the host and
user point of view. Easy Tier therefore eliminates the manual intervention of assigning highly active
data on volumes to faster responding storage. In this dynamically tiered environment, data
movement is seamless to the host application regardless of the storage tier in which the data
resides. Manual controls exist so that you can change the default behavior, such as turning off
Easy Tier on pools that have any combination of the three types of MDisks.
Easy Tier migration can be performed on the internal flash disks within the Storwize V7000 storage
enclosure, or on external storage systems that are virtualized by the Storwize V7000 control
enclosure.


Easy Tier supports up to three tiers


• Supports any combination of three tiers: Flash, Enterprise (ENT), and Nearline (NL)
  ƒ Internal or external (no validation is done on external drives)
    í MDisks for the Storwize V7000 flash/SSD always display as Flash
    í For all other storage enclosures you must designate the tier
• Each of these three types of storage pool utilizes Easy Tier to optimize storage
performance:
  ƒ Pools with a single tier of storage and with multiple managed disks are optimized so
that each managed disk is equally loaded.
  ƒ Pools with two or more tiers of storage also ensure that the data is stored on the
most appropriate tier of storage.

Automatic storage hierarchy (Flash/SSD is always Tier 0; supported combinations):
  Tier 0   Tier 1   Tier 2
  Flash    ENT      NL
  Flash    ENT      NONE
  Flash    NL       NONE
  NONE     ENT      NL
  Flash    NONE     NONE
  NONE     ENT      NONE
  NONE     NONE     NL

Figure 7-8. Easy Tier supports up to three tiers

The key benefit of the Easy Tier 3rd generation software versus the older versions is that it allows
tiering among three tiers (Flash, Enterprise, and Nearline), which helps increase the performance
of the system. The table shows the naming convention and all supported combinations of storage
tiering used by Easy Tier. Tier 0 is identified as Flash, which represents solid-state drives, flash
drives, or virtualized flash storage. Hard disk drives are separated into two tiers: Enterprise 15K
and 10K RPM SAS drives are both classified as Tier 1 (ENT), and Nearline 7.2K RPM drives are
Tier 2 (NL).


Easy Tier modes of operations


• Easy Tier includes the following main operating modes:
  ƒ Evaluation or measurement only
  ƒ Automatic data placement (extent migration)
  ƒ Storage pool balancing
  (plus the Easy Tier acceleration setting, and OFF)


Figure 7-9. Easy Tier modes of operations

There are three types of Easy Tier modes of operation:
Easy Tier evaluation mode: When IBM Easy Tier evaluation mode is enabled for a storage pool
with a single tier of storage, Easy Tier collects usage statistics for all the volumes in the pool.
Automatic data placement: When IBM Easy Tier automatic data placement is active on the
Storwize V7000, Easy Tier measures the host access activity to the data on each storage extent. It
also provides a mapping that identifies high-activity extents, and then moves the high-activity data
according to its relocation plan algorithms.
Storage pool balancing mode: Assesses the extents that are written in a pool and balances them
automatically across all MDisks within the pool.
Easy Tier also provides an Easy Tier acceleration mode. With this mode you can change the
rates of extent migration for the Easy Tier function and the pool balancing function.
When Easy Tier is turned off, no statistics are recorded and no cross-tier extent migration occurs;
only storage pool balancing remains active, which means extents are migrated within the same
storage pool.

Evaluation mode: I/O activity monitoring


• Easy Tier can evaluate I/O on all volumes even if the environment has
no solid-state drives.
  ƒ Monitors performance of each extent to determine data temperature
  ƒ Evaluation results show the performance improvement potential of adding SSDs
  ƒ Monitors I/O to track the I/O demand from applications and the I/O service time
from the Storwize V7000
  ƒ I/O rate data is collected for multiple durations: hours, days, and weeks
(Visual: application volumes, such as Exchange and DB2 Warehouse, under smart monitoring;
four extents of a 10 GB volume are identified as hot, candidates for the Flash tier.)

Figure 7-10. Evaluation mode: I/O activity monitoring

Easy Tier must be enabled on non-hybrid pools to collect data. When Easy Tier evaluation mode
is enabled for a storage pool with a single tier of storage, Easy Tier collects usage statistics for all
the volumes in the pool, regardless of the form factor. The Storwize V7000 monitors the storage
use at the volume extent level. Easy Tier constantly gathers and analyzes monitoring statistics to
derive moving averages for the past 24 hours.
Volumes are not monitored when the easytier attribute of a storage pool with a single tier of
storage is set to off or inactive. You can enable Easy Tier evaluation mode for a storage pool with
a single tier of storage by setting the easytier attribute of the storage pool to on.
If you turn on Easy Tier in a single-tier storage pool, it runs in evaluation mode, which means it
measures the I/O activity for all extents. A statistics summary file is created and can be offloaded
and analyzed with the IBM Storage Tier Advisor Tool (STAT). This provides an understanding of
the benefits to your workload if you were to add flash/SSDs to the pool, prior to any hardware
acquisition.
IBM Easy Tier can be enabled on a volume basis to monitor the I/O activity and latency of the
extents over a 24-hour period. Because this type of volume data migration works at the extent
level, it is often referred to as sub-LUN migration.


Automatic data placement mode


• Easy Tier performs workload statistical data collection, which is enabled
by default in a hybrid storage pool.
  ƒ Monitors performance of each extent to determine data "temperature"
  ƒ Creates a migration plan for optimal extent placement every 24 hours
  ƒ Migrates extents within the pool per plan over a 24-hour period
    í A limited number of extents is chosen to migrate every five-minute interval
(Visual: a Flash/SSD + HDD hybrid storage pool; hot 1024 MB extents of a 10 GB volume
migrate up to Flash, and cold extents migrate down to HDD.)

Figure 7-11. Automatic data placement mode

Automatic data placement is enabled by default once multiple tiers are placed in a pool. This
process allows I/O monitoring to be done for all volumes, whether or not a volume is a candidate
for automatic data placement. Once automatic data placement is enabled, and if there is sufficient
activity to warrant relocation, extents begin to be relocated within a day after enablement. This
sub-volume extent movement is transparent to host servers and applications.
For a single-tier storage pool and the volumes within that pool, Easy Tier creates a migration
report every 24 hours on the number of extents it would move if the pool were a multi-tier pool, as
long as Easy Tier statistics measurement is enabled. Using Easy Tier can make it more
appropriate to use smaller storage pool extent sizes.
A statistics summary file, or "heat" file, generated by Easy Tier can be offloaded for input to the
IBM Storage Tier Advisor Tool (STAT). This tool produces reports on the amount of extents moved
to flash/SSD-based MDisks and predictions of performance improvements that could be gained if
more flash/SSD capacity were available.


Automated storage pool balancing


• Automatic balancing within a pool: Enabled for all pools by default
  ƒ Migrates extents across MDisks in the pool to make the best use of MDisk
performance.
  ƒ Not a space-based algorithmic "even distribution" like the alphaWorks script
• Works on tiers within a hybrid pool
  ƒ Each MDisk of a given tier classification is balanced with its peers
  ƒ Promote and demote take current balancing requirements into account when
placing extents on a specific MDisk
• Performance balancing begins after six hours by default
  ƒ Adding capacity triggers auto balancing to start within minutes
• Free extents are needed!


Figure 7-12. Automated storage pool balancing

When growing a storage pool by adding more storage to it, Storwize V7000 software can restripe
the system data in pools of storage without having to implement any manual or scripting steps. This
process is called Automated Storage Pool Balancing. Although Automated Storage Pool Balancing
can work in conjunction with Easy Tier, it operates independently and does not require an Easy Tier
license. This helps grow storage environments with greater ease while retaining the performance
benefits that come from striping the data across the disk systems in a storage pool.


How does automated storage pool balancing work


• Automated storage pool balancing restripes the system data in pools of
storage without having to implement any manual or scripting steps.
• Uses XML files that are embedded in the code.
  ƒ Stanzas are based on the drive classes, RAID types/widths, and workload
characteristics to determine MDisk thresholds.
  ƒ Internal drives on Storwize systems have more stanzas.
  ƒ Externally virtualized LUNs are based on the controller.
• Performance is improved by balancing workload across all MDisks in the
storage pool; this is a performance rebalance, not an extent rebalance.
• Easy Tier uses only storage pool balancing on single-tier pools.
(Visual: volumes Vol0 through Vol3 with their workload balanced across MDisk0, MDisk1, and
MDisk2.)


Figure 7-13. How does automated storage pool balancing work

Automated storage pool balancing uses XML files that are embedded in the software code. The
XML files use stanzas to record the characteristics of the internal drives: the RAID levels that are
built, the width of the array, the drive types and sizes used in the array, and so on, to determine
MDisk thresholds. Externally virtualized LUNs are characterized by their controller.
During the automated storage pool balancing process, the system assesses the extents that are
written in the pool and, based on the drive stanzas and their IOPS capabilities, data is
automatically restriped equally across all MDisks within the pool. The pool can contain a single
tier, or mix MDisks of different drive types and capacities. This is only a performance rebalance,
not an extent rebalance.
Automated storage pool balancing can be disabled at the pool level.


Easy Tier advanced settings


• Easy Tier acceleration mode
  ƒ Allows Easy Tier data migration to move extents
up to four times faster than the default setting
  ƒ Up to 48 GiB per 5 minutes, while in normal mode
it moves up to 12 GiB
  ƒ Enable only during low system activity
  ƒ System-wide setting, disabled by default
(easy_tier_acceleration off)
  ƒ Modify the setting online with no impact to host or
data availability using the chsystem
-easytieracceleration on command
• MDisk Easy Tier load
  ƒ Five different load types: default, low, medium, high,
and very high (SSD and flash only)
  ƒ Change an MDisk's load when it is underutilized and can
handle more load, or when it is overutilized
• Use with caution as it can impact system
performance

Figure 7-14. Easy Tier advanced settings

It is also possible to change more advanced Easy Tier parameters: Easy Tier acceleration and
MDisk Easy Tier load.
Easy Tier acceleration allows administrators to modify the Easy Tier migration rate. Turning this
setting on makes Easy Tier data migration move extents up to four times faster than the default
setting. In accelerated mode Easy Tier can move up to 48 GiB per 5 minutes, while in normal
mode it moves up to 12 GiB. Enabling Easy Tier acceleration is advised only during periods of low
system activity. The two most likely use cases for acceleration are:
• When adding new capacity to the pool, acceleration lets Easy Tier quickly spread existing
volumes onto the new MDisks.
• When migrating volumes between storage pools, if the target storage pool has more tiers than
the source storage pool, acceleration lets Easy Tier quickly promote or demote extents in the
target pool.
This is a system-wide setting and is disabled by default. The setting can be changed online,
without any impact on host or data availability, using the chsystem command, as sketched below.
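A minimal sketch of toggling acceleration from the CLI (the attribute name easy_tier_acceleration is taken from the visual; it is assumed to appear in the lssystem detailed view):

# chsystem -easytieracceleration on
# lssystem        (check the easy_tier_acceleration attribute)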
The second setting is the MDisk Easy Tier load. This setting is applied on a per-MDisk basis and
indicates how much load Easy Tier can put on the particular MDisk. There are five different values
that can be set for each MDisk: default, low, medium, high, and very high.

The system uses the default setting based on the storage tier of the presented MDisks: flash,
enterprise, or nearline. For internal disk drives the tier is known, but for external MDisks the tier
should be changed by the user to align it with the underlying storage.
Change the default setting to any other value only when you are certain that a particular MDisk is
underutilized and can handle more load, or that the MDisk is overutilized and the load should be
lowered. Change this setting to very high only for SSD and flash MDisks.
Each of these settings should be used with caution because changing the default values can
impact system performance. A brief sketch of the load setting follows.
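A sketch of adjusting the per-MDisk load (the MDisk name is illustrative; the -easytierload parameter and the underscore form very_high are assumptions based on this code level's CLI conventions):

# chmdisk -easytierload high mdisk3
# lsmdisk mdisk3        (the detailed view shows the easy_tier_load attribute)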


Extent migration types


• Promote / Swap
  ƒ Move hot data to a higher performing tier
• Warm Demote
  ƒ Prevent performance overload of a tier by demoting a warm extent to the lower tier
  ƒ Triggered when bandwidth or IOPS exceeds a predefined threshold
• Cold Demote
  ƒ Coldest data moved to the lower HDD tier
• Expanded Cold Demote
  ƒ Demotes appropriate sequential workloads to the lowest tier to better utilize
Nearline bandwidth
• Auto Rebalance
  ƒ Redistributes extents within a tier to balance utilization across MDisks for
maximum performance
  ƒ Either move or swap
Migrations occur only between adjacent tiers (Flash/SSD, Enterprise, Nearline).

Figure 7-15. Extent migration types

When enabled, Easy Tier determines the right storage media for a given extent based on the extent
heat and resource utilization. Easy Tier uses the following extent migration types to perform actions
between the three different storage tiers.
• Promote
▪ Moves the relevant hot extents to higher performing tier
• Swap
▪ Exchange cold extent in upper tier with hot extent in lower tier
• Warm Demote
▪ Prevents performance overload of a tier by demoting a warm extent to the lower tier
▪ This action is based on predefined bandwidth or IOPS overload thresholds. Warm demotes
are triggered when bandwidth or IOPS exceeds those predefined thresholds. This allows
Easy Tier to continuously ensure that the higher-performance tier does not suffer from
saturation or overload conditions that might affect the overall performance in the extent
pool.
• Demote or Cold Demote

▪ Easy Tier Automatic Mode automatically locates and demotes inactive (or cold) extents that
are on a higher performance tier to its adjacent lower-cost tier.
▪ Once cold data is demoted, Easy Tier automatically frees extents on the higher storage tier.
This helps the system to be more responsive to new hot data.
• Expanded Cold Demote
▪ Demotes appropriate sequential workloads to the lowest tier to better utilize Nearline disk
bandwidth
• Storage Pool Balancing
▪ Redistribute extents within a tier to balance utilization across MDisks for maximum
performance
▪ Moves hot extents from high utilized MDisks to low utilized MDisks
▪ Exchanges extents between high utilized MDisks and low utilized MDisks
• Easy Tier attempts to migrate the most active volume extents up to flash/SSD first.
• A previous migration plan and any queued extents that are not yet relocated are abandoned.
Extent migration occurs only between adjacent tiers. In a three-tier storage pool, Easy Tier does
not move extents from flash/SSD directly to NL-SAS, or vice versa, without first moving them to
SAS drives.


How does Easy Tier migration work?


Figure 7-16. How does Easy Tier migration work?

There are a number of reasons why Easy Tier can choose to move data between managed disks.
Easy Tier helps data storage administrators see the potential value of automation with greater
storage efficiency. Easy Tier removes the need for storage administrators to spend hours manually
performing this analysis and migration, eliminates unnecessary investments in high-performance
storage, and improves your bottom line.


Easy Tier Sub-LUN automated movement


• After the 24-hour learning period
  ƒ Data is moved automatically between tiers
    í Hottest extents moved up
    í Coldest extents moved down
  ƒ New volume allocations use extents from Tier 1 (fastest disk) by default
    í If there is no free Tier 1 capacity, then Tier 2 is used if available; otherwise
capacity comes from Tier 0
  ƒ Easy Tier can operate as long as one extent is free in the pool
  ƒ If there are no free extents in the pool, no change occurs until more capacity
is added to the pool
• The I/O Monitor keeps an access history for each virtualization extent
  ƒ Extent sizes are already defined for the storage pool (16 MB to 8 GB in size)
  ƒ Default is a 1 GB extent size
(Visual: active data migrates up toward the SSD/flash array; less active data migrates down
through ENT disk to NL disk.)


Figure 7-17. Easy Tier Sub-LUN automated movement

IBM Easy Tier can be enabled on a volume basis to monitor the I/O activity and latency of the
extents over a 24-hour period. Because this type of volume data migration works at the extent
level, it is often referred to as sub-LUN migration.
Easy Tier supports data movement across three tiers, and the data can therefore be classified
into the following three categories based on usage:
• Heavily used data (also called hot data)
• Moderately used data (also called warm data)
• Cold data
When new volumes are created, they are placed by default on the Enterprise, or middle, Tier 1. If
Tier 1 has reached its capacity, the next lower tier, Tier 2, is used. Only if all tiers are full does
allocation take extents from Tier 0. Easy Tier then automatically starts migrating those extents
(hot or cold) based on the workload. As a result of extent movement, the volume no longer has all
its data in one tier, but rather in two or three tiers.


Automated data placement plan


Automated data placement flow (from collection to migration):
1. I/O stats are collected at five-minute intervals.
2. A heat map is created every 24 hours (replacing the previous plan).
3. The data migrator receives suggestions.
4. The data migrator validates the suggestions based on workload and events.
5. The data migrator performs the extent moves; a maximum of up to 30 MBps can be
migrated between disk tiers.


Figure 7-18. Automated data placement plan

IBM Easy Tier uses an automated data placement (ADP) plan, which involves scheduling and the
actual movement or migration of the volume's extents up to, or down from, the highest disk tier.
This involves collecting I/O statistics at five-minute intervals on all volumes within the tiered pool.
Based on the performance log, after the 24-hour learning period Easy Tier uses the data migrator
(DM) to create an extent migration plan and dynamically moves extents among the three tiers
according to the suggestions, measured performance, and the heat of the extents. High-activity,
or hot, extents are moved to a higher disk tier such as flash and SSD within the same storage
pool. It also moves extents whose activity has dropped off, or cooled, from a higher-tier MDisk
back to a lower-tier MDisk.
The extent migration rate is capped at a maximum of 30 MBps, which equates to approximately
3 TB per day migrated between disk tiers.


Easy Tier analytic processing cycle


• Three different types of analytic schedules:
ƒ Once per day Easy Tier will analyze the statistics to work out which data
should be promoted or demoted.
ƒ Four times per day Easy Tier will analyze the statistics to identify if any data
needs to be rebalanced between managed disks in the same tier.
ƒ Once every five minutes Easy Tier will analyze the statistics to identify if any
of the managed disks is overloaded.

(Cycle diagram: performance data is collected every 5 minutes; the data collected reflects MDisk
activity, not I/O from hosts; continuous workload hotspot analysis runs at least every 24 hours;
extent heat is categorized based on small and large I/O activity; smart data placement then
schedules the movement of extents.)


Figure 7-19. Easy Tier analytic processing cycle

There are three different types of analytics which can decide to perform data migration based on
different schedules:
▪ Once per day Easy Tier will analyze the statistics to work out which data should be
promoted or demoted.
▪ Four times per day Easy Tier will analyze the statistics to identify if any data needs to be
rebalanced between managed disks in the same tier
▪ Once every 5 minutes Easy Tier will analyze the statistics to identify if any of the managed
disks is overloaded.
Each of the analysis phases generates a list of migrations that should be executed. The system
then spends as long as needed executing the migration plan:
  ▪ Migration occurs at a maximum rate of 12 GB every 5 minutes for the entire system.
  ▪ The system prioritizes the three types of analysis as follows:
    - Promote and rebalance get equal priority.
    - Demote is guaranteed 1 GB every 5 minutes, and receives whatever is left.

Easy Tier settings summary (see notes for number references; * marks the default settings)

  Storage pool (MDiskgrp)   Number of tiers   Volume copy          Volume copy
  Easy Tier setting         in pool           Easy Tier setting    Easy Tier status
  Off                       1                 off                  inactive (2)
  Off                       1                 on                   inactive (2)
  Off                       2 to 3            off                  inactive (2)
  Off                       2 to 3            on                   inactive (2)
  Measure                   1                 off                  measured (3)
  Measure                   1                 on                   measured (3)
  Measure                   2 to 3            off                  measured (3)
  Measure                   2 to 3            on                   measured (3)
  Auto                      1                 off                  measured (3)
  Auto *                    1                 on                   balanced (4)
  Auto                      2 to 3            off                  measured (3)
  Auto *                    2 to 3            on                   active (5)
  On                        1                 off                  measured (3)
  On                        1                 on                   balanced (4)
  On                        2 to 3            off                  measured (3)
  On                        2 to 3            on                   active (5)

Figure 7-20. Easy Tier settings summary

This table provides a summary of Easy Tier settings. The rows marked with an asterisk are the
default settings. Also observe the reference numbers that are annotated in the Volume copy Easy
Tier status column:
1. If the volume copy is in image or sequential mode or is being migrated then the volume copy
Easy Tier status is measured instead of active.
2. When the volume copy status is inactive, no Easy Tier functions are enabled for that volume
copy.
3. When the volume copy status is measured, the Easy Tier function collects usage statistics for
the volume but automatic data placement is not active.
4. When the volume copy status is balanced, the Easy Tier function enables performance-based
pool balancing for that volume copy.
5. When the volume copy status is active, the Easy Tier function operates in automatic data
placement mode for that volume.


Easy Tier default settings: Summary notes


• Default pool setting of auto and volume copy setting of on:
  ƒ Pools with one storage tier: Easy Tier status is inactive
    í A volume copy with a status of inactive means no Easy Tier functions are
enabled
  ƒ Pools with two storage tiers: Easy Tier status is active (automatic data
placement is enabled for striped volumes)
    í A volume copy with a status of active means Easy Tier operates in automatic
data placement mode
• If the volume copy type is image or sequential, or if the volume copy is being
migrated, its Easy Tier status is set to measured

  Volume copy settings (altered only in the CLI):
    Easy Tier setting: On/Off
    Easy Tier status: Inactive/Active/Measured/Balanced
  Storage pool settings (altered only in the CLI):
    Easy Tier setting: On/Off/Auto/Measured
    Easy Tier status: Inactive/Active


Figure 7-21. Easy Tier default settings: Summary notes

The default Easy Tier setting for a storage pool is Auto, and the default Easy Tier setting for a
volume copy is On. This means that for storage pools with a single tier, all Easy Tier functions
except pool performance balancing are disabled, and that automatic data placement mode is
enabled for all striped volume copies in a storage pool with two or more tiers.
If a single-tier pool's Easy Tier setting is changed to On, the pool Easy Tier status becomes
active and the volume copy Easy Tier status becomes measured. This enables Easy Tier
evaluation and analysis of I/O activity for volumes of this pool.
With the default pool Easy Tier setting of auto and the default volume Easy Tier setting of on (for
a two-tier or hybrid pool), the pool and the volume Easy Tier status become active, and Easy Tier
automatic data placement becomes active automatically.
The Easy Tier heat file is generated and continually updated as long as Easy Tier is active for a
storage pool. The settings can be changed from the CLI as sketched below.
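A brief sketch of changing the settings from the CLI (pool and volume names are illustrative; the chmdiskgrp setting keyword for evaluation mode is assumed to be measure at this code level):

# chmdiskgrp -easytier measure Pool0      (pool setting: on | off | auto | measure)
# chvdisk -easytier off vol1              (volume copy setting: on | off)
# lsmdiskgrp Pool0                        (shows easy_tier and easy_tier_status)
# lsvdisk vol1                            (shows easy_tier and easy_tier_status per copy)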


What is considered to be hot?


• Application I/Os with sizes less than 64 KB are considered the best use of
SSD-based MDisks (examples: Exchange, DB2 Warehouse).
• Application I/Os with sizes larger than 64 KB are not considered the best use;
this data is not "hot" (examples: backups, user directories, multimedia).


Figure 7-22. What is considered to be hot?

To help manage and improve performance, Easy Tier is designed to identify hot data at the
subvolume or sub-LUN (extent) level, based on ongoing performance monitoring, and then
automatically relocate that data to an appropriate storage device in an extent pool that is managed
by Easy Tier. Easy Tier uses an algorithm to assign heat values to each extent in a storage device.
These heat values determine on what tier the data would best reside, and migration takes place
automatically. Data movement is dynamic and transparent to the host server and to applications
using the data.
The common question is where the flash drives and the Easy Tier function should be deployed in
your environment. Several areas should be considered when determining where the Easy Tier
feature and the flash drives can provide the best value. If the environment contains a significant
amount of very small granularity striping, such as Oracle or DB2 tablespace striping, the benefit to
the workload may be significantly reduced. In such cases there may be less benefit from smaller
amounts of SSDs, and it may not be economical to implement an Easy Tier solution. Therefore,
you should test the application platform before fully deploying Easy Tier into your Storwize V7000
environment.


Type of storage pools: Single or multi-tier

(Visual: three single-tier pools, Flash/SSD, ENT, and NL, and an HDD/Flash hybrid multi-tier pool,
each presenting volumes.)


Figure 7-23. Type of storage pools: Single or multi-tier

Easy Tier supports a two-tier storage pool hierarchy, where one tier is composed of flash and SSD
drives and the other tier is composed of HDDs (SATA, SAS, or FC). A pool with only one type of
disk tier attribute is referred to as single-tiered storage; each disk in such a pool should have the
same size and performance characteristics. Easy Tier can also support mixed disk technology
with two different disk tier attributes, where a high-performance tier of flash/SSD disks and generic
HDD disks together form multi-tier storage.

Example: Easy Tier single tier pool and volume copy


• With single-tier pools, Easy Tier can be set to measure I/O activity or
remain inactive.

  HDD Pool: Easy Tier = Auto, Easy Tier status = Active
    Volume copy: Easy Tier = On/Off, Easy Tier status = Balanced
    Performance-based pool balancing within the same tier only.
  HDD Pool: Easy Tier = On, Easy Tier status = Inactive
    Volume copy: Easy Tier = On/Off, Easy Tier status = Measured
    Will evaluate volume I/O, but no ADP is performed.
  HDD Pool: Easy Tier = Off, Easy Tier status = Inactive
    Volume copy: Easy Tier = On/Off, Easy Tier status = Inactive
    Will not evaluate I/O; will not perform ADP.

Figure 7-24. Example: Easy Tier single tier pool and volume copy

This diagram illustrates the Easy Tier setting and status for a single tier pool and volume copy. The
first example shows a volume copy setting of Auto and a status of Active; Easy Tier operates in
storage pool balancing mode, performing performance-based pool balancing for that volume copy
by migrating extents within the same (intra) storage tier. For the second example, the setting On
and the status Inactive indicate that Easy Tier collects usage statistics for the volume but
automatic data placement (ADP) is not active. For the third example, the Inactive status means that
Easy Tier is neither collecting statistics nor enabling the ADP functions for that volume.


Example: Easy Tier two tier pool and volume copy


• With two tier pools, the Easy Tier parameter can be set to measured,
  active, or inactive.

Pool settings (HDD/Flash_SSD hybrid pools):
• Easy Tier = On/Auto, Easy Tier status = Active
• Easy Tier = Off, Easy Tier status = Inactive

Volume copy settings:
• Easy Tier = On, Easy Tier status = Active: will evaluate volume I/O and
  perform ADP
• Easy Tier = Off, Easy Tier status = Measured: will evaluate I/O, but
  will not perform ADP
• Easy Tier = On/Off, Easy Tier status = Inactive: will not evaluate I/O
  or perform ADP

Figure 7-25. Example: Easy Tier two tier pool and volume copy

This diagram illustrates the Easy Tier setting and status for a two tier pool and volume copy. Easy
Tier automatic mode automatically manages the capacity allocated in a hybrid pool that contains
mixed disk technology (HDD + Flash/SSD). Therefore, Easy Tier will monitor and collect the
volume's I/O statistics and, as warranted, perform the ADP functions as required.
If a pool has already been created with both HDD and Flash/SSD-based MDisks but a volume's
Easy Tier status is measured, the automatic data placement functions are not enabled for that
volume, although statistics are still collected. For the last example, the Inactive status means that
Easy Tier is neither collecting statistics nor enabling the ADP functions for that volume.


External storage with flash drives

(Diagram: the Storwize V7000 discovers MDiskX and MDiskY from external storage systems. One external storage system LUN is created with SSD drives and assigned to the Storwize V7000 (examples: DS8000 or Storwize V7000); another is created with flash technology and assigned to the Storwize V7000 (example: IBM FlashSystem 900). A pre-existing FlashSystem 900 will be managed as externally virtualized storage.)

Figure 7-26. External storage with flash drives

MDisks from external storage systems created from flash storage are discovered by the Storwize
V7000 on the SAN as unmanaged mode MDisks with a default technology type of hard disk drive.
Since there is no interface for the Storwize V7000 to automatically discern the technology attributes
of the drives behind the MDisks in attached storage systems, an interface is provided through
both the GUI and the CLI to enable the administrator to update the technology tier of these MDisks
to SSD or flash.
In Storwize V7000 terminology, flash is used to denote tier 0 storage. The backing technology could
be SSDs or flash systems (such as the IBM FlashSystem 900 storage system). A pre-existing
FlashSystem 900 will be managed as externally virtualized storage, unlike the Storwize V7000
storage enclosures. Storwize V7000 cannot provide a single point of configuration for an externally
virtualized FlashSystem 900, as it can for its own storage enclosure(s).


Modifying MDisk tier


• Storwize V7000 does not automatically detect the type of external
  MDisks.
  ƒ External MDisks are Enterprise tier by default.
• From Pools > External Storage, right-click the MDisk and
  select Modify Tier (only applies to external MDisks).
  ƒ No impact on hosts or availability of the volume
  ƒ Add the Tier column to the display if it is not shown.

Figure 7-27. Modifying MDisk tier

Storwize V7000 does not automatically detect the type of external MDisks. Instead, all external
MDisks are initially put into the enterprise tier by default. If flash disks are part of the configuration,
the administrator must manually change the tier of those MDisks and add them to storage pools.
To change the tier, from the GUI select Pools > External Storage and click the plus (+) sign next to
the controller that owns the MDisks whose tier you want to change. Then right-click the
desired MDisk and select Modify Tier. This only applies to external MDisks. The change happens
online and has no impact on hosts or the availability of the volumes.
If you do not see the Tier column, right-click the blue title row and select the Tier check box.
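The same change can be made from the CLI with the chmdisk command. A minimal sketch, assuming an example MDisk name; the tier names accepted vary by code level (for example, generic_ssd and generic_hdd on the level shown in this course):

IBM_Storwize:V009B:V009B1-admin>chmdisk -tier generic_ssd mdisk7    (mark the external MDisk as flash/SSD tier)
IBM_Storwize:V009B:V009B1-admin>lsmdisk mdisk7                      (verify the tier attribute in the output)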


Easy Tier overload protection


• For internal RAID arrays, the system calculates the performance of the MDisk
  based on pre-programmed performance characteristics of the different drives.
• For SAN attached MDisks, the system cannot calculate the performance
  capabilities.
  ƒ Therefore, the system uses a number of pre-defined Easy Tier load parameter
    levels (low, medium, high, very_high).

With SAN attached MDisks, the Storwize V7000 does not know:
• The technology type of an MDisk until you tell it (the exception is when
  Storwize V7000 enclosures are attached as back-end storage)
• The number of components in the MDisk
• The RAID type of the MDisk
It does know:
• The controller make and model that is presenting the MDisk
• The technology type, once you tell it
An internal XML file contains updated data per controller type for the
"expected" IOPS and MB/s for a given MDisk.

With its own internal arrays, the Storwize knows:
• The technology type of the MDisk
• The number of components in the MDisk
• The RAID type of the MDisk
An internal XML file contains updated data per drive type for the
"expected" IOPS and MB/s from a given MDisk.

Figure 7-28. Easy Tier overload protection

Before Easy Tier 3, the system could overload an MDisk by moving too much hot data onto a single
MDisk. Easy Tier 3 understands the "tipping point" for an MDisk and stops migrating extents, even
if there is spare capacity available on that MDisk.
Easy Tier overload protection is designed to avoid overloading any type of drive with too much
work. To achieve this, Easy Tier needs an indication of the maximum capability of a
managed disk.
This maximum can be provided in one of two ways:
• For an array made of locally attached drives, the system can calculate the performance of the
managed disk because it is pre-programmed with performance characteristics for different
drives.
• For a SAN attached managed disk, the system cannot calculate the performance capabilities,
so the system has a number of pre-defined levels that can be configured manually for each
managed disk. This is called the Easy Tier load parameter (low, medium, high, very_high).
If you analyze the statistics and find that the system doesn't appear to be sending enough IOPS to
your SSDs, you can always increase the load parameter.
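For example, a hedged sketch of raising the load level on a SAN attached MDisk from the CLI; the MDisk name is an example, and you should confirm that the -easytierload parameter of chmdisk is available at your code level:

IBM_Storwize:V009B:V009B1-admin>chmdisk -easytierload high mdisk9    (raise the pre-defined Easy Tier load level)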


Example of Hybrid storage pool properties view

(Screen capture callout: Multi-tiered storage)

Figure 7-29. Example of Hybrid storage pool properties view

You can view the Hybrid pool details from the Pools > MDisks by Pools view. Right-click on the
Flash MDisk and select Properties to display summary information about the hybrid pool and the
capacity details.


Example of Hybrid pool dependent volumes and volume extents

(Screen capture callout: right-click the volume and select View Mapped Hosts.)

Figure 7-30. Example of Hybrid pool dependent volumes and volume extents

You can view a pool's dependent volumes by right-clicking the MDisk and selecting Dependent
Volumes. To view the volume extents, right-click the volume, select the View Mapped Hosts option,
and click the Member MDisks tab.


Example of Easy Tier volume level using CLI


IBM_Storwize:V009B:V009B1-admin>lsvdisk VB1-WIN2
id 2
name VB1-WIN2
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name VB1-DS3KSATA
capacity 10.00GB
. . .
copy_id 0
. . .
easy_tier on
easy_tier_status measured
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB

(Callouts: volume copy settings are altered only in the CLI. The easy_tier field shows the on/off setting; easy_tier_status shows inactive, active, or measured. Active status performs ADP and collects statistics; measured status only evaluates and collects usage statistics.)

Figure 7-31. Example of Easy Tier volume level using CLI

At the volume level, the Easy Tier setting has a default value of on, which allows a volume to be
automatically managed by Easy Tier once its pool becomes Easy Tier active. These default settings
for both the pool and the volume enable automated storage tiering to be implemented without manual
intervention.
The CLI also displays the volume status information, including the volume's Easy Tier setting of
on/off. In the same manner as changing the setting on a pool, you have to use the CLI to change a
volume's Easy Tier setting to auto, on, off, or measured.
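For example, a minimal sketch of toggling the setting on the volume shown above (the volume name matches the earlier listing):

IBM_Storwize:V009B:V009B1-admin>chvdisk -easytier off VB1-WIN2    (disable Easy Tier management for this volume)
IBM_Storwize:V009B:V009B1-admin>chvdisk -easytier on VB1-WIN2     (re-enable it)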


Example of Easy Tier status volume indicator

(Screen capture callouts: one volume shows Easy Tier = Off with Easy Tier status = Measured; another shows Easy Tier = On with Easy Tier status = Active.)

Figure 7-32. Example of Easy Tier status volume indicator

To obtain a quick review of the Easy Tier status of a list of volumes, use the Volumes > Volumes
view and add the Easy Tier Status column to the display. If desired, the search box allows a more
focused list to be displayed.


Easy Tier volume interactions


• Easy Tier will work seamlessly with any type of volume: basic,
  mirrored, thin-provisioned, or compressed.
• Volume limitations
  ƒ Compressed volumes: Easy Tier can only optimize for read performance and
    not write performance.
  ƒ Volume mirroring: If the volume copies have different workload
    characteristics, the extents migrated by Easy Tier might differ for each
    copy.

Figure 7-33. Easy Tier volume interactions

Easy Tier works seamlessly with any type of volume. However, for compressed volumes Easy
Tier can only optimize for read performance; it cannot optimize for write performance, due to the
way the compression software stores data on the disk.
Volume mirroring can have different workload characteristics on each copy of the data because
reads are normally directed to the primary copy and writes occur to both copies. Therefore, the
number of extents that Easy Tier migrates between the tiers might differ for each copy.


Disk tier drive types


• MDisks can be created on:
  ƒ 15K RPM FC or SAS disks
  ƒ Nearline SAS or SATA
  ƒ SSDs or Flash storage systems
• Single-tier storage pools
  ƒ MDisks must have the same hardware characteristics (the same RAID type,
    array size, disk type, and RPM).
• Multi-tier storage pools
  ƒ Support a mix of MDisks with more than one type of disk tier attribute.
  ƒ Each tier follows the same hardware characteristics as a single-tier pool.
• Extent migration occurs only between adjacent tiers.
  ƒ Easy Tier moves extents from SSD/Flash to SAS first before NL, and vice
    versa.

(Diagram: single-tier pools Pool_IBMFlash, Pool_IBMSAS, and Pool_IBMNL built from R5 and R10 arrays, alongside a hybrid pool.)

Figure 7-34. Disk tier drive types

Internal or external MDisks (LUNs) are likely to have different performance attributes because of
the type of disk or RAID array on which they reside. MDisks can be created on 15K revolutions per
minute (RPM) Fibre Channel (FC) or serial-attached SCSI (SAS) disks, nearline SAS or Serial
Advanced Technology Attachment (SATA), or even SSDs or flash storage systems.
This provides examples of storage pools populated with different MDisk types:
• A single-tier storage pool should have the same hardware characteristics; for example, the same
RAID type, RAID array size, disk type, disk revolutions per minute (RPM), and controller
performance characteristics.
• A multi-tier storage pool supports a mix of MDisks with more than one type of disk tier attribute,
each following the same hardware characteristics as a single-tier pool: one belonging to a Flash
array, one to a SAS HDD array, and one to an NL-SAS HDD array.


Creating a Hybrid pool


• The configuration is as simple as adding more than one managed disk
  to a storage pool.
  ƒ Create a storage pool.
  ƒ Select Add Storage to add MDisk resources to the storage pool.

Figure 7-35. Creating a Hybrid pool

The configuration is as simple as adding more than one managed disk to a storage pool:
• If it is a Storwize array, the tier and capability of the array will be automatically detected.
• If it is a SAN attached managed disk, the user needs to manually configure the tier and the
capability (Easy Tier load) of the managed disk.
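For reference, a hedged CLI sketch of the same steps (the pool and MDisk names are examples):

IBM_Storwize:V009B:V009B1-admin>mkmdiskgrp -name HybridPool -ext 512        (create the pool with 512 MB extents)
IBM_Storwize:V009B:V009B1-admin>addmdisk -mdisk mdisk4:mdisk5 HybridPool    (add an HDD MDisk and a flash MDisk)
IBM_Storwize:V009B:V009B1-admin>chmdisk -tier generic_ssd mdisk5            (tag the SAN attached flash MDisk manually)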


Disabling Easy Tier using the GUI


• Change the pool and volume setting through the CLI only.
  ƒ chvdisk -easytier on|off <volume name or ID>
  ƒ chmdiskgrp -easytier <storage pool setting> <storage pool/MDisk group
    name or ID>
• Right-click the Flash MDisk and select RAID Actions > Delete.
  ƒ Select Actions > Delete from Pool to remove the MDisk.
  ƒ Confirm that you wish to delete the MDisks.
  ƒ In the Volumes by Pools panel, the Easy Tier status of the volume and
    pool becomes Inactive.

Figure 7-36. Disabling Easy Tier using the GUI

The Easy Tier function can be disabled at the storage pool and volume level. When a given volume
in a multi-tiered pool is disabled for Easy Tier automatic data placement, the volume is set to
Off. Easy Tier will then neither record any statistics nor perform cross-tier extent migration for it.
Remember, in this mode only storage pool balancing is active, migrating extents within the same
storage pool (see the CLI sketch that follows).
The Easy Tier setting for storage pools and volumes can be changed only via the command line. Use
the chvdisk command to turn Easy Tier off or on for selected volumes and chmdiskgrp to change
the Easy Tier status of selected storage pools.
You can use the management GUI to delete the Flash RAID array from the hybrid pool. This action
allows you to force the removal of the Flash MDisk array from the configuration even if it contains
data, and triggers a migration of the extents to the other MDisks in the same pool. This is
a non-destructive and non-disruptive action; the host systems are unaware that data is being
moved around under the covers.
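A minimal sketch of the pool-level change (the pool name is an example; the set of valid -easytier values for chmdiskgrp depends on the code level):

IBM_Storwize:V009B:V009B1-admin>chmdiskgrp -easytier off HybridPool    (stop measurement and ADP for the pool)
IBM_Storwize:V009B:V009B1-admin>lsmdiskgrp HybridPool                  (the easy_tier and easy_tier_status fields confirm the change)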


Easy Tier limitations


• Remove an MDisk using the -force parameter.
  ƒ Extents used by the deleted MDisk are migrated to other MDisks in the same
    tier. If there are insufficient extents in the tier, extents are used from the other
    tier.
• Migrating extents
  ƒ The svctask migrateexts CLI command cannot be used on volumes
    enabled for Easy Tier ADP.
• When migrating a volume to another storage pool, Easy Tier automatic
  data placement between the tiers is temporarily suspended.
  ƒ Once the volume is placed in the new storage pool, Easy Tier ADP resumes.
• Multi-tier storage pools
  ƒ Image mode and sequential volumes are not candidates for Easy Tier automatic
    data placement.
  ƒ Extents for these volumes must reside on one specific MDisk and cannot be
    moved.

Figure 7-37. Easy Tier limitations

When you use Easy Tier on the IBM Storwize V7000, keep in mind the following limitations:
• When an MDisk is deleted from a storage pool with the -force parameter, extents in use are
migrated to MDisks in the same tier as the MDisk that is being removed, if possible. If
insufficient extents exist in that tier, extents from the other tier are used (see the sketch after
this list).
• When Easy Tier automatic data placement is enabled for a volume, you cannot use the svctask
migrateexts CLI command on that volume.
• When IBM Storwize V7000 migrates a volume to a new storage pool, Easy Tier automatic data
placement between the two tiers is temporarily suspended. After the volume is migrated to its
new storage pool, Easy Tier automatic data placement between the generic SSD tier and the
generic HDD tier resumes for the moved volume, if appropriate.
▪ When the IBM Storwize V7000 migrates a volume from one storage pool to another, it
attempts to migrate each extent to an extent in the new storage pool from the same tier as
the original extent. In several cases, such as where a target tier is unavailable, the other tier
is used. For example, the generic SSD tier might be unavailable in the new storage pool.
• Multi-tier storage pools containing image mode and sequential volumes are not candidates for
Easy Tier automatic data placement because all extents for those types of volumes must reside
on one specific MDisk and cannot be moved.
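A hedged sketch of the forced removal (names are examples; the used extents are migrated automatically, as described above):

IBM_Storwize:V009B:V009B1-admin>rmmdisk -mdisk mdisk5 -force HybridPool    (remove the MDisk; its extents migrate to the remaining MDisks)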


IBM Storage Tier Advisor Tool


• The Storage Tier Advisor Tool (STAT) helps determine the benefits of
  adding SSDs to an existing system's particular workload.
  ƒ Supports the 3-tiered storage Easy Tier functionality.

Figure 7-38. IBM Storage Tier Advisor Tool

The IBM Storage Tier Advisor Tool (STAT) is a Microsoft Windows application that is used in
conjunction with the Easy Tier function to interpret historical usage information from Storwize
V7000 systems. The STAT utility analyzes heat data files to provide information on how much
value can be derived by placing "hot" data with high I/O density and low response time
requirements on Flash/SSDs, while targeting HDDs for "cooler" data that is accessed more
sequentially and at lower I/O rates.
With the release of the V7.3 software code supporting the 3-tiered storage Easy Tier functionality,
STAT can be used to determine the data usage for each of the storage tiers.


Download STAT.exe file


• The STAT tool can be downloaded from the IBM Support website.
  ƒ The default directory is C:\Program Files\IBM\STAT.

Figure 7-39. Download STAT.exe file

The STAT tool can be downloaded from the IBM support website. You can also do a web search on
'IBM Easy Tier STAT tool' for a more direct link. Download the STAT tool and install it on a Windows
workstation. The default directory is C:\Program Files\IBM\STAT.
The IBM Storage Tier Advisor Tool can be downloaded at:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S4000935
You will need an IBM ID to proceed with the download. A suggestion: use a web search rather than
relying on the URL name.


STAT dpa heat data files for analysis


• The heat file contains a wealth of information about the extents being
  monitored by Easy Tier.
• The STAT heat data file is downloaded from the GUI Settings > Support
  /dumps directory.
  ƒ Once a day Easy Tier generates a dpa_heat.node_name.time_stamp.data file.
    í Heat data files are erased after 7 days.
• The dpa heat file is available on the configuration node only.
• To analyze your Easy Tier dpa heat file, either open a command prompt and
  type STAT.exe <dpa heat filename>, or simply drag and drop the dpa heat
  file onto STAT.exe.
• STAT creates a set of HTML files (index.html) to view the results from a
  supported browser.
• The heat data file can also be copied off via the CLI with PuTTY scp
  (PSCP) by specifying dpa_heat.node_name.time_stamp.data.

(Screen capture callout: example heat data files; right-click a file to download.)

Figure 7-40. STAT dpa heat data files for analysis

To evaluate the data, the heat map file needs to be downloaded from the Storwize V7000 system
using the management GUI with the Download option. On the Storwize V7000 the heat data file is
located in the /dumps directory on the configuration node and is named
dpa_heat.node_name.time_stamp.data.
Heat data files are produced approximately once a day when Easy Tier is active on one or more
storage pools, and they record the activity per volume since the prior heat data file was produced. This
heat information is added to a running tally that reflects the heat activity to date for the measured
pools. The file must be off-loaded by the user and the Storage Tier Advisor Tool invoked from a
Windows command prompt console with the file specified as a parameter. The user can also
specify the output directory. An existing heat data file is erased after it has existed for longer
than 7 days.
The file can also be copied off from the CLI using the PuTTY scp (PSCP) utility with the heat file
name specified. Ensure the heat file is in the same directory as the STAT program when invoking
STAT from the command line. The Storage Tier Advisor Tool creates a result index.html file to view
the results through a supported browser. Firefox 27, Firefox ESR_24, Chrome 33, and IE 10 are
supported. The file is stored in a folder called Data_files in either the current directory or the
directory where STAT is installed. The output index.html file can then be opened with a web
browser.
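A hedged sketch of the off-load and analysis steps from a Windows command prompt; the cluster address, node name, and time stamp are example placeholders:

C:\>pscp -unsafe superuser@<cluster_ip>:/dumps/dpa_heat.*.data .                 (copy the heat files from the configuration node)
C:\>"C:\Program Files\IBM\STAT\STAT.exe" dpa_heat.<node_name>.<time_stamp>.data  (writes index.html under Data_files)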


Setting Iometer parameters

(Screen capture callouts: yellow icons represent logical (mounted) drives; a red slash means the drive needs to be prepared before the test starts. The update frequency can be set between 1 and 60 seconds. Iometer creates an iobw.tst file on each volume for the generated I/Os. Save the default results.csv file or a desired name.csv.)

Figure 7-41. Setting Iometer parameters

In order to receive the desired results, Iometer allows you to specify parameters to get started.
• In the Topology panel on the left side of the Iometer window, under All Managers, select
your system name.
• If there are available mounted drives, they appear under the Disk Targets tab view. Blue
icons represent physical drives; they are only shown if they have no partitions on them. Yellow
icons represent logical (mounted) drives, which are only shown if they are writable. A yellow
icon with a red slash through it means that the drive needs to be prepared before the test starts.
• Then select the disk(s) to use in the test (use Shift and Ctrl to select multiple disks). The selected
disks will be automatically distributed among the manager's workers (threads).
• From the Access Specifications tab, select the disk name. This tab specifies how the disk will be
accessed. If the disk is not displayed, use the Edit button and Default in the Global Access
Specifications window to set the workload parameters. The default is 2-kilobyte random I/Os
with a mix of 67% reads and 33% writes, which represents a typical database workload. You
can leave it alone or change it.
• The Results Display tab allows you to set the update frequency between 1 second and 60
seconds. For example, if you set the frequency to 10 seconds, the first test results appear in the
Results Display tab, and they are updated every 10 seconds after that.

Once you have the specifications set, press the Start Tests button (green flag). A standard Save
File dialog appears. Select a file to store the test results (default results.csv). Iometer must run for
24 hours to get accurate results. Press the Stop Test button (stop sign), and the final results are
saved in the results.csv file.


Easy Tier STAT CSV files


The STAT tool also creates three CSV files in the Data_files folder.
• <node>_data_movement.csv
  ƒ Contains details about the data movements performed by Easy Tier in the
    last day, including the volume copy, the source managed disk, the target
    managed disk, and the type of migration (promote, demote, rebalance, …).
• <node>_workload_ctg.csv
  ƒ Contains details about the workload characteristics of each volume copy in
    each tier.
  ƒ For example, the number of inactive extents for volume 75 copy 0 in tier 0.
• <node>_skew_curve.csv
  ƒ Contains the data necessary to draw the skew graphs: basically the
    performance (IO/s, MB/s, and response time for different types of workloads)
    of each group of x extents (100 extents in this example).

Import the CSV files using the IBM Storage Tier Advisory Tool Charting Utility from IBM techdocs:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5251

Figure 7-42. Easy Tier STAT CSV files

The STAT tool also creates three CSV files in the Data_files folder, containing a very large amount of
information about what is going on. The best way to start investigating this data is to use the IBM
Storage Tier Advisory Tool Charting Utility from IBM techdocs:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5251
This tool imports all three of these CSV files into Excel and automatically draws the three most
interesting charts. It also contains a tab called Reference, which explains all of the
terms used in the graphs, as well as providing a useful reminder about the different types of data
migration in Easy Tier.


STAT System Summary

(Screen capture callouts: the total capacity of extents identified as hot from collected I/O on the monitored volumes; all storage volumes with an Easy Tier status of active or measured and the total capacity of these volumes; the extent pool ID that is generated when the extent pool is created. The green portion of each bar represents data that is managed by Easy Tier; the black portion represents unallocated data that is not managed by Easy Tier.)

Figure 7-43. STAT System Summary

The output HTML file from the STAT tool is stored in the same folder as stat.exe. Open it with a
web browser to review the generated reports.
The System Summary report includes a brief inventory of the volumes and capacity measured, the
amount of detected hot data to be migrated, and an estimated amount of time for the migration to
complete. It also provides a recommendation on the amount of Flash/SSD capacity to add, or to
take advantage of existing SSDs currently not in use, for possible performance improvement.
The Easy Tier V3 STAT tool analyzes the supported tiers (ENT, SSD (Flash) + ENT, ENT) storage
using the Easy Tier functionality. Each green portion of the Data Management status bar displays
both the capacity and the I/O percentage of the extent pool (the black portion of the bar only
displays the capacity of the unallocated data), following the "Capacity/IO Percentage" format.


STAT example: Storage pool performance and recommendation

(Screen capture legend: blue = COLD, orange = WARM; RED = HOT is not shown.)

Figure 7-44. STAT example: Storage pool performance and recommendation

You can select any storage pool ID to view the performance statistics and improvement
recommendation for that specific pool. The measurement data by pools is where Easy Tier is either
activated automatically (two tiers of storage technology detected) or manually (a single tier pool
where Easy Tier was turned on to run in evaluation mode).
This report represents the distribution of each tier that constructs the pool, displaying the
MDisks, the IOPS threshold, the utilization of MDisk IOPS, and the projected utilization of MDisk
IOPS for each MDisk of each tier. There is also a threshold set for the maximum allowed IOPS. The
utilization of MDisk IOPS is the current MDisk IOPS, calculated as a moving average, as a
percentage of the maximum allowed IOPS threshold for the device type (such as SATA and SSD).
The projected utilization of MDisk IOPS is the expected MDisk IOPS, calculated as a moving
average after rebalancing operations have been completed, as a percentage of the maximum
allowed IOPS threshold for the device type. Observe the utilization of MDisk IOPS and the
projected utilization of MDisk IOPS color bars, which denote the percentage
of MDisk IOPS utilization in comparison to the average utilization of MDisk IOPS.
The color codes provide the following representation:
• The blue portion of the bar represents the capacity of cold data in the volume. Data is
considered cold when it is either not used heavily or the I/O per second on that data has been
very low.

• The orange portion of the bar represents the capacity of warm data in the volume. Data is
considered warm when it is used relatively more heavily than cold data, or the IOPS on that
data is relatively higher than on the cold data.
• The red portion of the bar represents the capacity of hot data in the volume. Data is considered
hot when it is used most heavily or the IOPS on that data has been the highest. (Not shown.)


STAT example: Drive configuration recommendations

(Screen capture callouts: recommendations to add SSD to the pool that contains Enterprise and NL MDisks, and to add NL to all pools to migrate the less active data.)

• Provides recommendations on adding additional tier capacity and the
  performance impact
  ƒ Tier 0: Flash
  ƒ Tier 1: Enterprise disk (15 K and 10 K)
  ƒ Tier 2: Near-line disk (7.2 K)

Figure 7-45. STAT example: Drive configuration recommendations

With Easy Tier supporting a 3-tier combination in this release, there are five kinds of
recommendations: "Recommended SSD Configuration", "Recommended ENT Configuration",
"Recommended SATA Configuration", "Recommended NL Configuration", and "Recommended
SSD + ENT Configuration". For each kind of recommendation, the result is listed in a table format,
which contains the recommendation title, a selection list, the table head, and the table content.
The recommended drive configuration is for each tier combination based on the storage pool.
STAT can be used to determine what application data can benefit the most from relocation to SSDs,
Enterprise (SAS/FC) drives, or Nearline SAS drives. STAT uses limited storage performance
measurement data from a user's operational environment to model potentially unbalanced workload
(skew) on disk and array resources. It is intended to supplement and support, but not replace,
detailed pre-installation sizing and planning analysis. It is most useful to obtain a "rule of thumb"
system-wide performance projection of the cumulative latency reduction on arrays and disks when a
solid-state drive configuration and the IBM Easy Tier function are used in combination with
handling workload growth or skew management.


STAT workload distribution across tiers


(Graph: example of skew in a typical client environment, plotting percent of workload against percent of extents for the percent of small I/Os and the percent of MB. About 5% of the extents generate 58% of the random IOPS and 33% of the MB, while 50% of the extents do only 10% of the MB and virtually no random IOPS.)

Figure 7-46. STAT workload distribution across tiers

This illustration shows the skews of the distributed and projective workloads across the system in a
graph, providing a visual view of system performance based on the percentage of allocated capacity.
• Workload distribution: The X-axis denotes the top x intensive data based on data sorted by
small I/O. The Y-axis denotes the cumulative small I/O percentage distributed on the top x
intensive data.
• Projective workloads: The top tier workload displays the projected skew of the secondary
storage device. The X-axis denotes the top x intensive data based on data sorted by small
write I/O. The Y-axis denotes the cumulative small I/O percentage distributed on the top x
intensive data.
This output can be used to compare the workload distribution curves across tiers, within and across
pools, to help determine the optimal drive mix for current workloads.


STAT example: Volume Heat Distribution report (1 of 2)


• Recommendation: add SSD and NL MDisks to the pool to migrate hot data
  to Tier 0 and the less active data between Tier 1 and Tier 2.

Figure 7-47. STAT example: Volume Heat Distribution report (1 of 2)

The volume heat distribution report for a storage pool shows the VDisk ID, configured size, I/O
percentage of the extent pool, tier, capacity on tier, and heat distribution for each VDisk in that
storage pool. The heat distribution for each VDisk is displayed using a color bar which represents
the type of data on that volume.
This example provides a more vibrant display in the color bar for storage pool P0, which contains
Enterprise MDisks only. Here you can see that VDisk 10's heat distribution shows relatively more
heavily used data than VDisk 0, as well as a high I/O density. You can also see that VDisk 7, although
smaller in size, shows that the heat distribution for the data in use is balanced between warm and
hot. Therefore, based on this information, storage pool P0 can benefit from adding SSDs,
as well as the addition of Nearline MDisks to migrate the less active data between tier 1 and tier 2.


STAT example: Volume Heat Distribution report (2 of 2)


• Recommendation: add NL MDisks to the storage pool to migrate the less
  active data between Tier 1 and Tier 2.

Figure 7-48. STAT example: Volume Heat Distribution report (2 of 2)

In this example, the Volume Heat Distribution report for storage pool P1, a two-tier hybrid (Tier 0
and Tier 1) storage pool, provides a distribution of hot, warm, and cold data (in terms of capacity) for
each volume monitored. Again, the recommendation for this pool would be to add NL MDisks to
migrate the less active data between Tier 1 and Tier 2 before migrating to Tier 0.
Overall, Easy Tier's maximum value is derived by placing hot data with high I/O density and low
response time requirements on SSDs, while targeting HDDs for cooler data that is accessed more
sequentially and at lower rates.


Best practices: Easy Tier free extents in pool


• Keep some free extents in the pool, and Easy Tier will attempt to keep
  some free per tier.
• Plan for one (or more) extent times the number of MDisks in the
  storage pool, plus 16. Easy Tier will try to keep some extents free in
  Tiers 0 and 1 if possible.

(Diagram: Flash = Tier 0, ENT = Tier 1, NL = Tier 2.)

Example: 20 MDisks in an Easy Tier storage pool with either two or
three MDisk tiers: (20 x 1) + 16 = 36 extents free in the pool.

Figure 7-49. Best practices: Easy Tier free extents in pool

It is recommended to keep some free extents within a pool in order for Easy Tier to function. This
allows Easy Tier to move extents between tiers, as well as move extents within the same tier
to load-balance the MDisks within that tier, without delays or performance impact.
Easy Tier will work using only one free extent; however, it will not work as efficiently.
The free extents needed can be estimated as one extent times the number of MDisks in a storage pool,
plus 16.
And remember, the Easy Tier heat map is updated every 24 hours for moves between tiers.
Performance rebalance within a single tier (even in a hybrid pool) is evaluated and updated much
more often; the system rebalances on an hourly basis.
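To sanity-check the headroom, a hedged sketch (the pool name is an example): lsmdiskgrp reports the pool's extent_size and free_capacity, so with a 512 MB extent size the 36 free extents from the example above correspond to roughly 36 x 512 MB = 18 GB of free capacity.

IBM_Storwize:V009B:V009B1-admin>lsmdiskgrp HybridPool    (check the extent_size and free_capacity fields)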


Best practices: Easy Tier implementation options


• Use case option 1
  ƒ Set Easy Tier to on for storage pools to generate usage statistics.
  ƒ Analyze the STAT output to identify pools that might benefit from automated
    data placement.
  ƒ Add SSD-based MDisks to the identified pools to allow Easy Tier to optimize
    SSD usage.
• Use case option 2
  ƒ Set Easy Tier to on for all pools to generate usage statistics.
  ƒ Analyze the STAT output to identify the pool and subset of volumes for Easy
    Tier management.
    í Update the Easy Tier volume settings accordingly.
  ƒ Add Flash/SSD-based MDisks to the pool to allow Easy Tier to optimize
    Flash/SSD usage with only the subset of eligible volumes.

Figure 7-50. Best practices: Easy Tier implementation options

The key item to note is that Easy Tier can be used for workload measurement. It can be used to
assess the impact of adding Flash/SSDs to the workload before the actual investment in
Flash/SSD-based MDisks.
Easy Tier is a licensed feature, except for storage pool balancing, which is a no-charge feature that
is enabled by default. Easy Tier comes as part of the Storwize V7000 code. For Easy Tier to
migrate extents, you must have disk storage available that has different tiers: a mix of Flash/SSD
and HDD.


Best practices: Easy Tier data relocation decision criteria


• The objective is to place data where the workload is least likely to
  overload storage system resources.
• Consider only small I/O (64 K or less) when prioritizing extents to be
  moved to Flash/SSD-based MDisks.
• Factor in the cost of data movement.
  ƒ After a steady state is achieved, only move data for a noticeable benefit.
  ƒ The objective is to use resources for the workload, not data movement.
  ƒ Studies indicate workloads tend to be stable from day to day.
  ƒ Therefore, extents in stable use will evolve to a higher tier of storage.
• Migrates hot data to Flash/SSD-based MDisks at a capped migration
  rate of up to 30 MBps and up to 3 TB per day.
• Consumes only HDD extents at initial volume allocation; uses
  Flash/SSD extents if and only if no HDD extents are available.

Figure 7-51. Best practices: Easy Tier data relocation decision criteria

The hot and cold temperature of an extent depends on measurements of random, small
data transfer I/O operations.
Large sequential transfers are not considered, as these tend to perform equally well with
HDD-based MDisks. Thus, Easy Tier only considers extents with I/Os of up to 64 K as migration
candidates.


Storwize V7000 enhanced features topics


• Spectrum Virtualize I/O Stack
• Easy Tier 3rd Generation
• Thin provisioning
• Real-time Compression
• Comprestimator Utility

Figure 7-52. Storwize V7000 enhanced features topics

This topic discusses the concept and attributes of thin-provisioned volumes.


Thin provisioning concept

; Thin-provisioned volumes:
• Extent space-efficient volumes
• Enhance utilization efficiency of physical storage capacity
• Enable just-in-time capacity deployment
• Align application growth with its storage capacity growth
• Facilitate more frequent copies/backups while minimizing capacity
  consumption

; Storwize V7000 thin-provisioned volumes are:
• Available to all storage pools
• Available to ALL attaching storage systems
• Available as a standard feature

Without thin provisioning, pre-allocated space is reserved whether the
application uses it or not. With thin provisioning, applications can grow
dynamically, but only consume the space they are actually using.

Figure 7-53. Thin provisioning concept

The thin provisioning function extends storage utilization efficiency to all Storwize V7000
supported storage systems by allocating disk storage space in a flexible manner among multiple
users, based on the minimum space required by each user at any given time.
With thin provisioning, storage administrators can also benefit from reduced consumption of
electrical energy, because less hardware space is required, and can enable more frequent recovery
points of data to be taken (point-in-time copies) without a commensurate increase in storage
capacity consumption.


Fully allocated versus thin-provisioned volumes


(Diagram: a fully allocated 5 GB volume has its entire LBA range, extents 0 through 9, allocated at creation. A space-efficient volume with 5 GB of real capacity allocates extents in grain-size increments based on write activity; the example extent size is 512 MB. A warning threshold is set for the storage pool's free space.)

Figure 7-54. Fully allocated versus thin-provisioned volumes

Volumes can be created either as fully allocated or thin-provisioned. A thin-provisioned volume
has two capacities: virtual and real.
Thin provisioning creates an additional layer of virtualization. This layer gives the
appearance of storage as traditionally provisioned, having more physical resources than are
actually available.


Thin provision total transparency


• Virtual capacity is presented to the host and to Copy Services
  functions.
  ƒ They have no awareness to discern between fully allocated and
    thin-provisioned volumes.

(Diagram: the host sees the requested volume size of 50 GB while the physical storage used is 25 GB; the rsize parameter sets the real capacity. Thin-provisioned volumes benefit from lower cache functions, such as coalescing writes or prefetching.)

Figure 7-55. Thin provision total transparency

For thin-provisioned volumes, only the real capacity is acquired at creation. The real capacity
defines the amount of capacity actually allocated from the disk systems represented by storage
pools. The virtual capacity is what is presented to attaching hosts, as well as to Copy Services such
as FlashCopy and Metro and Global Mirror. There is no awareness to discern between fully
allocated and thin-provisioned volumes. Therefore, the Storwize V7000 implementation of
thin provisioning is totally transparent, which includes back-end storage systems.
As a standard feature of the Storwize V7000, thin-provisioned volumes can also reside in any
storage pool representing any attaching storage system.
Thin-provisioned volumes can benefit from lower cache functions (such as coalescing writes or
prefetching), which greatly improve performance.


Managing thin-provisioned volumes


• Real capacity is controlled by the rsize (real size) parameter.
  ƒ Two operating modes:
    í Autoexpand: automatically increased (GUI default)
      • Maintains a fixed amount of contingency (unused) real capacity
      • Does not cause real capacity to grow much beyond the virtual capacity
    í Non-autoexpand: manually increased (CLI default)
      • Can be increased to more than the maximum that is required by the
        current virtual capacity
      • Contingency capacity is recalculated

Figure 7-56. Managing thin-provisioned volumes

A thin-provisioned volume's real capacity is controlled by the rsize (real size) parameter. There might
be times when thin provisioning reaches a situation where additional storage capacity is required.
There are two operating modes in which to expand a thin-provisioned volume's real capacity:
autoexpand and non-autoexpand. You can switch the mode at any time. If a thin volume's real
capacity is managed using the autoexpand mode, the Storwize V7000 automatically adds a fixed
amount of additional real capacity to the thin volume as required. The autoexpand feature attempts
to maintain a fixed amount of contingency (unused) real capacity for the volume. The contingency
capacity is initially set to the real capacity that is assigned when the thin volume is created.
Autoexpand mode does not cause real capacity to grow much beyond the virtual capacity.
The real capacity can also be manually expanded to more than the maximum that is required by the
current virtual capacity, in which case the contingency capacity is recalculated to be the difference
between the used capacity and the real capacity. This process is achieved using the CLI in
non-autoexpand mode.
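For example, a hedged sketch of creating a 50 GB thin-provisioned volume with autoexpand from the CLI (the pool and volume names are examples):

IBM_Storwize:V009B:V009B1-admin>mkvdisk -mdiskgrp HybridPool -iogrp 0 -size 50 -unit gb -rsize 2% -autoexpand -grainsize 256 -name thinvol1    (2% of the 50 GB is allocated as real capacity up front)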


Thin provision concept: Auto expand off


(Diagram: a volume with 50 GB of virtual capacity, rsize=2GB of real capacity, autoexpand=OFF, and grain=256K. A directory (B-tree) maps LBA ranges to grains within the allocated extents. Space is allocated in grain-size increments; the default grain size is 256K, and the grain size can be 32/64/128/256K. The volume goes offline if write activity exceeds the real capacity (error code 1865 reported). Used capacity is the sum of all grains allocated; free capacity is the real capacity minus the used capacity.)

Figure 7-57. Thin provision concept: Auto expand off

Space within the allocated real capacity is allocated in grain-sized increments corresponding to a
logical block address (LBA) range, driven by write activity. An LBA addresses a single block of
information using a LUN identifier and an offset within that LUN. The default grain size is 256 K,
which represents 512 blocks of 512 bytes each. A write operation to an LBA range not previously
allocated causes a new grain-sized space to be allocated.
A directory of metadata is used to map or track allocated space to a corresponding LBA range
based on write activities. When write activity exceeds the real capacity, the volume goes offline (if
the automatically expand setting is off) and application I/Os fail. Once the real capacity is
expanded, the volume comes back online automatically.
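If a non-autoexpand volume runs short of real capacity, it can be expanded manually from the CLI. A minimal sketch, with an example volume name:

IBM_Storwize:V009B:V009B1-admin>expandvdisksize -rsize 2 -unit gb thinvol1    (add 2 GB of real capacity; the volume comes back online automatically)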


Thin provision concept: Auto expand on


(Diagram: the same 50 GB volume with autoexpand on; the rsize=2GB value is maintained as contingency capacity, and the grain size in this example is 64K. The directory (B-tree) is used by read/write requests to translate LBA numbers to grains within extents. Used capacity is the sum of all grains allocated; free capacity is the real capacity minus the used capacity.)

Figure 7-58. Thin provision concept: Auto expand on

With the autoexpand attribute set on, the rsize specified value serves as a contingency capacity
that is maintained as write activities occur. This contingency or reserved capacity protects the
volume from going offline in the event the storage pool runs out of extents. The contingency
capacity diminishes to zero when the real capacity reaches the total capacity of the volume.
The combination of autoexpand and the existence of the metadata directory might cause the real
capacity of the volume to become greater than the total capacity seen by host servers and other
Storwize V7000 services. A thin-provisioned volume can be converted to fully allocated using the
Volume Mirroring function, as sketched below.
The lsmdisklba command returns the logical block address (LBA) of the MDisk that is associated
with the volume LBA. For a mirrored volume, the command lists the MDisk LBA for both the primary
and the copy.
If applicable, the command also lists the range of LBAs on both the volume and the MDisk that are
mapped in the same extent or, for thin-provisioned disks, in the same grain. If a thin-provisioned
volume is offline and the specified LBA is not allocated, the command displays the volume LBA
range only.
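A hedged sketch of the thin-to-fully-allocated conversion with Volume Mirroring (the pool and volume names are examples): add a fully allocated copy, wait for synchronization, then remove the thin copy.

IBM_Storwize:V009B:V009B1-admin>addvdiskcopy -mdiskgrp HybridPool thinvol1    (add a fully allocated copy, copy 1)
IBM_Storwize:V009B:V009B1-admin>lsvdisksyncprogress thinvol1                  (wait until progress reaches 100)
IBM_Storwize:V009B:V009B1-admin>rmvdiskcopy -copy 0 thinvol1                  (remove the original thin copy)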


Thin-provision: Metadata management


• A small amount of the real capacity is used for initial metadata.
  ƒ The metadata directory is stored with the volume data.
• The directory is held in Storwize V7000 cache based on volume
  read/write activity.
  ƒ Full stride writes for thin volumes no matter what grain size.
    í Smaller granularities can save more space, but they have larger directories.

(Diagram: the directory (B-tree) metadata occupies less than 1% of the volume capacity within the volume's extents.)

Figure 7-59. Thin-provision: Metadata management

The directory of metadata and user data shares the real capacity allotment of the volume. When a thin volume is initially created, the volume has no real data capacity stored. However, a small amount of the real capacity is used for metadata, which the system uses to manage space allocation. The metadata holds information about the extents and volume blocks that are already allocated. A write I/O to an LBA that has not previously been written to causes a grain-sized amount of space to be marked as used within the volume's allocated real capacity, and its metadata directory is updated. This metadata that is used for thin provisioning allows the Storwize V7000 to determine whether new extents have to be allocated or not.
Here are a few examples (a short worked illustration of the grain arithmetic follows):
• If the volume uses the default grain size of 256 KB, then 256 KB within the allocated real capacity is marked as used for the 512 blocks of 512 bytes each that span the LBA range of this write I/O request.
• If a subsequent write I/O request is to an LBA within the previously allocated 256 KB, the I/O proceeds as usual because its requested location is within the prior allocated 256 KB.
• If a subsequent write I/O request is to an LBA outside the range of a previously allocated 256 KB, then another 256 KB within the allocated real capacity is used.
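As a worked illustration of the grain arithmetic (using the default 256 KB grain, which spans 512 blocks of 512 bytes): volume LBA n falls in grain number floor(n / 512). A first write to LBA 200 therefore allocates grain 0, covering LBAs 0 through 511; a later write to LBA 600 falls outside that range and allocates grain 1, covering LBAs 512 through 1023.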

All three of these write examples consult and might update the metadata directory. Read requests
also need to consult the same directory. Consequently, the volume’s directory is highly likely to be
Storwize V7000 cache-resident while I/Os are active on the volume.


Limitations of virtual capacity

• Maximum thin-provisioned volume virtual capacities for an extent size:

  Extent size (MB)   Maximum volume real capacity (GB)   Maximum thin virtual capacity (GB)
  16                 2048                                2000
  32                 4096                                4000
  64                 8192                                8000
  128                16,384                              16,000
  256                32,768                              32,000
  512                65,536                              65,000
  1024               131,072                             130,000
  2048               262,144                             260,000
  4096               262,144                             262,144
  8192               262,144                             262,144

• Maximum thin-provisioned volume capacities for a grain size:

  Grain size (KB)    Maximum volume real capacity (GB)   Maximum thin virtual capacity (GB)
  32                 4096                                4000
  64                 8192                                8000
  128                16,384                              16,000
  256                32,768                              32,000

Figure 7-60. Limitations of virtual capacity

A few factors (extent size and grain size) limit the virtual capacity of thin-provisioned volumes beyond the factors that limit the capacity of regular volumes.
The first table shows the maximum thin-provisioned volume virtual capacities for an extent size. The second table shows the maximum thin-provisioned volume virtual capacities for a grain size.


Best practices: Monitoring thin-provisioned capacity

• Enable the warning threshold (by using email or an SNMP trap) when you are working with thin-provisioned volumes.

[Diagram: if no threshold is set, the volume goes offline when write activity exceeds the real capacity; with a threshold (80% of used capacity in this example), alerts are sent to the administrator.]

Figure 7-61. Best practices: Monitoring thin-provisioned capacity

To avoid exhausting the real capacity, you can enable the warning threshold on thin-provisioned volumes to send alerts to an administrator by using email or an SNMP trap. The administrator can then (if warranted) increase the real capacity or change the volume attribute to autoexpand so that the real capacity is increased automatically. You can enable the threshold on the volume and on the storage pool side, especially when you do not use the autoexpand mode. Otherwise, the thin volume goes offline if it runs out of space.
The warning threshold logs a message to the event log when the used capacity exceeds the threshold (the default is 80%). When the virtual capacity of a thin-provisioned volume is changed, the warning threshold is automatically scaled to match. The new threshold is stored as a percentage.
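As a sketch, the threshold can also be changed from the CLI with chvdisk (the volume name here is illustrative); the -warning parameter accepts either an absolute size or a percentage of the virtual capacity:

  chvdisk -warning 85% VW_THIN1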
For more information and detailed performance considerations for configuring thin provisioning, see
IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and Performance
Guidelines.


Create a thin provision volume

• Select the Custom Preset
  ▪ Select the location where volume(s) will be sourced by extents that are contained in only one storage pool
  ▪ Specify the parameters with which the volume must be created
    - Volume capacity unit of measurement (bytes, KB, MB, GB, or TB)
  ▪ Specify Thin-provision as the capacity savings

[Screen capture: modified Create Volumes view with Thin-provision selected.]

Figure 7-62. Create a thin provision volume

Thin-provisioned volumes are created by using the same procedural steps as any other volume created within the management GUI Create Volumes wizard.


Thin Provisioning tab

• Real capacity
  ▪ Specify the real capacity to be allocated
    - Real capacity indicated in % or a specific number of GB
  ▪ Virtual capacity is reported to the Storwize V7000 components (such as FlashCopy or Remote Copy) and to the host
  ▪ Option to automatically expand real capacity
  ▪ Option to set the warning threshold at a specified %

• Thin Provision Grain Size can be set to 32 KB, 64 KB, 128 KB, or 256 KB
  ▪ Larger grain sizes produce better performance

[Screen capture: Thin-Provisioned defaults and options.]

Figure 7-63. Thin Provisioning tab

The Thin Provisioning tab allows you to modify the default parameters, such as the real capacity, the warning threshold, or whether autoexpand is enabled or disabled.
Real capacity defines how much disk space is allocated to a volume. Virtual capacity is the capacity of the volume that is reported to other IBM Storwize V7000 components (such as FlashCopy or remote copy) and to the hosts. For example, with the default real capacity of 2% of virtual size, a 100 GB volume initially gets only 2 GB of actual capacity allocated.
You can enable the thin volume's real capacity to automatically expand without user intervention, set a warning threshold to notify an administrator when the volume has reached the specified threshold percentage, and change the grain size. These attributes can be overridden; however, the default settings are general best practice.


Thin Provision summary

• Summary calculates the real and virtual capacity
• Thin-provisioned volume is created using the mkvdisk command
• Thin-provisioned volumes can reside in the same pool as fully allocated volumes
  ▪ Recognized by the distinguishing hourglass icon
• Host sees the volume as a fully allocated volume

Figure 7-64. Thin Provision summary

The Summary statement calculates the real and virtual capacity values of the volume. The virtual capacity is the size presented to hosts and to Copy Services such as FlashCopy and Metro/Global Mirror.
The management GUI generates the mkvdisk command. The thin-provisioned volume is created with the -autoexpand, -grainsize 256, -rsize 2%, and -warning 80% parameters; -rsize 2% indicates that the real capacity is 2% of the virtual volume size.
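A representative form of the generated command, with illustrative pool, I/O group, and volume names, is:

  mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -grainsize 256 -warning 80% -name VW_THIN1

This creates a 100 GB thin-provisioned volume whose real capacity starts at 2 GB, expands automatically, and raises a warning event at 80% of the virtual capacity.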


Thin-provisioned volume created

• Thin volumes can be easily distinguished from other volumes by their hourglass icon
• Support all of the operations that standard volumes do
  ▪ Exceptions
    - Cannot change the segment size
    - Cannot enable the pre-read redundancy check
    - Cannot be used as the target volume in a Volume Copy
    - Cannot be used in a snapshot (legacy) operation
    - Cannot be used in a Synchronous Mirroring operation

[Diagram: 10 GB thin volume VW_THIN2 in a storage pool, marked with its distinguishing hourglass icon.]

Figure 7-65. Thin-provisioned volume created

Thin volumes can be easily distinguished from other volumes by their hourglass icon. From the host perspective, the thin-provisioned volume appears as a fully allocated volume. Thin volumes also support all of the operations that standard volumes do, with the following exceptions:
• You cannot change the segment size of a thin volume
• You cannot enable the pre-read redundancy check for a thin volume
• You cannot use a thin volume as the target volume in a Volume Copy
• You cannot use a thin volume in a snapshot (legacy) operation
• You cannot use a thin volume in a Synchronous Mirroring operation


Thin-provisioned volume details

Right-click on the volume and select Properties, then select Volume Copy Properties.

[Screen capture: volume warning threshold line; 2 GB of real capacity with 768 KB used for the metadata directory; extents allocated; the Member Disks tab shows each MDisk's distributed and consumed extents.]

Figure 7-66. Thin-provisioned volume details

To view a volume's details, right-click the volume and select Properties. However, to view a volume copy's capacity details, such as the thin-provisioning attributes, select Volume Copy Properties. From this view, the volume is identified as thin-provisioned. Only a tiny amount, 2 GB of real capacity, is allocated, within which 768 KB is used for the volume metadata directory.
The Member Disks tab displays the extent distribution of this volume as well as the actual number of extents consumed on each MDisk. Observe that the five extents of 512 MB work out to be more than 2 GB.
Recall that the autoexpand attribute is defined for this volume. The real capacity (rsize) value of 2 GB (or four 512 MB extents) serves as a contingency buffer maintained for this volume. Because 768 KB of real capacity has been used, the volume was automatically expanded by one extent to maintain the 2 GB buffer.


Thin-provisioned threshold warning

[Screen capture: threshold warning event logged as capacity increases.]

• Resolution options:
  ▪ Expand the volume size
  ▪ Convert to fully allocated
• Host deletes do not release capacity or the previously allocated extents

Figure 7-67. Thin-provisioned threshold warning

When a warning threshold is exceeded (for a volume or a pool), Spectrum Virtualize generates an event notification on the Storwize V7000 primary node. For each event, the notifications include an SNMP trap, an email notification, and an entry in the Events log. To see the Events log threshold warning entry, go to Monitoring > Events to view the messages and alerts view of the log. Right-click the entry and select Properties for a more readable and detailed description of the event. In this example, a thin-provisioned volume copy space warning has occurred.
Data or file deletion is managed at the host or OS level. In Windows, a file deletion is just an update of the allocation tables associated with the Windows drive to release allocated space. It is an activity not known to external storage systems, including the Storwize V7000. Consequently, the real capacity utilization of the volume is neither changed nor released, even though the host system indicates more free space for the drive.
If the application has estimated the volume size correctly, a fully provisioned volume might be more appropriate. Otherwise, the application owner might want to reassess the volume's virtual size. A thin-provisioned volume can be converted to a fully allocated volume by using the management GUI Volume Mirroring function.
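A minimal CLI sketch of that conversion, assuming illustrative pool and volume names: add a fully allocated copy, wait for synchronization to complete, then remove the thin copy (copy 0):

  addvdiskcopy -mdiskgrp Pool0 VW_THIN1
  lsvdisksyncprogress VW_THIN1
  rmvdiskcopy -copy 0 VW_THIN1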


Convert fully allocated volume to thin-provisioned

• Reclaim unused space among existing volumes. Existing data volumes: Go BLUE!

[Diagram: a 5 GB fully allocated volume is mirrored to a 5 GB thin-provisioned copy; grain-sized strings of zeros are removed, and after synchronization the fully allocated volume copy can be deleted.]

• Reclaims allocated but unused capacity
• Optimizes capacity utilization across storage tiers
  ▪ Can become thin during tier migration
• Allows image mode thin volumes to remain thin

Figure 7-68. Convert fully allocated volume to thin-provisioned

In an analogous manner, fully allocated volumes can be converted to thin-provisioned by using Spectrum Virtualize Volume Mirroring to add a thin-provisioned copy. After synchronization completes, you can free up extents by deleting the fully allocated volume copy (copy 0) in the source pool.
During the synchronization process, grains containing only zeros are not stored (they do not use real capacity).
There is no single sizing recommendation for thin-provisioned volumes; their performance depends on the particular environment in which they are used. For the best performance, use fully allocated volumes instead of thin-provisioned volumes.
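The reverse direction follows the same CLI pattern (names illustrative): add a thin-provisioned copy with the usual defaults, and delete the fully allocated copy (copy 0) after synchronization completes:

  addvdiskcopy -mdiskgrp Pool0 -rsize 2% -autoexpand -grainsize 256 VW_FULL1
  rmvdiskcopy -copy 0 VW_FULL1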


Host view of volume attributes and usage

• Thin-provisioned volume is a space-efficient volume.
  ▪ Preserves host access transparency
    - Volume is still mapped to its respective host with read/write access

[Screen capture: host view of volume VB1-THIN in the V0B1-DS3KSATA (SATA) pool.]

Figure 7-69. Host view of volume attributes and usage

From the host perspective, nothing has changed; the host sees only the attributes and usage of the volume. The volume is still mapped to its respective host with both read and write access, and still maintains its assigned object ID and UID number.
The magic of Storwize V7000 virtualization (actually extent pointers) affords the freedom of changing the back-end storage infrastructure without host impact, and the opportunity to exploit newer technology to optimize storage efficiency for better returns on storage investments.
As the data ages and becomes less active, it might be worthwhile to migrate it to a lower-cost storage tier and, at the same time, release the allocated but unused capacity.


Storwize V7000 enhanced features topics


• Spectrum Virtualize I/O Stack

• Easy Tier 3rd Generation

• Thin provisioning

• Real-time Compression (RtC)


• Comprestimator utility


Figure 7-70. Storwize V7000 enhanced features topics

This topic discusses how Real-time Compression works in a Storwize V7000 environment.


Real-time Compression


Figure 7-71. Real-time Compression

As industry needs continue to grow, data compression must be fast, reliable, and scalable, and it must occur without affecting the production use of the data at any time. In addition, the data compression solution must be easy to implement.
Based on these industry requirements, IBM offers IBM Real-time Compression, a combination of a lossless data compression algorithm with a real-time compression technology.


Spectrum Virtualize RtC delivers efficiency

• RtC builds on the storage efficiency advantages of Storwize V7000.
  ▪ RtC delivers 50% or better compression for data that is not already compressed.
• Compression can help freeze storage growth or delay the need for additional purchases.
• IBM Real-time Compression can be used with active primary data.
  ▪ High performance compression supports workloads off-limits to other alternatives.
  ▪ Greater compression benefits through use on more types of data.
  ▪ No performance impact.
  ▪ Can significantly enhance the value of existing storage assets.
• RtC operates immediately and is easy to manage.
  ▪ No need to schedule periods to run post-process compression.
  ▪ Eliminates the need to reserve space for uncompressed data waiting for post-processing.
• IBM Real-time Compression supports all Spectrum Virtualize / Storwize V7000 storage.
  ▪ Externally virtualized storage from any vendor with the Storwize V7000.
  ▪ Internal or externally virtualized storage with the Storwize V7000.

Figure 7-72. Spectrum Virtualize RtC delivers efficiency

IBM Real-time Compression offers innovative, easy-to-use compression that is fully integrated to
support active primary workloads:
• Provides high performance compression of active primary data
▪ Supports workloads off-limits to other alternatives
▪ Expands candidate data types for compression
▪ Derives greater capacity gains due to more eligible data types
• Operates transparently and immediately for ease of management
▪ Eliminates need to schedule post-process compression
▪ Eliminates need to reserve space for uncompressed data pending post-processing
• Enhances and prolongs value of existing storage assets
▪ Increases operational effectiveness and capacity efficiency; optimizing back-end cache and
data transfer efficacy
▪ Delays the need to procure additional storage capacity; deferring additional capacity-based
software licensing
• Supports both internal and externally virtualized storage

▪ Compresses up to 512 volumes per I/O group (v7.3 code)
▪ Exploits the thin-provisioned volume framework


RtC embedded RACE architecture

[Diagram: node I/O stack, top to bottom — IOs from host; SCSI Target; Forwarding; Replication; Upper Cache; FlashCopy; Mirroring; Thin Provisioning with the embedded Random Access Compression Engine; Lower Cache; Virtualization; Forwarding; RAID; Forwarding; SCSI Initiator; IOs to storage controllers.]

• RACE = Random Access Compression Engine
  ▪ Embedded into the Thin Provisioning layer
• Becomes an organic part of the Storwize V7000 / Spectrum Virtualize stack
• Seamlessly integrates with the existing system management design
  ▪ Provides an indication of how much uncompressed data has been written to the volume
• All Storwize V7000 / Spectrum Virtualize advanced functions are supported on compressed volumes

Figure 7-73. RtC embedded RACE architecture

The compression technology is delivered by the Random Access Compression Engine (RACE), which is the core of the IBM Spectrum Virtualize Real-time Compression solution. RACE is integrated seamlessly into the Thin Provisioning layer of the node I/O stack, below the Upper Cache level. At a high level, the RACE component dynamically compresses data that is written into the storage system. RACE is an in-line compression technology that allows host servers and Copy Services to operate with uncompressed data. The compression process occurs transparently to the attached host system (FC or iSCSI) and to Copy Services. All of the advanced features of the RtC-supported system are supported on compressed volumes. You can create, delete, migrate, map (assign), and unmap (unassign) a compressed volume as though it were a fully allocated volume. In addition, you can use IBM Real-time Compression along with IBM Easy Tier on the same volumes.


Traditional compression

• Data compression is location-based
• Must locate repetitions of bytes within a given chunk of data to be compressed
• Must detect and calculate the repetitions of bytes that are stored in the same chunk
  ▪ Locating all bytes might yield a lower compression ratio

[Diagram: host update sequence 1, 2, 3 mapped to compression windows by physical data location, resulting in three separate compression actions. # = file update.]

Figure 7-74. Traditional compression

Traditional data compression is location-based. It compresses data by locating repetitions of bytes within a given chunk of data to be compressed. The compression ratio is affected by how well repetitions can be detected and by how related the bytes stored in the same chunk are. The relationship between bytes is affected by the format of the object (for example, text and graphics embedded together), which might yield a lower compression ratio.
This example shows how the data is split into fixed-size chunks. With this approach, each chunk gets compressed independently into variable-length compressed chunks. The resulting compressed chunks are stored sequentially in the compressed output.
However, there are drawbacks to this approach. An update to a chunk requires a read of the chunk followed by a recompression of the chunk to include the update. The larger the chunk size chosen, the heavier the I/O penalty to recompress the chunk. If a small chunk size is chosen, the compression ratio is reduced, because the repetition detection potential is reduced.


RACE innovation: Temporal locality compression

• RACE: dynamic compression of data written temporally (around the same time), not according to physical location
• Temporal locality adds a time dimension to live application data access patterns
• RACE compression innovation:
  ▪ Achieves higher compression ratios
  ▪ Facilitates on-the-fly data compression
  ▪ Enables advanced read ahead
  ▪ Obtains superior decompression efficiency and random access retrieval

[Diagram: host updates 1, 2, 3 arriving close together in time are compressed in one compression action by RACE.]

Figure 7-75. RACE innovation: Temporal locality compression

IBM RACE offers an innovation leap by incorporating a time-of-data-access dimension into the compression algorithm, called temporal compression. When host writes arrive, multiple compressed writes are aggregated into a fixed-size chunk called a compressed block. These writes are likely to originate from the same application and the same data type, so more repetitions can usually be detected by the compression algorithm.
Due to the time-of-access dimension of temporal compression, instead of creating different compressed chunks, each with its own compression dictionary, RACE compression causes related writes to be compressed together using a single dictionary, yielding a higher compression ratio as well as faster subsequent retrieval access.


Example of a compression using a sliding window

• Zip and Gzip are the most common compression utilities.
• Repetitions of data are detected within the sliding window history, most often 32 kilobytes (KB).
• Repetitions outside of the window cannot be reduced in size unless data is repeated when the window slides to the next 32 KB.

Figure 7-76. Example of a compression using a sliding window

Compression is probably best known to users through the widespread use of compression utilities such as Zip and Gzip. At a high level, these utilities take a file as their input and parse the data by using a sliding window technique. Repetitions of data are detected within the sliding window history, most often 32 kilobytes (KB). Repetitions outside of the window cannot be referenced. Therefore, the file cannot be reduced in size unless data is repeated when the window "slides" to the next 32 KB slot.
This example shows compression using a sliding window, where the first two repetitions of the string "ABCDEF" fall within the same compression window and can therefore be compressed using the same dictionary. However, the third repetition of the string falls outside of this window and cannot, therefore, be compressed using the same compression dictionary as the first two repetitions, reducing the overall achieved compression ratio.


RtC from host and copy services perspective

• Host and Copy Services see only active primary data.
• Host writes are compressed as they pass through RACE.
  ▪ Writes are acknowledged immediately after being received by the write cache (Upper Cache)
  ▪ Only uses physical storage to store the compressed data
  ▪ The volume can be built from a pool using internal or external MDisks
    - Both allow you to use less physical space on disk than is presented to the host

[Diagram: data passes through the Random Access Compression Engine and is stored as a compressed volume on the storage device.]

Figure 7-77. RtC from host and copy services perspective

As part of its staging, data passes through the compression engine and is then stored in compressed format in the storage pool. This means that each host write is compressed as it passes through the RACE engine to the storage disks. Therefore, physical storage is consumed only by the compressed volume data.
Writes are therefore acknowledged immediately after being received by the write cache, with compression occurring as part of the staging to internal or external physical storage.


Storwize V7000 compression enhancements

• Compression enhancements:
  ▪ Main cache stores compressed data (more effective cache capacity)
  ▪ Full stride writes for compressed volumes
  ▪ Support for a larger number of compressed volumes
• Up to two Compression Accelerator cards per node
  ▪ Installed in slots 4 and 6
    - Can be installed in any I/O slot connected to the second processor, if available
  ▪ Up to 512 compressed volumes per I/O group with four Compression Accelerator adapters
  ▪ A single Compression Accelerator card in each node can support up to 200 compressed volumes per I/O group

Figure 7-78. Storwize V7000 compression enhancements

The Storwize V7000 nodes must have two processors, 64 GB of memory, and the optional Compression Accelerator cards (two are required) installed in order to use compression. Enabling compression on AC2 nodes does not affect non-compressed host-to-disk I/O performance.
I/O group recommendations:
• It is strongly recommended to place Compression Accelerator cards into their dedicated slots 4 and 6. However, if there is no I/O card installed in slot 5, a compression card can be in any slot connected to the second processor.
• Up to two Compression Accelerator adapters can be installed per node. Each additional card that is installed improves the I/O performance, and in particular the maximum bandwidth, when using compressed volumes.
• With a single Compression Accelerator card in each node, the existing recommendation on the number of compressed volumes able to be managed per I/O group remains the same at 200 volumes. However, with the addition of the second Compression Accelerator card in each node (a total of four cards per I/O group), the total number of managed compressed volumes increases to 512. A cluster with four I/O groups can support as many as 800 compressed volumes.


Spectrum Virtualize Dual RACE

Dual software compression engines for Storwize V7000 Gen2

• Takes advantage of the multi-core controller architecture
• Makes better use of Compression Accelerator cards
• Delivers up to 2x throughput for compressed workloads
• Strongly recommended to always configure dual Compression Accelerator cards in systems using Real-time Compression
• Requires 64 GB per node/canister
• Supported configuration:
  ▪ Storwize V7000 with Cache Upgrade and two Compression Accelerator cards
• CPU cores, memory, and Compression Accelerator cards are divided between the RACE instances
• Requires only a software upgrade on supported configurations

[Diagram: dual RACE instances, no bottleneck.]

Figure 7-79. Spectrum Virtualize Dual RACE

The release of V7.4 enables two instances of the RACE (Random Access Compression Engine) software, which in short means almost twice the performance for most workloads. Storwize V7000 nodes are capable of dual RACE when all of the compression assist hardware is installed (second CPU, extra cache, and both compression offload cards).
The allocation of volumes is essentially round robin across both instances, so you need at least two compressed volumes to make use of this enhancement. It takes random IOPS performance from, say, 175K up to over 330K when the workload behaves well with compression. The bandwidth has also increased significantly, and almost 4.5 GB/s can be achieved per Storwize node pair.


Memory/CPU core allocation: RtC

• An additional 32 GB of memory can be installed in each node canister.
  ▪ Currently can only be used by RtC (V7.3 code and later)
  ▪ V7.4 enhanced by the addition of a second RACE engine per node

[Tables in figure: CPU allocation between System and RACE per node canister; memory allocation between System and RACE per node canister.]

Figure 7-80. Memory/CPU core allocation: RtC

With the initial release, fixed memory sizes are assigned for RtC use based on how much memory is installed in each node canister. This gives a balanced configuration between general I/O and RtC performance.
▪ The recommendation for serious RtC use is to add the extra 32 GB of memory per node canister.
▪ A second Compression Accelerator is also recommended and requires the extra 32 GB of memory.


Storwize V7000 Gen1 versus Gen2: Max performance (one I/O group)

Uncompressed:

  Workload          Storwize V7000 Gen1   Storwize V7000 Gen2
  Read Hit IOPS     850,000               1,300,000
  Read Miss IOPS    125,000               238,000
  Write Miss IOPS   25,000                50,000
  "DB-like"         52,000                100,000

Compressed:

  Workload          Storwize V7000 Gen1   Storwize V7000 Gen2
  Read Miss IOPS    2,000-44,000          39,000-149,000
  Write Miss IOPS   1,100-17,000          22,500-78,000
  "DB-like"         1,500-32,700          41,000-115,000

• Compressed performance shows a range depending on I/O distribution.
• Compressed performance is better than uncompressed in some cases because of fewer I/Os to drives and additional cache benefits.

Figure 7-81. Storwize V7000 Gen1 versus Gen2: Max performance (one I/O group)

This chart shows the difference in performance between the previous Storwize Gen1 and the Gen2 model.
The first table shows that performance is almost doubled for regular (uncompressed) workloads.
The second table is with compression enabled. You can see that on the Gen1 model, compression performance fluctuates; this is mainly because there was not enough processing power to handle a serious compression workload. With the Gen2 model, the numbers are significantly better than the Gen1, especially for databases, VMware, and so on.


Real-time Compression license

• RtC offers a 45-day free trial license of the compression function.
  ▪ Requires two Compression Accelerator cards
  ▪ IBM Storwize V7000 requires a Real-time Compression license
    - The base license entitles the Storwize V7000 (machine type 2076) to all of the licensed functions, such as Virtualization, FlashCopy, Global Mirror, Metro Mirror, and Real-time Compression
  ▪ The RtC license is capacity-based
  ▪ Click the Settings icon and select System > Licensing to apply the capacity amount

For the CLI, run chlicense -compression 2

Figure 7-82. Real-time Compression license

IBM authorizes existing Storwize V7000 customers to evaluate the potential benefits of the Real-time Compression capability, based on their own specific environment and application workloads, for free using the 45-day Free Evaluation Program. However, before you can use the RtC 45-day trial period, you must have Storwize V7000 software version 7.4 or later and two RtC Compression Accelerator cards installed. The 45-day evaluation period begins when you enable the Real-time Compression function. At the end of the evaluation period, you must either purchase the required licenses for Real-time Compression or disable the function.
IBM Storwize V7000 requires a Real-time Compression license. However, the purchase of the base license entitles the Storwize V7000 (machine type 2076) to all of the licensed functions, such as Virtualization, FlashCopy, Global Mirror, Metro Mirror, and Real-time Compression. With Storwize V7000, Real-time Compression is licensed by capacity, per terabyte of virtual data.
To apply your Storwize V7000 compression license by using the CLI, enter the total number of terabytes of virtual capacity that is licensed for compression. For example, run chlicense -compression 200.


Mirroring to compressed disk

• Provides non-disruptive conversion between compressed and uncompressed volumes
• Existing data can be compressed by using volume mirroring
  ▪ Can be done online with no downtime or user intervention required
  ▪ Transparent and easy

[Diagram: a fully allocated or thin-provisioned volume (Copy 0) is volume-mirrored to a compressed volume (Copy 1); only non-zero blocks are copied.]

Figure 7-83. Mirroring to compressed disk

Compression is enabled on a volume-copy basis. Compressed volume copies are a special type of
thinly provisioned volume that is also compressed. In addition to compressing data in real time, it is
also possible to compress existing data. Compressed volume copies can be freely mixed with fully
allocated and regular thin-provisioned (that is, not compressed) volume copies. For existing
volumes, the Volume Mirroring function can be used to non-disruptively add a compressed volume
copy. This compression adds a compressed mirrored copy to an existing volume. The original
uncompressed volume copy can then be deleted after the synchronization to the compressed copy
is complete. A compressed volume can also become uncompressed with the same volume copy
functionality provided by Volume Mirroring.
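As a CLI sketch with illustrative names, the -compressed flag on addvdiskcopy creates the new copy as a compressed volume copy; the uncompressed copy can be removed once synchronization is complete:

  addvdiskcopy -mdiskgrp Pool0 -rsize 2% -autoexpand -compressed VW_VOL1
  rmvdiskcopy -copy 0 VW_VOL1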


Configuring a compressed volume

• Select the Custom Preset
  ▪ Select the location where volume(s) will be sourced by extents that are contained in only one storage pool
  ▪ Specify the parameters with which the volume must be created
    - Volume capacity unit of measurement (bytes, KB, MB, GB, or TB)
  ▪ Specify Compressed as the capacity savings
• RtC compression is activated when the first compressed volume is created
  ▪ Deactivated when the last compressed volume is removed from the I/O group

[Screen capture: modified Create Volumes view with Compressed selected.]

Figure 7-84. Configuring a compressed volume

Compressed volumes are configured in the same manner as the other preset volumes.


Compressed volume: Advanced settings

• Compressed volume preset options are almost identical to thin-provisioned.
  ▪ If these options are not configured properly, the volume can go offline.
  ▪ Compressed volumes do not have an externally controlled grain size.

Figure 7-85. Compressed volume: Advanced settings

The first time you create a compressed volume, a warning message is automatically displayed. To use compressed volumes without affecting the performance of existing non-compressed volumes in a pre-existing system, ensure that you understand the way that resources are re-allocated when the first compressed volume is created.
Hardware resources (CPU cores and cache) are reserved when compression is activated for an I/O group. These resources are freed when the last compressed volume is removed from the I/O group.
Compressed volumes have the same characteristics as thin-provisioned volumes, and the defaults are almost identical. You must specify that the volume is compressed, specify the rsize, enable autoexpand, and specify a warning threshold. If not configured properly, the volume can go offline prematurely. The preferred settings are to set rsize to 2%, enable autoexpand, and set the warning threshold to 80%. The only difference between the two presets is the grain size attribute. Compressed volumes do not have an externally controlled grain size.


Allocated compressed volume

[Screen capture: a compressed 50 GB volume with object ID 9 created in io_grp0 and mapped to the WIN host.]

Figure 7-86. Allocated compressed volume

The GUI-generated mkvdisk command includes the -compressed parameter, which defines the volume being allocated as a compressed volume. The -autoexpand, -rsize, and -warning parameters, which are used to define thin-provisioned volumes, are also used to define compressed volumes.
A volume is owned by an I/O group and is assigned a preferred node within the I/O group at volume creation. Unless overridden, the preferred node of a volume is assigned in round-robin fashion by the system.
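A representative command for the 50 GB compressed volume shown, with illustrative pool and volume names, is:

  mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 50 -unit gb -compressed -rsize 2% -autoexpand -warning 80% -name VW_COMP1

Note that no -grainsize is specified, because compressed volumes do not have an externally controlled grain size.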


Compressed volume copy details

• The GUI provides benefit analysis of compression for both the pool and the volume.

[Screen capture: volume warning threshold line; compression savings calculated as 10.23 / (10.23 + 3.33) = 0.75442478, about 75.4%.]

Beginning with v7, Easy Tier is supported for compressed volumes.

Figure 7-87. Compressed volume copy details

The Copy 0 details of the compressed volume capacity bar have the same appearance as for a thin-provisioned volume. Its capacity details contain statistics related to compression. At initial creation, only capacity for volume metadata has been allocated. This compressed volume is owned by io_grp0, and NODE2 has been assigned as its preferred node. Data compression is performed by the preferred node of the volume. In this case, the total compression savings based on volume data is about 75%.
Upon the creation of the first compressed volume in a storage pool, the Compression Savings bar is included in the pool details to display compression statistics at the pool level. The compression statistics for the volume entry are displayed in the Compression Savings column. The statistics related to compression are dynamically calculated by the GUI. These calculations of percentages of savings are not available with the CLI.
IBM Easy Tier supports compressed volumes. Only random read operations are monitored for compressed volumes (versus both reads and writes). Extents with high random reads (64 K or smaller) of compressed volumes are eligible to be migrated to tier 0 storage.


Compressed volume

• Compression is done by the volume's preferred node.
• The host sees the fully allocated 50 GB volume.
• Host reads and writes are handled as normal I/Os.

Figure 7-88. Compressed volume

Storwize V7000 can increase the effective capacity of your flash storage up to five times by using IBM Real-time Compression. Compression requires dedicated hardware resources within the nodes, which are assigned or de-assigned when compression is enabled or disabled. Compression is enabled whenever the first compressed volume in an I/O group is created, and is disabled when the last compressed volume is removed from the I/O group.
Each Storwize V7000 node has 16 cores (two 8-core processors) and 64 GB of memory. Without RtC activated, non-compressed volumes use eight cores for system processing. When RtC compression is activated, the second eight cores are used (at this time) only to drive the PCIe lanes and to schedule traffic into and out of the installed compression accelerator cards. Compression CPU utilization can be monitored from Monitoring > Performance. Use the drop-down list to select and view CPU utilization data of the preferred node of the volume.
Behind the scenes, compression is managed by the preferred node of the volume. As data is written, it is compressed on the fly by the preferred node before being written to the storage pool. As with all volumes created by the Storwize V7000 system, Real-time Compression is totally transparent to the host. A compressed volume appears as a standard volume with its full capacity to the attaching host system. Host reads and writes are handled as normal I/O. As write activity occurs, compression statistics are updated for the volume.


Migrating to compressed data

• Right-click the volume entry and select Add Mirrored Copy.
  ▪ Copies do not need to reside in the same pool, though this might be desirable if the uncompressed volume copy is deleted subsequently.

Figure 7-89. Migrating to compressed data

You can use Volume Mirroring (Add Mirrored Copy) to convert between compressed volume copies and other kinds. This process can be used to compress or migrate existing data to a compressed volume, or to move an already compressed volume back to a generic (uncompressed) volume, while the volume is still in use.
Compressed volume copies can be assigned to a different storage pool. To add a volume copy to the selected volume, right-click the volume entry and select Add Mirrored Copy. From the Add Volume Copy pane, select a second pool (optional) or keep both copies in the same pool as the selected volume.


Compressed volume details at creation

• The GUI provides benefit analysis of compression for both the pool and the volume.

[Screen capture: comparing the capacity utilization of the two volume copies.]

Figure 7-90. Compressed volume details at creation

While the synchronization is running, there is no change to system behavior; all reads and writes go to the original uncompressed volume. Once the synchronization is complete, there are two identical copies of the data: one is the original (uncompressed) and the other is compressed. You can change the role of the volume by making Copy 1 the primary copy. Compression is performed by the volume's preferred node. The compression savings of 76.31% for volume copy 1 is within the advertised 5% accuracy range.


Existing volume converted to compressed

[Diagram: make the compressed volume copy (Copy 1) the primary, then delete the uncompressed volume mirror (Copy 0).]

Figure 7-91. Existing volume converted to compressed

After the compressed volume copy is created (copy 1 in this case) and synchronized, you can make
the compressed volume the primary and then the fully allocated volume copy (copy 0) can be
deleted. Through Volume Mirroring, a volume with existing data has now become a compressed
volume.


Storwize V7000 enhanced features topics


• Spectrum Virtualize I/O Stack

• Easy Tier 3rd Generation

• Thin provisioning

• Volume Mirroring

• Real-time Compression (RtC)


• Comprestimator utility


Figure 7-92. Storwize V7000 enhanced features topics

This topic examines the benefits of the Comprestimator utility tool.


Compression implementation guidelines

• Target workloads suitable for compression; examples:

  Data types/applications     Compression ratio
  Oracle/DB2/SQL/SAP          Up to 80%
  MS Office 2003              Up to 60%
  CAD/CAM                     Up to 70%
  Oil/Gas                     Up to 50%
  VMware - Windows OS         Up to 45-55%
  VMware - Linux OS           Up to 70%

• Avoid workloads not suitable for compression:
  ▪ Pre-compressed data (MS Office 2007 or later, video, audio, images, executables, zipped, encrypted, PDFs)
  ▪ Heavy sequential writes and reads
• Use the Comprestimator utility to evaluate expected compression benefits for data on existing volumes.

Figure 7-93. Compression implementation guidelines

Not all workloads are good candidates for compression. The best candidates are data types that
are not compressed by nature. These data types involve many workloads and applications such as
databases, character/ASCII based data, email systems, server virtualization infrastructures,
CAD/CAM, software development systems, and vector data.
Use the IBM Comprestimator utility to evaluate workloads or data on existing volumes for potential
benefits of compression. Implement compression for data with an expected compression ratio of
45% or higher.
Do not attempt to compress data that is already compressed or with low compression ratios. They
consume more processor and I/O resources with small capacity savings.
RtC algorithms are optimized for application workloads that are more random in nature. Heavy
sequential read/write application access profiles might not yield optimal compression ratios and
throughput.
For more information, see the Redpaper REDP-4859: Real-time Compression in SAN Volume Controller and Storwize V7000.


Comprestimator: Compression benefits

• Download the tool and documentation:
  http://www-304.ibm.com/webapp/set2/sas/f/comprestimator/home.html
• Supported host OSs:
  ▪ AIX, SUSE, HP-UX, Linux, Solaris
  ▪ IBM i series through a VIOS host
  ▪ ESXi
  ▪ Windows (32/64-bit)
• Comprestimator:
  ▪ Identifies compression candidates
  ▪ Runs on hosts that have access to the candidate volumes to be compressed
  ▪ Samples (reads, no writes) existing volume content to analyze potential compression savings
  ▪ Provides an estimated compression savings range

Figure 7-94. Comprestimator: Compression benefits

The Comprestimator is a host-based command-line executable available from the IBM support website. The utility and its documentation can also be found by performing a web search using the key words 'IBM Comprestimator'.
The Comprestimator supports a variety of host platforms. The utility runs on a host that has access to the devices that will be analyzed, and performs only read operations, so it has no effect whatsoever on the data stored on the device.

Comprestimator version 1.5.2.2 adds support for analyzing expected compression savings in
accordance with XIV Gen3 storage systems running version 11.6, and Storwize V7000 and SAN
Volume Controller (SVC) storage systems running software version 7.4 or higher.


IBM Comprestimator utility

• Runs over a block device
  ▪ Can run on a single file as well
• Estimates:
  ▪ Portion of non-zero blocks in the volume
  ▪ Compression rate of non-zero blocks
• Performance:
  ▪ Runs FAST! Less than 60 seconds, no matter what the volume size is
  ▪ Provides an accuracy level for the estimation: ~5% max error
• Method:
  ▪ Random sampling and compression throughout the volume
  ▪ Collects enough non-zero samples to gain the desired confidence
• More zero blocks means a slower run (it takes more time to find non-zero blocks)

Figure 7-95. IBM Comprestimator utility

The Comprestimator utility is designed to provide a fast estimated compression rate for block-based volumes that contain existing data. It uses random sampling of non-zero data on the volume and mathematical analysis to estimate the compression ratio of the existing data. By default, it runs in less than 60 seconds (regardless of the volume size). Optionally, it can be invoked to run longer and obtain more samples for an even better estimate of the compression ratio.
Because the Comprestimator samples existing data, the estimated compression ratio becomes more accurate and meaningful for volumes that contain as much relevant active application data as possible. Previously deleted old data on the volume, or empty volumes not initialized with zeros, are subject to sampling and will affect the estimated compression ratio. The utility employs advanced mathematical and statistical algorithms to efficiently perform read-only sampling and analysis of existing data volumes owned by the given host. For each volume analyzed, it reports an estimated compression capacity savings range, within an accuracy range of 5 percent.


IBM Comprestimator utility examples

• To execute the IBM Comprestimator tool, enter the following commands:
  ▪ Type: cd \Software\Comprestimator
  ▪ Type: comprestimator.exe -l
    - The -l parameter lists the available disk numbers in Windows.
• From the Command Prompt window, enter the comprestimator -n X -p -s SVC command
  ▪ The -n X identifies the Windows disk number, -p requests the output in an easier-to-read paragraph format, and -s is the storage system type (only the parameters SVC and XIV are supported).

C:\Software\COMPRESTIMATOR>Comprestimator.exe -n 4 -p -s SVC
Analysis started at 26/04/2016 16:26:34.556465

Device Name: 1&7f6ac24&0&363030530373638303242303031303946433030303030303030303030303144
Sample #: 3400
Size<GB>: 50.0
Compressed Size<GB>: 10.6
Total Savings<GB>: 39.4
Total Savings: 78.7%
Compression Savings: 32.4%
Compression Accuracy Range: 5.0%
Thin Provisioning Savings: 68.8%

Figure 7-96. IBM Comprestimator utility examples

To execute the Comprestimator utility, log in to the server using an account with administrator privileges. Open a Command Prompt with administrator rights (Run as Administrator). Run Comprestimator with the comprestimator -n X -p -s SVC command. For SAN Volume Controller and Storwize systems, use SVC as the storage system type.
The Comprestimator output for the volume example shown indicates that the real storage capacity consumption for this volume would be reduced from 50 GB to 10.6 GB, a total savings of 39.4 GB, or 78.7%. Of that, the compression savings component is 32.4%, within an accuracy range of 5.0%, while 68.8% of the capacity savings would be derived from thin provisioning.
The guideline for a volume to be considered a good candidate is a compression savings of 45% or more.
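As a check on the arithmetic: total savings = 1 - (compressed size / original size) = 1 - 10.6/50 ≈ 78.8%, matching the reported Total Savings of 78.7% to within rounding. The two components combine multiplicatively: 1 - (1 - 0.688) × (1 - 0.324) ≈ 0.789, again consistent with the reported total.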


RtC recommendations for implementation


Before using compressed volumes in production:
• Determine application/workload alignment with compression-friendly
data types.
• Review OS file system alignments to avoid misaligned I/Os.
• Clone application workload environment to test and verify:
ƒ Validate compression meets application performance expectations.
ƒ Assess impact to non-participating bystander applications.
ƒ Review volume backup requirements and measure backup window
duration.
• Manage implementation on per I/O group basis.


Figure 7-97. RtC recommendations for implementation

In addition to the above statements, recall that compression is performed by the volume's preferred
node. The preferred node is assigned in round-robin fashion within the I/O group as each volume is
created. Over time, as volumes are created and deleted, monitor and maintain the distribution of
compressed volumes across both nodes of the I/O group.
In the example scenarios of this unit, compressed volumes and non-compressed volumes share
the same storage pool. For certain configurations and environments, it might be beneficial to
segregate compressed volumes into a separate pool to minimize impact on non-compressed
volumes. Review your environment with your IBM support representative when activating Real-time
Compression.


Keywords
• Easy Tier V3
• Automatic data placement mode
• Evaluation mode
• Easy tier indicators
• Drive use attributes
• Thin provisioning
• Auto expand
• Overallocation
• Volume mirroring
• Real-time Compression (RtC)
• Comprestimator utility
• Easy Tier STAT
• Data relocation
• Volume Heat Distribution report
• Storage Pool Recommendation report
• System Summary report


Figure 7-98. Keywords

Listed are keywords that were used in this unit.


Review questions (1 of 2)
1. What are three tier levels supported with a Storwize V7000
using Easy Tier technology?

2. True or False: A thin-provisioned volume is created with two capacities, a virtual capacity as
   seen by host servers and a real capacity defined by the amount of actual allocated storage
   capacity.

3. Space within the allocated real capacity of a thin-provisioned volume is assigned in what size
   increments driven by write activity?
   a. Extent size increments as defined by the storage pool extent size
   b. Grain size increments with a default grain size of 256 KB
   c. Blocksize increments as defined by the application that owns the volume


Figure 7-99. Review questions (1 of 2)

Write your answers here:


Review answers (1 of 2)
1. What are three tier levels supported with a Storwize V7000 using Easy
Tier technology?
The answer is Flash tier, Enterprise tier and Nearline tier.

2. True or False: A thin-provisioned volume is created with two capacities, a virtual capacity as
   seen by host servers and a real capacity defined by the amount of actual allocated storage
   capacity.
   The answer is true.

3. Space within the allocated real capacity of a thin-provisioned volume is assigned in what size
   increments driven by write activity?
a. Extent size increments as defined by the storage pool extent size
b. Grain size increments with a default grain size of 256 KB
c. Blocksize increments as defined by the application that owns the volume
The answer is grain size increments with a default grain size of 256 KB.



Review questions (2 of 2)
4. True or False: Each copy of a mirrored volume can be mapped to its
own unique host.

5. True or False: Volume mirroring can be used to convert fully allocated volumes to
   thin-provisioned or compressed and migrate to a different storage tier.

6. True or False: Easy Tier can collect and analyze workload statistics
even if no SSD-based MDisks are available.


Figure 7-100. Review questions (2 of 2)

Write your answers here:


Review answers (2 of 2)
4. True or False: Each copy of a mirrored volume can be mapped to its own unique host.
   The answer is false.

5. True or False: Volume mirroring can be used to convert fully allocated volumes to
   thin-provisioned or compressed and migrate to a different storage tier.
   The answer is true.

6. True or False: Easy Tier can collect and analyze workload statistics even if no SSD-based
   MDisks are available.
   The answer is true.



Unit summary
• Recognize IBM Storage System Easy Tier settings and statuses at the
storage pool and volume levels
• Differentiate among fully allocated, thin-provisioned, and compressed
volumes in terms of storage capacity allocation and consumption
• Recall steps to create thin-provisioned volumes and monitor volume
capacity utilization of auto expand volumes
• Categorize Storwize V7000 hardware resources required for Real-time
Compression (RtC)


Figure 7-101. Unit summary


Unit 8. Spectrum Virtualize data migration
Estimated time
01:30

Overview
This unit discusses the data migration concept and examines the data migration options provided
by the IBM Spectrum Virtualize Software to move data across the Storwize V7000 managed
infrastructure.

How you will check your progress


• Checkpoint questions
• Machine exercises

References
IBM Storwize V7000 Implementation Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html


Unit objectives
• Analyze data migration options available with Storwize V7000
• Implement data migration from one storage pool to another
• Implement data migration of existing data to Storwize V7000 managed
storage using the Import and System Migration Wizards
• Implement the Export migration from a striped type volume to image
type to remove it from Storwize V7000 management
• Differentiate between a volume migration and volume mirroring


Figure 8-1. Unit objectives


Data migration agenda


• Data migration overview
• Data migration options
  - Pool to pool migration
  - Import Wizard
  - Export Wizard
  - System migration
  - Volume mirroring


Figure 8-2. Data migration agenda

In this topic, we review the concept of data migration and look at several options for performing it.
We begin with the data migration concept.


Data migration concept


• Non-virtualized image type
  - Allows existing data to become Storwize V7000-managed without data
    conversion or movement
• Virtualized striped type
  - Allows changing the mapping between volume extents and MDisk extents
    without impacting host access

[Diagram: data migration from third-party and older storage systems (NetApp,
N series, DS3000/DS4000/DS5000, EMC, HP, HDS, Sun) to the Storwize family,
XIV, DS8000, DS6000, and FlashSystem]

Moving workload (data extents) to:
✓ Balance usage distribution
✓ Move data to a lower-cost storage tier
✓ Expand or convert to new storage systems; decommission old systems
✓ Optimize SSDs with Easy Tier

Figure 8-3. Data migration concept

For volumes managed by the Storwize V7000, the mapping of volume extents to MDisk extents can
be dynamically modified without interrupting or affecting a host's access to these volumes. The
process of moving the physical location of the data is known as data migration. Most implementations
allow this to be done in a non-disruptive manner, that is, concurrently while the host continues to
perform I/O to the logical disk (or LUN).
This capability can be used to redistribute workload within a Storwize V7000 cluster across
back-end storage, such as:
• Moving workload to rebalance a changed workload.
• Moving workload onto newly installed storage capacity - either new disk drives to expand a
currently installed storage system or a new storage system.
• Moving workload to a lower-cost storage tier.
• Moving workload off older equipment in order to decommission that equipment.
In addition, migration of existing data to Storwize V7000 management takes place without data
conversion and movement. Once under Storwize V7000 management, transparent data migration
allows existing data to gain the benefits and flexibility of data movement without application
disruption.

There are two aspects to data migration. One is to move data from a non-Storwize V7000
environment to a Storwize V7000 environment (and vice versa). The other is to move data within
the Storwize V7000 managed environment.
While host-based data migration software solutions are available, the Storwize V7000 import
capability can be used to move large quantities of non-Storwize V7000 managed data under
Storwize V7000 control in a relatively small amount of time.
Moving existing volumes of data to Storwize V7000 control (and vice versa) involves an interruption
of host or application access to the data. Moving data within the Storwize V7000 environment is not
disruptive to the host and the application environment.


Image mode: Migrating existing data

[Diagram: an existing 800 MB LUN (BLUDATA) mapped one-to-one to the 800 MB image mode
volume BLUDATV; extents 5a through 5g, with a partial last extent]
Best practice: Have a separately defined storage pool set aside to house SCSI LUNs containing
existing data.


Figure 8-4. Image mode: Migrating existing data

Image mode, one of the three virtualization types, facilitates the creation of a one-to-one direct
mapping between a volume and an MDisk that contains existing data. Image mode simplifies the
transition of existing data from a non-virtualized to a virtualized environment without requiring
physical data movement or conversion.
The best practice recommendation is to have a separately defined storage pool set aside to house
SCSI LUNs containing existing data. Use the image type volume attribute to securely bring that
data under Storwize V7000 management. Once under Storwize V7000 management, migration to
the virtualized environment (striped type) is totally transparent to host access.
If desired, you can run with Storwize V7000 management but without virtualization (image mode).
Migration of the volume from the image virtualization type to the striped virtualization type can
occur either immediately or at a later point in time.


Facilitating migration using volume extent pointers


[Diagram: the migrate volume function (migratevdisk) moves a volume's extents from Storage
PoolA (backed by RAID LUNs on Storage SystemA) to Storage PoolB (on Storage SystemB);
chunks are copied in 16 MB units, and the volume's extent pointers segregate data access from
storage infrastructure management]

Figure 8-5. Facilitating migration using volume extent pointers

Since the volume represents the mapping of data extents rather than the data itself, the mapping
can be dynamically updated as data is moved from one extent location to another.
Regardless of the extent size, the data is migrated in units of 16 MB. During migration, reads and
writes are directed to the destination for data already copied and to the source for data not yet
copied.
A write to the 16 MB area of the extent that is being copied (most likely due to Storwize V7000
cache destaging) is paused until the data is moved. If contention is detected in the back-end
storage system that might impact the overall performance of the Storwize V7000, the migration is
paused to allow pending writes to proceed.
Once an entire extent has been copied to the destination pool, the extent pointer is updated and the
source extent is freed.
For data to migrate between storage pools, the extent size of the source and destination storage
pools must be identical.
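
As a minimal CLI sketch (the pool and volume names here are hypothetical), you could verify the
extent sizes, start the migration, and then monitor its progress:

lsmdiskgrp PoolA
lsmdiskgrp PoolB
migratevdisk -mdiskgrp PoolB -vdisk MyVolume -threads 4
lsmigrate

The two lsmdiskgrp views must report the same extent_size before migratevdisk is allowed to
run; lsmigrate reports the progress of any active migrations.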


Replace storage system or migrate to different storage tier

[Diagram: Server1 accesses its volume through the Storwize V7000 while the volume's extents
move from Storage PoolA (MDisks/SCSI LUNs on Storage SystemA) to Storage PoolB (on
Storage SystemB), segregating data access from storage infrastructure management]

Figure 8-6. Replace storage system or migrate to different storage tier

The volume migration (migratevdisk) function of the Storwize V7000 enables all the extents
associated with one volume to be moved to MDisks in another storage pool.
One use for this function is to move all existing data that is mapped by volumes in one storage
pool on a legacy storage system to another storage pool on another storage system. The legacy
storage system can then be decommissioned without impact to accessing applications.
Another example of usage is enabling the implementation of a tiered storage scheme using multiple
storage pools. Lifecycle management is facilitated by migrating aged or inactive volumes to a
lower-cost storage tier in a different storage pool.


Import existing LUN data


[Diagram: the 800 MB image type volume BLUDATV maps one-to-one to the image mode MDisk
BLUDATA (non-virtualized, existing data coexistence); migrating the volume to the striped type
in BLUEPOOL spreads its extents across MDisks BLUE1 through BLUE4, and the MDisk
becomes managed mode]

Figure 8-7. Import existing LUN data

Image type volumes have the special property that their last extent might be a partial extent. The
migration function of the Storwize V7000 allows one or more extents of the volume to be moved,
changing the volume from the image to the striped virtualization type. Several methods are
available to migrate an image type volume to striped, as illustrated in the sketch after this list:
• If the image type volume is in a storage pool that is set aside to hold only image type volumes,
  migrate the volume from that storage pool to another storage pool using the volume migration
  function.
• If the image type volume is in the same storage pool as the MDisks to which the extents are to
  be moved, the extent migration function facilitates:
  - Migrating one extent of the image type volume. If the last extent is a partial extent, that
    extent is automatically selected and migrated to a full extent.
  - Migrating multiple extents.
  - Migrating all the extents off the image mode MDisk.
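
As a minimal sketch of the extent migration interface (the volume, source, and target names reuse
this example's labels), moving two extents off the image mode MDisk might look like this:

migrateexts -vdisk BLUDATV -source BLUDATA -exts 2 -target BLUE2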


Reverse migration: Export from striped to image


[Diagram: reverse migration; the extents of the striped type volume BLUDATV (800 MB) in
BLUEGRP1 are collocated onto the destination MDisk BLUDATA, recreating the image mode
pair, after which the MDisk can be returned to unmanaged mode]

Figure 8-8. Reverse migration: Export from striped to image

An export option (migratetoimage) is available to reverse the migration from the virtualized realm
back to non-virtualized. Data extents associated with a striped type volume are collocated on an
empty (unmanaged) destination MDisk. The volume is returned to the image virtualization type
with its destination MDisk placed in image access mode.
The image volume can then be deleted from Storwize V7000 management, causing its related
MDisk or SCSI LUN to be removed from the storage pool and set to unmanaged access mode. The
SCSI LUN can then be unassigned from the Storwize V7000 ports and assigned directly to the
original owning host using the storage system's management interfaces.
The migrate to image function also allows an image type volume backed by extents of an MDisk
in one storage pool to be backed by another MDisk in the same or a different storage pool while
retaining the image virtualization type.
In essence, the volume virtualization type is not relevant to the migrate to image function. The
outcome is one MDisk containing all the data extents for the corresponding volume.


Removing MDisk from storage pool


[Diagram: when an MDisk is removed from a storage pool, the volume extents it held are
migrated to free extents on the remaining MDisks of the same pool, and the removed MDisk can
be redeployed; the volume's extents are redistributed among the remaining managed disks]


Figure 8-9. Removing MDisk from storage pool

The extent migration (migrateexts) function is used to move data of a volume from extents
associated with one MDisk to another MDisk within the same storage pool without impacting host
application data access.
When the Easy Tier function causes extents of volumes to move from HDD-based MDisks to
SSD-based MDisks of a pool, migrateexts is the interface used for the extent movement.
When an MDisk is to be removed from a storage pool and that MDisk contains allocated extents,
a forced removal of the MDisk causes data associated with those extents to be implicitly
migrated to free extents among the remaining MDisks within the same storage pool.
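
A minimal sketch of such a forced removal (the MDisk and pool names are hypothetical); the
system implicitly migrates any allocated extents to the remaining MDisks before the MDisk
leaves the pool:

rmmdisk -mdisk mdisk7 -force Pool0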


Data migration options

• Volume: pool to pool migration (tier to tier migration)
• New volume (existing data): Import Wizard (create image volume, migrate to striped)
• Volume: export to image mode (migrate striped volume to image MDisk for export)
• New volume (existing data): Migration Wizard (import multiple volumes; map to host)
• Volume copy: volume mirroring
• Extents of volumes: replace one storage system with another; remove MDisks from a pool, or
  redistribute extents within a pool
• All while the application is blissfully unaware…

Figure 8-10. Data migration options

A wealth of data migration options is provided by the Storwize V7000. We examine each of the
options listed as we explore data migration.


Data migration topics


• Data migration overview
• Data migration options
  - Pool to pool migration
  - Import Wizard
  - Export Wizard
  - System migration
  - Volume mirroring


Figure 8-11. Data migration topics

This topic discusses the pool to pool volume migration option.


Migrate volume to another pool (another tier/box)


• Before migrating data to another pool:
  - Ensure free space is available.
  - Issue the lsvdisk command to ensure volumes are in good status.
• Select Volumes > Volumes by Host.
  - Right-click a volume and select Migrate to Another Pool.
    - Select a target pool; only pools with the same extent size as the source are listed.
    - The GUI generates the migratevdisk command.


Figure 8-12. Migrate volume to another pool (another tier/box)

You can move a volume to a different storage pool only if the destination pool has free space equal
to the size of the volume. If the pool does not have enough space, the volume does not move. Before
performing a volume to pool migration, ensure that the volumes are in good status. You can issue
the lsvdisk command to view the current status of a volume.
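
For example, a minimal status check (the volume name is hypothetical) before starting the
migration:

lsvdisk -delim ,
lsvdisk MyVolume

The status column of the first listing should read online; the second command shows the detailed
view of a single volume.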
Migrating a volume to another pool also means its extents (the data that belongs to this volume) are
moved (actually copied) to another pool. The volume itself remains unchanged from the host’s
perspective.
Migrating a volume to another pool can be invoked by clicking Volumes > Volumes by Host and
selecting the desired host in the Host Filter list. Right-click the volume entry and then select
Migrate to Another Pool from the menu list.
A list of storage pools eligible to receive the volume copy extents is displayed. The GUI only
displays target pools with the same extent size as the source pool and only if these pools have
enough free capacity needed for the incoming extents. Once you have selected a target pool, the
management GUI generates the migratevdisk command which causes the extents of the volume
copy to be migrated to the selected target storage pool.
A volume might potentially have two sets of extents, typically residing in different pools. The
granularity of volume migration is at the volume copy level. Therefore the more technically precise

terminology for a volume is actually a volume copy. Data migration occurs at the volume copy level
and migrates all the extents associated with one volume copy of the volume.


Volume copy migration is transparent


• Volume extents are redistributed.
  - The host remains unchanged.
  - I/O operations proceed as normal.
• The destination storage pool is updated with the volume name.
[Diagram: the volumes AIX_CHIPSV and Basic-WIN1 with their extents moving between a SATA
pool and a Hybrid pool]


Figure 8-13. Volume copy migration is transparent

As an extent is copied from the source pool to the destination pool, the extent pointer for the volume
is updated to that of the destination pool. The extent in the source pool becomes free space, eligible
to be reassigned to another volume.
Due to the Storwize V7000 implementation of volume extent pointers, the volume migration is totally
transparent to the host. Nothing changes from a host perspective. The fact that the copied
volume extents are now sourced by another pool is totally transparent and inconsequential to the
attaching host. I/O operations proceed as normal during the data migration.
Once the volume copy's last extent has been moved to the target pool, the volume's pool name
is updated to that of the target pool.


Data migration topics


• Data migration concept
• Data migration options
  - Pool to pool migration
  - Import Wizard
  - Export Wizard
  - System migration
  - Volume mirroring


Figure 8-14. Data migration topics

This topic reviews how to use the Import Wizard to bring a volume that contains existing data under
Storwize V7000 control as an image type volume.
Finally, we examine the procedures to be completed once the volume has been migrated to its
new pool.


Image mode volumes overview


• An image mode volume provides a direct block-for-block translation of data.
  - Preserves existing external LUN data as it is migrated into the IBM
    Storwize V7000 environment.
  - Minimum size of one block (512 bytes); always occupies at least one extent.
  - Only use image mode to import or export existing data into or out of the
    IBM Storwize V7000.
  - To preserve existing data on unmanaged MDisks, do not assign them to
    pools, because this action deletes the data.

[Diagram: the LUN APPLUN on a SAN-attached storage system brought under
Storwize V7000 management]


Figure 8-15. Image mode volumes overview

An image mode volume provides a direct block-for-block translation from the managed disk (MDisk)
to the volume with no virtualization. This mode is intended to provide virtualization of MDisks that
already contain data that was written directly, not through a Storwize V7000. Image mode volumes
have a minimum size of 1 block (512 bytes) and always occupy at least one extent.
To preserve LUN data that is hosted on an external storage system, the LUN must be imported into
the IBM Storwize V7000 environment as an image-mode volume using the Import option. Hosts that
were previously directly attached to that external storage system can continue to use their
storage, which is now presented by the IBM Storwize V7000 instead.
Do not leave volumes in image mode. Only use image mode to import or export existing data into or
out of the IBM Storwize V7000. Migrate such data from image mode MDisks to other storage pools
to benefit from storage virtualization.
If you need to preserve existing data on the unmanaged MDisks, do not assign them to the pools
because this action deletes the data.


Importing existing data from external storage

[Diagram: the LUN APPLUN on a SAN-attached storage system is detected as an unmanaged
MDisk, imported as an image mode MDisk paired with the image type volume APP3VOL in
Migration_Pool, and then migrated to the striped type in General_Pool, which has the same
extent size; the extents move to MDisk0 and the MDisk access mode becomes managed]


Figure 8-16. Importing existing data from external storage

Importing LUNs with existing data from an external storage system into the Storwize V7000
environment involves the following:
1. The LUN being imported has to be unassigned from the host in the storage system. The
   application that had been using that LUN obviously has to take an outage. The LUN then
   needs to be reassigned to the Storwize V7000.
2. Detect the MDisk (LUN) on the Storwize V7000, where it becomes an unmanaged mode MDisk.
3. Rename the unmanaged MDisk to correlate it to the LUN's application.
4. Import the unmanaged MDisk; the GUI Import Wizard performs the following:
   i. Defines a migration pool with the same extent size as the intended target storage pool.
      If the Import option is used and no existing storage pool is chosen, a temporary
      migration pool is created to hold the new image-mode volume.
   ii. Creates an image volume and MDisk pair in the migration pool. All existing data volumes
      brought under Storwize V7000 management with the Migration Wizard have an image
      type copy initially. The option to add a striped volume copy is then offered as part of the
      import process. Subsequent writes to both volume copies are maintained by the
      Volume Mirroring function of the Storwize V7000 until the image volume copy is deleted.

   iii. Migrates the image volume to a striped volume. Do not leave volumes in image mode;
      only use image mode to import or export existing data into or out of the IBM Storwize V7000.
Some additional host housekeeping is typically involved. For example, in the UNIX environment
this generally entails unmounting the file system, then varying off and exporting the volume group.
There might be host OS-unique activities, such as removing the hdisk(s) associated with the
volume group in AIX. Analogous activity for Windows might involve removing the drive letter to
take the drive (application) offline.
Depending on the OS and the storage systems previously deployed, the multipathing driver
software might need to be replaced, which generally requires a host reboot.


Perform SAN device discovery to detect MDisk


• To perform a Storwize V7000 FC SAN device discovery:
  - From the Pools > External Storage view, select Action > Discover storage.
• The GUI generates the detectmdisk command for device discovery.
• Right-click the MDisk to view its Properties and verify that the MDisk UID
  matches the LUN UID.

(Supports migration of multiple LUNs concurrently)


Figure 8-17. Perform SAN device discovery to detect MDisk

The LUN needs to be imported to the Storwize V7000 for management. If the external controller is
not listed in the Pools > External Storage view, you need to perform a SAN device discovery
by selecting Action > Discover storage. The GUI issues the detectmdisk command to cause
the Storwize V7000 to perform Fibre Channel SAN device discovery.
The newly detected LUN is treated as an unmanaged MDisk with an assigned default name and
object ID. There is no interface for the Storwize V7000 to discern whether this MDisk contains free
space or existing data. You need to confirm that the correct external storage volume has been
discovered by examining the details of the MDisk. Right-click the MDisk and select Properties.
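
A minimal CLI sketch of the same discovery step; the filter limits the listing to MDisks that are
not yet in a pool:

detectmdisk
lsmdisk -filtervalue mode=unmanaged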


Rename MDisk to correlate to application owner


• The MDisk is renamed for clarity:
  - Right-click the MDisk entry and select Rename.


Figure 8-18. Rename MDisk to correlate to application owner

It is best practice to rename the MDisk to clarify its identity or to match the existing name of the
LUN being imported from the external storage system. To do so, right-click the MDisk and select
the Rename option from the menu. Specify a name and click the Rename button.
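
The equivalent CLI sketch (the new name and the MDisk ID are hypothetical):

chmdisk -name APPLUN mdisk9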


Import to image mode


• Right-click the newly renamed unmanaged MDisk and select Import.
• Select one of the following methods to import the existing data:
  - Import to temporary pool as image-mode volume creates an image type
    volume so that the external storage data is brought under Storwize V7000
    management.
  - Migrate to existing pool migrates this volume from the image to the
    striped virtualization type in the desired storage pool.

(Click if the volume contains Copy Services)


Figure 8-19. Import to image mode

To start the import to image mode process, right-click an unmanaged MDisk that correlates to the
external storage LUN and select Import from the drop-down menu. The Import Wizard guides you
through a quick import process to bring the volume's existing data under Storwize V7000
management.
There are two methods by which you can import existing data:
The Import to temporary pool as image-mode volume option allows you to virtualize existing
data from the external storage system without migrating the data from the source MDisk (LUN),
presenting it to the host as an image mode volume. The data becomes accessible via the
IBM Storwize V7000 system while still residing on the back-end storage system's original LUN.
The Migrate to existing pool option allows you to create an image mode volume and start
migrating the data to the selected storage pool. After the migration process completes, the data
has been moved from the original MDisk and placed on the MDisks in the target storage pool.


Import as image volume then migrate to striped


• MigrationPool_1024 was created to temporarily house the volume.
  - The pool name contains the extent size with which it was created.

[Diagram: the DS3K MDisk (LUN 10) and its image type volume in
MigrationPool_1024, extent size 1024 MB]

Figure 8-20. Import as image volume then migrate to striped

For this example, we chose Import to temporary pool as image-mode volume. During this
process, the MDisk transitions from unmanaged mode to image mode, and the image type volume
is placed in the migration pool. Migrating the image volume to striped is performed later,
outside the control of the Import Wizard.
The MigrationPool_1024 is normally used as a vehicle to migrate data from existing external
LUNs into storage pools, located either internally or externally, on the IBM Storwize V7000. You
should not use image-mode volumes as a long-term solution, for reasons of performance and
reliability.


Generated commands to import (create) volume

[Diagram: the DS3K LUN 10 MDisk and its image type volume pair in MigrationPool_1024,
extent size 1024 MB]


Figure 8-21. Generated commands to import (create) volume

The wizard generates several tasks. It first creates a storage pool called MigrationPool_1024 using
the same extent size (-ext 1024) as the intended target storage pool.
The mkvdisk command then performs two functions concurrently. It places the DS3K MDisk
into MigrationPool_1024 and at the same time creates an image type volume based on this
MDisk. At this point, there is a one-to-one relationship between the MDisk and the volume: the
volume's extents are all sourced from this MDisk. The MDisk has an access mode of image and the
volume has a virtualization type of image. Notice there is no reference to the volume's
capacity, as it is implicitly derived from the capacity of its MDisk.
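
A minimal sketch of the two generated commands (the MDisk and volume names are hypothetical
stand-ins for the defaults the wizard uses):

mkmdiskgrp -name MigrationPool_1024 -ext 1024
mkvdisk -mdiskgrp MigrationPool_1024 -iogrp io_grp0 -vtype image -mdisk DS3K_LUN10 -name DS3K_V10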


Map image volume to host

(Mapping makes the image mode volume accessible for I/O operations to the host)

[Diagram: the image type volume APP3VOL, backed by the image mode MDisk APPLUN in
Migration_Pool, is mapped to the host that previously used the LUN on the SAN-attached
storage system]

Figure 8-22. Map image volume to host

In terms of host access to the existing data, as soon as the mkvdisk command completes, the
volume can be mapped to the host object that was previously using the data that the MDisk now
contains. The GUI issues the mkvdiskhostmap command to create a new mapping between the
volume and the host, making the image mode volume accessible for I/O operations. After the
volume is mapped to a host object, it is detected as a disk drive with which the host can perform
I/O operations.
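
A minimal sketch of the generated mapping command (the host and volume names reuse this
unit's examples):

mkvdiskhostmap -host NAVYWIN1 APP3VOL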


Migrate volume to new storage pool


• Select a storage pool to migrate the image mode volume data to managed-mode
  disks, transitioning the volume from image to striped.

[Diagram: APP3VOL becomes a striped type volume]


Figure 8-23. Migrate volume to new storage pool

To virtualize the storage on an image mode volume, the volume needs to be transformed into a
striped volume. This process migrates the data on the image mode volume to managed-mode disks
in another storage pool. Issue the migratevdisk command to migrate an entire image mode
volume from one storage pool to another storage pool.
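
A minimal sketch using this example's volume name (the target pool name is hypothetical):

migratevdisk -mdiskgrp Hybrid_Pool -vdisk APP3VOL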


Import migration completed

(The input LUN is no longer used)
(Rename the volume to correlate to the external storage LUN name)


Figure 8-24. Import migration completed

The migration is now complete. The MigrationPool_1024 pool no longer contains the volume or
its allocated capacity; even though the volume count for this pool is zero, the managed mode
MDisk is still in this pool.
The default name given to the volume created by the Import Wizard is a concatenation of the
storage system name followed by the MDisk LUN number. As a best practice, use the Rename
option to give the volume a more descriptive name, typically one that identifies it as being used
by its assigned host.
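
A minimal sketch of the rename (the default volume name shown here is hypothetical):

chvdisk -name APP3VOL DS3K_0000000000000002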


Volume extent distribution after migration


• Volume extents are distributed across all MDisks of the pool.

[Diagram: the volume's extents are distributed across MDisk01 (5 extents) and MDisk02
(6 extents) in the Hybrid pool, extent size 1024 MB]


Figure 8-25. Volume extent distribution after migration

Click the Member MDisks tab of the volume details panel to display the MDisks supplying extents
to this volume. By default, the Storwize V7000 attempts to distribute the extents of the volume
across all MDisks of the pool.
All extents of the original image mode MDisk have been freed.
Remember that the MDisk's access mode became managed when the migratevdisk process
began.


Delete MigrationPool
• The volume and MigrationPool_1024 can now be deleted.
  - You have the option to keep the migration pool for subsequent imports.

[Diagram: the APPLUN MDisk in Migration_Pool]

Figure 8-26. Delete MigrationPool

Having migrated the volume data from the original LUN to a new storage pool, the MDisk and
the temporary MigrationPool_1024 storage pool are no longer needed.
To finalize the import migration, the image type volume is deleted; its corresponding MDisk is
automatically removed from the storage pool. The empty MigrationPool_1024 can either be deleted
or kept for subsequent imports. The data migration to the IBM Storwize V7000 is done.
Additional steps need to be performed to unassign the LUNs in the storage system from the
Storwize V7000 cluster.


DS3K MDisk in unmanaged mode


• MDisk returns to unmanaged mode.
• The LUN can now be unassigned from the Storwize V7000 host group to prevent future
  detection.


Figure 8-27. DS3K MDisk in unmanaged mode

The MDisk is now unmanaged and no longer used by the Storwize V7000.
Return to the external storage system and unassign the LUN from the Storwize V7000 host group
(Storwize V7000 cluster). This prevents the LUN from being detected the next time the Storwize
V7000 performs SAN device discovery. Consequently, the Storwize V7000 removes the MDisk
entries from its inventory.
If the external storage device is scheduled for decommissioning, the SAN zoning needs to be
updated so that the Storwize V7000 can no longer see its FC ports.


Host apps oblivious to pool changes


Figure 8-28. Host apps oblivious to pool changes

Volumes have been migrated to their destination pools. Due to the virtualization provided by the
Storwize V7000, the storage infrastructure changes can be made without impact to the host
applications. Therefore, nothing has changed from a host perspective. Host I/O operations proceed
as normal.


Data migration topics


• Data migration overview
• Data migration options
  - Pool to pool migration
  - Import Wizard
  - Export Wizard
  - System migration
  - Volume mirroring


Figure 8-29. Data migration topics

This topic discusses the Export to image mode option to remove a striped type volume from
Storwize V7000 management. It also highlights the steps to reassign the volume data to the host
directly from a storage system.


Export volume from striped to image

[Diagram: the extents of a volume copy striped across MDisk1 through MDisk4 in the RAID10
pool (extent size 1024 MB) are migrated to one unmanaged MDisk, of the same capacity as the
volume or bigger, in Any_same_extent_size_pool (also extent size 1024 MB)]


Figure 8-30. Export volume from striped to image

The process to export (revert) a striped type volume back to image is transparent to the host. The
migratetoimage function is used to relocate all extents of a volume to one MDisk of a storage
system and to recreate the image mode pair. The image type volume is then deleted, and the
unmanaged MDisk can be removed from Storwize V7000 management and reassigned to the
host from the owning storage system. The deletion of the Storwize V7000 volume and the
reassignment of the storage system LUN to the host are disruptive to the host and its applications.
In the example, the Storwize V7000 export volume function (migratetoimage) enables all the
extents associated with a volume copy to be relocated to just one destination MDisk. The access
mode of the MDisk must be unmanaged for it to be selected as the destination MDisk. The capacity
of this MDisk must be either identical to or larger than the capacity of the volume.
As a result of the export process, the volume copy's virtualization type changes to image and its
extents are sourced sequentially from the destination MDisk.
The image volume and MDisk pair can reside in any pool, as long as that pool has the same
extent size as the pool that initially contained the volume copy.


Create temporary export pool to match extent size


Issue the lsmdiskgrp -delim , command to verify the extent size of the existing pool that
contains the volume to be exported.

Issue the following command to create an export pool using the same extent size:
mkmdiskgrp -ext 1024 -name ExportPool_1024
IBM Storwize:V009B:V009B1-admin>lsmdiskgrp 5
id 5
name ExportPool_1024
status online
mdisk_count 0
vdisk_count 0
capacity 0
extent_size 1024
free_capacity 0
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0


Figure 8-31. Create temporary export pool to match extent size

As a general practice, image mode pairs should be kept in a designated migration pool instead of
being intermingled in a pool with striped volumes. A preparatory step needed before exporting the
volume copy is to have a storage pool with the same extent size as the volume's storage pool.
In this case, you need to determine the extent size of the pool in which the volume to be
exported resides. Based on the pool's extent size of 1024 MB, create an ExportPool_1024 with the
same extent size.
The subsequent lsmdiskgrp 5 command shows the details for the storage pool just created and
confirms the extent size of 1024 MB for the empty pool.


Export volume from striped to image mode (1 of 2)


• To export a volume to an image mode volume, right-click the volume and select
  Export to Image Mode.
• Select the MDisk to which you want to export the volume.

(The target MDisk must be the same size as or larger than the source volume)


Figure 8-32. Export volume from striped to image mode (1 of 2)

Image mode provides a direct block-for-block translation from the MDisk to the volume with no
virtualization. An image mode MDisk is associated with exactly one volume. This feature can be
used to export a volume to a non-virtualized disk and to remove the volume from storage
virtualization, for example, to map it directly from the external storage system to the host. If you
have two copies of a volume, you can choose which one to export to image mode. To export a
volume copy from striped to image mode, right-click the volume and select Export to Image
Mode from the menu list.
From the Export to Image Mode window, select an unmanaged MDisk that is the size of the volume
copy or larger to receive the volume's extents. In this example, we selected the APP3VOL MDisk of
the same capacity, which is still in the Storwize V7000 inventory as an eligible destination MDisk.
Click Next.


Export volume from striped to image mode (2 of 2)


• Select a target storage pool with the same extent size as the source storage pool.


Figure 8-33. Export volume from striped to image mode (2 of 2)

Select the ExportPool_1024 storage pool for the new image mode volume and click Finish.
Since the target storage pool has to have the same extent size as the source storage pool, this pool
was pre-defined for that purpose. The target storage pool may be an empty pool, in which case the
selected MDisk will be the target pool's only member at the end of the migration procedure. But the
target storage pool does not have to be empty; it can contain other image mode or striped MDisks.
If you have image and striped MDisks in the same pool, volumes created in this pool will only use
the striped MDisks, because MDisks that are in image mode already have an image mode volume
created on top of them and cannot be used as an extent source for other volumes.


CLI migratetoimage command


Figure 8-34. CLI migratetoimage command

The GUI generates the migratetoimage command to identify the destination MDisk to use for the
volume copy and the pool to contain the image mode pair.
The Running Task status indicates one migration task is now in progress.
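
A minimal sketch of the generated command, reusing this example's names (the volume and the
destination MDisk are both called APP3VOL here):

migratetoimage -vdisk APP3VOL -mdisk APP3VOL -mdiskgrp ExportPool_1024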


Volume Details: Extents being migrated

[Diagram: the four extents of volume copy 1 are migrated from the RAID10 pool to the single
MDisk APP3VOL in ExportPool_1024, extent size 1024 MB]

Figure 8-35. Volume Details: Extents being migrated

The migratetoimage command causes the data extents for the selected volume to be migrated
from its current MDisks to one destination MDisk.
The Member MDisks tab of the volume detail shows the redistribution snapshot of the volume’s
extents from the RAID10 storage pool to the one MDisk of the ExportPool_1024 pool.


MDisk image access mode


Figure 8-36. MDisk image access mode

At the completion of the export process, the ExportPool_1024 pool contains one image access
mode MDisk with no free space. All the extents of this MDisk have been assigned to the volume
being exported.


Volume copy migrated to image in new pool


• Host I/O operation is unchanged.


Figure 8-37. Volume copy migrated to image in new pool

The Volumes > Volumes by Host view confirms that the APP3VOL volume is still mapped to the
host. This volume now has the ExportPool_1024 pool as backing storage. Nothing has changed
from a host perspective; during the migration to image mode, I/O operations continue to
proceed as normal.


Volume Details: New pool with image mode

(Observe that the virtualization type is now image)


Figure 8-38. Volume Details: New pool with image mode

Right-click the selected volume to view the volume details. This panel confirms that the volume is
an image virtualization type with all of its extents from the APP3VOL MDisk.


Delete volume copy to exit Storwize V7000


• Stop host application I/O or shut down host.
• Return to the GUI and delete the volume.


Figure 8-39. Delete volume copy to exit Storwize V7000

Remove the volume and MDisk from Storwize V7000 control and present the former MDisk as a
DS3K LUN to the Windows host.
First, stop application activity. Either remove the drive letter in Windows to take the drive offline, or
shut down the Windows host.
From the Storwize V7000 GUI, select the host system and right-click the volume you want to delete.
Next, select the Delete option from the menu list. Since the volume copy is the image type, the
MDisk backing the image type volume is removed from the storage pool and becomes unmanaged.
Ensure the correct volume is selected. If the volume is a striped type, the data extents
typically span multiple MDisks, and those extents are freed. As with most if not all storage systems,
there is no volume undelete function.
The GUI requires confirmation that the correct volume has been selected for deletion. To delete
the host mapping (since this volume had been mapped to NAVYWIN1), verify the correct volume is
listed. Check the box Delete the volume even if it has host mappings or is used in
FlashCopy mappings or remote copy relationships and click the Delete button.
The GUI-generated rmvdisk command contains the -force parameter so that the host mapping
is deleted along with the volume.
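
A minimal sketch of that generated command:

rmvdisk -force APP3VOL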


Map LUN (unmanaged MDisk) directly to host


• Map the LUN from external storage to the host.

(The host HBA management interface can be used to verify the Fibre Channel
LUNs reported to this host)


Figure 8-40. Map LUN (unmanaged MDisk) directly to host

Since the storage system LUN is to be directly assigned to the host, you will need to update host
SAN zoning to enable access to the storage system. Also verify the appropriate device drivers have
been installed on the host.
From the storage system, the example shows how to reassign the unmanaged mode MDisk from
the Storwize V7000 to the Windows host. For this process, you need to ensure that the correct LUN
is chosen by verifying the LUN number of the unmanaged mode MDisk and that the correct storage
system is being updated.
You might need to reboot the Windows server, particularly if different drivers need to be installed.
Windows recognizes the volume label and attempts to reassign the same drive letter if it is
available.


Data migration topics


• Data migration overview
• Data migration options
ƒ Pool to pool migration
ƒ Export Wizard
ƒ Import Wizard
ƒ System migration
ƒ Volume mirroring


Figure 8-41. Data migration topics

This topic discusses the procedures to migrate existing data on external storage systems using the
IBM Spectrum Virtualize storage system migration wizard.


Storage system migration overview


• System migration
ƒ Wizard-based tool features easy-to-follow panes that guide you
through the entire migration process.
ƒ Identifies restrictions and prerequisites for using the storage migration
wizard.
ƒ Migrate data from existing storage systems to the Storwize V7000 by
placing the external FC-connected LUNs under the V7000 control.
í Ability to migrate multiple LUNs concurrently from external storage

[Diagram: a storage box on the SAN presents APPLUNs, which are brought under Storwize V7000 management.]


Figure 8-42. Storage system migration overview

IBM Spectrum Virtualize Storage System Migration is a wizard-based tool designed to simplify the
migration task. The wizard features easy-to-follow panes that guide you through the entire
migration process.
System Migration uses volume mirroring instead of the migratevdisk command to migrate existing
data into the virtualized environment. As with the Import Wizard, this step is optional.


System Migration Wizard tasking


• To start the Migration Wizard:
ƒ From Pool menu, select System Migration and click Start New Migration.

[Diagram: for a given LUN, a storage box on the SAN presents APPLUNs; an image mode MDisk in MigrationPool_8192 backs copy 0 (image type) of APPVOL, while copy 1 (striped type) resides in Other_Pool_1024 (any extent size); remaining LUNs surface as unmanaged mode MDisks. The large extent size of MigrationPool_8192 enables import of large capacity LUNs.]


Figure 8-43. System Migration Wizard tasking

To start the Migration Wizard, click Pools in the navigation tree and select the System Migration
menu from the quick navigation drop-down list. Then click the Start New Migration button.
The Migration Wizard generates step-by-step commands to:
a. Define a migration pool with extent size 8192. The large extent size enables the Migration
Wizard to support the import of extremely large capacity LUNs to Storwize V7000
management.
b. Create an image volume and MDisk pair in the migration pool. All existing data volumes
brought under Storwize V7000 management with the Migration Wizard initially have an
image type copy. The option to add a striped volume copy is then offered as part of the
import process. Subsequent writes to both volume copies are maintained by the Volume
Mirroring function of the Storwize V7000 until the image volume copy is deleted.
c. If a host has not been defined yet, provide additional guidance as part of the import
process to create the host object.
d. Map each image volume to a host object.
e. Add a mirrored copy to each image volume to mirror the volume data to an appropriate
storage pool.
f. Finalize: remove the image copy of each volume.
A sketch of the corresponding CLI sequence follows this list.
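Assuming hypothetical object names (MDisk mdisk5, host NAVYWIN1, target pool
Other_Pool_1024, and an illustrative volume name), the generated commands resemble:
    svctask mkmdiskgrp -name MigrationPool_8192 -ext 8192
    svctask mkvdisk -mdiskgrp MigrationPool_8192 -iogrp 0 -vtype image -mdisk mdisk5 -name controller0_0000000000000001
    svctask mkvdiskhostmap -host NAVYWIN1 controller0_0000000000000001
    svctask addvdiskcopy -mdiskgrp Other_Pool_1024 controller0_0000000000000001
    svctask rmvdiskcopy -copy 0 controller0_0000000000000001
The rmvdiskcopy command corresponds to the Finalize step and is issued only after the two copies
are fully synchronized.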


System migration verification restrictions and prerequisites


• Excluded environments are not supported within the Migration Wizard.


Figure 8-44. System migration verification restrictions and prerequisites

Before you begin migrating external storage, confirm that the restrictions and prerequisites are met.
The Storwize V7000 system supports migrating data from external storage systems to the system
using direct serial-attached SCSI (SAS), Fibre Channel, or Fibre Channel over Ethernet
connections. The list of excluded environments is not built into the guided Migration Wizard
procedure.
• Cable this system into the SAN of the external storage that you want to migrate.
Ensure that your system is cabled into the same storage area network (SAN) as the external
storage system that you are migrating. If you are using Fibre Channel, connect the Fibre
Channel cables to the Fibre Channel ports in both canisters of your system, and then to the
Fibre Channel network. If you are using Fibre Channel over Ethernet, connect Ethernet cables
to the 10 Gbps Ethernet ports.
• Change VMware ESX host settings, or do not run VMware ESX.
If you have VMware ESX server hosts, you must change settings on the VMware host so
copies of the volumes can be recognized by the system after the migration is completed. To
enable volume copies to be recognized by the system for VMware ESX hosts, you must
complete one of the following actions:
▪ Enable the EnableResignature setting.
▪ Disable the DisallowSnapshotLUN setting.
To learn more about these settings, consult the documentation for the VMware ESX host.


Preparing for data migration


• An IBM Storwize V7000 External Virtualization license is required per enclosure.


Figure 8-45. Preparing for data migration

The following are required to prepare external storage systems and IBM Storwize V7000 for data
migration.
• In order for the IBM Storwize V7000 to virtualize external storage, a per-enclosure external
virtualization license is required. You can temporarily set the license without any charge only
during the migration process. Configuring the external license setting prevents messages from
being sent that indicate that you are in violation of the license agreement. When the migration is
complete, the external virtualization license must be reset to its original limit.
• I/O operations to the LUNs must be stopped and changes made to the mapping of the storage
system LUNs and to the SAN fabric zoning. The LUNs must then be presented to the Storwize
V7000 and not to the hosts.
• The hosts must have the existing storage system multipath device drivers removed, and then be
configured for Storwize V7000 attachment. This might require further zoning changes for
host-to-V7000 SAN connections.
• Storwize V7000 discovers the external LUNs as unmanaged MDisks.


Preparing host and SAN environment


• Examine host data content before migration.
• Use Disk Management to examine current disk status on the host server.
• Use Device Manager to confirm the storage device entries (for example,
1726-4xx is the product identifier for the DS3400).


Figure 8-46. Preparing host and SAN environment

In order to ensure that data is not corrupted during the migration process, all I/O operations on the
host side must be stopped. In addition, SAN zoning needs to be modified: remove the zoning
between the host and the old external storage system, and zone the old external storage system to
the Storwize V7000.
Before migrating storage, the administrator should record the hosts and their WWPNs for each
volume that is being migrated, and the SCSI LUN of each volume when mapped to this system.


Remap host server’s LUNs to Storwize V7000


• Verify that LUNs have been reassigned to the Storwize V7000 host
group.
• Logical drive IDs: LUNs should match the worldwide unique LUN
names reported by the QLogic HBA management interface.


Figure 8-47. Remap host server’s LUNs to Storwize V7000

You can use the external storage DS Storage Manager Client interface to verify the mapping of host LUNs
to the Storwize V7000 host group. This remap of LUNs to the Storwize V7000 host group can be
performed either prior to invoking the Migration Wizard or before the next step in the Migration
Wizard.
The LUN number assigned to the logical drives can be any LUN number. In this example, by default
the DS3400 storage unit uses the next available LUN numbers for the target host or host group.
The LUN number is assigned as LUN 1 for APP1DB. The logical drive ID of the LUN should match
the worldwide unique LUN names reported by the QLogic HBA management interface.


System migration MDisk (LUN) discovery


• System discovers MDisks (LUNs) that have been presented to the
Storwize V7000
ƒ Right-click on each MDisk to rename them to match LUN on the external
storage system.

Right-click to rename MDisks to correlate to the LUNs on the external storage system


Figure 8-48. System migration MDisk (LUN) discovery

The Storwize V7000 management GUI issues the svctask detectmdisk command to scan the
environment and detect the available LUNs that have been mapped to the Storwize V7000 host
group. The lsdiscoverystatus command shows when the discovery completes; the detected LUNs
appear as unmanaged MDisks to be assigned to the V7000. If the MDisks were not renamed during
the GUI external system discovery, you can right-click each MDisk to rename it to correspond to the
LUN on the external storage system.
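A minimal CLI sketch of this discovery and renaming step (the MDisk ID and LUN name are
hypothetical):
    svctask detectmdisk
    svcinfo lsmdisk -filtervalue mode=unmanaged
    svctask chmdisk -name APP1DB mdisk5
The lsmdisk filter lists only the unmanaged MDisks; chmdisk renames one of them to match its
source LUN.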


Select MDisks to import


• Migration Wizard supports concurrently importing multiple unmanaged
MDisks.

MDisks were previously renamed to correlate to the external storage LUN name
Green line tracks the System Migration process
Right-click on an MDisk and select Properties to verify MDisk UID to LUN UID


Figure 8-49. Select MDisks to import

The Migration Wizard supports concurrently importing multiple unmanaged MDisks. The LUNs are
presented as unmanaged mode MDisks. The LUN numbers range from 0 to 255 and are
surfaced to the Storwize V7000 as a 64-bit number with the low-order byte containing the external
storage assigned LUN number in hexadecimal format. The MDisk properties provide additional
confirmation that includes the storage system name and UID.


Create image type volume for each MDisk


• A MigrationPool_8192 is created to store MDisks.
• The mkvdisk command is generated for each unmanaged MDisk selected to
create the one-to-one image volume pair.
ƒ Exact image of the LUN being assigned from the external storage system with unchanged data.
[Diagram: three image volumes in MigrationPool_8192, each backed one-to-one by an MDisk (mdisk1, mdisk2, mdisk3).]


Figure 8-50. Create image type volume for each MDisk

For each of the selected MDisks, an svctask mkvdisk command is generated to create the
one-to-one volume pair with a virtualization type of image. Image mode means that the volume
is an exact image of the LUN on the external storage system, with its data completely
unchanged. Therefore, the Storwize V7000 is simply presenting an active image of the external
storage LUN.
The svctask mkmdiskgrp command is used to create a MigrationPool whose extent size is 8192
MB. Using the largest extent size possible for this pool enables MDisk addressability when
importing extremely large capacity LUNs.
The unmanaged MDisks are moved into the migration pool with an access mode of image, and a
corresponding image type volume is created with all of its extents pointing to each MDisk. The
name assigned to each volume follows the format of the storage system name concatenated with
the storage system assigned LUN number for the MDisk.
As with all Storwize V7000 objects, an object ID is assigned to each newly created volume. As a
preferred practice, map the volume to the host with the same SCSI ID before the migration.
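Continuing the hypothetical names from the wizard overview, the generated command for one
MDisk resembles this sketch; in image mode, if no -size is given, the volume inherits the capacity of
the backing MDisk:
    svctask mkvdisk -mdiskgrp MigrationPool_8192 -iogrp 0 -vtype image -mdisk mdisk5 -name controller0_0000000000000001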


Verify host mapping


• If you are unsure of the host configuration, right-click on a host and use
the Properties option to view host information.
ƒ Number of Fibre Channel ports defined for the host
ƒ I/O groups in which the host is entitled to access volumes
• Host objects can be defined at this point or after the migration
completes.


Figure 8-51. Verify host mapping

Before you proceed to map image volumes to a host, verify that the intended host system has the
supported drivers installed and is properly zoned within the Storwize V7000 SAN fabric.
If a host object has not yet been defined to the Storwize V7000, click the Add Host option.
Configuring host objects using the System Migration Wizard is optional, as it can be performed after
volumes have been migrated to a specified pool.


Review and rename image type volumes


• Prior to mapping the image volumes to the host, change the default
name to correspond to the LUNs on the host server.



Figure 8-52. Review and rename image type volumes

The System Migration Map Volumes to Hosts (optional) pane presents the image volumes under a
default name that contains the name of the external storage system along with the corresponding
MDisk name. All columns within the wizard can be adjusted for viewing purposes, for example, to
view the volume object IDs.


Map image volumes to host


• Verify image volumes are mapped to selected host.
• GUI generates the mkvdiskhostmap command for each volume.
ƒ It is possible to map a different host to each image volume.

Host discovery can be performed. Reboot of the host server might be required.


Figure 8-53. Map image volumes to host

From this pane, the selected image volumes can now be mapped to the desired host. This task can
be completed using the Map to Host option, or by selecting Map to Host from the Actions menu.
With today's SAN-aware operating systems and applications, a change in the SCSI ID (LUN
number) of a LUN presented to the host is not usually an issue. Windows behavior is consistent: it
is not an issue for a disk to be removed from the system and then represented with a different
ID/LUN number. Windows typically reassigns the same drive letter if it is still available.
Once the image volumes are mapped to the host object, host device discovery can be performed. It
might be appropriate to reboot the server as part of the host device discovery effort.
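A sketch of the generated mapping command, using the hypothetical host NAVYWIN1; the optional
-scsi parameter can preserve the SCSI ID the host saw before the migration:
    svctask mkvdiskhostmap -host NAVYWIN1 -scsi 1 controller0_0000000000000001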


Select pool and create mirrored volume copies


• Select the storage pool to migrate image volumes from image to
striped.
• GUI generates the addvdiskcopy command for each volume.
ƒ Another copy or set of extents for the volume is being allocated for each volume
in the selected storage pool.

[Diagram: image volume copy 0 extents reside in MigrationPool_8192; striped volume copy 1 extents reside in the selected storage pool.]


Figure 8-54. Select pool and create mirrored volume copies

Migrating image volumes to a selected pool is optional. If you want to migrate these volumes to the
virtualized environment (virtualization type of striped), select a target pool.
Unlike the Import Wizard, the Migration Wizard uses the Storwize V7000 Volume Mirroring function
(instead of migratevdisk) to implement the migration to the striped virtualization type. The GUI
generates one addvdiskcopy command to create a second volume copy (copy 1) for each volume.
Since Volume Mirroring is used, the target pool extent size does not need to match the migration
pool extent size.
If no target pool is selected for this step, the volumes and their corresponding MDisks are left as
image mode pairs. The System Migration Wizard can be invoked at a later point in time to complete
the migration to the virtualized environment.
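A sketch of the generated command, with the same hypothetical names; synchronization progress
can be checked with the lsvdisksyncprogress command:
    svctask addvdiskcopy -mdiskgrp Other_Pool_1024 controller0_0000000000000001
    svcinfo lsvdisksyncprogress controller0_0000000000000001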


Data migration to virtualize has begun


• The System Migration view displays the volumes and the progress of each
volume copy synchronization task.
ƒ Data is now being copied in the background between the external (old)
storage system to the (new) Storwize V7000 environment for
management.


Figure 8-55. Data migration to virtualize has begun

The GUI starts volume synchronization on each volume copy. This part of the System Migration
Wizard is complete. However, the end of the storage migration wizard is not the end of the data
migration process. Click the Finish button.


System migration complete: Managed volumes

• Host has continuous access to volume data while the migration occurs in the background.
[Diagram: for a given LUN, APPVOL has copy 0 (image type, backed by an image mode MDisk in MigrationPool_8192) and copy 1 (striped type, on managed mode MDisks in Other_Pool_1024, any extent size); remaining LUNs surface as unmanaged mode MDisks.]


Figure 8-56. System migration complete: Managed volumes

Since the Storwize V7000 environment is virtualized and the volumes were successfully mapped to
the host, the host has continuous access to the volume data while the migration occurs in the
background. The application can be restarted, and the host has no awareness of the migration
process.


Finalize system migration: Delete image volumes


• The Finalize task triggers a subsequent Storwize V7000 SAN device
discovery action that deletes these MDisk entries from the Storwize
V7000 inventory.


Figure 8-57. Finalize system migration: Delete image volumes

After Volume Mirroring synchronization has reached 100%, you can finalize the migration process.
The image copy (copy 0) is no longer needed because the data has been migrated into the
Storwize V7000 virtualized environment.
From the System Migration pane, select the Finalize option. A subsequent Storwize V7000 SAN
device discovery action deletes each copy 0 image volume copy from the Storwize V7000
inventory.


Delete MigrationPool_8192
• MigrationPool_8192 can either be deleted or kept for subsequent
imports.
[Diagram: the old storage system and its APPLUNs can now be removed from the SAN fabric; the volume can be renamed to correlate to the LUN data.]


Figure 8-58. Delete MigrationPool_8192

When the finalization completes, the image type volume copies are deleted, and their
corresponding MDisks are automatically removed from the storage pool. You can unzone and
remove the older storage system from the IBM Storwize V7000 SAN fabric. The empty
MigrationPool_8192 can either be deleted or kept for subsequent imports. The data migration to the
IBM Storwize V7000 is done.
Additional steps need to be performed to unassign the LUNs in the storage system from the
Storwize V7000 cluster and then remove the external storage from the Storwize V7000 system SAN
fabric.


Data migration topics


• Data migration overview
• Data migration options
ƒ Pool to pool migration
ƒ Import Wizard
ƒ Export Wizard
ƒ System migration
ƒ Volume mirroring


Figure 8-59. Data migration topics

This topic discusses how volume mirroring can be used to migrate data from one pool to another
pool.


Volume mirror concept

[Diagram: one volume presented to the host, with two copies, Copy0 and Copy1.]

• Stores two copies of a volume, usually on separate disk systems


ƒ Maintains both copies in sync and writes to both copies
• Volume mirroring: A simple RAID 1-type function
• Intended to protect critical data against failure of a disk system or disk
array
ƒ A local high availability function


Figure 8-60. Volume mirror concept

Volume Mirroring is a function where the Spectrum Virtualize software stores two copies of a volume
and maintains those two copies in synchronization. Volume mirroring is a simple RAID 1-type
function that allows a volume to remain online even when the storage pool backing it becomes
inaccessible.
Volume mirroring is designed to protect the volume from storage infrastructure failures that might
impact availability of critical data or applications, by seamlessly mirroring between storage pools.
Accordingly, Volume Mirroring is a local high availability function and is not intended to be used as a
disaster recovery function.


Volume mirroring offers better performance


• Significant performance improvement
• Mirroring layer is located above the lower cache layer
• Each copy has its own cache
ƒ Destaging of the cache can now be done independently for each copy
[Diagram of the I/O stack, from host I/Os at the top to I/Os to storage controllers at the bottom: SCSI Target, Forwarding, Replication, Upper Cache, FlashCopy, Mirroring, Thin Provisioning, Lower Cache, Virtualization, Forwarding, RAID, Forwarding, SCSI Initiator.]



Figure 8-61. Volume mirroring offers better performance

Starting with V7.3 and the introduction of the new cache architecture, mirrored volume performance
has been significantly improved. The lower cache is now beneath the volume mirroring layer, which
means each copy has its own cache. This approach helps when the copies are of different types, for
example generic and compressed, because each copy uses its own independent cache and
performs its own read prefetch. Destaging of the cache can now be done independently for each
copy, so one copy does not affect the performance of the second copy.
Also, because the Storwize destage algorithm is MDisk aware, it can tune or adapt the destaging
process, depending on MDisk type and utilization, for each copy independently.


Volume mirroring nondisruptive


• Improves availability of volumes by
protecting them from a single storage
system failure
ƒ Volume copies are automatically synchronized

• Provides concurrent maintenance of a


storage system that does not natively
support concurrent maintenance
ƒ Copies are automatically resynchronized
after repair
• Provides an alternative method of data
migration with better availability
characteristics
• Use volume mirroring to mirror the host
data between storage systems at the two
independent storage systems sites


Figure 8-62. Volume mirroring nondisruptive

You can use mirrored volumes for the following reasons:


• Improving availability of volumes by protecting them from a single storage system failure.
▪ If a failing disk system is returned to service, Storwize V7000 management GUI
automatically resynchronizes the two copies.
▪ Volume mirroring is designed to protect the volume from storage infrastructure failures by
seamless mirroring between storage pools that might impact availability of critical data or
applications.
• Providing concurrent maintenance of a storage system that does not natively support
concurrent maintenance.
▪ A storage controller might fail or be taken offline for maintenance and not affect application
access.
• Providing an alternative method of data migration with better availability characteristics.
▪ While a volume is being migrated using the data migration feature, it is vulnerable to failures
on both the source and target storage pool. Volume mirroring provides an alternative
because you can start with a non-mirrored volume in the source storage pool, and then add
a copy to that volume in the destination storage pool. When the volume is synchronized,

you can delete the original copy that is in the source storage pool. During the
synchronization process, the volume remains available even if there is a problem with the
destination storage pool.
• In addition, you can use volume mirroring to mirror the host data between storage systems at
the two independent storage systems (primary and secondary) sites.


Volume mirroring flexibility


• Volume copy can be any type of volume: striped (default), image, sequential, and thin-provisioned or fully allocated.
• Volume copy can be added to an existing volume with a different virtualization policy.
[Diagram: a striped volume's extents (Extent 1a through Extent 3c), and a mirrored volume whose Copy0 and Copy1 each hold their own set of extents.]

Figure 8-63. Volume mirroring flexibility

The ability to create a volume copy affords additional management flexibility. When a mirrored
volume is created, both copies use the same virtualization policy and can be created as striped,
sequential, or image volumes. Volume mirroring also offers non-disruptive conversions between
fully allocated volumes and thin-provisioned volumes.
A volume copy can also be added to an existing volume. In this case, the two copies do not have to
share the same virtualization policy. When a volume copy is added, the Spectrum Virtualize
software automatically synchronizes the new copy so that it contains the same data as the existing
copy.


Volume copies on different storage


• Two copies of extents; the copy is created at volume creation or after creation.
• Host sees ONE 5 GB volume (LBA0 – LBAn).
• Each copy has its own: storage pool, virtualization type, fully allocated or thin.
[Diagram: Copy 0 of the 5 GB volume occupies extents 0-9 in Pool1 (extent size 512 MB); Copy 1 occupies extents 0-4 in Pool2 (extent size 1024 MB).]

Figure 8-64. Volume copies on different storage

A volume can be migrated from one storage pool to another and acquire a different extent size. The
original volume copy can either be deleted or you can split the volume into two separate volumes –
breaking the synchronization. The process of moving any volume between storage pools is
non-disruptive to host access. This option is a quicker version of the “Volume Mirroring and Split
into New Volume” option. You might use this option if you want to move volumes in a single step or
you do not have a volume mirror copy already.


Volume mirror I/O processing


• Volume Copy0 (primary copy) handles all reads and writes.
ƒ Writes are also directed to Volume Copy1.
ƒ Copy 1 can handle reads and writes if Copy 0 is unavailable.
ƒ Primary owner can be changed at any time.
• Enhancements
ƒ Support for different tiers in the mirror.
í Does not slow to slowest tier on writes.
ƒ Full stride write for CDM no matter what grain size.
[Diagram: read/write requests arrive at the volume; reads go only to the primary copy, writes go to both copies, whose extents reside in Pool1 and Pool2.]

Figure 8-65. Volume mirror I/O processing

By default, volume copy 0 is assigned as the primary copy of the volume. From an I/O processing
point of view, under normal conditions reads and writes always go through the primary copy. Writes
are also sent to volume copy 1 so that synchronization is maintained between the two volume
copies. The location of the primary copy can also be changed by the user, either to account for load
balancing or for different performance characteristics of the storage backing each copy.
If the primary copy is unavailable, for example because volume copy 0's pool went offline when its
storage system was taken offline, the volume remains accessible to assigned servers. Reads and
writes are handled by volume copy 1. The Storwize V7000 tracks the changed blocks of volume
copy 1 and resynchronizes these blocks with volume copy 0 when it becomes available. Reads and
writes then revert to volume copy 0. It is also possible to set volume copy 1 as the primary copy if
desired.


Create a (simple) volume copy


• Right-click on an existing volume and select Add Volume Copy.
ƒ Identifies the location where the volume copy extents are contained: in only one storage pool or in two different storage pools.
í Uses the same parameter selections as other volume creation.
ƒ Summary
í Provides a quick view of volume details before creation.
í Volumes in the mirrored pair are created equal in capacity.
[Screen callouts: extents locations; sync rate set at 80; the addvdiskcopy command adds a copy to an existing volume.]


Figure 8-66. Create a (simple) volume copy

One of the simplest ways to create a volume copy is to right-click a particular volume and select
Add Volume Copy from the menu. This task creates the mirrored volume with two copies of its
extents. This procedure allows you to place the mirrored volume in a single pool, or to specify a
primary and a secondary pool to migrate data between two storage pools.
The Summary statement calculates the real and virtual capacity values of the volume. The virtual
capacity is the size presented to hosts and to other Copy Services such as FlashCopy and
Metro/Global Mirror.
The addvdiskcopy command adds a copy to an existing volume, which changes a non-mirrored
volume into a mirrored volume. Use the -copies parameter to specify the number of copies to add
to the volume; this is currently limited to the default value of 1 copy. Use the -mdiskgrp parameter
to specify the managed disk group that provides storage for the copy; the lsmdiskgrp CLI
command lists the available managed disk groups and the amount of available storage in each
group.
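A minimal sketch, assuming a volume named APP1VOL and a target pool named Pool2:
    svcinfo lsmdiskgrp
    svctask addvdiskcopy -mdiskgrp Pool2 APP1VOL
The lsmdiskgrp output helps confirm that the target pool has sufficient available capacity before the
copy is added.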


Create a volume mirror using Mirrored or Custom presets


• Select the Mirrored or Custom preset
ƒ Mirrored preset formats the volume and sets the volume copy sync rate at 50
ƒ Custom preset allows you to disable volume format and change the default sync rate
ƒ Complete volume details using the same parameter selections as other volume creation
ƒ Volume Location
í Specify storage pool(s) location
[Screen callouts: extents locations; the Mirrored preset default sync rate is 50.]

Figure 8-67. Create a volume mirror using Mirrored or Custom presets

You can also create mirrored volumes using the GUI Create Volumes Mirrored and Custom preset
options. The Mirrored preset creates mirrored volumes with predefined parameters such as volume
format and the default sync rate.
The Custom preset allows you to modify specific parameters, such as the sync rate, which specifies
the rate at which the volume copies resynchronize after a loss of synchronization.


Volume copy created: Sync the two copies

Copy 0* = reads/writes
Copy 1 = writes
Copy 1 is assigned to a different pool (TeamB50_Grp2) or a different storage system pool


Figure 8-68. Volume copy created: Sync the two copies

The new mirrored volume entry displays two copies. By default, the asterisk associated with volume
copy 0 identifies the primary copy. This copy is used by the Storwize V7000 storage system for
reads and writes. The addvdiskcopy request added copy 1 for this volume. Copy 1 is used for
writes only.
The two volume copies need to be synchronized. Spectrum Virtualize Volume Mirroring
automatically copies the data of copy 0 to copy 1, while supporting concurrent application reads
and writes.
The Running Tasks status bubble indicates that one volume synchronization task is running in the
background. You can click within the display to view the task progress.


Writes performed on both copies


• Writes forked to both copies of the volume
ƒ With 7.3.0 new cache, these writes complete almost instantly
ƒ Still support the latency and redundancy modes
• Reads always from the primary copy
ƒ Primary copy can be switched dynamically
• Balancing load
ƒ Large sets of mirrored volumes can be balanced by alternating the pool that provides
the primary copy
ƒ Copies no longer need to be comparable in latency due to new cache design
• Common use for migration between pools and volume types
ƒ Can start/stop – unlike migrate tasks
ƒ Thick to thin, or compressed conversion
• Synchronization issues 256 KB reads and writes (grains)


Figure 8-69. Writes performed on both copies

When a server writes to a mirrored volume, the system writes the data to both copies. If the primary
volume copy is available and synchronized, any reads from the volume are directed to it. However,
if the primary copy is unavailable, the system uses the secondary copy for reads. Volume Mirroring
supports two possible values for the I/O time-out configuration (attribute
mirror_write_priority):
• Latency (default value): short time-out prioritizing low host latency. This option indicates a copy
that is slow to respond to a write I/O goes out of sync if the other copy successfully writes the
data.
• Redundancy: long time-out prioritizing redundancy. This option indicates a copy that is slow to
respond to a write I/O may use the full Error Recovery Procedure (ERP) time. The response to
the I/O is delayed until it completes to keep the copy in sync if possible.
Volume Mirroring ceases to use the slow copy for a period of between 4 and 6 minutes, and
subsequent I/O data is not affected by the slow copy. Synchronization is suspended during this
period. After the copy suspension completes, Volume Mirroring resumes, allowing I/O data and
synchronization operations to the slow copy, which typically completes the synchronization
shortly afterward.


Mirrored volume properties


• Volumes are identical and assigned to different storage pools.
ƒ Depending on the option in which you create the volume mirroring pair, the sync rate
might differ.
ƒ If you created the mirrored pair using the Mirrored preset option, you can issue the
chvdisk -syncrate command using CLI to modify the default sync rate to
increase background copy rate.

Background copy sync rate:


50 - 2 MBps
60 - 4 MBps
70 - 8 MBps
80 - 16 MBps
90 - 32 MBps
100 - 64 MBps


Figure 8-70. Mirrored volume properties

The volume property details confirm that the two volume copies are identical but assigned to
different storage pools. The capacity bar for the volume copies indicates that both copies are fully
allocated volumes with writes performed on both copies. The synchronization or background copy
rate defaults to 50 (depending on the method used to create the volume mirror), which corresponds
to 2 MBps. You can change the synchronization rate to one of the specified rates to increase the
background copy rate. You can issue a chvdisk -syncrate command to change the
synchronization rate using the CLI.
The background synchronization rate can be monitored from the Monitoring > Performance view.
The default synchronization rate is typically too low for Flash drive mirrored volumes. Instead, set
the synchronization rate to 80 or above.
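A minimal sketch, assuming a mirrored volume named APP1VOL; per the table above, a sync rate
of 80 corresponds to a 16 MBps background copy rate:
    svctask chvdisk -syncrate 80 APP1VOL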


Volume mirroring high availability


• Volume mirroring processing is independent of the storage pool
extent size.
• Two sets of extents created.
ƒ Volume mirror copies are still visible from both storage pools.

Migrate data between two storage pools of different extent sizes


Figure 8-71. Volume mirroring high availability

Volume Mirroring processing is independent of the storage pool extent size. When a volume copy is
created, there is one set of extents for copy 0 and a second set of extents for the secondary volume
copy (copy 1). The two sets of extents, or volume copies, can reside in the same or different
storage pools.
Using volume mirroring instead of volume migration is beneficial because, with volume mirroring,
the storage pools do not need to have the same extent size, as is the case with volume migration.
Volume mirroring also eliminates the impact to volume availability if one or more MDisks, or the
entire storage pool, fails. If one of the mirrored volume copies becomes unavailable, updates to the
volume are logged by the Storwize V7000, allowing for the resynchronization of the volume copies
when the mirror is reestablished. The resynchronization between both copies is incremental and is
started by the Storwize V7000 automatically. Therefore, volume mirroring provides higher
availability to applications at the local site and reduces or minimizes the requirement to implement
host-based mirroring solutions.


Change volume primary copy (read/write)


• Right-click on the Copy 1 volume
and select Make Primary.
ƒ GUI issues the chvdisk -primary
command designating Copy 1 as the
primary volume.
• Copy 1* now has been given full
reads/writes of the mirrored pair.
• Copy 0 now acts as the secondary
with writes only.
• Change is reflected in the storage pools of both volume copies.


Figure 8-72. Change volume primary copy (read/write)

The primary copy is used by the Storwize V7000 for both reads and writes. You can change volume
copy 1 to be the primary copy by right-clicking its entry and selecting Make Primary from the menu
list.
The GUI generates the chvdisk -primary command to designate volume copy 1 as the primary
copy for the selected volume ID.
A use case for designating volume copy 1 as the primary copy is the migration of a volume to a new
storage system. For a test period, it might be desirable to have both the read and write I/Os directed
at the new storage system while still maintaining a copy in the storage system scheduled for
removal.
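A minimal sketch, assuming a mirrored volume named APP1VOL whose copies have IDs 0 and 1:
    svctask chvdisk -primary 1 APP1VOL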


Delete volume mirror copy


If the volume copy being deleted is the primary, it is not necessary
to switch the primary designation prior to deleting the copy


Figure 8-73. Delete volume mirror copy

You can convert a mirrored volume into a non-mirrored volume by deleting one copy or by splitting
one copy to create a new non-mirrored volume. During the deletion process for one of the volume
copies, the management GUI issues an rmvdiskcopy command followed by the -copy number (in
this example, copy 0). Once the process is complete, only volume copy 1 of the volume remains. If
volume copy 1 was a thin-provisioned volume, it is automatically converted to a fully allocated copy.
The volume can now be managed independently by Easy Tier, based on the activity associated
with the extents of the individual volume copy.
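A minimal sketch, deleting copy 0 of a hypothetical volume named APP1VOL:
    svctask rmvdiskcopy -copy 0 APP1VOL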


Split into new volume


• Volume copies can be split into two single
volumes.
ƒ Right-click on one of the mirrored copies
and select Split into New Volume.
ƒ Create a new volume name.
ƒ GUI issues the splitvdisk -copy command for
the selected copy.
ƒ Map volume to a host.


Figure 8-74. Split into new volume

The two copies created by volume mirroring can be split apart, and either copy can be retained to
support the active volume. The remaining copy is available as a static version of the data. This
capability can be used to migrate a volume between managed disk groups with different extent
sizes.
Volume mirroring does not create a second volume before you split copies. Volume mirroring adds
a second copy of the data under the same volume so you end up having one volume presented to
the host with two copies of data connected to this volume. Only splitting copies creates another
volume and then both volumes have only one copy of the data.
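A minimal sketch, splitting copy 1 of a hypothetical volume named APP1VOL into a new volume:
    svctask splitvdisk -copy 1 -name APP1VOL_SPLIT APP1VOL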


Transparent to host and user applications


• Mirrored volume is allocated as a single volume
ƒ No change from the host perspective
ƒ Host only sees a single volume allocation
ƒ I/O operations proceed as normal
[Diagram: one volume presented to the host, backed by Copy0 and Copy1 with identical extents.]

Figure 8-75. Transparent to host and user applications

Although the two volume copies are identical, they appear to the host as one volume. If one of the
mirrored volume copies is temporarily unavailable, for example, because the storage system that
provides the storage pool is unavailable, the volume remains accessible to servers. The system
remembers which areas of the volume are written and resynchronizes these areas when both
copies are available. The secondary can service read I/O when the primary is offline without user
intervention. All volume migration activities occur within the Storwize V7000 and are totally
transparent to attaching servers and user applications.


Volume mirroring protection


• Best practice: Set up quorum disks where multiple quorum candidate disks are allocated on different storage systems.
• Volume mirroring maintains some state data on the quorum disks.
• Synchronization status for volume mirroring is recorded on the Storwize V7000 quorum disk.
ƒ If a quorum disk is not accessible and volume mirroring is unable to update state data:
í A mirrored volume can be taken offline to maintain data integrity.
[Diagram: a mirrored volume (Copy0 and Copy1) with state data recorded on a quorum disk.]

Figure 8-76. Volume mirroring protection

To protect against mirrored volumes being taken offline, and to ensure the high availability of the
system, follow the guidelines for setting up quorum disks where multiple quorum candidate disks
are allocated on different storage systems.
The Storwize V7000 system maintains quorum disks, which contain a reserved area that is used
exclusively for system management to record a backup of system configuration data to be used in
the event of a disaster. Volume mirroring maintains some state data on the quorum disks. If a
quorum disk is not accessible and volume mirroring is unable to update the state information, a
mirrored volume might need to be taken offline to maintain data integrity.
Mirrored volumes can be taken offline if there is no quorum disk available. This behavior occurs
because synchronization status for mirrored volumes is recorded on the quorum disk.


Volume mirroring summary


Listed here are a few of the uses and characteristics of volume
mirroring:
• Creating a mirrored volume:
ƒ Maximum number of copies is two.
ƒ Both copies will be created with the same virtualization policy.
ƒ Both copies can be located/migrated to different storage pools.
• Add a volume copy to an existing volume.
• Remove a volume copy from a mirrored volume.
• Split a volume copy from a mirrored volume and create a new
volume with the split copy.

• Expand or shrink a volume.


• Delete a volume.

Figure 8-77. Volume mirroring summary

• When creating a mirrored volume, you can have a maximum of two copies. Both
copies will be created with the same virtualization policy. The first Storage Pool specified will
contain the primary copy.
▪ To have a volume mirrored using different policies, you need to add a volume copy with a
different policy to a volume that has only one copy.
▪ Both copies can be located in different Storage Pools.
▪ It is not possible to create a volume with two copies when specifying a set of MDisks.
• You can add a volume copy to an existing volume. Each volume copy can have a different
space allocation policy. However, the two existing volumes with one copy each cannot be
merged into a single mirrored volume with two copies.
• You can remove a volume copy from a mirrored volume; only one copy remains.
• You can split a volume copy from a mirrored volume and create a new volume with the split
copy. This function can only be performed when the volume copies are synchronized;
otherwise, use the -force parameter.
▪ Volume copies cannot be recombined after they have been split.
▪ Adding and splitting in one workflow enables migrations that are not currently allowed.

▪ The split volume copy can be used as a means for creating a point-in-time copy (clone).
• You can expand or shrink both of the volume copies at once.
▪ All volume copies always have the same size.
▪ All copies must be synchronized before expanding or shrinking them.
• When a volume gets deleted, all copies get deleted.


Benefit of the add/remove MDisks options


Goal: Replace the DS3K with the Storwize V7000 without affecting host access and without invoking migratevdisk for each individual volume.
[Diagram: volumes draw extents from an external storage pool whose MDisks (IDs 0-3) come from the DS3K (DS3KNAVY4 through DS3KNAVY7); Storwize V7000 MDisks (IDs 12-15) are added to the pool, after which the DS3K MDisks can be removed.]



Figure 8-78. Benefit of the add/remove MDisks options

This storage system replacement approach is the one that most likely takes the shortest elapsed
time. It might be the most appropriate for time-sensitive situations such as impending lease
terminations of an old storage system, where lease extensions might be too costly. Two steps are
involved (a CLI sketch follows the list):
• The add MDisks step: After the LUNs from the new storage system have been discovered as
unmanaged MDisks, they are added to the existing pool that represents the system being
replaced. The storage pool temporarily contains MDisks from both storage systems.
• The remove MDisks step: Remove all the MDisks representing the departing storage system at
the same time. The removal causes the allocated extents of all volumes in the pool to be
migrated from these MDisks to the newly added MDisks. The removed MDisks become
unmanaged. The storage system can then be removed from Storwize V7000 management.
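A minimal CLI sketch of the two steps, with hypothetical pool and MDisk names:
    svctask addmdisk -mdisk mdisk12:mdisk13:mdisk14:mdisk15 DS3K_Pool
    svctask rmmdisk -mdisk mdisk0:mdisk1:mdisk2:mdisk3 -force DS3K_Pool
The -force parameter on rmmdisk migrates the allocated extents onto the remaining MDisks in the
pool before the removal completes.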


Data migration: Review summary

Volume: Pool to pool migration (tier to tier migration)
New volume (existing data): Import Wizard (create image volume, migrate to striped)
Volume: Export to image mode (migrate striped MDisk to image MDisk for export)
New volume (existing data): Migration Wizard (import multiple volumes; map to host)
Volume copy: Volume mirroring; replace one storage system with another
Extents of volumes: Remove MDisks from pool, or extent redistribution within a pool
…while the application is blissfully unaware.

Figure 8-79. Data migration: Review summary

You should now be aware that the only time that data migration is disruptive to applications is when
a storage system LUN is moved to or from Storwize V7000 control. In all other cases, Storwize
V7000 managed data movement is totally transparent. Applications proceed blissfully unaware of
changes being made in the storage infrastructure.


Keywords
• Non-virtualized image type
• Virtualized striped type
• Multipathing
• Zoning
• Striped mode
• Image mode
• Sequential mode
• Volume copy
• Destination pool
• Extent size
• Import Wizard
• System migration
• MDisks
• Volume


Figure 8-80. Keywords

Listed are keywords that were used in this unit.


Review questions (1 of 2)
1. The three virtualization types for volumes are:

2. True or False: When using volume mirroring to migrate a


volume from one pool to another, the extent size of the two
pools must be identical.

3. True or False: Migrating a volume from image virtualization


type to striped or from striped back to image is completely
transparent to host application I/Os.


Figure 8-81. Review questions (1 of 2)

Write your answers here:


Review answers (1 of 2)
1. The three virtualization types for volumes are:
The answers are striped, sequential, and image.
2. True or False: When using volume mirroring to migrate a volume from one pool to another, the
extent size of the two pools must be identical.
The answer is false.
3. True or False: Migrating a volume from image virtualization type to striped or from striped back
to image is completely transparent to host application I/Os.
The answer is true.


Review questions (2 of 2)
4. Which of the following is not performed by the Import Wizard when a volume from an external
storage system is being migrated to the Storwize V7000:
a. Create a migration pool with the proper extent size
b. Unzone and unmap the volume from the external storage system
c. Create an image type volume to point to storage on the MDisk being imported
d. Migrate the volume from image to striped type
5. True or False: Once a volume is under Storwize V7000 management, that data can no longer
be exported to another storage system.
6. True or False: To remove an external storage MDisk from Storwize V7000, its access mode
should be unmanaged.

Figure 8-82. Review questions (2 of 2)

Write your answers here:


Review answers (2 of 2)
4. Which of the following is not performed by the Import Wizard when a volume from an external
storage system is being migrated to the Storwize V7000:
a. Create a migration pool with the proper extent size
b. Unzone and unmap the volume from the external storage system
c. Create an image type volume to point to storage on the MDisk being imported
d. Migrate the volume from image to striped type
The answer is unzone and unmap the volume from the external storage system.
5. True or False: Once a volume is under Storwize V7000 management, that data can no longer
be exported to another storage system.
The answer is false.
6. True or False: To remove an external storage MDisk from Storwize V7000, its access mode
should be unmanaged.
The answer is true.


Unit summary
• Analyze data migration options available with Storwize V7000
• Implement data migration from one storage pool to another
• Implement data migration of existing data to Storwize V7000 managed
storage using the Import and System Migration Wizards
• Implement the Export migration from a striped type volume to image
type to remove it from Storwize V7000 management
• Differentiate between a volume migration and volume mirroring


Figure 8-83. Unit summary

Unit 9. Spectrum Virtualize Copy Services: FlashCopy
Estimated time
00:45

Overview
Spectrum Virtualize provides data replication services for mission-critical data using FlashCopy
(point-in-time copy).
This unit examines the functions provided by FlashCopy and illustrates their usage with example
scenarios.

How you will check your progress


• Checkpoint questions
• Machine exercises

References
IBM Storwize V7000 Implementation Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html


Unit objectives
• Identify I/O access to source and target volumes during a FlashCopy
operation
• Classify the purpose of consistency groups for both FlashCopy and
Remote Copy operations
• Summarize FlashCopy use cases and correlate to GUI provided
FlashCopy presets
• Recognize usage scenarios for incremental FlashCopy and reverse
FlashCopy
• Discuss host system considerations to enable usage of a FlashCopy
target volume and the Mirroring auxiliary volume
• Recognize the bitmap space needed for Copy Services and Volume
Mirroring


Figure 9-1. Unit objectives


Spectrum Copy Services: FlashCopy


• FlashCopy
ƒ Functionality and overview
ƒ Create snapshot
ƒ Create consistency group with multi-select
ƒ Incremental Copy option
ƒ Indirection layer/bitmap space
ƒ Tivoli Storage FlashCopy Manager


Figure 9-2. Spectrum Copy Services: FlashCopy

This topic provides an overview of the FlashCopy functionality and features.


Spectrum Virtualize software architecture: Replication
• FlashCopy enhancements:
ƒ Near instant prepare (versus minutes)
ƒ Multiple snapshots of golden image now share cache data (instead of N copies)
ƒ Full stride write for FlashCopy volumes no matter what grain size
ƒ Configure up to 255 FlashCopy consistency groups
(Diagram: the I/O stack from host I/Os down to storage controller I/Os, in order: SCSI Target,
Forwarding, Replication, Upper Cache, FlashCopy, Mirroring, Thin Provisioning, Compression,
Lower Cache, Virtualization, Easy Tier 3, Forwarding, RAID, Forwarding, SCSI Initiator, with
Virtual, Array, and Drive objects below.)

Figure 9-3. Spectrum Virtualize software architecture: Replication

This illustrates the Spectrum Virtualize software architecture and the placement of the Replication
(Copy Services) function in the I/O stack. With the latest and previous software code releases, the
FlashCopy function is implemented below the Upper Cache (fast-write cache) in the I/O stack. The
I/O stack cache rearchitecture improves the processing of FlashCopy operations with:
• Near instant prepare (versus minutes); the same applies to Global Mirror with Change Volumes
• Multiple snapshots of a golden image share cache data (instead of N copies)
• Full stride write for FlashCopy volumes no matter what the grain size
• You can now configure 255 FlashCopy consistency groups, up from 127 previously
With the new cache architecture, you now have a two-layer cache: upper cache and lower cache.
Notice that cache now sits above FlashCopy as well as below FlashCopy. In this design, before
you can take a FlashCopy, anything in the upper cache must be transferred to the lower cache
before the pointer table can be taken. The pointer table can, however, be taken without having to
destage the cache to disk beforehand.


FlashCopy point in time
• Create a point in time (PiT) copy of one or more volumes
ƒ Volume level PiT copy with any mix of thin and fully allocated volumes
ƒ Volumes might remain online and active while you create consistent copies of the data sets
í Transparent to host: performed at the block level, it operates below the host OS and cache
• Requires the data to be copied from the source to the target in the background
ƒ Copy time can be long
ƒ The resulting data on the target volume copy appears to have completed immediately
• Accomplished through the use of a bitmap (or bit array) that tracks changes to the data after the
FlashCopy is initiated
(Diagram: a source volume with FlashCopy relationships to up to 256 target volumes.)

Figure 9-4. FlashCopy point in time

The Storwize V7000 offers a network-based, SAN-wide FlashCopy (point-in-time copy) capability
obviating the need to use copy service functions on a storage system-by-storage system basis.
The FlashCopy function is designed to create copies for backup, parallel processing, testing, and
development, and have the copies available almost immediately. As part of the Storwize V7000
Copy Services function, you can create a point-in-time copy (PiT) of one or more volumes for any
storage being virtualized. Volumes can remain online and active while you create consistent copies
of the data sets. Because the copy is performed at the block level, it operates below the host
operating system and cache and is therefore not apparent to the host. FlashCopy is accomplished
through the use of a bitmap (or bit array) that tracks changes to the data after the FlashCopy is
initiated, and an indirection layer, which allows data to be read from the source volume
transparently.
This function is included with the base IBM Spectrum Virtualize license.


FlashCopy functions
• Full / incremental copy
ƒ Copies only the changes from either the source or target data since the last
FlashCopy operation
• Multi-target FlashCopy
ƒ Supports copying of up to 256 target volumes from a single source volume
• Cascaded FlashCopy
ƒ Creates copies of copies and supports full, incremental, or nocopy operations
• Reverse FlashCopy
ƒ Allows data from an earlier point-in-time copy to be restored with minimal
disruption to the host
• FlashCopy nocopy with thin provisioning
ƒ Provides a combination of using thin-provisioned volumes and FlashCopy
together to help reduce disk space requirements when making copies
• Consistency groups
ƒ Addresses issue where application data is on multiple volumes


Figure 9-5. FlashCopy functions

Listed are the FlashCopy functions that are included in the IBM Spectrum Virtualize software
license.
• In an incremental FlashCopy, the initial mapping copies all of the data from the source volume
to the target volume. Subsequent FlashCopy mappings only copy data that has been modified
since the initial FlashCopy mapping. This reduces the amount of time that it takes to re-create
an independent FlashCopy image. You can define a FlashCopy mapping as incremental only
when you create the FlashCopy mapping.
• Multiple target FlashCopy mappings allows up to 256 target volumes to be copied from a
single source volume. Each relationship between a source and target volume is managed by a
unique mapping such that a single volume can be the source volume in up to 256 mappings.
Each of the mappings from a single source can be started and stopped independently. If
multiple mappings from the same source are active (in the copying or stopping states), a
dependency exists between these mappings.
• The Cascaded FlashCopy function allows a FlashCopy target volume to be the source volume
of another FlashCopy mapping.
• With the Reverse FlashCopy function, only the data that is required to bring the target volume
current is copied. If no updates have been made to the target since the last refresh, the
direction change can be used to restore the source to its previous point-in-time state.

• When using the FlashCopy nocopy with thin provisioning function, there are two variations of
this option to consider:
▪ Space-efficient source and target with background copy: Copies only the allocated space.
▪ Space-efficient target with no background copy: Copies only the space that is used for
changes between the source and target and is referred to as “snapshots”.
This function can be used with multi-target, cascaded, and incremental FlashCopy.
• A consistency group is a container for FlashCopy mappings, Global Mirror relationships, and
Metro Mirror relationships. You can add many mappings or relationships to a consistency group,
however FlashCopy mappings, Global Mirror relationships, and Metro Mirror relationships
cannot appear in the same consistency group.
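To make the incremental option concrete, the following is a minimal CLI sketch, assuming
hypothetical volumes DB_SRC and DB_BKP of the same virtual size; only the first start performs a
full copy, and each restart copies only the grains that changed since the previous copy:

svctask mkfcmap -name db_backup -source DB_SRC -target DB_BKP -copyrate 50 -incremental
svctask startfcmap -prep db_backup
(after the background copy reaches idle_or_copied, a later refresh is simply:)
svctask startfcmap -prep db_backup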


FlashCopy implementation
• Storwize V7000 system: The source and target volumes must be on the same system.
• Storage pool: The target volume does not need to be in the same pool as the source volume.
• Size: The source and target volumes must be the same virtual size, and the size of the source
and target volumes cannot be altered (increased or decreased) while a FlashCopy mapping is
defined.
• Thin provisioned: Both the source and the target volume can be thin provisioned.
• Performance: FlashCopy operations perform in direct proportion to the performance of the
source and target disks.

Figure 9-6. FlashCopy implementation

Listed are several guidelines to consider before implementing a FlashCopy in your Storwize V7000
storage environment.
• The source and target volumes must be in the same Storwize V7000 cluster and volumes must
be the same “virtual” size.
• The Spectrum Virtualize capabilities enable SAN-wide copy. The target volume can reside in a
storage pool backed by a different storage system from the source volume, enabling more
flexibility than traditional storage systems based point-in-time copy solutions.
• The source and target volumes do not need to be in the same I/O Group or storage pool.
However, they can be within the same storage pool, across storage pools, and across I/O
groups.
• The storage pool extent sizes can differ between the source and target.
• The I/O group ownership of volumes affects only the cache and the layers above the cache in
the Storwize V7000 I/O stack. Below the cache layer the volumes are available for I/O on all
nodes within the Storwize V7000 cluster.
• FlashCopy operations perform in direct proportion to the performance of the source and target
disks. If you have a fast source disk and slow target disk, the performance of the source disk is

reduced because it must wait for the write operation to occur at the target before it can write to
the source.
This applies only if the original block has not already been copied to the target by the background copy.


FlashCopy attributes
• Source volumes can have up to 256 target volumes (Multiple Target
FlashCopy).
• Target volumes can be the source volumes for other FlashCopy
relationships (cascaded FlashCopy).
• Consistency groups are supported to enable FlashCopy across multiple volumes at the same
time.
• Up to 255 FlashCopy consistency groups are supported per system.
• Up to 512 FlashCopy mappings can be placed in one consistency
group.
• Target volume can be updated independently of the source volume.
• Maximum number of supported FlashCopy mappings is 4096 per
Storwize V7000 system.
• Size of the source and target volumes cannot be altered (increased or
decreased) while a FlashCopy mapping is defined.


Figure 9-7. FlashCopy attributes

The FlashCopy function in the Storwize V7000 has the following attributes:
• Up to 256 FlashCopy mappings can exist with the same source volume.
• Up to 4096 FlashCopy mappings can exist per system.
• The maximum number of FlashCopy Consistency Groups per system is 255, which is an
arbitrary limit that is policed by the software.
• A maximum of 512 FlashCopy mappings is allowed per Consistency Group. This limit is based
on the time that is taken to prepare a Consistency Group with many mappings.


FlashCopy process
• When the FlashCopy operation starts, the copy is immediately available after the bitmaps are
built. Read/write to the copy is possible.
• Blocks that are not yet written to the target are read from the source. Before a write to the
source, data is copied to the target.
• Grain = region to be copied; grain size = 256 KB/64 KB.
• When the background copy is complete, the source and target are logically independent and
the FlashCopy mapping can be deleted without affecting the target.
(Diagram: source and target volumes at time T0 and at a later time T0 = t, showing reads and
writes being redirected while grains are copied.)

Figure 9-8. FlashCopy process

This diagram illustrates the general process for how FlashCopy works while the full image copy is
being completed in the background. It also shows how host writes to the source volume are
handled with respect to the T0 point in time while the target volume is held true to T0.
To create an instant copy of a volume, you must first create a mapping between the source volume
(the disk that is copied) and the target volume (the disk that receives the copy). The source and
target volumes must be of equal size. The volumes do not have to be in the same I/O group or
storage pool. When a FlashCopy operation starts, a checkpoint is made of the source volume. No
data is actually copied at the time a start operation occurs. Instead, the checkpoint creates a bitmap
that indicates that no part of the source volume has been copied. Each bit in the bitmap represents
one region of the source volume. Each region is called a grain.
When data is copied from the source volume to the target volume it is copied in units known as
grains. The default grain size is 256 KB. To facilitate copy granularity for incremental copy the grain
size can be set to 64 KB at initial mapping definition. If a compressed volume is in a FlashCopy
mapping then the default grain size is 64 KB instead of 256 KB.
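As a worked example with a hypothetical 16 GiB source volume and the default 256 KB grain size:
16 GiB / 256 KB = 65,536 grains, so the FlashCopy bitmap needs 65,536 bits, or 8 KiB, to track
which grains have been copied. Choosing the 64 KB grain size instead quadruples the grain count
to 262,144 and the bitmap to 32 KiB, but reduces the amount of data moved on each copy-on-write.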
The priority of the background copy process is controlled by the background copy rate. A rate of
zero indicates that only data being changed on the source should have the original content copied
to the target (also known as copy-on-write or COW). Unchanged data is read from the source. This

© Copyright IBM Corp. 2012, 2016 9-11


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty
option is designed primarily for backup applications where a point-in-time version of the source is
only needed temporarily.
A background copy rate of 1 to 100 indicates that the entire source volume is to be copied to the
target volume.


FlashCopy: Background copy rate
• Background copy rates:
ƒ 0 - no copy
ƒ 50 - 2 MBps (the default)
ƒ 60 - 4 MBps
ƒ 80 - 16 MBps
ƒ 100 - 64 MBps
• Copy rate can be dynamically modified
(Diagram: source volume copied to target volume through the FlashCopy mapping.)

Figure 9-9. FlashCopy: Background copy rate

The priority of the background copy process is controlled by the background copy rate. A rate of
zero indicates that only data being changed on the source should have the original content copied
to the target (also known as copy-on-write or COW). Unchanged data is read from the source. This
option is designed primarily for backup applications where a point-in-time version of the source is
only needed temporarily.
A background copy rate of 1 to 100 indicates that the entire source volume is to be copied to the
target volume. The rate value specified corresponds to an attempted bandwidth during the copy
operation:
• 01 to 10 - 128 KBps
• 11 to 20 - 256 KBps
• 21 to 30 - 512 KBps
• 31 to 40 - 1 MBps
• 41 to 50 - 2 MBps (the default)
• 51 to 60 - 4 MBps
• 61 to 70 - 8 MBps
• 71 to 80 - 16 MBps
• 81 to 90 - 32 MBps

• 91 to 100 - 64 MBps
The background copy rate can be changed dynamically during the background copy operation.
The background copy is performed by one of the nodes of the I/O group in which the source volume
resides. This responsibility is failed over to the other node in the I/O group in the event of a failure of
the node performing the background copy.
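For instance, a minimal sketch of adjusting the rate mid-copy from the CLI, assuming a
hypothetical mapping named fcmap0 that is currently in the copying state:

svctask chfcmap -copyrate 80 fcmap0
(raises the attempted background copy bandwidth from the default 2 MBps to 16 MBps)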


FlashCopy reads/writes: Full background copy
• Both volumes are available for I/O (PIT clone or backup).
• Target reads of blocks not yet copied are redirected to the source.
• Source writes trigger copy-on-write (COW); target writes trigger copy on demand.
• The background copy moves the remaining blocks from the source to the target.
• Grain size = 256 KB/64 KB
(Diagram: source and target volumes with read, write, COW, copy on demand, and background
copy flows.)

Figure 9-10. FlashCopy reads/writes: Full background copy

The background copy is performed backwards. That is, it starts with the grain containing the highest
logical block addresses (LBAs) and works backwards towards the grain containing LBA 0. This is
done to avoid any unwanted interactions with sequential I/O streams from the using application.
After the FlashCopy operation has started, both source and target volumes can be accessed for
read and write operations:
• Source reads: Business as usual.
• Target reads: Consult its bitmap. If data has been copied then read from target. If not, read from
the source.
• Source writes: Consult its bitmap. If data has not been copied yet then copy source to target
first before allowing the write (copy on write or COW). Update bitmap.
• Target writes: Consult its bitmap. If data has not been copied yet then copy source to target first
before the write (copy on demand). Update bitmap. One exception to copying the source is if
the entire grain is to be written to the target then copying the source is not necessary.


FlashCopy reads/writes: No background copy
• Both volumes are available for I/O (PIT snapshot, copyrate=0).
• Target reads of blocks not yet copied are redirected to the source.
• Source writes trigger copy-on-write (COW); target writes trigger copy on demand.
• No background copy is performed.
• Grain size = 256 KB/64 KB
• Minimizes disk capacity utilization if using a Thin-Provisioned target.
(Diagram: source and target volumes with read, write, COW, and copy on demand flows; no
background copy.)

Figure 9-11. FlashCopy reads/writes: No background copy

For a copyrate=0 FlashCopy invocation the background copy is not performed. The target is often
referred to as a snapshot of the source. After the FlashCopy operation has started, both source and
target volumes can be accessed for read and write operations.
Write activity occurs on the target when:
• Write activity has occurred on the source and the point-in-time data has not been copied to the
target yet. The original source data (based on grain size) must be copied to the target before
the write to the source is permitted. This is known as copy-on-write.
• Write activity has occurred on the target to a subset of the blocks managed by a grain where the
point-in-time data has not been copied to the target yet. The original source data (based on
grain size) has to be copied to the target first.
• Read activity to the target is redirected to the source if the data does not reside on the target.
Since no background copy is performed, using a Thin-Provisioned target often minimizes the disk
capacity required.


FlashCopy: Sequence of events
1. Create: Establish the FlashCopy mapping between source and target (svctask mkfcconsistgrp
and svctask mkfcmap). State: Idle_or_copied.
2. Prepare: Flush write cache for the source, discard cache for the target, place the source
volume in write-through mode (svctask prestartfcconsistgrp or svctask prestartfcmap).
State: Preparing, then Prepared.
3. Start: Set metadata, allow I/O, start the copy (svctask startfcconsistgrp or svctask startfcmap).
State: Copying, then Idle_or_copied.
4. Delete: Discard the FlashCopy mapping.
*The prepare step can be embedded with the start step (manual or automatic).

Figure 9-12. FlashCopy: Sequence of events

A series of events or steps need to occur to establish the FlashCopy process:


Create: A mapping is created between the source and target volumes. The pair of volumes must be
the same size and the target must not be in any other FlashCopy mappings. The mapping might be
placed into a consistency group. At this point, the source and target volumes behave as
independent volumes and are said to be in idle_or_copied state.
Prepare: The prepare event performs housekeeping for the mapped volumes in anticipation of the
FlashCopy start event. It places the mapping in the preparing state. The following activities occur
while the mapping is in the preparing state:
• Flush modified write data associated with the source volume from the cache. Read data for the
source volume is left in cache. This ensures the copy being made is a clone of the source data
on the storage system.
• Place caching for the source volume in write-through mode.
• Discard any read or write data associated with the target volume from cache (since the target is
about to become a clone of the source). This act of preparing might corrupt data that previously
resided on the target. Do not invoke the prepare event unless the FlashCopy start event is to be
executed.

Upon completion of the preparing event, the mapping is said to be in the prepared state, ready for
the copy operation to be triggered. The source volume is in write-through mode. The target volume
is placed in a not accessible state in anticipation of the FlashCopy start event.
The prepare function can be optionally integrated with the start function.
Start: Once the mappings in a consistency group are in the prepared state, the FlashCopy
relationship can be started or triggered. The optional -prepare parameter allows the prepare and
start functions to be performed together (that is, the FlashCopy is triggered as soon as the prepare
event is completed). During the start:
• I/O is briefly paused on the source volumes to ensure ongoing reads and writes below the
cache layer have been completed.
• Internal metadata are set to allow FlashCopy.
• I/O is then resumed on the source volumes.
• The target volumes are made accessible.
• Read and write caching is enabled for both the source and target volumes. Each mapping is
now in the copying state.
Unless a zero copy rate is specified, the background copy operation copies the source to target
until every grain has been copied. At this point, the mapping progresses from the copying state to
the Idle_or_copied state.
Delete: A FlashCopy mapping is persistent by default (not automatically deleted after the source
has been copied to the target). It can be reactivated by preparing and starting again. The delete
event is used to destroy the mapping relationship. If desired, the mapping can be automatically
deleted at the completion of the background copy if the -autodelete parameter is coded when the
mapping is defined with mkfcmap or changed with chfcmap commands.
FlashCopy can be invoked using the CLI or GUI. Scripting using the CLI is also supported. Refer to
the Redbooks Implementing the IBM System Storage Storwize V7000 V7.6 (SG24-7938) for
guidance regarding scripting.
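Pulling the four events together, a minimal CLI sketch of one mapping's life cycle, assuming
hypothetical volumes APP_VOL and APP_COPY; the -prep parameter folds the prepare event into
the start:

svctask mkfcmap -name app_clone -source APP_VOL -target APP_COPY -copyrate 50
svctask startfcmap -prep app_clone
svcinfo lsfcmap app_clone
(monitor the status and progress fields until the mapping reaches idle_or_copied)
svctask rmfcmap app_clone
(destroy the mapping once the copy is no longer needed)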


FlashCopy mapping states (source volume / target volume)
• Preparing (entered via 2. Prepare): Online / Online, not accessible
• Prepared: Online / Online, not accessible
• Copying (entered via 3. Start): Online / Online
• Idle_or_Copied: Online / Online
• Stopping: Online / Online or Offline
• Stopped: Online / Offline
• Suspended: Offline / Offline

Figure 9-13. FlashCopy mapping states

During the prepare event, writes to the source volume experience additional latency because the
cache is operating in write-through mode while the mapping progresses from preparing to
prepared mode. The target volume is online but not accessible.
The two mechanisms by which a mapping can be stopped are by I/O errors or by command. The
target volume is set offline. Any useful data is lost. To regain access to the target volume start the
mapping again.
If access to the bitmap and metadata has been lost (such as if access to both nodes in an I/O group
has been lost) the FlashCopy mapping is placed in suspended state. In this case, both source and
target volumes are placed offline. When access to metadata becomes available again then the
mapping will return to the copying state and both volumes will become accessible and the
background copy resumed.
The stopping state indicates that the mapping is in the process of transferring data to a dependent
mapping. The behavior of the target volume depends on whether the background copy process had
completed while the mapping was in the copying state. If the copy process had completed then the
target volume remains online while the stopping copy process completes. If the copy process had
not completed then data in the cache is discarded for the target volume. The target volume is taken
offline and the stopping copy process runs. When the data has been copied then a stop complete
asynchronous event is notified. The mapping transitions to the idle_or_copied state if the

background copy has completed, or to the stopped state if it has not. The source volume remains
accessible for I/O.
Stopped: The FlashCopy was stopped either by user command or by an I/O error. When a
FlashCopy mapping is stopped, any useful data in the target volume is lost. Because of this, while
the FlashCopy mapping is in this state, the target volume is in the Offline state. In order to regain
access to the target the mapping must be started again (the previous FlashCopy will be lost) or the
FlashCopy mapping must be deleted. While in the Stopped state any data which was written to the
target volume and was not flushed to disk before the mapping was stopped is pinned in the cache.
It cannot be accessed but does consume resource. This data will be destaged after a subsequent
delete command or discarded during a subsequent prepare command. The source volume is
accessible and read and write caching is enabled for the source.
Suspended: The target has been point-in-time copied from the source, and was in the copying
state. Access to the metadata has been lost, and as a consequence, both source and target
volumes are offline. The background copy process has been halted. When the metadata becomes
available again, the FlashCopy mapping will return to the copying state, access to the source and
target volumes will be restored, and the background copy process resumed. Unflushed data which
was written to the source or target before the FlashCopy was suspended is pinned in the cache,
consuming resources, until the FlashCopy mapping leaves the suspended state.
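In practice, the only ways out of the stopped state are to trigger a new point-in-time copy or to
remove the mapping; a brief sketch, again with the hypothetical mapping name app_clone (the
-force flag on rmfcmap is an assumption for the case where pinned target data remains):

svctask startfcmap -prep app_clone
(takes a new point-in-time copy; the previous stopped copy is lost)
svctask rmfcmap -force app_clone
(alternatively, deletes the mapping and destages or discards its pinned data)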


Copy Services: FlashCopy options


FlashCopy option (fast path)
• Automatically creates targets in same pool as
source
• Automatically defines FlashCopy mapping and
starts the copy (with embedded prepare step)
• Automatically creates and starts consistency
group if multiple source volumes selected
FlashCopy Mappings option
• Allows more user control (less automatic)
• Permits preset overrides
• Creates mappings using user provided targets
(from any pool)
• Places mapping in a consistency group if
desired by user
Consistency Groups option
• Creates consistency group container for
FlashCopy mappings
• Creates and manages contained mappings


Figure 9-14. Copy Services: FlashCopy options

The Storwize V7000 management GUI supports the FlashCopy functionality with three menu
options within the Copy Services menu option:
• The FlashCopy menu option is designed to be a fast path with extensive use of pre-defined
automatic actions embedded in the FlashCopy presets to create target volumes, mappings, and
consistency groups.
• The Consistency Groups menu option is designed to create, display, and manage related
mappings that need to reside in the same consistency group.
• The FlashCopy Mappings menu option is designed to create, display, and manage the
individual mappings. If mappings reside in a consistency group then this information is also
identified.
FlashCopy mappings can be defined from all three menu options but the process is much more
automatic (or less user control) from the FlashCopy menu.
The ensuing examples are designed to illustrate the FlashCopy functions provided by the Storwize
V7000 as well as the productivity aids added with the GUI.


Fast path one-click preset selection


• Create Snapshot preset
í Thin-Provisioned target, rsize = 0, with
autoexpand
í Background copy rate = 0 (no copy)

• Create Clone preset


í Target identical to primary copy of source
í Background copy rate = 50
í Auto-delete FlashCopy mapping at copy
completion

• Create Backup preset


í Target identical to primary copy of source
í Background copy rate = 50
í Incremental copy = on (mapping not
deleted at copy completion)


Figure 9-15. Fast path one-click preset selection

For fast path FlashCopy processing, select Copy Services > FlashCopy to view a volume list.
Select a volume entry and right-click to select the desired FlashCopy preset.
The Storwize V7000 management GUI provides three FlashCopy presets to support the three
common use case examples for point-in-time copy deployments.
These presets are templates that implement best practices as defaults to enhance administrative
productivity. For the FlashCopy presets, the target volumes can be automatically created and
FlashCopy mappings defined. If multiple volumes are involved, then a consistency group to contain
the related mappings is automatically defined as well.
Typical FlashCopy usage examples include:
• Create a target volume such that it is a snapshot of the source (that is, the target contains only
copy-on-write blocks or COW). If deployed with Thin Provisioning technology then the snapshot
might only consume a minimal amount of storage capacity. Use cases for snapshot targets
include:
▪ Backing up source volume to tape media where a full copy of the source on disk is not
needed.
▪ Exploiting Thin Provisioning technology by taking more frequent snapshots of the source
volume and hence facilitate more recovery points for application data.

• Create a target volume that is a full copy, or a clone, of the source where subsequent
resynchronization with the source is expected to be either another full copy or is not needed.
Use cases for clone targets include:
▪ Testing applications with pervasive read/write activities.
▪ Performing what-if modeling or reports generation where using static data is sufficient and
separation of these I/Os from the production environment is paramount.
▪ Obtaining a clone of a corrupted source volume for subsequent troubleshooting or
diagnosis.
• Create a target volume that is to be used as a backup of the source where periodic
resynchronization is expected to be frequent and hence incremental updates of the target would
be more cost effective. Use cases for backup targets include:
▪ Maintaining a consistent standby copy of the source volume on disk to minimize recovery
time.
▪ Implementing business analytics where extensive exploration and investigation of business
data for decision support requires the generated intensive I/O activities to be segregated
from production data while the data store needs to be periodically refreshed.
Both the snapshot and backup use cases address data recovery. The recovery point objective
(RPO) denotes at what point (in terms of time) should the application data be recovered or what
amount of data loss is acceptable. After the application becomes unavailable, the recovery time
objective (RTO) indicates how quickly it is needed to be back online or how much down time is
acceptable.
The unit of measure for both RPO and RTO is time with values ranging from seconds to days to
weeks. The closer an application’s RPO and RTO values are to zero the greater the organizations
dependence on that particular process and consequently the higher the priority when recovering
the systems after a disaster.


FlashCopy event notifications
(Diagram: PREPARE_COMPLETED, COPY_COMPLETED, and STOP_COMPLETED events flow
into the cluster event log, which can generate SNMP traps and email notifications.)

Figure 9-16. FlashCopy event notifications

FlashCopy events that complete asynchronously are logged and can be used to generate SNMP
traps for notification purposes.
PREPARE_COMPLETED is logged when the FlashCopy mapping or consistency group has
entered the prepared state as a result of a user request to prepare. The user is now able to start (or
stop) the mapping/group.
COPY_COMPLETED is logged when the FlashCopy mapping or consistency group has entered
the idle_or_copied state when it was previously in the copying state. This indicates that the target
volume now contains a complete copy and is no longer dependent on the source volume.
STOP_COMPLETED is logged when the FlashCopy mapping or consistency group has entered
the stopped state as a result of a user request to stop. It is distinct from the error that is logged
when a mapping or group enters the stopped state as a result of an IO error.
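These notifications are configured at the system level rather than per mapping. As a hedged
sketch, assuming the mksnmpserver syntax of recent code levels, with a placeholder trap receiver
address and community string:

svctask mksnmpserver -ip 192.168.1.100 -community public -error on -warning on -info on
(sends error, warning, and informational events, including the FlashCopy completions above, to
the SNMP manager)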


Spectrum Copy Services: FlashCopy


• FlashCopy
ƒ Functionality and overview
ƒ Snapshot
ƒ Clone consistency group with multi-select
ƒ Incremental Copy
ƒ Indirection layer/bitmap space
ƒ Tivoli Storage FlashCopy Manager


Figure 9-17. Spectrum Copy Services: FlashCopy

This topic examines the procedure used to create a new snapshot.


Create snapshot volume (1 of 2)


• To create a snapshot, select Copy Services > FlashCopy.
ƒ Right-click a volume and select Create Snapshot or click Action > Create
Snapshot.
• Creates a thin provisioned volume with a point in time backup of
production data:
ƒ Not intended to be an independent copy
ƒ Holds only data from regions of the production volume

Automatically starts the defined mapping with embedded prepare


Figure 9-18. Create snapshot volume (1 of 2)

The snapshot creates a point-in-time backup of production data. The snapshot is not intended to be
an independent copy. Instead, it is used to maintain a view of the production data at the time that
the snapshot is created. Therefore, the snapshot holds only the data from regions of the production
volume that changed since the snapshot was created. Because the snapshot preset uses thin
provisioning, only the capacity that is required for the changes is used.
To create and start a snapshot, from the Copy Services > FlashCopy window, right-click on the
volume that you want to create a snapshot of or click Actions > Create Snapshot. Upon selection
of the Create Snapshot option, the GUI automatically:
• Creates a volume using a name based on the source volume name with a suffix of _01
appended for easy identification. The real capacity size starts out as 0% of the virtual volume
size and will automatically expand as write activity occurs.


Create snapshot volume (2 of 2)
• GUI generates several commands:
ƒ mkfcmap command (the -autoexpand parameter indicates a Thin-Provisioned target volume)
ƒ startfcmap -prep 0 command
• Snapshot uses the following preset parameters:
ƒ Background copy: No (-copyrate 0)
ƒ Incremental: No
ƒ Delete after completion: No
ƒ Cleaning rate: No
ƒ Primary copy source pool: Target pool
(Screen capture: FC mapping ID 4 created and started; no background copy to the target volume.)

Figure 9-19. Create snapshot volume (2 of 2)

The Storwize V7000 GUI defines a FlashCopy mapping using the mkfcmap command with a
background copy rate of 0. It then starts the mapping using the startfcmap -prep 4 command,
where 4 is the object ID of the mapping and -prep embeds the FlashCopy prepare process with
the start process.
The target volume is now available to be mapped to host objects for host I/O. A FlashCopy
snapshot (Create Snapshot in the GUI) uses disk space only when updates are made to the
source or target data, not for the entire capacity of a volume copy.
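The same snapshot can be produced directly from the CLI. A sketch assuming a hypothetical
10 GiB source Basic-WIN1 in pool Pool0; -rsize 0% with -autoexpand makes the target thin
provisioned, and -copyrate 0 suppresses the background copy:

svctask mkvdisk -iogrp io_grp0 -mdiskgrp Pool0 -name Basic-WIN1_01 -size 10 -unit gb -rsize 0% -autoexpand
svctask mkfcmap -source Basic-WIN1 -target Basic-WIN1_01 -copyrate 0
svctask startfcmap -prep 4
(4 is the object ID returned when the mapping was created)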


FlashCopy mapping properties for snapshot
(Screen capture: the mapping properties show Background Copy Rate = 0; the target volume can
now be mapped to a host.)

Figure 9-20. FlashCopy mapping properties for snapshot

The target volume is now available to be mapped to host objects for host I/O. The Snapshot
Thin-provisioned volume uses disk space only when updates are made to the source or target data
and not for the entire capacity of a volume copy.
A running FlashCopy mapping can be modified by right-clicking the mapping for as long as the
task is running (see the Running Tasks menu). The background copy rate parameter can be
modified as desired by dragging the slider bar.


FlashCopy mapping details using CLI (no writes)


• No writes have occurred on the source.
ƒ The content of blocks being changed (COW) is copied to target_01.

IBM_Storwize:V009B:V009B1-admin>lsfcmap fcmap3
id 4
name fcmap3
source_vdisk_id 18
source_vdisk_name Basic-WIN1
target_vdisk_id 25
target_vdisk_name Basic-WIN1_01
group_id Target_01
group_name
status copying
progress 14
copy_rate 0 0% COWs
start_time 160610131930
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 0
incremental off
difference 100
grain_size 256
…………….
restore_progress 0
fc_controlled no COW = copy-on-write
IBM_Storwize:V009B:V009B1-admin>


Figure 9-21. FlashCopy mapping details using CLI (no writes)

All FlashCopy mappings are displayed from the Copy Services > FlashCopy Mappings view.
Observe the default mapping name of fcmap3 assigned to the mapping for the source volume, and
note the current copy progress shown in the mapping entry. Since this mapping has a copy rate set
to 0, the copy progress represents the copy-on-write (COW) activity.
Use the CLI lsfcmap command with either the object name or ID of the mapping to view detailed
information about a mapping. The mapping grain size can be found in this more verbose output.
The grain size for a FlashCopy mapping bitmap defaults to 256 KB for all but the compressed
volume type; which has a default grain size of 64 KB. The default size value can be overridden if
the CLI is used to define the mapping. However, the best practice recommendation is to use the
default values.
The example shows the status of the source and target volume with no write activity.


FlashCopy mapping details with writes using CLI


• Subsequent writes occur on the source.
ƒ Content of blocks (COW) being changed are copied to target_01.

IBM_Storwize:V009B:V009B1-admin>lsfcmap fcmap3
id 4
name fcmap3
source_vdisk_id 18
source_vdisk_name Basic-WIN1
target_vdisk_id 25
target_vdisk_name Basic-WIN1_01
group_id Target_01
group_name
status copying
progress 7
copy_rate 0 7% COWs
start_time 160610131930
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 0
incremental off
difference 100
grain_size 256
…………….
restore_progress 0
fc_controlled no COW = copy-on-write
IBM_Storwize:V009B:V009B1-admin>


Figure 9-22. FlashCopy mapping details with writes using CLI

The example shows the status of the source and target volume with writes in progress. Since the
background copy rate is set to 0, the progress value reflects the percentage of the source that has
been written to, and therefore the percentage of grains copied on write to the target. The same
percentage of the target storage is in use, matching the amount of data that was changed on the
source after the mapping was started.
When subsequent writes occur on the source volume, the content of the blocks being changed
(written to) is copied to the target volume in order to preserve the point-in-time snapshot target
copy. These blocks are referred to as copy-on-write (COW) blocks; the ‘before’ version of the
content of these blocks is copied as a result of incoming writes to the source. This write activity
caused the real capacity of the Thin-Provisioned target volume to automatically expand: Matching
the quantity of data being written.
It might be worthwhile to emphasize that the FlashCopy operation is based on block copies
controlled by grains of the owning bitmaps. Storwize V7000 is a block level solution so, by design
(and actually per industry standards), the copy operation has no knowledge of OS logical file
structures. The same information is available from the CLI with the help of the lsfcmap command.
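When only the copy percentage is needed, the lsfcmapprogress command returns just that field;
shown here with the fcmap3 mapping from the examples above:

svcinfo lsfcmapprogress fcmap3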


Spectrum Copy Services: FlashCopy


• FlashCopy
ƒ Functionality and overview
ƒ Snapshot
ƒ Clone consistency group with multi-select
ƒ Incremental Copy option
ƒ Bitmap space
ƒ Tivoli Storage FlashCopy Manager


Figure 9-23. Spectrum Copy Services: FlashCopy

This topic examines how to create a consistency group by selecting multiple mappings to be
managed as a single entity.


Consistency groups
• FlashCopy consistency groups are used to group multiple copy operations together that need
to be controlled at the same time.
ƒ The group can be controlled by starting or stopping with a single operation.
ƒ Ensures that when stopped for any reason, the I/Os to all group members have all stopped at
the same point in time.
í Ensures time consistency across volumes.
(Diagram: several source volume to target volume mappings collected into a single consistency
group.)

Figure 9-24. Consistency groups

Consistency Groups address the requirement to preserve point-in-time data consistency across
multiple volumes for applications that include related data that spans multiple volumes. For these
volumes, Consistency Groups maintain the integrity of the FlashCopy by
ensuring that “dependent writes” are run in the application’s intended sequence.
When Consistency Groups are used, the FlashCopy commands are issued to the FlashCopy
Consistency Group, which performs the copy operation on all FlashCopy mappings that are
contained within the Consistency Group at the same time. This allows the administrator to perform
operations, such as starting and stopping, with a single action. Therefore, if the copying has to be
stopped for any reason, the I/Os to all group members are stopped at the same point in time in
terms of the host writes to the primary volumes, ensuring time consistency across volumes.
After an individual FlashCopy mapping is added to a Consistency Group, it can be managed as part
of the group only. Operations, such as prepare, start, and stop, are no longer allowed on the
individual mapping.
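A minimal CLI sketch of building and starting a group, assuming hypothetical source volumes
DEV_DB and DEV_LOG with matching targets that already exist:

svctask mkfcconsistgrp -name nightly_cg
svctask mkfcmap -source DEV_DB -target DEV_DB_01 -consistgrp nightly_cg -copyrate 50
svctask mkfcmap -source DEV_LOG -target DEV_LOG_01 -consistgrp nightly_cg -copyrate 50
svctask startfcconsistgrp -prep nightly_cg
(both point-in-time copies are triggered as a single atomic operation)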


Create clone as consistency group with multi-select


• One click process that automatically:
ƒ Creates the consistency group container.
ƒ Creates the appropriate targets.
ƒ Defines the FlashCopy mappings and places the mappings into the
consistency group.
ƒ Starts the consistency group.


Figure 9-25. Create clone as consistency group with multi-select

When multiple volumes are selected from the Copy Services > FlashCopy menu, the GUI presets
operate at the consistency group level (instead of mapping level). Besides automatically creating
targets and mappings, a consistency group is also defined to allow multiple mappings to be
managed as a single entity. The copy is automatically started at the consistency group level.
Consistency groups might also be established for FlashCopy mappings of application data that
spans multiple volumes. This allows the FlashCopy operation on multiple volumes to take place as
an atomic operation.
Some installations using non-IBM storage systems have been accustomed to waiting for the
storage system to mirror copies of LUNs, which then needed to be split away from the original LUN
before the cloned LUN could be used by a host. This is a time-consuming process, with the time
depending on the size of the LUN. With IBM FlashCopy, the targets, regardless of size, can be
used immediately after Start processing completes (seconds).
Consistency groups can also be created, modified, and deleted with concise, direct CLI commands.


Generated commands for both selected volumes


Create FlashCopy consistency group Clone preset -
svctask mkfcconsistgrp -autodelete automatically deletes
The FlashCopy consistency group (ID 1) was created successfully. FlashCopy objects
at copy completion
Creating a volume, size 10,737,418,240 b
svctask mkvdisk -iogrp io_grp0 -mdiskgrp TeamB50_GRP2 -name DEV_DB_01 -
size 10737418240 -unit b
The volume (ID 4) was successfully created.

Creating FlashCopy mapping between DEV_DB and DEV_DB_01


svctask mkfcmap -autodelete -cleanrate 50 -consistgrp 1 -copyrate 50 -
source DEV_DB -target DEV_DB_01
The FlashCopy mapping (1) between DEV_DB and DEV_DB_01 was successfully created.

Creating a volume, size 10,737,418,240 b


svctask mkvdisk -iogrp io_grp0 -mdiskgrp TeamB50_GRP2 -name DEV_LOG_01 -
size 10737418240 -unit b
The volume (ID 5) was successfully created.

Creating FlashCopy mapping between DEV_LOG and DEV_LOG_01


svctask mkfcmap -autodelete -cleanrate 50 -consistgrp 1 -copyrate 50 -
source DEV_LOG -target DEV_LOG_01
The FlashCopy mapping (2) between DEV_LOG and DEV_LOG_01 was successfully created.

Start FlashCopy Consistency Group


svctask startfcconsistgrp -prep 1
The task completed.

Figure 9-26. Generated commands for both selected volumes

The commands issued by the management GUI for this Clone preset invocation example have
been extracted and highlighted:
• A consistency group is created with the -autodelete parameter, which causes the Storwize
V7000 to automatically delete the consistency group when the background copy completes.
• Two fully allocated target volumes are created. The names and sizes of the target volumes
derive from the source volumes, following the GUI naming convention for FlashCopy. Two
FlashCopy mappings are defined, each with the default copy rate of 50 (or 2 MBps).
• Once the volumes have been created and the FlashCopy mappings have been established, the
consistency group is automatically started with the startfcconsistgrp command, which contains
an embedded prepare.

© Copyright IBM Corp. 2012, 2016 9-34


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

FlashCopy mappings and consistency group details

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-27. FlashCopy mappings and consistency group details

The Copy Services > FlashCopy view displays the two defined individual mappings, both
associated with the fccstgrp0 consistency group.
The progress bar for each FlashCopy mapping provides a direct view of the progress of each
background copy. This progress data is also provided through the Running Tasks interface, which
is accessible from any GUI view.
The background copy has a default copy rate of 50, which can be changed dynamically.

© Copyright IBM Corp. 2012, 2016 9-35


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

FlashCopy mappings and consistency group details

svctask chfcmap uses -copyrate
to increase from the default
2 MBps to 64 MBps

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-28. FlashCopy mappings and consistency group details

To change the background copy rate, right-click a fcmap mapping entry and select Edit Properties.
Drag the Background Copy Rate: slider bar in the Edit FlashCopy Mapping box all the way to the
right to increase the value to 100, then click Save.
The generated svctask chfcmap command uses the -copyrate parameter to increase the copy rate
to 100 for the specified mapping ID. The value of 100 causes the background copy rate to
increase from the default 2 MBps to 64 MBps.
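As a sketch, the equivalent CLI action for a mapping with ID 0 (the ID is hypothetical; substitute
your mapping ID or name) is:
svctask chfcmap -copyrate 100 0
The copy rate can be changed dynamically while the mapping is copying; a value of 100
corresponds to 64 MBps of background copy throughput.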

© Copyright IBM Corp. 2012, 2016 9-36


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Clone consistency copy completed


• Consistency group has a status of copying as long as one of its mappings is
in the copying status.
fcmap0 copy rate
increased to 100

• Once the copy operation between the source volume and target volume is
complete, the consistency group and mappings are deleted automatically.

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-29. Clone consistency copy completed

From the Copy Services > Consistency Group view, you can see the changes in the fcmap1 target
volume now that the background copy rate has been increased. The consistency group has a status
of copying as long as one of its mappings is in the copying status. Once the copy operation from the
source volume to the target volume is complete, the -autodelete specification takes effect.

© Copyright IBM Corp. 2012, 2016 9-37


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Spectrum Copy Services: FlashCopy


• FlashCopy
ƒ Functionality and overview
ƒ Snapshot
ƒ Clone consistency group with multi-select
ƒ Incremental Copy option
ƒ Indirection layer/bitmap space
ƒ Tivoli Storage FlashCopy Manager

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-30. Spectrum Copy Services: FlashCopy

This topic reviews the Incremental Copy option, which is used to maintain a point-in-time copy of a
database implemented directly on attached Storwize V7000 systems.

© Copyright IBM Corp. 2012, 2016 9-38


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Incremental FlashCopy
• Incremental FlashCopy can substantially reduce the time that is required to re-create an
independent image.
ƒ Copies only the parts of the source or target volumes that changed since the last copy
ƒ Reduces the completion time of the copy operation
ƒ First copy process copies all of the data from the source volume to the target volume
(Diagram: Start incremental FlashCopy; data is copied as normal. Later, some data is changed by
applications. Start incremental FlashCopy again; only changed data is copied by the background
copy.)

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-31. Incremental FlashCopy

The FlashCopy Incremental Copy option makes it possible to perform a background copy between
the source volumes and target volumes without having to copy all of the tracks in the process.
Incremental copy therefore reduces the amount of data that needs to be copied subsequent to the
initial invocation of a FlashCopy mapping. The first copy process copies all of the data from the
source volume to the target volume. Rather than copying the entire volume again, only the portions
of the source volume that have been updated at either the source or target are copied. The quantity
of data that needs copying is affected by the grain size. The 64 KB grain size provides more copy
granularity at the expense of using more bits, or larger bitmaps, than the 256 KB grain size. To
monitor the difference between source and target, a "difference" value is maintained in the
FlashCopy mapping details.
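As a hedged sketch, an incremental mapping using the smaller 64 KB grain size might be defined
as follows (the volume names are hypothetical; both the -incremental and -grainsize attributes can
only be set when the mapping is created):
svctask mkfcmap -source DB_VOL -target DB_VOL_TGT -copyrate 50 -incremental -grainsize 64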

© Copyright IBM Corp. 2012, 2016 9-39


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Create incremental FlashCopy mapping


• Create FlashCopy Mapping
ƒ Select source volume.
ƒ Select target volume.
í GUI automatically determines a
list of eligible targets.
ƒ Target must be same size as
source. (Screen label: Ability to
select eligible target)
ƒ Source and target are paired.
ƒ Volumes can reside in different
pools.

VB1-NEW
VB1-NEW _TGT

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-32. Create incremental FlashCopy mapping

If less automation or more administrator control is desired, a FlashCopy mapping can be manually
defined from the Copy Services > FlashCopy Mappings panel by clicking the Create FlashCopy
Mapping button. This path expects the target volume to have been created already.
From the Create FlashCopy Mapping dialog box, specify the source volume and target volume. The
GUI automatically determines the list of eligible targets. An eligible target volume must be the same
size as the source and must not be serving as a target in other FlashCopy mappings. After a target
has been added, the GUI confirms the source and target pairing.
From the volume entries, note the UIDs of the source and target volumes; also observe that they
reside in different storage pools representing different storage systems.

© Copyright IBM Corp. 2012, 2016 9-40


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Backup preset (full + incremental)


• Select the Preset Backup.
ƒ Method used to recover data or
objects if system experiences Background
data loss copy rate;
50 is default
ƒ Can be copied multiple times
from source to target

• Select Advanced Settings to


modify background copy rate.
• You have the option to add
the mapping to a consistency
group.
ƒ GUI generates the mkfcmap
command that contains the
-incremental parameter.

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-33. Backup preset (full + incremental)

The intent is to use incremental FlashCopy, therefore the Backup preset is selected.
The Advanced Settings pane provides the ability to change or override mapping attributes, such as
the copy rate, as the mapping is being defined. You also have the option to add the mapping to a
consistency group. Since this is a one volume, one mapping example, a consistency group is not
necessary.
The GUI generates the mkfcmap command containing the -incremental parameter. The incremental
copy option can only be specified at mapping definition. In other words, after a mapping has been
created, there is no way to change it to an incremental copy without deleting the mapping and
defining it again.

© Copyright IBM Corp. 2012, 2016 9-41


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Start FlashCopy mapping manually


• Start FlashCopy Mapping manually.
ƒ Right-click the fcmap# entry and select Start.

ƒ Can be copied multiple times from source to target.

• Select Advanced Settings to modify background copy rate.

• You have the option to add the mapping to a consistency group.


ƒ GUI generates the startfcmap command with the –prep 0 parameter to start
mapping.

VB1-NEW
VB1-NEW _TGT

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-34. Start FlashCopy mapping manually

To start the FlashCopy mapping, right-click the mapping entry and select Start from the menu list.
The management GUI generates the startfcmap command with an embedded prepare to start the
mapping.
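A minimal CLI sketch of the same action (the mapping ID 0 is hypothetical) is:
svctask startfcmap -prep 0 (prepare the mapping, then start the copy)
lsfcmap 0 (verify that the status has changed to copying)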

© Copyright IBM Corp. 2012, 2016 9-42


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Background copy completed


• FlashCopy mapping has a status of copying as long as its background
copy is in progress
fcmap0 copy rate
increased to 100

• If FlashCopy mapping has a status of idle_or_copied, the source and target
volumes can act as independent volumes even if a mapping exists between
the two
ƒ Both volumes in the mapping relationship have read and write caching enabled

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-35. Background copy completed

From Copy Services > FlashCopy Mapping, you can view the background copying progress. The
FlashCopy mapping has a status of copying as long as its background copy is in progress. The
background copy is performed by both nodes of the I/O Group in which the source volume is found.
When the status changes to idle_or_copied, the source and target volumes can act as independent
volumes even if a mapping exists between the two. Both volumes in the mapping relationship have
read and write caching enabled.
If the mapping is incremental and the background copy is complete, the mapping records only the
differences between the source and target volumes. If the connection to both nodes in the I/O
group that the mapping is assigned to is lost, the source and target volumes go offline.

© Copyright IBM Corp. 2012, 2016 9-43


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Host I/O operations


• Host I/O activity continues during the background copying.

• Target volume contains the point-in-time content of the source volume


as subsequent writes occurs.

More writes
on source

VB1-NEW

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-36. Host I/O operations

As data is added to the source volume over a period of time, host I/O activity continues while the
background copy is in progress. Incremental FlashCopy copies all of the data when you first start
the FlashCopy mapping, and then only the changes when you stop and start the mapping again.
The target volume contains the point-in-time content of the source volume. Even though
subsequent write activity has occurred on the source volume, it is not reflected on the target volume.

© Copyright IBM Corp. 2012, 2016 9-44


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Example: Source and target content differences


IBM_Storwize V009B:V009B1-admin>lsfcmap 0
id 0
name fcmap0
source_vdisk_id 2
source_vdisk_name VB1-NEW
target_vdisk_id 7
target_vdisk_name VB1-NEW_TGT
group_id
group_name
status idle_or_copied
progress 100
copy_rate 100 VB1-NEW
start_time 160619110406 VB1-NEW _TGT
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental on
difference 22
grain_size 256 Difference value indicates the
IO_group_id 0 percentage of grains that have
IO_group_name io_grp0 changed between the source
partner_FC_id and target volumes.
partner_FC_name
restoring no
rc_controlled no
IBM_2145:Team50A_Storwize V7000:TeamAdmin>

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-37. Example: Source and target content differences

The CLI lsfcmap command is used in this example to view the FlashCopy mapping details. The
copy_rate had been updated to 100. Background copy has completed, hence the status of this
mapping is idle_or_copied. Recall the Backup preset was selected, causing this mapping to be
defined with autodelete off and incremental on.
Since this mapping is defined with incremental copy, bitmaps are used to track changes to both the
source and target (recall reads/writes are supported for both source and target volumes). The
difference value indicates the percentage of grains that have changed between the source and
target volumes.
This difference percentage represents the amount of grains that need to be copied from the source
to the target with the next background copy. The value of 22 percent in this example is the result of
data having been added or written to the source volume.

© Copyright IBM Corp. 2012, 2016 9-45


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Example: Incremental copy to sync target


IBM_Storwize V009B:V009B1-admin >startfcmap -prep 0
IBM_Storwize V009B:V009B1-admin>lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,grou
p_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name
,restoring,start_time,rc_controlled
0,fcmap0,0,VB1-NEW,VB1-NEW_TGT,,,copying,77,100,100,on,,,no, 160619110406,no

IBM_Storwize V009B:V009B1-admin> lsfcmap -delim , 0


name fcmap0
source_vdisk_id 2
source_vdisk_name ROOT_ BEER
target_vdisk_id 7
target_vdisk_name ROOT_BEER_TGT
group_id
group_name
status idle_or_copied
progress 100 VB1-NEW
copy_rate 100 VB1-NEW _TGT
start_time 160619111005
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental on
difference 0 Difference value indicates the
grain_size 256 source and target volumes are
......
IBM_Storwize V009B:V009B1-admin> now identical.
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-38. Example: Incremental copy to sync target

In this example, we are using the CLI startfcmap -prep 0 command to start the mapping. This
command returns after successful submission of a long-running asynchronous job, in this case the
background incremental copy.
Since it is an incremental copy, only those blocks related to the changed grains (the 22%) are
copied to the target. An lsfcmap command submitted immediately afterward displays, in its concise
output, a status of copying and a progress of 77% already.
A short time later, the lsfcmap 0 verbose output shows the completion of the background copy -
progress 100 and difference 0.
After incremental copy completes, the content of the target volume is updated. At this point, the
content of the two volumes are identical. Subsequent changes to both source and target volumes
are now being tracked anew by Storwize V7000 FlashCopy.

© Copyright IBM Corp. 2012, 2016 9-46


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Issue: Data corruption occurs on source volume


• Data corruption occurred to the source, possibly due to subsequent
write activity or during the debugging.
VB1_NEW ROOT BEER_TGT

Time 1 VB1_NEW_Ale VB1_NEW_Ale


2
Full Copy

VB1_NEW_Ale
3 VB1_NEW_Stout VB1_NEW_Ale
VB1_NEW_Stout

4
Incremental
Copy
VB1_NEW_Ale
VB1_NEW_Stout
Data corrupted
5
/
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-39. Issue: Data corruption occurs on source volume

This example illustrates the incremental copy option of FlashCopy. At some point along the way,
data corruption of the source occurs, possibly due to subsequent write activity; it is now deemed
that a logical data corruption occurred, perhaps due to a programming bug.

© Copyright IBM Corp. 2012, 2016 9-47


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Solution: Reverse FlashCopy to restore source from target


• Use target volume to
restore source volume to VB1_NEW VB1_NEW_TGT
its previous point in time
volume image. VB1_NEW_Ale
VB1_NEW_Stout VB1_NEW_Ale
VB1_NEW_Stout
ƒ Supports multiple targets
(up to 256) equal to
6B
multiple rollback points Reverse
Copy
Time
ƒ Does not destroy target 6A Full
Copy

• Create an optional copy
for debugging before VB1_NEW_Ale
reversing the copy. VB1_NEW_Stout

VB1_NEW_DEBUG

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-40. Solution: Reverse FlashCopy to restore source from target

Reverse FlashCopy enables FlashCopy targets to become restore points for the source without
breaking the FlashCopy relationship and without having to wait for the original copy operation to
complete. FlashCopy provides the option to take a point-in-time copy of the corrupted volume data
for debugging purposes. It supports multiple targets (up to 256) and therefore multiple rollback
points.
You also have the ability to create an optional copy of the source volume to be made before the
reverse copy operation starts. This ability to restore back to the original source data can be useful
for diagnostic purposes.
A key advantage of the Storwize V7000 Multiple Target Reverse FlashCopy function is that the
reverse FlashCopy does not destroy the original target, which allows processes by using the target,
such as a tape backup, to continue uninterrupted.
This image illustrates that the corrupted volume image is to be captured for future problem
determination (step 6A).
Then the reverse copy feature of FlashCopy is used to restore the source volume from the target
volume (step 6B).
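As a hedged CLI sketch, the restore sequence might look as follows (the volume names are
hypothetical; the optional debug copy uses the Clone preset commands shown earlier):
svctask mkfcmap -source VB1_NEW_TGT -target VB1_NEW (reverse mapping: the target becomes the source)
svctask startfcmap -restore <reverse_map_id> (start the restore even though VB1_NEW is a source in another mapping)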

© Copyright IBM Corp. 2012, 2016 9-48


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Create Clone FlashCopy mapping


• Right-click the source volume and select Create Clone.
• Clone Preset automatically generates a target_01 that captures the
corrupted volume for debugging.

VB1_NEW VB1-NEW_01

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-41. Create Clone FlashCopy mapping

To obtain a volume copy of the corrupted source volume for later debugging, the fast path Copy
Services > FlashCopy menu is used.
Right-click the source volume entry and select Create Clone. The Clone preset will automatically
generate commands to create the target volume, define the source to target FlashCopy mapping,
and start the background copy.

© Copyright IBM Corp. 2012, 2016 9-49


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Create incremental FlashCopy mapping


• Reverse the source and target
paths for FlashCopy Mapping.
ƒ Target volume becomes the
source.
ƒ Source volume becomes the
target.
í GUI generates a warning that
target volume is also a source
volume in another mapping.
• Normal for restore
ƒ Mapping background copy
does not have to be completed
when reverse mapping is
started.

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-42. Create incremental FlashCopy mapping

To restore the source volume to its prior point-in-time copy, a reverse FlashCopy mapping is
defined. This procedure is similar to the one used to create the initial FlashCopy mappings, except
in this procedure you reverse the path by creating a new mapping that identifies the target volume
as the source, with the original source volume becoming the target.
A Warning dialog is displayed by the GUI to caution that the target volume is also a source volume
in another mapping. This is normal for a restore, and for this example, it is by design.
It is important to know that the source volume to target volume mapping background copy does
not have to be completed when the reverse mapping is started. Because the reverse copy is a
one-time use case, the Clone preset is selected so that the GUI generates a mapping with the
automatic deletion upon copy completion attribute.

© Copyright IBM Corp. 2012, 2016 9-50


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

FC mappings and rename volume

Start mapping

VB1-NEW_
VB1-NEW
TGT

VB1-NEW_
DEBUG

Rename
debug volume
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-43. FC mappings and rename volume

Since the reverse mapping had been defined using the Copy Services > FlashCopy Mapping
menu, its status is Idle. The administrator (instead of the GUI) controls when to start the mapping.
The FlashCopy target volume, target_01, which contains the source volume image with corrupted
data, should have a more descriptive name than the default name assigned by the fast path
FlashCopy GUI. You can use the Edit interface from the volume details panel for target_01 to
rename the volume to "target name_DEBUG".

© Copyright IBM Corp. 2012, 2016 9-51


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Start FlashCopy PiT copy restore


• Right-click the new source (target volume_TGT) and select Start.
ƒ GUI generates the startfcmap command with the -restore parameter so
the mapping can start even if the target volume is being used as a
source in another active FlashCopy mapping.

VB1-NEWR
VB1-NEW_TGT

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-44. Start FlashCopy PiT copy restore

To restore the content of the source volume from the target volume, right-click the new
source_TGT volume entry in the reverse FlashCopy mapping and select Start from the pop-up
menu.
Observe that the startfcmap command generated by the GUI contains the -restore parameter. The
-restore parameter allows the mapping to be started even if the target volume is being used as a
source in another active FlashCopy mapping.

© Copyright IBM Corp. 2012, 2016 9-52


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Monitor FlashCopy progress


IBM_Storwize V009B:V009B1-admin> lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,grou
p_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name
,restoring,start_time,rc_controlled
0,fcmap0,2,BEERS,7,BEERS_TGT,,,idle_or_copied,100,100,100,on,2,fcmap2,no,1606202404,no
1,fcmap1,2,BEERS,8,BEERS_DEBUG,,,copying,8,50,100,off,,,no, 160620111623,no
2,fcmap2,7,BEERS_TGT,2,BEERS,,,copying,73,50,100,on,0,fcmap0,yes, 160620112743,no
IBM_Storwize V009B:V009B1-admin> chfcmap -copyrate 100 1
IBM_Storwize V009B:V009B1-admin> chfcmap -copyrate 100 2

ID 0
ID 1 ROOT_ ID 2 ROOT
BEERS
BEERS_TGT
ROOT_BEERS_
_DEBUG
IBM_Storwize V009B:V009B1-admin> lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,grou
p_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name
,restoring,start_time,rc_controlled
0,fcmap1,2,BEERS,7,BEERS_TGT,,,idle_or_copied,100,100,100,on,,,no, 160620112404,no
1,fcmap0,2,BEERS,8,BEERS_DEBUG,,,copying,84,100,100,off,,,no, 160620111623,no
IBM_Storwize V009B:V009B1-admin> lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,grou
p_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name
,restoring,start_time,rc_controlled
0,fcmap1,2,VB1-NEW,7,VB1-NEW_TGT,,,idle_or_copied,100,100,100,on,,,no, 160620112404,no
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-45. Monitor FlashCopy progress

The lsfcmap command output shows the three mapping summaries:
The first is the original source to target mapping (fcmap0), which has a partner mapping, fcmap2.
Next, we have the source to DEBUG target mapping. This was created as a Clone, so the mapping
is deleted automatically at copy completion.
The fcmap2 mapping is used to reverse the copy direction and is partnered with mapping fcmap0.
With a progress of 73%, Storwize V7000 FlashCopy has determined that only a small percentage
of grains needs to be copied to restore the source volume.
The second set of lsfcmap output shows only mapping IDs 0 and 1: the reverse (restore) mapping
has completed its copy and, having been created with the Clone preset, was automatically deleted.
The background copy of the corrupted content to the DEBUG volume was still in progress at 84
percent; when it completes, that mapping is deleted as well.

© Copyright IBM Corp. 2012, 2016 9-53


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Source restored: Rebooted host view


• For recovery situations, it might be best to shut down the host and reboot
the server before using the restored volume.
• Reverse FlashCopy enables FlashCopy targets to restore the source to a
PiT without breaking the FlashCopy relationship.

VB1-NEW

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-46. Source restored: Rebooted host view

As with any FlashCopy mapping, after background copy has started, both the source and target
volumes are available for read/write access.
For recovery situations, it might be best to shut down the host and reboot the server before using
the restored volume.
This view shows the original source volume has been restored to the content level of the target
volume.
Based on the SDD reported disk serial number, the target_DEBUG volume has been assigned to
the host as drive letter E. It contains the corrupted content of the source volume. Reverse
FlashCopy enables FlashCopy targets to become restore points for the source without breaking the
FlashCopy relationship and without having to wait for the original copy operation to complete.

© Copyright IBM Corp. 2012, 2016 9-54


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Reverse multi-target FlashCopy operations


(Diagram: In the original Storwize V7000 relationships, a multi-target FlashCopy operation copies
source volume X to target volumes Y and W. At a later recovery point in time (PiT), either (1) an
optional copy of the original relationship is taken from X to target volume Z, or (2) a reverse
FlashCopy operation restores X from target volume W.)
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-47. Reverse multi-target FlashCopy operations

The multi-target FlashCopy operation allows several targets for the same source. This can be used
for backup to tape later. Even if the backup is not finished, the user can create an additional target
for the next backup cycle and so on.
Reverse FlashCopy enables FlashCopy targets to become restore points for the source without
breaking the FlashCopy relationship and without having to wait for the original copy operation to
complete. It supports multiple targets (up to 256) and thus multiple rollback points.
A key advantage of the IBM Spectrum Virtualize Multiple Target Reverse FlashCopy function is that
the reverse FlashCopy does not destroy the original target, which allows processes by using the
target, such as a tape backup, to continue uninterrupted.
IBM Spectrum Virtualize also provides the ability to create an optional copy of the source volume to
be made before the reverse copy operation starts. This ability to restore back to the original source
data can be useful for diagnostic purposes.
In this example, an error or virus has corrupted the source of the multi-target FlashCopy operation.
The administrator can reverse the FlashCopy so that the snapshot data on target 1 or target 2 is
flashed back to the source. This process is incremental and thus very fast. The host can then work
with the clean data. If a root cause analysis of the original source is needed, the corrupted data can
be copied to another target and stored for later analysis.

© Copyright IBM Corp. 2012, 2016 9-55


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty
Reverse FlashCopy:
• Does not require the original FC copies to have been completed.
• Does not destroy the original target content (for example, does not disrupt tape backups
underway).
• Does allow an optional copy of the corrupted source to be made (for example, for diagnostics)
before starting the reverse copy.
• Does allow any target of the multi-target chain to be used as the restore or reversal point.

© Copyright IBM Corp. 2012, 2016 9-56


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Benefits of backup as consistency group


• Maintain consistency of data across multiple disk volumes at a
backup location 24 hours a day
• Helps reduce the time needed for data backups
• Optimize thin provisioning to minimize storage space
Thin-provisioned

Mapping

Source Volume Target Volume

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-48. Benefits of backup as consistency group

If your company must maintain consistent data across multiple disk volumes at a backup location
and be available 24 hours a day, having eight hours of downtime for backups is unacceptable.
Using the FlashCopy service as part of the backup process can help reduce the time needed for the
backup. When the FlashCopy process is started, your application stops for just a moment, and then
immediately resumes.
Using FlashCopy for backup can also help you optimize the use of thin provisioning, which occurs
when the virtual storage of a volume exceeds its real storage. For example, you can use the
FlashCopy service to map a fully allocated source volume to a thin-provisioned target volume. The
thin-provisioned target volume serves as a consistent snapshot copy that you can use to back up
your data to tape. Because this type of target volume uses less real storage than the source
volume, it can help you reduce costs in power, cooling, and space.
FlashCopy consistency groups ensure data consistency across multiple volumes by putting
dependent volumes in an extended long busy state and then performing the backup. This is
supposed to guarantee integrity of the data in the dependent volumes at the physical level and not
the logical database level.
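A minimal sketch of mapping a fully allocated source volume to a thin-provisioned snapshot target
for tape backup (the volume names, pool, and sizes are hypothetical):
svctask mkvdisk -iogrp io_grp0 -mdiskgrp POOL1 -name APP_VOL_SNAP -size 100 -unit gb -rsize 2% -autoexpand (thin-provisioned target)
svctask mkfcmap -source APP_VOL -target APP_VOL_SNAP -copyrate 0 (Snapshot preset: no background copy)
svctask startfcmap -prep <map_id>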

© Copyright IBM Corp. 2012, 2016 9-57


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Spectrum Copy Services: FlashCopy


• FlashCopy
ƒ Functionality and overview
ƒ Create snapshot
ƒ Create consistency group with multi-select
ƒ Incremental Copy option
ƒ Cache layer/bitmap space
ƒ Tivoli Storage FlashCopy Manager

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-49. Spectrum Copy Services: FlashCopy

In this topic, we review how FlashCopy uses bitmaps to track grains in FlashCopy mappings or
mirroring relationships. In addition, we review the functions of Tivoli Storage FlashCopy Manager.

© Copyright IBM Corp. 2012, 2016 9-58


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

FlashCopy internal cache operations


• FlashCopy sits between the upper and lower cache layers.
ƒ FlashCopy indirection layer isolates the additional latency created by COW.
í COW latency is handled by the internal cache operations and not by the active
application.
ƒ Bitmap governs the I/O redirection between both nodes of the Storwize V7000.
• Prime location allows FlashCopy to benefit from read prefetching and
coalescing writes to backend storage.
ƒ Much faster because upper cache write data goes directly to the lower cache.
(Diagram: Read and write I/Os from the host enter the upper cache; the FlashCopy indirection
layer and FlashCopy bitmap sit between the upper and lower cache, governing stage and destage
operations; the lower cache issues read and write I/Os to the source and target volumes on the
storage controllers, copying from source to target.)
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-50. FlashCopy internal cache operations

Starting with V7.3 the entire cache subsystem was redesigned and changed accordingly. Cache
has been divided into upper and lower cache. Upper cache serves mostly as write cache and hides
the write latency from the hosts and application. Lower cache is a read/write cache and optimizes
I/O to and from disks.
This copy-on-write process introduces significant latency into write operations. To isolate the active
application from this additional latency, the FlashCopy indirection layer is placed logically between
upper and lower cache. Therefore, the additional latency that is introduced by the copy-on-write
process is encountered only by the internal cache operations and not by the application.
The two level cache design provides additional performance improvements to FlashCopy
mechanism. Because now the FlashCopy layer is above lower cache in the IBM Spectrum
Virtualize software stack, it can benefit from read prefetching and coalescing writes to backend
storage. Also, preparing FlashCopy is much faster because upper cache write data does not have
to go directly to backend storage but to lower cache layer. Additionally, in the multi-target
FlashCopy the target volumes of the same image share cache data. This design is opposite to
previous IBM Spectrum Virtualize code versions where each volume had its own copy of cached
data.
The bitmap governs the I/O redirection (I/O indirection layer) which is maintained in both nodes of
the IBM Storwize V7000 I/O Group to prevent a single point of failure. For the FlashCopy volume

© Copyright IBM Corp. 2012, 2016 9-59


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty
capacity per I/O Group, you have a maximum limit on the quantity of FlashCopy mappings that can
use bitmap space from this I/O Group. Bitmap space is shared with Remote Copy, Volume
Mirroring, and RAID, so a maximum FlashCopy configuration consumes the space that would
otherwise be available for Metro or Global Mirror bitmaps. The default FlashCopy allotment
supports 40 TiB of source volume capacity.

© Copyright IBM Corp. 2012, 2016 9-60


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Example: Bitmap space defaults for data replication


IBM_Storwize:V009B:V009B1-admin>lsiogrp 0
id 0
name io_grp0
node_count 2
vdisk_count 8
host_count 4
flash_copy_total_memory 20.0MB Default bitmap space for
flash_copy_free_memory 19.9MB FlashCopy, Remote Copy
remote_copy_total_memory 20.0MB
remote_copy_free_memory 20.0MB (MM/GM) and Volume Mirroring
mirroring_total_memory 20.0MB
mirroring_free_memory 20.0MB
raid_total_memory 40.0MB
raid_free_memory 40.0MB
maintenance no
compression_active no
accessible_vdisk_count 8
compression_supported yes

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-51. Example: Bitmap space defaults for data replication

Bitmaps are internal Storwize V7000 data structures used to track which grains in FlashCopy
mappings or mirroring relationships have been copied from the source volume to the target
volume, or from one copy of a volume to another for Volume Mirroring.
Bitmaps consume bitmap space in each I/O group’s node cache. The maximum amount of cache
used for bitmap space is 552 MiB per I/O Group, which is shared among FlashCopy bitmaps,
Remote Copy (Metro/Global Mirroring) bitmaps, Volume Mirroring, and RAID processing bitmaps.
When an Storwize V7000 cluster is initially created, the default bitmap space assigned is 20 MiB
each for FlashCopy, Remote Copy, and Volume Mirroring; and 40 MiB for RAID metadata.
The verbose lsiogrp command output displays, for a given I/O group, the amount of bitmap space
allocated and currently available for each given bitmap space category.

© Copyright IBM Corp. 2012, 2016 9-61


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Bitmap space and copy capacity (per I/O group)


Copy Service                 Grain Size (KB)   1 MB bitmap space   20 MB bitmap space   512 MB bitmap space
FlashCopy                    256               2 TB                40 TB                1024 TB   (source volume capacity)
FlashCopy                    64                512 GB              10 TB                256 TB    (source volume capacity)
Incremental FlashCopy        256               1 TB                20 TB                512 TB    (source volume capacity)
Incremental FlashCopy        64                256 GB              5 TB                 128 TB    (source volume capacity)
Metro Mirror / Global Mirror 256               2 TB                40 TB                1024 TB   (volume capacity)
Volume Mirroring             256               2 TB                40 TB                1024 TB   (volume capacity)

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-52. Bitmap space and copy capacity (per I/O group)

By default, each I/O group has allotted 20 MB of bitmap space each for FlashCopy, Remote Copy,
and Volume Mirroring.
For FlashCopy, the default 20 MB of bitmap space provides a copy capacity to track 40 TB of target
volume space if the default grain size of 256 KB is used. The 64 KB grain size means four times as
many bits are needed to track the same amount of space; this increased granularity decreases the
total copy capacity to 10 TB or one fourth the amount as the 256 KB grain size. The tradeoff is a
potential decrease in the amount of data that needs to be incrementally copied, which in turn,
reduces copy time and Storwize V7000 CPU utilization.
Incremental FlashCopy requires tracking changes for both the source and target volumes, thus two
bitmaps are needed for each FlashCopy mapping. Consequently for the default grain size of 256
KB, the total copy capacity is reduced from 40 TB to 20 TB. If the 64 KB grain size is selected, the
total copy capacity is reduced from 10 TB to 5 TB.
For Remote Copy (Metro and Global mirroring), the default 20 MB of bitmap space provides a total
capacity of 40 TB per I/O group; likewise for Volume Mirroring.
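As a worked example of the default FlashCopy allotment at the 256 KB grain size: each grain is
tracked by one bit, so a 20 MiB bitmap holds 20 x 8 x 2^20 = 167,772,160 bits, and 167,772,160
grains x 256 KiB per grain = 40 TiB of trackable source volume capacity. Halving the grain size to
64 KB quarters that capacity to 10 TiB, and incremental FlashCopy halves it again because two
bitmaps are kept per mapping.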

© Copyright IBM Corp. 2012, 2016 9-62


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Example: Bitmap space configuration and usage


IBM_Storwize:V009B:V009B1-admin>chiogrp -feature flash -size 30 0
IBM_Storwize:V009B:V009B1-admin>chiogrp -feature remote -size 10 0
IBM_Storwize:V009B:V009B1-admin>chiogrp -feature mirror -size 25 0

IBM_Storwize:V009B:V009B1-admin>lsiogrp 0
id 0
name io_grp0
node_count 2
vdisk_count 8
host_count 4 Update bitmap space for
flash_copy_total_memory 30.0MB
flash_copy_free_memory 29.9MB FlashCopy, Remote Copy
remote_copy_total_memory 10.0MB (MM/GM) and Volume Mirroring
remote_copy_free_memory 10.0MB
mirroring_total_memory 25.0MB
mirroring_free_memory 25.0MB
raid_total_memory 40.0MB
raid_free_memory 40.0MB
maintenance no
compression_active no
accessible_vdisk_count 8
compression_supported yes

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-53. Example: Bitmap space configuration and usage

The chiogrp command is used to control the amount of bitmap space to be set aside for each IO
group.
Use the chiogrp command to release the default allotted cache space if the corresponding function
is not licensed. For example, if Metro/Global Mirror is not licensed, change the bitmap space to 0 to
regain the I/O group cache for other use.
By the same token, if more copy capacity is required, use the chiogrp command to increase the
amount of memory set aside for bitmap space. A maximum of 552 MB, shared among FlashCopy,
Remote Copy, Volume Mirroring, and RAID functions, can be specified per IO group.
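For example, a sketch of releasing unlicensed Remote Copy bitmap space and growing the
FlashCopy allotment for io_grp0 (the sizes are illustrative):
svctask chiogrp -feature remote -size 0 io_grp0 (release Remote Copy bitmap space)
svctask chiogrp -feature flash -size 40 io_grp0 (grow the FlashCopy bitmap space to 40 MB)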

© Copyright IBM Corp. 2012, 2016 9-63


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Spectrum Copy Services: FlashCopy


• FlashCopy
ƒ Functionality and overview
ƒ Create snapshot
ƒ Create consistency group with multi-select
ƒ Incremental Copy option
ƒ Indirection layer/bitmap space
ƒ Spectrum Protect (Tivoli Storage FlashCopy Manager)

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-54. Spectrum Copy Services: FlashCopy

In this topic, we will review the functions of Tivoli Storage Manager.

© Copyright IBM Corp. 2012, 2016 9-64


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Tivoli Storage Manager for Advanced Copy Services (1 of 2)


Offload Snapshot backup to TSM
System server
Application • Transfer outboard of
System Local application server to minimize
Application Snapshot impact to application
Data Versions • Copies on TSM server provide
Snapshot long-term retention and
Backup disaster recovery
Backup • Very fast restore from the
to TSM snapshot

Support for multiple,


persistent snapshots
Restore from TSM • Persistent snapshots retained
locally

With Optional TSM Backup Integration
Policy-based management of local, persistent snapshots
• Retention policies may be different for local snapshots
and copies on TSM server
• Automatic reuse of local snapshot versions as they expire
Storage hierarchy
Restore can be performed from
• Local snapshot version
• TSM storage hierarchy
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-55. Tivoli Storage Manager for Advanced Copy Services (1 of 2)

The management of many large FlashCopy relationships and consistency groups is a complex
task without a form of automation for assistance.
IBM Tivoli Storage FlashCopy Manager provides fast application-aware backups and restores by
leveraging the advanced point-in-time image technologies available with the IBM Storwize V7000.
In addition, it provides optional integration with IBM Tivoli Storage Manager for long-term storage
of snapshots.
This example shows the integration of Tivoli Storage Manager and FlashCopy Manager at a
conceptual level. FlashCopy Manager is supported on SAN Volume Controller, Storwize V7000,
DS8000, DS3400, DS3500, and DS5000, plus others.

© Copyright IBM Corp. 2012, 2016 9-65


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Tivoli Storage Manager for Advanced Copy Services (2 of 2)


Offload Snapshot backup to TSM
FlashCopy Manager System server
• Transfer outboard of
Application application server to minimize
System Local
Application impact to application
Snapshot
Data Versions
• Copies on TSM server provide
long-term retention and
Snapshot
disaster recovery
Backup
• Very fast restore from the
snapshot

Support for multiple,


persistent snapshots
• Persistent snapshots retained
locally
• Very fast restore from
snapshot
With Optional TSM Backup Integration
Policy-based management of local, persistent snapshots
• Retention policies may be different for local snapshots
and copies on TSM server
• Automatic reuse of local snapshot versions as they expire
Storage hierarchy
Restore can be performed from
• Local snapshot version
• TSM storage hierarchy
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-56. Tivoli Storage Manager for Advanced Copy Services (2 of 2)

Tivoli FlashCopy Manager provides many of the features of Tivoli Storage Manager for Advanced
Copy Services without the requirement to use Tivoli Storage Manager. With Tivoli FlashCopy
Manager, you can coordinate and automate host preparation steps before you issue FlashCopy
start commands to ensure that a consistent backup of the application is made. You can put
databases into hot backup mode and flush the file system cache before starting the FlashCopy.
FlashCopy Manager also allows for easier management of on-disk backups that use FlashCopy,
and provides a simple interface to perform the “reverse” operation.
This example shows the FlashCopy Manager feature.

© Copyright IBM Corp. 2012, 2016 9-66


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Keywords
• FlashCopy
• Full background copy
• No background copy
• Consistency groups
• GUI presets
• Event notifications
• Thin provisioned target
• Target
• Source
• Copy rate
• Clone
• Incremental FlashCopy
• Bitmap space
• Tivoli Storage FlashCopy Manager

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-57. Keywords

Listed are keywords that were used in this unit.

© Copyright IBM Corp. 2012, 2016 9-67


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Review questions (1 of 2)
1. True or False: Both the source and target volumes of a
FlashCopy mapping are available for read/write I/O
operations while the background copy is in progress.

2. True or False: A FlashCopy target volume can be


Thin-Provisioned and reside in another Storwize V7000
cluster.

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-58. Review questions (1 of 2)

© Copyright IBM Corp. 2012, 2016 9-68


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Review answers (1 of 2)
1. True or False: Both the source and target volumes of a
FlashCopy mapping are available for read/write I/O
operations while the background copy is in progress.
The answer is true.

2. True or False: A FlashCopy target volume can be Thin-


Provisioned and reside in another Storwize V7000 cluster.
The answer is false.

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

© Copyright IBM Corp. 2012, 2016 9-69


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Review questions (2 of 2)
3. True or False: Incremental FlashCopy assumes an initial full
background copy so that subsequent background copies
only need to copy the changed blocks to resynchronize the
target.

4. True or False: Bitmap space for Copy Services is managed


in the node cache of the I/O group.

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-59. Review questions (2 of 2)

© Copyright IBM Corp. 2012, 2016 9-70


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Review answers (2 of 2)
3. True or False: Incremental FlashCopy assumes an initial full
background copy so that subsequent background copies
only need to copy the changed blocks to resynchronize the
target.
The answer is true.

4. True or False: Bitmap space for Copy Services is managed
in the node cache of the I/O group.
The answer is true.

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

© Copyright IBM Corp. 2012, 2016 9-71


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 9. Spectrum Virtualize Copy Services: FlashCopy

Uempty

Unit summary
• Identify I/O access to source and target volumes during a FlashCopy
operation
• Classify the purpose of consistency groups for both FlashCopy and
Remote Copy operations
• Summarize FlashCopy use cases and correlate to GUI provided
FlashCopy presets
• Recognize usage scenarios for incremental FlashCopy and reverse
FlashCopy
• Discuss host system considerations to enable usage of a FlashCopy
target volume and the Mirroring auxiliary volume
• Recognize the bitmap space needed for Copy Services and Volume
Mirroring

Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016

Figure 9-60. Unit summary

© Copyright IBM Corp. 2012, 2016 9-72


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 10. Spectrum Virtualize Copy Services: Remote Copy

Uempty

Unit 10. Spectrum Virtualize Copy


Services: Remote Copy
Estimated time
00:45

Overview
Spectrum Virtualize provides data replication services for mission-critical data using Remote
Copy (Metro Mirror - synchronous copy, and Global Mirror - asynchronous copy) of volumes.
This unit examines the functions provided by the Remote Copy features of the Storwize V7000 and
illustrates their usage with example scenarios.

How you will check your progress


• Checkpoint questions
• Machine exercises

References
IBM Storwize V7000 Implementation Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html

© Copyright IBM Corp. 2012, 2016 10-1


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 10. Spectrum Virtualize Copy Services: Remote Copy

Uempty

Unit objectives
• Summarize the use of the GUI/CLI to establish a cluster partnership,
create a relationship, start remote mirroring, monitor progress, and
switch the copy direction
• Differentiate among the functions provided with Metro Mirror, Global
Mirror, and Global Mirror with change volumes

Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016

Figure 10-1. Unit objectives

© Copyright IBM Corp. 2012, 2016 10-2


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 10. Spectrum Virtualize Copy Services: Remote Copy

Uempty

Spectrum Copy Services: Remote Copy


• Remote Copy
ƒ Metro Mirror and Global Mirror
ƒ Partnership
ƒ Connectivity
ƒ Examples of MM/GM configuration

Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016

Figure 10-2. Spectrum Copy Services: Remote Copy

This topic examines the functions of Remote Copy Services Metro Mirror and Global Mirror.

© Copyright IBM Corp. 2012, 2016 10-3


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 10. Spectrum Virtualize Copy Services: Remote Copy

Uempty

When disaster occurs


• Why do you need a disaster recovery plan?
ƒ Unintended deletion of a database object (rows, columns, tables).
ƒ Unintended deletion of a server object (databases, chunks, dbspace)
ƒ Data corruption or incorrect data created
ƒ Hardware failure (such as when a disk that contains chunk files fails)
ƒ Database server failure
ƒ Natural disaster
Site failure

Primary
Site 1 Secondary
Site 2

Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016

Figure 10-3. When disaster occurs

Today, businesses are often required to be operational 24x7x365, while potential disasters due to
weather, power outages, fire, water, or even terrorism pose numerous threats; real-time disaster
recovery and business continuance have therefore become absolutely necessary for many
businesses. Some disasters happen suddenly, stopping all processing at a single point in time;
others interrupt operations in stages that occur over several seconds or even minutes. The latter is
often referred to as a rolling disaster. It is therefore a business-critical requirement to plan for
recovery from system failures, whether they are immediate, intermittent, or gradual.

© Copyright IBM Corp. 2012, 2016 10-4


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 10. Spectrum Virtualize Copy Services: Remote Copy

Uempty

Remote Copy replication types


1. Metro Mirror (synchronous)
   PRIMARY -> SECONDARY

2. Global Mirror without cycling (asynchronous)
   PRIMARY -> SECONDARY
   Peak write bandwidth required to maintain low RPOs

3. Global Mirror with cycling and change volumes (GMCV)
   PRIMARY -> SECONDARY
   Lower bandwidth requirement at the expense of higher RPOs
   (Diagram: at each site, a FlashCopy mapping links the primary or secondary volume to its
   change volume)
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016

Figure 10-4. Remote Copy replication types

IBM Remote Copy Services offers several data replication methods: a synchronous remote copy
called Metro Mirror (MM), an asynchronous remote copy called Global Mirror (GM), and Global
Mirror with Change Volumes (GMCV).
Each method is discussed in detail.

© Copyright IBM Corp. 2012, 2016 10-5


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V11.0
Unit 10. Spectrum Virtualize Copy Services: Remote Copy

Uempty

Remote Copy Services replication


• Disaster recovery (DR) for block based volumes
• Intercluster copy: remote mirroring over distance, in which the two
copies are geographically isolated between two IBM Storwize V7000
systems
• Intracluster copy: within the same I/O group (where both source and
target volumes must be in the same I/O group)
• Available on nearly all systems capable of running IBM Spectrum
Virtualize
Site 1 Site 2
Any Storwize Relationship Any Storwize
Family product Family product

VDisks VDisks
SAN SAN

Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016

Figure 10-5. Remote Copy Services replication

Remote Copy services provide a single point of control when remote copy is enabled in your
network, regardless of the disk subsystems that are used, if those disk subsystems are supported
by the IBM Storwize V7000.
Synchronous and asynchronous transmission are two different methods of transmission
synchronization. Synchronous transmissions are synchronized by an external clock, while
asynchronous transmissions are synchronized by special signals along the transmission medium.
The general application of remote copy services is to maintain two real-time synchronized copies of
a volume, known as remote mirroring. The typical requirement for remote mirroring is over distance.
In this case, intercluster copy is used across two Storwize V7000 clusters using a Fibre Channel
interswitch link (ISL) or alternative SAN distance extension solutions.
Often, the two copies are geographically dispersed between two IBM Storwize V7000 systems,
although it is possible to use MM or GM within a single system. This intracluster copy is supported
within the same I/O group (that is, both source and target volumes must be in the same I/O group).
If the master copy fails, you can enable the auxiliary copy for I/O operation.


Remote Copy within the I/O stack

Node I/O stack (I/Os from the host at the top, I/Os to storage controllers at the bottom): SCSI Target > Forwarding > Replication (Metro/Global Mirror) > Upper Cache > FlashCopy > Mirroring > Thin Provisioning > Lower Cache > Virtualization > Forwarding > RAID > Forwarding > SCSI Initiator

• Advanced, independent, network-based copy service (SAN-wide)
• Implemented in the Replication layer
• Transaction acknowledgment depends on the replication technology (synchronous versus asynchronous) used by the host to send the transaction to its local storage
• Optional feature: license required

Figure 10-6. Remote Copy within the I/O stack

IBM Remote Copy is an advanced, independent, network-based, SAN-wide storage system copy
service provided by the Storwize V7000.
For both synchronous and asynchronous replication, the storage array on the primary site sends
the transaction acknowledgment to the host on the primary site. The difference between the two
replication technologies is the order of events that take place after the host sends the transaction to
the local storage array.
Remote Copy is implemented near the top of the Storwize V7000 I/O stack so that host write data
can be forwarded to the remote secondary site as soon as it arrives from the host, which facilitates
parallelism and minimizes latency.
In parallel with forwarding, the data is also sent to the fast-write cache for local processing.
Because Remote Copy replication sits above the cache layer, it binds to an I/O group.
Metro Mirror and Global Mirror are optional features of the Storwize V7000. The idea is to provide
storage-system-independent, outside-the-box copy capability, so the customer does not have to
use or license the Copy Services functions on a box-by-box basis.


Synchronous Metro Mirror

• Up to 300 km between sites for business continuity
  - As with any synchronous remote replication, performance requirements might limit the usable distance
• Host I/O is completed only when the data is stored at both locations
• Operates between Storwize V7000 clusters at each site
  - Local and remote volumes might be on any Storwize V7000 supported disk systems
• Simplest way to maintain an identical copy of data

Figure 10-7. Synchronous Metro Mirror

Metro Mirror supports copy operations between volumes that are separated by distances up to
300 km. Synchronous mode provides a consistent and continuous copy, which ensures that
updates are committed at both the primary and the secondary sites before the application
considers the updates complete. The host application writes data to the primary site volume but
does not receive the status on the write operation until that write operation is in the Storwize V7000
cache at the secondary site. Therefore, the volume at the secondary site is fully up to date and an
exact match of the volume at the primary site if it is needed in a failover.
Metro Mirror provides the simplest way to maintain an identical copy on both the primary and
secondary volumes.
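As a minimal CLI sketch, assuming a fully configured partnership to the remote system already exists (the relationship name is illustrative, and the volume names are taken from the example environment later in this unit), a Metro Mirror relationship could be created and started as follows. Because the -global parameter is omitted, the relationship defaults to synchronous Metro Mirror:

   svctask mkrcrelationship -master WINES_M -aux WINES_A -cluster remote_system_id -name MM_WINES
   svctask startrcrelationship MM_WINES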


Synchronous Metro Mirror communication

• Synchronized by an external clock
  - Communication occurs in real time (simultaneous)

(1) The host on the primary site sends a write transaction to the primary storage master volume
(2) The primary storage master volume commits the transaction to cache and immediately sends a mirror write to the secondary storage auxiliary volume
(3) The secondary storage auxiliary volume sends an acknowledgment of the write to the primary storage master volume
(4) The primary storage master volume sends an acknowledgment to the host

Figure 10-8. Synchronous Metro Mirror communication

With synchronous communication, each host write operation to the master volume is mirrored to
the cache of the auxiliary volume before an acknowledgment of the write is sent back to the host
that issued it. This process ensures that the auxiliary is synchronized in real time if it is needed in a
failover situation.
Because both storage volumes process the transaction before an acknowledgment is sent to the
host, the arrays are always synchronized.


Asynchronous Global Mirror

• Up to 20,000 km distance between sites for business continuity
  - Up to 250 ms round-trip latency
• Does not wait for secondary I/O before completing host I/O
  - Helps reduce the performance impact to applications
• Designed to maintain a consistent secondary copy at all times
• Operates between Storwize V7000 clusters at each site
  - Local and remote volumes might be on any Storwize V7000 supported disk systems

Figure 10-9. Asynchronous Global Mirror

A Global Mirror relationship allows the host application to receive confirmation of I/O completion
without waiting for updates to have been committed to the secondary site. In asynchronous mode,
Global Mirror enables the distance between two Storwize V7000 clusters to be extended while
reducing latency by posting the completion of local write operations independent from the
corresponding write activity at the secondary site.
Global Mirror provides an asynchronous copy, which means that the secondary volume is not an
exact match of the primary volume at every point in time. The Global Mirror function provides the
same function as Metro Mirror Remote Copy without requiring the hosts to wait for the full round-trip
delay of the long-distance link; however, some delay can be seen on the hosts in congested or
overloaded environments. This asynchronous copy process reduces the latency to the host
application and facilitates longer distances between the two sites. The secondary volume is
generally less than one second behind the primary volume, to minimize the amount of data that
must be recovered in the event of a failure. However, this requires that a link able to sustain the
peak write bandwidth be provisioned between the two sites.
Make sure that you closely monitor and understand your workload. The distance of Global Mirror
replication is limited primarily by the latency of the WAN link provided.
Previously, Global Mirror supported up to 80 ms round-trip time for the GM links to send data to
the remote location. With the release of V7.4, it now supports up to 250 ms round-trip latency, and
distances of up to 20,000 km are supported. Combined with the performance improvements in the
previous software release, these changes and enhancements have greatly improved reliability
and performance, even over poor links.
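A hedged CLI sketch of the equivalent asynchronous relationship (the names are illustrative) differs from the Metro Mirror example only in the -global parameter:

   svctask mkrcrelationship -master WISKEE_GM -aux WISKEE_GA -cluster remote_system_id -global -name GM_WISKEE
   svctask startrcrelationship GM_WISKEE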


Asynchronous Global Mirror operations

• Synchronized by special signals along the transmission medium
  - Communication does not occur at the same time

(1) The host on the primary site sends a write transaction to the primary storage master volume
(2) The primary storage master volume commits the transaction to cache and immediately sends an acknowledgment back to the host
(3) The primary storage master volume sends the update to the secondary storage auxiliary volume following a time delay
(4) The secondary storage auxiliary volume eventually sends an acknowledgment to the primary storage master volume

Figure 10-10. Asynchronous Global Mirror operations

In an asynchronous Global Mirror operation, as the host sends write operations to the master
volume, each transaction is processed by cache and an acknowledgment is immediately returned
to the host that issued the write, before the write operation is mirrored to the cache of the auxiliary
volume. An update for this write operation is sent to the secondary site at a later stage, which
provides the capability to perform Remote Copy over distances exceeding the limitations of
synchronous Remote Copy.


Global Mirror without cycling

• Global Mirror without cycling (default): MASTER → AUXILIARY
  - Peak write bandwidth is required to maintain low RPOs
  - Initial background copy from master to auxiliary
  - Dependent writes to the master are sent in sequence to the auxiliary to ensure they are applied in the same order
  - Bandwidth is sized for peak write rates
  - RPO within seconds, across the board for all relationships
  - Maximum 250 ms round-trip latency

Figure 10-11. Global Mirror without cycling

Traditional Global Mirror operates without cycling: write operations are transmitted to the auxiliary
(secondary) volume on a continuous basis, triggered by write activity. The secondary volume is
generally within seconds of the primary volume for all relationships. This achieves a low recovery
point objective (RPO), minimizing the amount of data that must be recovered.
This mode requires a network that supports peak write workloads, as well as minimal resource
contention at both sites. Insufficient resources or network congestion might result in error code
1920 and stopped GM relationships.
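When a relationship has stopped with a 1920 error, it can be inspected and restarted from the CLI once the underlying congestion is resolved; a minimal sketch, assuming an illustrative relationship name (depending on the state, the restart might require resynchronization):

   svctask lsrcrelationship GM_WISKEE       (check the state field, for example consistent_stopped)
   svctask startrcrelationship GM_WISKEE    (resume replication after resolving the link issue)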


Global Mirror with cycling and change volumes

Remote Copy primary (MASTER) → FlashCopy mapping → master change volume (space-efficient) → background copy over the Remote Copy link → Remote Copy secondary (AUXILIARY) → FlashCopy mapping → auxiliary change volume (space-efficient). This requires less link bandwidth and guarantees a consistent copy while host I/O continues.

• Provides a low-bandwidth-tolerant Remote Copy function by offloading mirroring functions to a FlashCopy of the production data
• The initial background copy copies from master to auxiliary
• A background copy of the volume is performed across the infrastructure, enabling a higher RPO and using significantly less bandwidth
  - The change volume holds 256 KB grains of changed data during the transmission cycle
  - Flexible bandwidth requirements at the expense of higher RPOs
  - The RPO can be tailored on a per-relationship basis
  - Maximum 250 ms round-trip latency
• The FlashCopy of the production volume can be space-efficient to conserve storage capacity

Figure 10-12. Global Mirror with cycling and change volumes

A Global Mirror relationship with cycling and change volumes leverages FlashCopy functionality to
mitigate peak bandwidth requirements by addressing average instead of peak throughput, at the
expense of higher recovery point objectives (RPOs).
Replication is made possible because all updates to the primary volume are tracked and, where
needed, copied to intermediate change volumes. A delta of the blocks changed since the last cycle
(known as grains) is transmitted to the secondary periodically. The secondary volume is much
further behind the primary volume (an older recovery point), so more data must be recovered in the
event of a failover. Because the transmission of changed data can be smoothed over a longer time
period, a lower-bandwidth (and hence lower-cost) link can be deployed.
Change volumes enable background replication of point-in-time images based on cycling periods
(the default is every 300 seconds). Blocks updated repeatedly within a cycle need to be sent only
once, reducing some of the transmission traffic load.
If the background copy does not complete within the cycling period, the next cycle is not started
until the prior copy completes, which leads to higher RPOs. Cycling also enables the recovery point
objective to be configured at the individual relationship level.
A freeze time value is maintained in the GMCV relationship entry. It reflects the time of the last
consistent image present at the auxiliary site. The RPO might be up to two cycle periods if
the background copy completes within the cycle time. If the background copy does not complete
within the cycle time, the RPO (current time minus freeze time) can then exceed two cycles.
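The freeze time can be read from the relationship details; a minimal sketch, assuming an illustrative relationship name and an abbreviated output listing:

   svctask lsrcrelationship GM_WISKEE
   ...
   freeze_time 2016/08/15/14/05/00    (time of the last consistent image at the auxiliary)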
Benefits of Global Mirror with change volumes:
• Bursts of host workload are smoothed over time, so much lower link bandwidths can be used.
• In the future, acceptance of higher latency on the link can lead to support for distances greater
than 8,000 km.
• Almost zero impact: only a brief I/O pause when triggering the next change volume (due to
near-instant prepare).
• Less impact to the source volumes during prepare, because the prepare is bound by normal
destage rather than a forced flush.
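As a hedged CLI sketch of enabling cycling mode on an existing Global Mirror relationship (the volume and relationship names are illustrative; the relationship must be stopped first, and the auxiliary change volume is attached on the system that owns the auxiliary volume):

   svctask stoprcrelationship GM_WISKEE
   svctask chrcrelationship -masterchange WISKEE_GM_CHG GM_WISKEE    (attach the master change volume)
   svctask chrcrelationship -auxchange WISKEE_GA_CHG GM_WISKEE      (attach the auxiliary change volume)
   svctask chrcrelationship -cycleperiodseconds 300 GM_WISKEE       (300 seconds is the default cycle period)
   svctask chrcrelationship -cyclingmode multi GM_WISKEE            (enable change-volume cycling)
   svctask startrcrelationship GM_WISKEE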


Global Mirror cycling mode processing

• The master system signals the auxiliary system to start its FlashCopy snapshot.
• Bitmaps for the master volume and its change volume are merged to identify the grains (256 KB blocks) needed for the cycle.
• A FlashCopy snapshot is started at the master system to obtain this cycle's view of the master volume content.
  - Subsequent writes to the master cause copy-on-write blocks (COWs) on its change volume.
• Blocks to be copied are read from the master change volume and sent to the auxiliary site.
  - Before the auxiliary volume is updated, corresponding COWs are written to its change volume (for recovery if needed).
• After the changed blocks are sent (which might exceed the cycling period), the freeze time is updated with the cycle start time (the last consistent image or recovery point).
• The FlashCopy snapshots are stopped, and the used capacity of the thin snapshots is released and readied for the next cycle.

The cycle period defaults to 300 seconds (5 minutes) and can be tailored (1 minute to 24 hours) for each relationship.

Figure 10-13. Global Mirror cycling mode processing

Change volumes are the same size as the primary and auxiliary volumes, but because they are
space-efficient, they consume capacity only for the regions that change.
When Global Mirror operates in cycling mode, after the initial background copy, changes to the
master volume are tracked and the changed data is copied to intermediate change volumes by
using FlashCopy point-in-time copy technology. This process does not require the change volume
to hold the entire content of the master volume; it only has to store data for the regions of the
master volume that change until the next capture step.
The primary change volume is then replicated periodically to the secondary Global Mirror volume at
the target site, where it is captured in another change volume. This provides an always consistent
image at the target site and protects your data from being inconsistent during resynchronization.
The mapping between the two sites is updated on the cycling period (60 seconds to 1 day). This
means that the secondary volume is much further behind the primary volume, and more data must
be recovered in the event of a failover. Because the data transfer can be smoothed over a longer
time period, however, lower bandwidth is required to provide an effective solution.
The data stored on the change volume is the original data from the point at which FlashCopy
captured the master volume, and it allows the system to piece together the whole master volume
state from that time.

The data captured here uses FlashCopy to provide data consistency; using this consistent,
point-in-time copy, the changes can be streamed to the DR auxiliary site during the next copy step.
The FlashCopy pauses I/Os to the master volume while generating the consistent point-in-time
image, which is visible as a single spike in read and write response time. The spike can be a few
tens of milliseconds for volumes being individually replicated, or up to a second or more if volumes
are being replicated as part of a large application of 100 volumes or more. More on this in a bit.
Simultaneously, the process captures the DR volume's data onto a change volume on the DR site
by using FlashCopy. This consistently captures the current state of the DR copy, ensuring that the
system can revert to a known good copy if connectivity is lost during the next copy step. The data
stored on the change volume at the DR site comprises the regions changed on the DR copy during
the next copy step and consists of the previous data for each region, allowing the whole DR copy to
be reverted back to its state at this capture.


Remote Copy advantages and disadvantages

Asynchronous transmission communication
  Advantages:
  • Simple; does not require synchronization of both sides
  • Cheap, because asynchronous transmission requires less hardware
  • Setup is faster than other transmissions, so it is well suited for applications where messages are generated at irregular intervals (for example, data entry from the keyboard) and the speed depends on different applications
  Disadvantages:
  • Large relative overhead; a high proportion of the transmitted bits are used solely for control purposes and thus carry no useful information

Synchronous transmission
  Advantages:
  • Lower overhead and thus greater throughput
  Disadvantages:
  • Slightly more complex
  • Hardware is more expensive

Figure 10-14. Remote Copy advantages and disadvantages

There are some advantages and disadvantages between asynchronous and synchronous remote
copy operations:
• All synchronous copies over remote distances can have a performance impact on host
applications. This performance impact is related to the distance between the primary and
secondary volumes, and depending on application requirements, its use might be limited by the
distance between sites. The distance between the two sites is constrained by the latency and
bandwidth of the communication link, along with the latency the host application can tolerate.
Applications are therefore fully exposed to the latency and bandwidth limitations of the
communication link to the secondary. In a truly remote situation, this extra latency can have a
significant adverse effect on application performance.
• With asynchronous copying, if a failover occurs, certain updates (data) might be missing at the
secondary. The application must have an external mechanism for recovering the missing
updates, if possible; this mechanism can involve user intervention.
Recovery on the secondary site involves starting the application on this recent backup and then
rolling forward or backward to the most recent commit point.


Spectrum Copy Services: Remote Copy


• Remote Copy
ƒ Metro Mirror and Global Mirror
ƒ MM/GM Partnership and Relationship
ƒ Connectivity
ƒ Examples of MM/GM configuration


Figure 10-15. Spectrum Copy Services: Remote Copy

This topic examines the functions of creating a Metro Mirror and Global Mirror relationship and
partnership.


Multi-cluster Remote Copy partnerships

• Spectrum Virtualize can maintain copies of your data at up to three other locations
  - Supports consolidated DR strategies (for example, a consolidated DR site receiving MM or GM relationships from several locations)
• Enables Metro and Global Mirror relationships between up to four Storwize V7000 systems
  - By using asynchronous replication (Global Mirror), up to 25,000 km away
  - By using synchronous replication (Metro Mirror), up to 300 km away
  - Available over Fibre Channel, 10 Gb Ethernet, and 1 Gb Ethernet
  - Three-site Metro-Global Mirror, as on the DS8000, is not supported
  - Replication of a volume to multiple sites is not supported
• Maximum MM/GM relationships increased to 8192
  - Support for 256 consistency groups

Figure 10-16. Multi-cluster Remote Copy partnerships

Metro Mirror and Global Mirror partnerships define an association between a local cluster and a
remote cluster. Each cluster can maintain up to three partnerships, and each partnership can be
with a single remote cluster. Up to four clusters can be directly associated with each other.
Clusters also become indirectly associated with each other through partnerships. If two clusters
each have a partnership with a third cluster, those two clusters are indirectly associated. A
maximum of four clusters can be directly or indirectly associated.
Multi-cluster mirroring enables the implementation of a consolidated remote site for disaster
recovery. It also can be used in migration scenarios with the objective of consolidating data centers.
A volume can be in only one Metro or Global Mirror relationship, which means a relationship always
involves at most two clusters. Up to 8192 relationships (a mix of Metro and Global) are supported
per cluster.


Multi-cluster Remote Copy topologies

• Volumes can be part of only one remote copy relationship.

Star: A ↔ B, A ↔ C, A ↔ D; system A can be a central DR site for the three other locations
Triangle: A → B, A → C, and B → C
Fully connected: every system has a partnership with each of the other systems, for example A → B, A → C, A → D, B → D, and C → D
Daisy chained: A → B → C → D, one after the other

Figure 10-17. Multi-cluster Remote Copy topologies

Multiple system mirroring allows for various partnership topologies. Each Storwize V7000 system
can maintain up to three partner system relationships, which allows as many as four systems to be
directly associated with each other. This Storwize V7000 partnership capability enables the
implementation of disaster recovery (DR) solutions.
By using a star topology, you can migrate applications by using a process, such as the process that
is described in the following example:
1. Suspend application at A.
2. Remove the A → B relationship.
3. Create the A → C relationship (or the B → C relationship).
A fully connected mesh is one in which every system has a partnership with each of the three other
systems. This topology allows volumes to be replicated between any pair of systems, for example
A → B, A → C, and B → C.
All of the preceding topologies are valid for an intermix of the IBM SAN Volume Controller with the
Storwize V7000, provided the Storwize V7000 is set to the replication layer and is running IBM
Spectrum Virtualize code 6.3.0 or later.
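A hedged CLI sketch of the star-topology migration steps listed above (the relationship, volume, and system names are illustrative; suspending the application is done outside the CLI):

   svctask stoprcrelationship REL_AB      (after suspending the application at A)
   svctask rmrcrelationship REL_AB        (remove the A → B relationship)
   svctask mkrcrelationship -master VOL_A -aux VOL_C -cluster system_C -name REL_AC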


Defining a Storwize V7000 to Storwize partnership

Diagram: a partnership between Storwize V7000 cluster A and Storwize V7000 cluster B, plus partnerships involving SWV7K system A (layer = replication), system B (layer = storage), system C (layer = storage), and system D (layer = storage), illustrating which layer settings permit Remote Copy partnerships and which permit virtualization.

Figure 10-18. Defining a Storwize V7000 to Storwize partnerships

A Storwize V7000 system is configured with one, and only one, other system in any given
partnership relationship for intercluster mirroring. The partnership must be defined on both
systems.
To facilitate a replication partnership of volumes between two systems, each system has a layer
attribute with a value of either replication or storage. The layer attribute is also used to enable one
Storwize system to virtualize and manage another Storwize system.
The rules of usage for the layer attribute are:
• The Storwize V7000 operates only with a layer value of replication; its layer value cannot be
changed.
• The Storwize system has a default layer value of storage.
• A Remote Copy partnership can be formed only between two partners with the same layer
value. A partnership between a Storwize V7000 and a Storwize system requires the Storwize
system to have a layer value of replication.
• A Storwize V7000 cluster can virtualize a Storwize system only if the Storwize system has a
layer value of storage.
• A Storwize system with a layer value of replication can virtualize another Storwize system with a
layer value of storage.

• If the connection is broken between the IBM Storwize V7000 systems that are in a partnership,
all (intercluster) MM/GM relationships enter a Disconnected state.


Creating a partnership relationship using CLI

• Use the lspartnershipcandidate command to list the systems that are available for setting up a two-system partnership (not supported on IP partnerships).
• To create a multiple or one-way MM/GM partnership between two Storwize V7000 systems (local and remote):
  - Use the mkfcpartnership command for traditional Fibre Channel (FC or FCoE) connections.
  - Use mkippartnership for IP-based connections.
  - Apply the commands on both systems.
  - When the partnership is created, specify the bandwidth to be used by the background copy process (defaults to 50 MBps).

Figure 10-19. Creating a partnership relationship using CLI

Use the lspartnershipcandidate command to list the systems that are available for setting up a
two-system partnership. This command is a prerequisite for creating MM/GM relationships. This
command is not supported on IP partnerships. Use mkippartnership for IP
connections.
To create an IBM Storwize V7000 system partnership, use the mkfcpartnership command for
traditional Fibre Channel (FC or FCoE) connections or mkippartnership for IP-based
connections. To establish a fully functional MM/GM partnership, you must issue this command on
both systems. This step is a prerequisite for creating MM/GM relationships between volumes on the
IBM Storwize V7000 systems.
When the partnership is created, you can specify the bandwidth to be used by the background copy
process between the local and remote IBM Storwize V7000 system. If it is not specified, the
bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or equal to
the bandwidth that can be sustained by the intercluster link.
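As a hedged sketch (the system names and bandwidth figures are illustrative, and the mkfcpartnership parameter names shown here follow later Spectrum Virtualize releases, so consult the CLI reference for your code level), the partnership could be defined from each side:

   On the local system:  svctask mkfcpartnership -linkbandwidthmbits 1000 -backgroundcopyrate 50 remote_system
   On the remote system: svctask mkfcpartnership -linkbandwidthmbits 1000 -backgroundcopyrate 50 local_system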


Managing partnership relationship using CLI

• You can view all of these parameter values by using the lssystem <system_name> command.

   IBM_2076:PINK_SWV7K:PINKadmin>lssystem
   id 00000200A2E000B6
   name PINK_SWV7K
   ...
   layer storage

  With layer = storage (the default), this SWV7K can provide storage to a Storwize V7000 cluster or to another SWV7K.

• To change system parameters specific to any remote copy or Global Mirror only, use the chsystem command.
  - Update the layer value prior to zoning the ports of the system with ports of the Storwize V7000 cluster or another SWV7K.

   IBM_Storwize V009B:V009B1-admin>chsystem -layer replication
   IBM_Storwize V009B:V009B1-admin>lssystem
   id 00000200A2E000B6
   name PINK_SWV7K
   ...
   layer replication

  With layer = replication, this SWV7K can partner with a Storwize V7000 cluster or another SWV7K for Metro/Global Mirror.

Figure 10-20. Managing partnership relationship using CLI

Use the lssystem command to list the layer value setting of a Storwize system. The layer value
defaults to storage.
Typically, changing the layer value is done at initial system setup or as part of a major
reconfiguration event. To change the layer value, the Storwize system must not be zoned with
ports of other Storwize V7000 or Storwize systems; this might require changes to the SAN
zoning configuration.
Use the chsystem -layer command to change the layer setting to either replication or storage.


Creating a MM/GM relationship

• Use the mkrcrelationship command to create a new MM/GM relationship.
  - A Metro Mirror relationship is defined by default; use the -global optional parameter to create a GM relationship.
• Use the lsrcrelationshipcandidate command to list the volumes that are eligible to form an MM/GM relationship.

Diagram: a standalone Metro Mirror (MM) or Global Mirror (GM) relationship from a MASTER (source) volume to an AUXILIARY (target) volume of the same size. The copy direction from primary to secondary can be manipulated, and a synchronous intracluster copy must be within the same I/O group.

Figure 10-21. Creating a MM/GM relationship

When two volumes are paired using FlashCopy they are said to be in a mapping. When two
volumes are paired using Metro Mirror or Global Mirror, they are known to be in a relationship.
Use the mkrcrelationship command to create a new MM/GM relationship. This relationship
persists until it is deleted. If you do not use the -global optional parameter, a Metro Mirror
relationship is created instead of a Global Mirror relationship.
You can use the lsrcrelationshipcandidate command to list the volumes that are eligible to
form an MM/GM relationship.
When the command is issued, you can specify the master volume name and auxiliary system to list
the candidates that comply with the prerequisites to create a MM/GM relationship. If the command
is issued with no parameters all of the volumes that are not disallowed by another configuration
state, such as being a FlashCopy target, are listed.
A MM/GM relationship pairs two volumes (a master volume and an auxiliary volume) so that
updates made by an application to one volume are mirrored on the other. The master volume
contains the production data for application access, and the auxiliary volume is a duplicate copy to
be used in disaster recovery scenarios. For the duration of the relationship, the master and auxiliary
two separate Storwize V7000 systems. The two volumes must be the same size.

For intracluster copy, they must be in the same I/O group. The master and auxiliary volume cannot
be in an existing relationship and they cannot be the target of a FlashCopy mapping. This
command returns the new relationship (relationship_id) when successful.
When a relationship is initially created, the master volume is assigned the role of the primary
volume, containing a valid copy of data for application read/write access; the auxiliary volume is
assigned the role of the secondary volume, containing a valid copy of data, but it is not available for
application write operations. The copy direction is from primary to secondary. The copy direction
can be manipulated.
A relationship that is not part of a consistency group is called a standalone relationship.
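A minimal sketch of checking eligibility before creating a relationship (the volume name is illustrative, and the parameter forms are an assumption to be checked against the CLI reference):

   svctask lsrcrelationshipcandidate -master WINES_M -aux remote_system_id
   (lists the auxiliary volumes of matching size that are eligible to pair with WINES_M)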


Creating an MM/GM consistency group

• Multiple relationships are connected together so that all remote DR copies update in a consistent way.
  - Up to 256 consistency groups
• Use the mkrcconsistgrp command to create an empty MM/GM consistency group.
  - The consistency group name must be unique across all consistency groups.
  - No relationships are defined initially; use the chrcrelationship command to add MM/GM relationships to the consistency group.

Diagram: a consistency group is a container for one or more MM/GM relationships and starts in the Empty state. For example, a group holding a 30 GB DATA relationship and a 1 GB LOG relationship provides an atomic copy of multiple volumes.

Figure 10-22. Creating an MM/GM consistency group

Similar to FlashCopy, a consistency group enables the grouping of one or more relationships so
that they are manipulated in unison.
A consistency group can contain zero or more relationships. All relationships in the group must
have matching master and auxiliary clusters and the same copy direction.
Use the mkrcconsistgrp command to create an empty MM/GM Consistency Group.
The MM/GM consistency group name must be unique across all consistency groups that are known
to the systems owning this consistency group. If the consistency group involves two systems, the
systems must be in communication throughout the creation process.
The new consistency group does not contain any relationships and is in the Empty state.
You can add MM/GM relationships to the group (upon creation or afterward) by using the
chrcrelationship command, or a relationship can remain a standalone MM/GM relationship if no
consistency group is specified.
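A hedged CLI sketch of the DATA/LOG example above (the group, relationship, and remote system names are illustrative):

   svctask mkrcconsistgrp -cluster remote_system_id -name DB_CG    (create the empty group)
   svctask chrcrelationship -consistgrp DB_CG REL_DATA             (add the DATA relationship)
   svctask chrcrelationship -consistgrp DB_CG REL_LOG              (add the LOG relationship)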


Partnerships: Supported code levels matrix

Matrix legend:
• Supported
• Supported for:
  - Partnerships between SVC systems
  - Partnerships between V7000 systems
  - Partnerships between an SVC and a V7000, which require both systems to be running software code 6.3.0 or later
• Not supported

Figure 10-23. Partnerships: Supported code levels matrix

This IBM support document provides a compatibility table for Metro Mirror and Global Mirror
relationships between SAN Volume Controller and Storwize family system software versions. A
partnership can be formed across Storwize V7000 clusters and Storwize systems (with layer set to
replication) running differing code levels.
An up-to-date code level compatibility matrix for intercluster Remote Copy is maintained by
Storwize V7000 support personnel.
There are some limitations that apply. Note 1 indicates that Global Mirror with Change Volumes
(GMCV) relationships are not supported between 7.1.0 and earlier levels and 7.5.0 and later levels.
This restriction does not affect Metro Mirror or Global Mirror (cycling mode set to 'none')
relationships. A concurrent upgrade path is available by first upgrading the down-level system to
7.2.0 or later.
When planning an upgrade, refer to the concurrent compatibility references and release notes for
the new software level for any additional restrictions that may apply.


Partnerships code compatibility

• The code level of each system must be supported for interoperability with the code level of every other system in the network, even if there is no direct partnership in place between systems.

Diagram: Storwize V7000 cluster A has a direct partnership with cluster B, and cluster B has a direct partnership with cluster C; clusters A and C have no direct partnership but must still run compatible code levels.

Figure 10-24. Partnerships code compatibility

In a network of connected systems, the code level of each system must be supported for
interoperability with the code level of every other system in the network. This applies even if there is
no direct partnership in place between systems. For example, in the figure, even though
system A has no direct partnership with system C, the code levels of A and C must be compatible,
as well as the partnerships between A-B and B-C.


Remote Copy replication sequence of events

1. Establish partnership: establish the partnership between the two Storwize V7000 clusters
   svctask mkpartnership
2. Create: define the consistency group and relationships (state: Empty)
   svctask mkrcconsistgrp and svctask mkrcrelationship
3. Start: synchronize master and auxiliary (Inconsistent_stopped → Inconsistent_copying → Consistent_synchronized)
   svctask startrcconsistgrp or svctask startrcrelationship
4. Stop: stop the copy (Idling / Consistent_stopped)
   svctask stoprcconsistgrp or svctask stoprcrelationship
5. Switch direction: reverse the roles for the copy
   svctask switchrcconsistgrp or svctask switchrcrelationship

Figure 10-25. Remote Copy replication sequence of events

A series of events or steps need to occur to establish a Metro/Global Mirror process:


Establish partnership: Identify a remote cluster as a partner to a given cluster. This task must be
done on both clusters to establish a fully configured two-way partnership.
Create:
• Define a consistency group to contain related relationships and identify a remote or auxiliary
cluster for intercluster copies. The state of the consistency group starts out initially as empty.
• Define a relationship between the master and auxiliary volumes. The pair of volumes must be
the same size and must not be in any other Metro/Global Mirror relationships. The relationship
might be placed into a consistency group or left standalone. At this point the state of the
relationship between the two volumes is inconsistent_stopped.
Start:
• Activating the relationship starts the background copy process to clone the auxiliary volume from
the master volume. During this time, the state is set to inconsistent_copying. After the
background copy process completes, the state transitions to consistent_synchronized.
Subsequent write operations are duplicated on both volumes.
Stop:

• Stop the relationship or the write synchronization between the two volumes. If the auxiliary
volume was granted write access with the stop command, it transitions to the idling state;
otherwise it is placed in the consistent_stopped state.
Switch:
• Reverse the direction of the copy so that the auxiliary volume becomes the primary volume for
the copy process.
• Remote mirroring is also referred to as Remote Copy (rc), hence all the commands reflect the
acronym of “rc”.
• Similar to FlashCopy, SNMP traps can be generated on state change events.
When the two clusters can communicate, the clusters and the relationships spanning them are
described as connected. When they cannot communicate, the clusters and relationships spanning
them are described as disconnected.
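A hedged sketch of the stop and switch operations at the consistency-group level (the group name is illustrative; -access grants write access to the auxiliary volumes, and switching applies to a consistent synchronized group):

   svctask stoprcconsistgrp -access DB_CG         (auxiliary copies become writable; the group enters the idling state)
   svctask switchrcconsistgrp -primary aux DB_CG  (reverse roles so the auxiliary volumes become the primaries)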


Spectrum Copy Services: Remote Copy


• Remote Copy
ƒ Metro Mirror and Global Mirror
ƒ MM/GM Partnership and Relationship
ƒ Connectivity
ƒ Examples of MM/GM configurations


Figure 10-26. Spectrum Copy Services: Remote Copy

This topic examines the functions of Remote Copy Services Metro Mirror and Global Mirror.


Planning for remote copy services

Diagram: a Storwize V7000 (or a Storwize system with layer = replication) at Site A and another at Site B, each with four nodes and attached hosts and controllers. Partnerships are defined between the sites, and zones (for example, Zone 1 and Zone 3) connect node ports across the fabrics.

Figure 10-27. Planning for remote copy services

Designed for disaster recovery, Remote Copy allows volumes at a primary site to be mirrored to a
secondary site. By using the Metro Mirror and Global Mirror Copy Services features, you can set up
a relationship between two volumes so that updates that are made by an application to one volume
are mirrored on the other volume. The volumes can be in the same Storwize V7000 clustered
system or on two separate Storwize V7000 systems.
The graphic supports the recommended zoning guideline for Remote Copy:
• For each node that is to be zoned to a node in the partner system, zone exactly two Fibre
Channel ports.
• For a dual-redundant fabric, split the two ports from each node between the dual fabric so that
exactly one port from each node is zoned with the partner nodes.


Connecting partnership using ISLs

Diagram: a Storwize V7000 at Site A and another at Site B, each virtualizing managed disks (MDisks) from back-end systems such as DS3000, DS4000, DS5000, DS8000, XIV, and FlashSystem. The two sites' switches are connected by an interswitch link (ISL), transmitting by using FCP, FCIP, or TCP/IP with WAN acceleration.

Figure 10-28. Connecting partnership using ISLs

The SAN fabrics at the two sites are connected with interswitch links (ISL) or SAN distance
extension solutions. For testing or continuous data protection purposes, intracluster mirroring
operations are also supported.
Implicit with connecting the two clusters with ISLs is that the two fabrics of the two clusters must
merge (excluding non-fabric merge solutions from SAN vendors), that is, no switch domain ID
conflicts, no conflicting switch operating parameters, and no conflicting zone definitions. A zone
definition containing all the Storwize V7000 ports of both clusters must be added to enable the two
Storwize V7000 cluster nodes to communicate with one another.
The ISL is also referred to as the intercluster link. It is used to control state changes and coordinate
updates.
The maximum bandwidth for the background copy processes between the clusters must be
specified. Set this value to less than the bandwidth that can be sustained by the intercluster link. If
the parameter is set to a higher value than the link can sustain, the background copy processes
use the actual available bandwidth.
A mirroring relationship defines a pairing of two volumes with the expectation that these volumes
will contain exactly the same data through the mirroring process.


Metro and Global Mirror over longer distances

Diagram: Storwize V7000 pairs at Site A and Site B, each virtualizing managed disks (MDisks) from back-end systems (DS3000, DS4000, DS5000, DS8000, XIV, FlashSystem), with the interswitch link (ISL) tunneled as FCIP across an IP WAN.

FCIP can be implemented with:
• Cisco MDS 9000 IPS module (VSAN)
• Brocade Multiprotocol Router (LSAN)

The round-trip latency maximum is 250 ms for GM.

Figure 10-29. Metro and Global Mirror over longer distances

The FCIP protocol extends the distance between SANs by enabling two Fibre Channel switches to
be connected across an IP network. The IP network span is transparent to the FC connection. The
two SANs merge as one fabric as FCIP implements virtual E_Ports or a stretched ISL between the
two ends of the connection. Fibre Channel frames are encapsulated and tunneled through the IP
connection. The UltraNet Edge Router is an example of a product that implements FCIP where the
two edge fabrics merge as one.
SAN extended distance solutions where the SAN fabrics do not merge are also supported. Visit the
Storwize V7000 support page (http://www.ibm.com/storage/support/2145) for more information
regarding:
• The Cisco MDS implementation of InterVSAN Routing (IVR).
• The Brocade SAN Multiprotocol Router implementation of logical SANs (LSANs).
Distance extension using extended distance SFPs is also supported.
The term - intercluster link - is used to generically include the various SAN distance extension
options that enable two Storwize V7000 clusters to be connected and form a partnership.


Spectrum Copy Services: Remote Copy


• Remote Copy
ƒ Metro Mirror and Global Mirror
ƒ MM/GM Partnership and Relationship
ƒ Connectivity
ƒ Examples of MM/GM configurations


Figure 10-30. Spectrum Copy Services: Remote Copy

This topic shows examples of Remote Copy Services Metro Mirror and Global Mirror configurations.


Copy Services: Remote Copy configurations

Remote Copy option
• Creates either an intracluster or intercluster consistency group container
• Creates relationships using user-provided targets (from any pool)
• Places a relationship in a consistency group if desired by the user

Partnerships option
• Defines a partnership between this cluster and another cluster

Figure 10-31. Copy Services: Remote Copy configurations

The Storwize V7000 management GUI supports the Remote Copy functionality with two menu
options within the Copy Services function icon:
• The Remote Copy menu option is designed to create, display, and manage consistency groups
and relationships for Metro Mirror and Global Mirror.
• The Partnership menu option is designed to define, display, and manage partnerships between
Storwize V7000 clusters.


Remote Copy scenarios: Example environment

Diagram: three cluster partnerships are defined among NAVY_Storwize V7000, OLIVE_Storwize V7000, and PINK_SWV7K. One standalone MM relationship is defined from WINES_M (on NAVY) to WINES_A (on OLIVE), and one standalone GM relationship is defined from WISKEE_GM (on PINK) to WISKEE_GA (on NAVY), later changed to GMCV.

Figure 10-32. Remote Copy scenarios: Example environment

This example illustrates the process of establishing the cluster partnerships for these scenarios:


Define a standalone Metro Mirror relationship, from NAVY_Storwize V7000 to OLIVE_Storwize
V7000 with an initial copy direction from the NAVY_Storwize V7000 to the OLIVE_Storwize V7000.
The copy direction will be switched to the remote site and subsequently back to the local site. The
relationship contains an application volume:
• WINES_M as the primary or master volume.
• WINES_A as the secondary or auxiliary volume.
Define a standalone Global Mirror relationship, from PINK_SWV7K to NAVY_Storwize V7000, with
a copy direction from PINK_SWV7K to NAVY_Storwize V7000. The relationship contains an
application volume:
• WISKEE_GM as the primary or master volume.
• WISKEE_GA as the secondary or auxiliary volume.
This relationship will subsequently be modified to operate in cycling mode with change volumes.


Define partnership between two Storwize V7000s

GUI view on NAVY_Storwize V7000 while creating a partnership with OLIVE_Storwize V7000. The bandwidth value is a cap on the background copy rate that all remote copies are allowed to use to synchronize.

Figure 10-33. Define partnership between two Storwize V7000s

To implement mirroring between two volumes from two different Storwize V7000 clusters, a cluster
partnership must be defined first. Cluster partnerships must be defined by each of the two Storwize
V7000 clusters forming the partnership.
After SAN connectivity between two Storwize V7000 clusters has been established, define the
partnership from one cluster to another using Copy Services > Partnerships, then click the New
Partnership button.
In this example, after SAN zoning has been updated, NAVY_Storwize V7000 is able to detect the
OLIVE_Storwize V7000 as a system eligible to form a partnership. The Create button is clicked to
establish the partnership from the perspective of the NAVY_Storwize V7000.
The Bandwidth value at the partnership level defines the maximum background copy rate (in MBps)
that Storwize V7000 Remote Copy would allow as the sum of background copy synchronization
activity for all relationships from the direction of this cluster to its partner. The background copy
bandwidth for a given pair of volumes is set to a maximum of 25 MBps by default. Both of these
bandwidth rates might be modified dynamically.


Partnership established from NAVY to OLIVE

GUI view on NAVY_Storwize V7000 showing its partnership with OLIVE_Storwize V7000.

Figure 10-34. Partnership established from NAVY to OLIVE

The GUI generates the mkpartnership command to establish a partnership with the
OLIVE_Storwize V7000. Each cluster has a cluster name and a hexadecimal cluster ID. The GUI
generated commands tend to refer to a cluster by its cluster ID instead of its name.
The partnership is now partially configured; as the attempt to form a partnership must also occur
from the partner-to-be.


Define partnership from OLIVE to NAVY Storwize V7000s

GUI view on OLIVE_Storwize V7000 while creating the partnership back to NAVY_Storwize V7000.

Figure 10-35. Define partnership from OLIVE to NAVY Storwize V7000s

Repeat the same steps to establish the partnership from the OLIVE_Storwize V7000 to the
NAVY_Storwize V7000. Once completed, a fully configured partnership is said to exist between the
two clusters.
The Bandwidth value at the partnership level defines the maximum background copy rate (in
MBps) that Storwize V7000 Remote Copy would allow as the sum of background copy
synchronization activity for all relationships from the direction of this cluster to its partner. The
bandwidth value does not have to be identical between the two partner clusters.


Partnership established across both clusters

GUI views on OLIVE_Storwize V7000 and NAVY_Storwize V7000 showing the fully configured partnership.

Figure 10-36. Partnership established across both clusters

In this scenario, the OLIVE_Storwize V7000 is running with the Storwize V7000 v6.4.1.4 software
level and NAVY_Storwize V7000 is at Storwize V7000 v7.1.0.2. Partnerships have been defined
from both clusters, and the partnership between the two clusters has transitioned from partially to
fully configured.
The Storwize V7000 cluster name is always displayed at the root of the bread crumb pathing
beginning with v6.2 of the GUI. When more than one Storwize V7000 cluster is being managed, it is
important to be aware of the cluster to which the GUI is connected so that any configuration
manipulation is performed on the correct cluster.


Establish cluster partnerships using the CLI


IBM_2076:PINK_SWV7K:PINKadmin>lspartnershipcandidate
id configured name
0000020062C17C56 no OLIVE_Storwize V7000
0000020063617C80 no NAVY_Storwize V7000
IBM_2076:PINK_SWV7K:PINKadmin>mkpartnership -bandwidth 50 OLIVE_Storwize V7000
IBM_2076:PINK_SWV7K:PINKadmin>mkpartnership -bandwidth 50 NAVY_Storwize V7000
IBM_2076:PINK_SWV7K:PINKadmin>lspartnership
id name location partnership bandwidth
00000200A8402FAF PINK_SWV7K local
0000020062C17C56 OLIVE_Storwize V7000 remote partially_configured_local 50
0000020063617C80 NAVY_Storwize V7000 remote partially_configured_local 50

IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>lspartnershipcandidate
id configured name
00000200A8402FAF yes PINK_SWV7K
IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>mkpartnership -bandwidth 50 PINK_SWV7K
IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>lspartnership
id name location partnership bandwidth
0000020062C17C56 OLIVE_Storwize V7000 local
0000020063617C80 NAVY_Storwize V7000 remote fully_configured 75
00000200A8402FAF PINK_SWV7K remote fully_configured 50

IBM_2145:NAVY_Storwize V7000:NAVYadmin>mkpartnership -bandwidth 50 PINK_SWV7K
IBM_2145:NAVY_Storwize V7000:NAVYadmin>lspartnership
id name location partnership bandwidth
0000020063617C80 NAVY_Storwize V7000 local
0000020062C17C56 OLIVE_Storwize V7000 remote fully_configured 50
00000200A8402FAF PINK_SWV7K remote fully_configured 50

Figure 10-37. Establish cluster partnerships using the CLI

The CLI can be used directly to create partnerships between clusters.
After SAN connectivity among the clusters has been established, the
lspartnershipcandidate command is used to identify available clusters for partnerships.
As seen previously in the GUI partnership examples, the mkpartnership command is used to
establish a partnership between two clusters.
Note that the lspartnership command output shows each cluster ID and cluster name, as well as
the partnership state.
Each cluster can participate in up to three cluster partnerships.
For ease of identification, the cluster name is always part of the command prompt.


Updated partnerships: GUI views


Figure 10-38. Updated partnerships: GUI views

At this point in the example scenario, each cluster or system is in fully configured partnerships
with two partners.


Metro Mirror relationship: Scenario steps


Define one standalone MM relationship: WINES_M on the NAVY_Storwize V7000 mirrored to
WINES_A on the OLIVE_Storwize V7000

Scenario Steps:
1. Start Relationship, write on master
2. Stop with write access, writes from both sites
3. Start Relationship, primary = aux with force
4. Stop at remote site with write access
5. Start at remote site, primary = aux with force
6. Switch mirroring direction from local site

Figure 10-39. Metro Mirror relationship: Scenario steps

This example illustrates a standalone Metro Mirror relationship where the NAVY_Storwize V7000 is
the local site and the OLIVE_Storwize V7000 is the remote site.
The application being mirrored is initially operating at the local site. It is then moved to the remote
site (due to either planned or unplanned events). Eventually, operations for the application are
returned to the local site.
Metro Mirror is a synchronous copy environment that provides a recovery point objective (RPO) of
zero (no data loss). This example illustrates the Storwize V7000 implementation of remote mirroring
along with its terminology.


Define Metro Mirror relationship (1 of 2)


Figure 10-40. Define Metro Mirror relationship (1 of 2)

The WINES_M volume on the NAVY_Storwize V7000 (local) is to be in a Metro Mirror relationship
with the WINES_A volume on the OLIVE_Storwize V7000 (remote). The copy direction of the
mirroring relationship is controlled by where the relationship is defined - thus, this relationship must
be defined from the NAVY_Storwize V7000.
To define a relationship with the GUI from the local cluster, click Copy Services > Remote Copy >
New Relationship to open the New Relationship dialog.
Select the type of relationship desired (Metro Mirror, Global Mirror, or Global Mirror with Change
Volumes), and specify whether this relationship is an intracluster or intercluster relationship. For an
intercluster relationship, the remote cluster name needs to be chosen.
The local cluster is referred to as the master cluster and the local volume is called the master
volume. The remote cluster is referred to as the auxiliary cluster and the remote volume is called
the auxiliary volume.
From the local NAVY_Storwize V7000, the WINES_M volume is identified as the master volume.
Communication between the two clusters caused a list of eligible auxiliary volumes to be sent from
the auxiliary OLIVE_Storwize V7000 to the master NAVY_Storwize V7000.
An eligible auxiliary volume must be the same size as the master volume and must not already be
the auxiliary in another Remote Copy relationship.


Define Metro Mirror relationship (2 of 2)


Figure 10-41. Define Metro Mirror relationship (2 of 2)

Indicate whether the two volumes are already synchronized:


• The choice of Yes indicates that it is not necessary to perform the background copy of the initial
content from the master volume to the auxiliary volume. This option is designed primarily for
new applications where no data exists yet. The master volume contains no relevant data since
nothing has been written to it yet.
• The choice of No indicates the two volumes are not synchronized but need to be. The content
of the master volume should be copied to the auxiliary volume. This option is a safer
mechanism to ensure that master and auxiliary volumes will be synchronized.
Lastly, the administrator indicates whether mirroring should start immediately after the relationship
is defined. In this example, the choice was made to only define the relationship. The
mkrcrelationship command generated by the GUI identifies the auxiliary volume, the auxiliary
cluster ID, and the master volume.
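A sketch of the equivalent CLI commands, using the names from this scenario (Metro Mirror is the
default relationship type when -global is not specified; the -sync flag in the second form
corresponds to the "already synchronized" choice and is shown for illustration only):

IBM_2145:NAVY_Storwize V7000:NAVYadmin>mkrcrelationship -master WINES_M -aux WINES_A -cluster OLIVE_Storwize V7000
IBM_2145:NAVY_Storwize V7000:NAVYadmin>mkrcrelationship -master WINES_M -aux WINES_A -cluster OLIVE_Storwize V7000 -sync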


Metro Mirror relationship: View from each cluster



Figure 10-42. Metro Mirror relationship: View from each cluster

Even though the relationship was defined from the NAVY_Storwize V7000, the relationship entry
exists in both clusters.
In the NAVY_Storwize V7000 > Copy Services > Remote Copy view, a relationship with the
default name of rcrel0 is shown with an ID of 10. This ID value is derived from the object ID of the
master volume WINES_M.
In the OLIVE_Storwize V7000 > Copy Services > Remote Copy view, a relationship with the
default name of rcrel0 is shown with an ID of 3. This ID value is derived from the object ID of the
auxiliary volume WINES_A.
Based on the choices made when the relationship was defined, the current state of the relationship
is inconsistent_stopped. The contents of the two volumes are not consistent, and the relationship
has not been started yet.
Note the value of the Primary Volume field: It indicates the current copy direction of the
relationship. The volume name listed is the ‘copy from’ volume.


Relationship rcrel0 details


IBM_2145:NAVY_Storwize V7000:NAVYadmin>lsrcrelationship rcrel0
id 10
name rcrel0
master_cluster_id 0000020063617C80
master_cluster_name NAVY_Storwize V7000
master_vdisk_id 10
master_vdisk_name WINE_M
aux_cluster_id 0000020062C17C56
aux_cluster_name OLIVE_Storwize V7000
aux_vdisk_id 3
aux_vdisk_name WINES_A
primary master
consistency_group_id
consistency_group_name
state inconsistent_stopped

IBM_2145:NAVY_Storwize V7000:NAVYadmin>lsvdisk WINE_M
id 10
…..
RC_id 10
RC_name rcrel0
vdisk_UID 60050768018D85F2000000000000000B
throttling 0

IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>lsrcrelationship rcrel0
id 3
name rcrel0
master_cluster_id 0000020063617C80
master_cluster_name NAVY_Storwize V7000
master_vdisk_id 10
master_vdisk_name WINE_M
aux_cluster_id 0000020062C17C56
aux_cluster_name OLIVE_Storwize V7000
aux_vdisk_id 3
aux_vdisk_name WINES_A
primary master
consistency_group_id
consistency_group_name
state inconsistent_stopped

IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>lsvdisk WINE_A
id 3
….
RC_id 3
RC_name rcrel0
vdisk_UID 60050768018B05F15800000000000003
throttling 0

Figure 10-43. Relationship rcrel0 details

The verbose output of the lsrcrelationship command provides more information than the
GUI, in a more compact format.
Observe that the clusters and volumes are identified as either master or auxiliary. Verify the object
IDs of each volume and their correlation to the relationship object IDs between the two clusters.
The primary entry has a value of master - meaning the copy direction is from the master volume to
the auxiliary volume. The GUI, being more user friendly, plugs in the name of the master volume in
its display. Either way, the copy direction is identified by the value contained in the primary entry.
At this point, application writes to the master volume are likely occurring. The relationship copy
direction has been set but mirroring has not been started yet.
Detailed information about the volumes in the relationship can be obtained using the lsvdisk
command with the ID or name of the volume.
If a volume is defined in a Remote Copy relationship, the relationship name and ID are maintained
in the volume entry. Compare the two volume entries: Their RC_name value is the same, but the
RC_id value matches the ID of the individual volumes.
Note the UID of each volume for later reference.


Current content of master volume WINE_M


Figure 10-44. Current content of master volume WINE_M

The master volume, WINES_M, is used by a Windows application. Its current content is shown.
Generally, the auxiliary volume is not assigned to a host until the mirroring operation has been
stopped. The auxiliary volume is not available for write operations and should not be made
available for read operations either.
Again it should be emphasized that Remote Copy is based on block copies controlled by grains of
the owning bitmaps. The copy operation has no knowledge of the OS logical file structures. Folders
used in these examples facilitate easier before/after comparisons and are for illustrative purposes
only.


Start relationship (mirroring) rcrel0


IBM_2145:NAVY_Storwize V7000:NAVYadmin>startrcrelationship rcrel0

IBM_2145:NAVY_Storwize V7000:NAVYadmin>lsrcrelationship rcrel0
id 10
name rcrel0
master_cluster_id 0000020063617C80
master_cluster_name NAVY_Storwize V7000
master_vdisk_id 10
master_vdisk_name WINE_M
aux_cluster_id 0000020062C17C56
aux_cluster_name OLIVE_Storwize V7000
aux_vdisk_id 3
aux_vdisk_name WINES_A
primary master
consistency_group_id
consistency_group_name
state inconsistent_copying
bg_copy_priority 50
progress 5
freeze_time
status online
sync
copy_type metro

IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>lsrcrelationship rcrel0
id 3
name rcrel0
master_cluster_id 0000020063617C80
master_cluster_name NAVY_Storwize V7000
master_vdisk_id 10
master_vdisk_name WINE_M
aux_cluster_id 0000020062C17C56
aux_cluster_name OLIVE_Storwize V7000
aux_vdisk_id 3
aux_vdisk_name WINES_A
primary master
consistency_group_id
consistency_group_name
state inconsistent_copying
bg_copy_priority 50
progress 12
freeze_time
status online
sync
copy_type metro

Figure 10-45. Start relationship (mirroring) rcrel0

A standalone mirroring relationship can be started using the CLI startrcrelationship command.
This command can be issued from either cluster.
After starting, the lsrcrelationship command is issued on each cluster to confirm that
synchronization or background copy has begun. Until the content of the master volume has been
copied to the auxiliary volume, the relationship is in the state of inconsistent_copying.
The background copy rate for a given pair of volumes is set to 25 MBps by default and is subject to
the maximum "copy from" partnership-level bandwidth value, since multiple background copies
might be in progress concurrently. The 25 MBps default rate can be changed with the chsystem
-relationshipbandwidthlimit parameter. Be aware that this value is controlled at the cluster
level: the changed copy bandwidth value applies to the background copy rate of all
relationships.
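For instance, a minimal sketch of raising the per-relationship background copy rate from the
25 MBps default to 50 MBps (the value is illustrative, and the change applies cluster-wide):

IBM_2145:NAVY_Storwize V7000:NAVYadmin>chsystem -relationshipbandwidthlimit 50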


Background copy and auxiliary volume offline


IBM_2145:NAVY_Storwize V7000:NAVYadmin>lsrcrelationshipprogress rcrel0
id progress
10 8
IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>lsrcrelationshipprogress rcrel0
id progress
3 20
IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>lsvdisk -delim , -filtervalue name=WINE*
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count,RC_change,compressed_copy_count
3,WINES_A,0,io_grp0,offline,0,OLIVEDS3K_SASpool,10.00GB,striped,,,3,rcrel0,60050768018B05F15800000000000003,0,1,not_empty,0,no,0
IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>lsrcrelationshipprogress 3
id progress
3 95
IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>lsrcrelationshipprogress 3
id progress
3
IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>lsvdisk -delim , -filtervalue name=WINE*
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count,RC_change,compressed_copy_count
3,WINES_A,0,io_grp0,online,0,OLIVEDS3K_SASpool,10.00GB,striped,,,3,rcrel0,60050768018B05F15800000000000003,0,1,not_empty,0,no,0

Figure 10-46. Background copy and auxiliary volume offline

Progress of the background copy operation can be monitored from either cluster with the
lsrcrelationshipprogress command using either the relationship name or object ID. Its output
displays the copy progress as a percentage to completion. When the progress reaches 100% or
copy completion, the command output displays a null value for the relationship object ID.
While the background copy is in progress, the status of the auxiliary volume is offline. After the copy
is completed, the volume status transitions to online automatically.


Synchronization and mirror suspension


IBM_2145:NAVY_Storwize V7000:NAVYadmin>lsrcrelationship rcrel0
…
state consistent_synchronized
bg_copy_priority 50
…

IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>lsrcrelationship rcrel0
…
state consistent_synchronized
bg_copy_priority 50
…

IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>stoprcrelationship -access rcrel0

IBM_2145:NAVY_Storwize V7000:NAVYadmin>lsrcrelationship 10
…
aux_vdisk_name WINES_A
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type metro

IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>lsrcrelationship 3
…
aux_vdisk_name WINES_A
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type metro

Figure 10-47. Synchronization and mirror suspension

Use the stoprcrelationship command to suspend mirroring relationships. If write access is
desired for the auxiliary volume, the stop command must be issued with the -access keyword.
Use cases for write access to the auxiliary volume include disaster recovery testing, actual disaster
recovery, and application relocation.
The copy direction is now ambiguous: Notice the primary field is blank, and the state of the
relationship has changed to idling. Write activity is allowed on both clusters.
Immediately after the relationship is stopped, it is said to be in_sync. A write operation to either
volume causes the relationship to go out of sync.


Content of volumes: Identical at stop relationship


Figure 10-48. Content of volumes: Identical at stop relationship

The two Remote Desktop sessions show that the content of the two volumes is identical.
The master volume might still be processing application read/write I/Os.
Since the relationship was stopped with write access, read/write I/Os are allowed on the auxiliary
volume as well.


Content of volumes: After independent writes


Figure 10-49. Content of volumes: After independent writes

This drift in the data content of the volumes is illustrated by the different folders being written from
each host to its own drive/volume.
To resume mirroring of the two volumes, a decision has to be made as to which host is deemed to
have the current data.


Out of sync and start of copy relationship


IBM_2145:NAVY_Storwize V7000:NAVYadmin>lsrcrelationship 10
…
sync out_of_sync
copy_type metro

IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>lsrcrelationship 3
…
sync out_of_sync
copy_type metro

IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>startrcrelationship -primary aux -force 3

IBM_2145:NAVY_Storwize V7000:NAVYadmin>lsrcrelationship 10
…
aux_vdisk_name WINES_A
primary aux
consistency_group_id
consistency_group_name
state inconsistent_copying
bg_copy_priority 50
progress 87
freeze_time
status online
sync
copy_type metro

IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>lsrcrelationship 3
…
aux_vdisk_name WINES_A
primary aux
consistency_group_id
consistency_group_name
state inconsistent_copying
bg_copy_priority 50
progress 83
freeze_time
status online
sync
copy_type metro

Figure 10-50. Out of sync and start of copy relationship

In the first pair of CLI outputs listed above, the volumes are out of sync. Any write activity on
either volume causes the relationship to become no longer synchronized.
Since the copy direction is ambiguous when the relationship is in the idling state, at restart, the
copy direction must be specified with the -primary parameter.
If write activity has occurred on either or both volumes, then the two volumes need to be
resynchronized. The -force keyword must be specified to acknowledge the out-of-sync status.
Background copy is invoked to return the relationship to the consistent and synchronized state
again.
In this example, the WINES_A volume is deemed to contain the valid data and the mirroring
direction is to be reversed. The startrcrelationship command is coded with -primary aux and
-force so that the relationship can be returned to the consistent and synchronized state.
Note the primary aux value in the verbose lsrcrelationship command output - the copy
direction is now from the auxiliary volume to the master volume. The auxiliary volume is now
functioning in the primary role.


Auxiliary volume content mirrored to master


IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>stoprcrelationship -access rcrel0

Figure 10-51. Auxiliary volume content mirrored to master

Again, the relationship has been stopped with -access so that the content of the volumes can be
examined.
Because of the change in copy direction (primary=aux), the content of the auxiliary volume
(WINES_A) has been propagated to the master volume (WINES_M).
Another way to look at this - the WINES_A volume is now functioning in the primary role (copy
direction is primary=aux), and its content is being mirrored to the WINES_M volume (now
functioning in the secondary role).


Restart relationship with force


IBM_2145:OLIVE_Storwize V7000:OLIVEadmin> startrcrelationship -primary aux -force 3


Figure 10-52. Restart relationship with force

The relationship is restarted with -force since application write activity is ongoing.


Switch back to original copy direction


IBM_2145:NAVY_Storwize V7000:NAVYadmin>lsrcrelationship -delim ,
id,name,master_cluster_id,master_cluster_name,master_vdisk_id,master_vdisk_name,aux_cluster_id,aux_cluster_name,aux_vdisk_id,aux_vdisk_name,primary,consistency_group_id,consistency_group_name,state,bg_copy_priority,progress,copy_type,cycling_mode,freeze_time
10,rcrel0,0000020063617C80,NAVY_Storwize V7000,10,WINE_M,0000020062C17C56,OLIVE_Storwize V7000,3,WINES_A,aux,,,consistent_synchronized,50,,metro,,
IBM_2145:NAVY_Storwize V7000:NAVYadmin>switchrcrelationship -primary master 10

Figure 10-53. Switch back to original copy direction

When the relationship state is consistent and synchronized, the copy direction can be changed
dynamically with the switchrcrelationship command.
In this example, -primary master resets the copy direction to its original value at the start of
this scenario. Write access is removed from the auxiliary volume, WINES_A, and restored to the
master volume, WINES_M. The master volume is functioning in the primary role again.
Because of the change in write access capability between the two volumes in the relationship, it is
crucial that no outstanding application I/O is in progress when the switch direction command is
issued. Typically the host application would be shut down and restarted for every direction change.
The scenario illustrates that it is very easy to transfer application workloads from one site to
another, for example for disaster recovery tests.


rcrel0 relationship: Original copy direction


IBM_2145:NAVY_Storwize V7000:NAVYadmin>lsrcrelationship 10
id 10
name rcrel0
master_cluster_id 0000020063617C80
master_cluster_name NAVY_Storwize V7000
master_vdisk_id 10
master_vdisk_name WINE_M
aux_cluster_id 0000020062C17C56
aux_cluster_name OLIVE_Storwize V7000
aux_vdisk_id 3
aux_vdisk_name WINES_A
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro

IBM_2145:OLIVE_Storwize V7000:OLIVEadmin>lsrcrelationship 3
id 3
name rcrel0
master_cluster_id 0000020063617C80
master_cluster_name NAVY_Storwize V7000
master_vdisk_id 10
master_vdisk_name WINE_M
aux_cluster_id 0000020062C17C56
aux_cluster_name OLIVE_Storwize V7000
aux_vdisk_id 3
aux_vdisk_name WINES_A
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro

Figure 10-54. rcrel0 relationship: Original copy direction

The direction of the copy is reset back to the original value - primary master. To summarize:
The direction of the copy operation can be reversed, perhaps as part of a graceful failover, when
the relationship is in a synchronized state. The switchrcrelationship command will only succeed
when the relationship is in one of the following states:
• Consistent_synchronized
• Consistent_stopped and in sync
• Idling and in sync
If the relationship is not synchronized, the startrcrelationship command can be used with the
-primary and -force parameters to manage the copy direction.


Update GM relationship to cycling mode (GMCV)


Define one standalone GM relationship, then change it to GMCV: WISKEE_GM on PINK_SWV7K
(master) mirrored to WISKEE_GA on NAVY_SVC (auxiliary)

IBM_2145:PINK_SWV7K:PINKadmin>mkrcrelationship -master WISKEE_GM -aux WISKEE_GA -cluster NAVY_SVC -name WISK_Rel1 -global
RC Relationship, id [2], successfully created


Figure 10-55. Update GM relationship to cycling mode (GMCV)

This existing Global Mirror relationship will now be updated to cycling mode.


Create change volume for the master volume



Figure 10-56. Create change volume for the master volume

Use the GUI to allocate the master change volume by right-clicking the relationship entry at the
master system site (PINK_SWV7K). Select Global Mirror Change Volumes > Create New to
cause the GUI to generate the appropriate mkvdisk command to create a Thin-Provisioned change
volume - based on the size and pool of the master volume. The chrcrelationship command is
then used to add the newly created master change volume to the relationship.
The relationship can be active (it does not need to be idle or stopped) to add a change volume.
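A minimal sketch of the CLI step that attaches the change volume, assuming a suitable
Thin-Provisioned volume named WISKEE_GMFC has already been created in the same pool and
with the same size as the master volume:

IBM_2076:PINK_SWV7K:PINKadmin>chrcrelationship -masterchange WISKEE_GMFC WISK_rel1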


Properties of GUI created master change volume



Figure 10-57. Properties of GUI created master change volume

The newly created change volume can be seen in the Master Change Volume column of the
relationship entry - notice its default name.
Right-click the relationship entry to select Global Mirror Change Volumes > Properties (Master)
to view details of the change volume. Notice it is already defined in two FlashCopy mappings.


Rename master change volume


Figure 10-58. Rename master change volume

The volume Edit interface can be used to change the default name of the change volume to one
that is more relevant to the specific Global Mirror relationship.
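The same rename can also be done from the CLI; a sketch, assuming the GUI assigned the default
name vdisk0 as shown on the previous panel:

IBM_2076:PINK_SWV7K:PINKadmin>chvdisk -name WISKEE_GMFC vdisk0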


Create change volume for the auxiliary volume


Figure 10-59. Create change volume for the auxiliary volume

A change volume is also needed at the auxiliary cluster site (NAVY_SVC) of the relationship.
Right-click the relationship entry, then select Global Mirror Change Volumes > Create New to
cause the GUI to generate the appropriate mkvdisk command to create a Thin-Provisioned change
volume - the same size and in the same pool as the auxiliary volume. The chrcrelationship
command is then used to add the newly created auxiliary change volume to the relationship.
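The corresponding CLI step at the auxiliary cluster is analogous; a sketch, assuming the change
volume WISKEE_GAFC already exists there:

IBM_2145:NAVY_SVC:NAVYadmin>chrcrelationship -auxchange WISKEE_GAFC WISK_rel1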


Review GUI created auxiliary change volume


Figure 10-60. Review GUI created auxiliary change volume

The newly created change volume can be displayed in the Auxiliary Change Volume column of
the relationship entry.
Right-click the relationship entry to select Global Mirror Change Volumes > Properties (Auxiliary)
to view details of this change volume.
Again a more appropriate name can be assigned to the newly created auxiliary change volume.
Notice this volume is already defined in two FlashCopy mappings as well.


FlashCopy mappings generated by GMCV



Figure 10-61. FlashCopy mappings generated by GMCV

Go to the Copy Services > FlashCopy Mappings view of each partnership system to view the two
FlashCopy mappings automatically defined by SVC Remote Copy as a result of adding change
volumes to the Global Mirror relationship.
Examine the background copy rates of the two mappings at each site:
Mapping with a background copy rate of 0: For each cycle, this mapping provides a snapshot or
consistent point-in-time image of the source volume (either master or auxiliary volume).
Mapping with a background copy rate of 50: For each cycle, this mapping provides a means to
recover the source volume (master or auxiliary volume) to the prior recovery point if needed. Under
normal circumstances this mapping is not started.
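From the CLI, these Remote Copy-controlled mappings can be listed with a filter; a sketch, with the
assumption that rc_controlled is a filterable lsfcmap attribute at your code level:

IBM_2076:PINK_SWV7K:PINKadmin>lsfcmap -filtervalue rc_controlled=yes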


FlashCopy mappings for the master volume


IBM_2076:PINK_SWV7K:PINKadmin>lsfcmap -delim , 0
id,0
name,fcmap0
source_vdisk_id,2
source_vdisk_name,WISKEE_GM
target_vdisk_id,3
target_vdisk_name,WISKEE_GMFC
group_id,
group_name,
status,idle_or_copied
progress,0
copy_rate,0
start_time,
dependent_mappings,0
autodelete,off
clean_progress,100
clean_rate,50
incremental,off
difference,100
grain_size,256
IO_group_id,0
IO_group_name,io_grp0
partner_FC_id,1
partner_FC_name,fcmap1
restoring,no
rc_controlled,yes
(fcmap0 is used during cycling for COWs due to writes to the master)

IBM_2076:PINK_SWV7K:PINKadmin>lsfcmap -delim , 1
id,1
name,fcmap1
source_vdisk_id,3
source_vdisk_name,WISKEE_GMFC
target_vdisk_id,2
target_vdisk_name,WISKEE_GM
group_id,
group_name,
status,idle_or_copied
progress,0
copy_rate,50
start_time,
dependent_mappings,0
autodelete,off
clean_progress,100
clean_rate,50
incremental,off
difference,0
grain_size,256
IO_group_id,0
IO_group_name,io_grp0
partner_FC_id,0
partner_FC_name,fcmap0
restoring,no
rc_controlled,yes
(fcmap1 is used only if the master needs to be recovered)

Figure 10-62. FlashCopy mappings for the master volume

Examine the details of both FlashCopy mappings associated with the master volume and note that
they are controlled by SVC Remote Copy (rc_controlled, yes).
When a cycle is automatically started, the delta of changed blocks (grains) of the master volume is
identified for transmission from the master cluster to the auxiliary cluster.
On-going application writes might be occurring on the master volume during transmission. The
snapshot FlashCopy mapping (from the master volume to its change volume) is automatically
started at the beginning of the cycle so that incoming writes to the master volume cause
copy-on-write blocks (COWs) to be written to its change volume.
The changed blocks (grains) being sent are read through the master change volume mapping. The
actual data is obtained from the master volume (if no subsequent writes occurred) or from its
change volume (if the blocks were updated on the master volume after the cycle began).
The FlashCopy mapping from the master change volume to the master volume is only used if a
recovery situation is detected. The point-in-time snapshot change volume can be used to recover
the content of the master volume.


FlashCopy mappings for the auxiliary volume


IBM_2145:NAVY_SVC:NAVYadmin>lsfcmap -delim , 1
id,1
name,fcmap2
source_vdisk_id,4
source_vdisk_name,WISKEE_GA
target_vdisk_id,16
target_vdisk_name,WISKEE_GAFC
group_id,
group_name,
status,idle_or_copied
progress,0
copy_rate,0
start_time,
dependent_mappings,0
autodelete,off
clean_progress,100
clean_rate,50
incremental,off
difference,100
grain_size,256
IO_group_id,1
IO_group_name,io_grp1
partner_FC_id,2
partner_FC_name,fcmap0
restoring,no
rc_controlled,yes
(fcmap2 is used during cycling for COWs due to updates to the auxiliary)

IBM_2145:NAVY_SVC:NAVYadmin>lsfcmap -delim , 2
id,2
name,fcmap0
source_vdisk_id,16
source_vdisk_name,WISKEE_GAFC
target_vdisk_id,4
target_vdisk_name,WISKEE_GA
group_id,
group_name,
status,idle_or_copied
progress,0
copy_rate,50
start_time,
dependent_mappings,0
autodelete,off
clean_progress,100
clean_rate,50
incremental,off
difference,100
grain_size,256
IO_group_id,1
IO_group_name,io_grp1
partner_FC_id,1
partner_FC_name,fcmap2
restoring,no
rc_controlled,yes
(fcmap0 is used only if the auxiliary needs to be recovered)

Figure 10-63. FlashCopy mappings for the auxiliary volume

When a cycle is automatically started, a signal is sent to the auxiliary site to prepare for the
incoming changed blocks (grains) to be written to the auxiliary volume.
Because the auxiliary volume will be updated with new blocks of data, the snapshot FlashCopy
mapping (from the auxiliary volume to its change volume) is automatically started at the beginning
of the cycle to provide a consistent recovery point for the auxiliary volume. Incoming writes to the
auxiliary volume cause copy-on-write blocks (COWs) to be written to its change volume.
The FlashCopy mapping from the auxiliary change volume to the auxiliary volume is only used if a
recovery situation is detected. The point-in-time snapshot change volume content can be used to
recover the content of the auxiliary volume to its prior recovery point.


Change to cycling mode and update cycle period



Figure 10-64. Change to cycling mode and update cycle period

To change the cycling mode from the default of none to multi, the relationship must be in the
idling or stopped state.
The cycling period defaults to 300 seconds, with a valid value range from 60 to 86400 seconds
(86400 being 24 hours).
This example decreases the cycling period to 180 seconds to provide more frequent recovery
points, and to illustrate what happens if the required copy time exceeds the cycle interval.
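A sketch of the CLI equivalent, using the relationship name from this scenario and stopping the
relationship first:

IBM_2076:PINK_SWV7K:PINKadmin>stoprcrelationship WISK_rel1
IBM_2076:PINK_SWV7K:PINKadmin>chrcrelationship -cyclingmode multi WISK_rel1
IBM_2076:PINK_SWV7K:PINKadmin>chrcrelationship -cycleperiodseconds 180 WISK_rel1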


Updated relationship entries in both systems



Figure 10-65. Updated relationship entries in both systems

The updated cycling mode parameters are reflected in the relationship entry of both the master and
auxiliary clusters.
The relationship is now in cycling mode. Grains associated with updated blocks on the master
volume will be identified and transmitted to the auxiliary cluster automatically every 180 seconds.


GMCV relationship state and progress


IBM_2076:PINK_SWV7K:PINKadmin>lsrcrelationship 2
id 2
name WISK_rel1
master_cluster_id 00000200A8402FAF
master_cluster_name PINK_SWV7K
master_vdisk_id 2
master_vdisk_name WISKEE_GM
aux_cluster_id 0000020063617C80
aux_cluster_name NAVY_SVC
aux_vdisk_id 4
aux_vdisk_name WISKEE_GA
primary master
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 69
freeze_time 2013/08/08/18/51/28
status online
sync out_of_sync
copy_type global
cycle_period_seconds 180
cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name WISKEE_GMFC
aux_change_vdisk_id 16
aux_change_vdisk_name WISKEE_GAFC

Figure 10-66. GMCV relationship state and progress

In addition to the cycle period and cycling mode data in the verbose output of the relationship,
examine the progress value. Quite a bit of write activity has transpired on the master volume, and
these changed blocks need to be copied to the auxiliary volume once the relationship is started.
The relationship now contains a freeze time. For cycling mode, the freeze time is the time of the
last consistent image on the auxiliary volume.


Start GMCV relationship


IBM_2076:PINK_SWV7K:PINKadmin>startrcrelationship -force 2
IBM_2076:PINK_SWV7K:PINKadmin>lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,restoring,start_time,rc_controlled
0,fcmap0,2,WISKEE_GM,3,WISKEE_GMFC,,,copying,0,0,100,off,1,fcmap1,no,130808200155,yes
1,fcmap1,3,WISKEE_GMFC,2,WISKEE_GM,,,idle_or_copied,0,50,100,off,0,fcmap0,no,,yes

IBM_2145:NAVY_SVC:NAVYadmin>lsrcrelationshipprogress 4
id progress
4 93
IBM_2145:NAVY_SVC:NAVYadmin>lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,restoring,start_time,rc_controlled
1,fcmap2,4,WISKEE_GA,16,WISKEE_GAFC,,,copying,22,0,100,off,2,fcmap0,no,130808200155,yes
2,fcmap0,16,WISKEE_GAFC,4,WISKEE_GA,,,idle_or_copied,0,50,100,off,1,fcmap2,no,,yes

IBM_2145:NAVY_SVC:NAVYadmin>lssevdiskcopy -delim ,
vdisk_id,vdisk_name,copy_id,mdisk_grp_id,mdisk_grp_name,capacity,used_capacity,real_capacity,free_capacity,overallocation,autoexpand,warning,grainsize,se_copy,compressed_copy,uncompressed_used_capacity
16,WISKEE_GAFC,0,1,DS3K_SATApool,15.00GB,3.94GB,4.25GB,319.20MB,352,on,80,256,yes,no,3.94GB
IBM_2145:NAVY_SVC:NAVYadmin>svqueryclock
Thu Aug 8 20:04:53 CDT 2013
IBM_2145:NAVY_SVC:NAVYadmin>lsrcrelationshipprogress 4
id progress
4 96

Figure 10-67. Start GMCV relationship

Once the relationship has been started, grains representing changed blocks from the master
cluster are transmitted to the auxiliary cluster automatically for each cycle.
At the master cluster, the master volume to its change volume FlashCopy mapping is started. It is in
the copying state so that subsequent writes to the master volume cause COW blocks to be copied
to the master change volume. The mapping start time provides an indication of the start time of the
cycling period.
At the auxiliary cluster, the auxiliary volume to its change volume FlashCopy mapping is in the
copying state as well. Before changed blocks are written to the auxiliary volume, its COW blocks
are first copied to the auxiliary change volume.
Recall that the GUI-created change volumes are Thin-Provisioned. As writes occur, the capacity of
the Thin-Provisioned target automatically expands.
In this example, copying the changed blocks takes longer than the cycling period of 180 seconds.


Consistent copying state and freeze time updated


IBM_2076:PINK_SWV7K:PINKadmin>svqueryclock
Thu Aug 8 20:05:22 CDT 2013
IBM_2076:PINK_SWV7K:PINKadmin>lsrcrelationshipprogress 2
id progress
2 92
(next update cycle in progress)

IBM_2076:PINK_SWV7K:PINKadmin>lsrcrelationship 2
id 2
name WISK_rel1
master_cluster_name PINK_SWV7K
master_vdisk_id 2
master_vdisk_name WISKEE_GM
aux_cluster_name NAVY_SVC
aux_vdisk_id 4
aux_vdisk_name WISKEE_GA
primary master
state consistent_copying
progress 95
freeze_time 2013/08/08/20/01/55
(freeze_time is the recovery point)
status online
sync
copy_type global
cycle_period_seconds 180
cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name WISKEE_GMFC
aux_change_vdisk_id 16
aux_change_vdisk_name WISKEE_GAFC

Figure 10-68. Consistent copying state and freeze time updated

When the time needed to complete the background copy is greater than the cycling period, the
copy is allowed to complete. The next cycle is started immediately after the completion of the
previous cycle.
At copy completion, the freeze time is updated with the start time of the just-completed cycle.
Recall that the freeze time for cycling mode is the time of the last consistent image on the auxiliary
volume, or the recovery point.
The implication of not being able to complete the copy of the changed blocks within the cycling
period (due to either too much changed data or not enough bandwidth) is that the freeze time is
not updated, and thus the auxiliary volume content is at a previous or older recovery point.


GMCV next cycle progress and freeze time


IBM_2076:PINK_SWV7K:PINKadmin>lsrcrelationship 2
id 2
name WISK_rel1
master_cluster_id 00000200A8402FAF
master_cluster_name PINK_SWV7K
master_vdisk_id 2
master_vdisk_name WISKEE_GM
aux_cluster_id 0000020063617C80
aux_cluster_name NAVY_SVC
aux_vdisk_id 4
aux_vdisk_name WISKEE_GA
primary master
consistency_group_id
consistency_group_name
state consistent_copying
bg_copy_priority 50
progress 100
freeze_time 2013/08/08/20/05/10
(recovery point updated)
status online
sync
copy_type global
cycle_period_seconds 180
cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name WISKEE_GMFC
aux_change_vdisk_id 16
aux_change_vdisk_name WISKEE_GAFC

Figure 10-69. GMCV next cycle progress and freeze time

Review the relationship details again - the freeze time, or recovery point, has been updated with the
start time of the just-completed cycle.
The state of a started relationship in cycling mode is always consistent_copying - even when the
relationship progress is at 100%.


Relationship view at auxiliary site


IBM_2145:NAVY_SVC:NAVYadmin>lsrcrelationship 4
id 4
name WISK_rel1
master_cluster_id 00000200A8402FAF
master_cluster_name PINK_SWV7K
master_vdisk_id 2
master_vdisk_name WISKEE_GM
aux_cluster_id 0000020063617C80
aux_cluster_name NAVY_SVC
aux_vdisk_id 4
aux_vdisk_name WISKEE_GA
primary master
consistency_group_id
consistency_group_name
state consistent_copying
bg_copy_priority 50
progress 100
freeze_time 2013/08/08/20/05/10
(recovery point)
status online
sync
copy_type global
cycle_period_seconds 180
cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name WISKEE_GMFC
aux_change_vdisk_id 16
aux_change_vdisk_name WISKEE_GAFC

Next cycle: freeze_time 2013/08/08/20/08/10
Next cycle: freeze_time 2013/08/08/20/11/15

Figure 10-70. Relationship view at auxiliary site

The freeze time is reflected in the relationship entry of both clusters.
A common time reference for both clusters (such as NTP servers) is highly recommended. The
freeze time should match the FlashCopy mapping start time of the master cluster, adjusted for time
zone differences of the cluster partners. The content of the auxiliary volume is consistent with the
content of the master volume as of the freeze time value.
The freeze time is updated as each copy cycle completes. It advances by the cycling period (180
seconds in this example) as long as each background copy of the changed blocks completes within
the cycling period. Otherwise the freeze time, or recovery point, lags further behind in time.


GMCV relationship view: In between cycles

IBM_2145:NAVY_SVC:NAVYadmin>lssevdiskcopy -delim , -filtervalue name=W*
vdisk_id,vdisk_name,copy_id,mdisk_grp_id,mdisk_grp_name,capacity,used_capacity,real_capacity,free_capacity,overallocation,autoexpand,warning,grainsize,se_copy,compressed_copy,uncompressed_used_capacity
16,WISKEE_GAFC,0,1,DS3K_SATApool,15.00GB,0.75MB,323.70MB,322.95MB,4745,on,80,256,yes,no,0.75MB
(Thin-Provisioned FlashCopy target volume space released)

IBM_2145:NAVY_SVC:NAVYadmin>lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,restoring,start_time,rc_controlled
0,fcmap1,9,BEERS,14,BEERS_TGT,,,idle_or_copied,100,100,100,on,,,no,130805222404,no
1,fcmap2,4,WISKEE_GA,16,WISKEE_GAFC,,,stopped,0,0,100,off,2,fcmap0,no,,yes
2,fcmap0,16,WISKEE_GAFC,4,WISKEE_GA,,,idle_or_copied,0,50,100,off,1,fcmap2,no,,yes

Figure 10-71. GMCV relationship view: In between cycles

In between background copy cycles, the SVC Remote Copy function performs housekeeping that
readies the environment for the next background copy cycle.
For a given relationship, the snapshot FlashCopy mapping at each site is stopped. The allocated
capacity of the target change volumes from the previous cycle is freed.


Global Mirror relationship details


IBM_2076:PINK_SWV7K:PINKadmin>lsrcrelationship WISK_rel1
id 2
name WISK_rel1
master_cluster_id 00000200A8402FAF
master_cluster_name PINK_SWV7K
master_vdisk_id 2
master_vdisk_name WISKEE_GM
aux_cluster_id 0000020063617C80
aux_cluster_name NAVY_SVC
aux_vdisk_id 4
aux_vdisk_name WISKEE_GA
primary master
consistency_group_id
consistency_group_name
state inconsistent_stopped
bg_copy_priority 50
progress 0
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

IBM_2145:NAVY_SVC:NAVYadmin>lsrcrelationship 4
id 4
name WISK_rel1
master_cluster_id 00000200A8402FAF
master_cluster_name PINK_SWV7K
master_vdisk_id 2
master_vdisk_name WISKEE_GM
aux_cluster_id 0000020063617C80
aux_cluster_name NAVY_SVC
aux_vdisk_id 4
aux_vdisk_name WISKEE_GA
primary master
consistency_group_id
consistency_group_name
state inconsistent_stopped
bg_copy_priority 50
progress 0
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

Figure 10-72. Global Mirror relationship details

As with the previous Metro Mirror example, the lsrcrelationship command output from both
clusters confirms that this Global Mirror relationship has been defined with the volumes not yet
synchronized. The copy direction is from the master - that is, from the WISKEE_GM volume on the
PINK_SWV7K to the WISKEE_GA volume on the NAVY_SVC.


Start Global Mirror relationship: PINK_SWV7K


Figure 10-73. Start Global Mirror relationship: PINK_SWV7K

To start a relationship from the GUI, go to Copy Services > Remote Copy, right-click the desired
relationship entry and select Start from the pop-up list.
In the Metro Mirror example, the relationship was defined with the GUI and started with the CLI.
This example shows the opposite: the relationship was defined with the CLI and is started with
the GUI.


Monitor copy progress from NAVY_SVC


Figure 10-74. Monitor copy progress from NAVY_SVC

The Remote Copy background copy or synchronization progress can be monitored from either
cluster.


Performance: Reads and writes at 25 MBps


Figure 10-75. Performance: Reads and writes at 25 MBps

The Monitoring > Performance view provides real-time I/O statistics. Given no other activity
occurring on the PINK_SWV7K system, the MDisks read bandwidth of 25 MBps is consistent with
the background copy relationship bandwidth limit; data is read from the extents of the master
volume on the PINK_SWV7K and sent to the partner cluster.
Likewise, given no other activity occurring on the NAVY_SVC cluster, the MDisks write bandwidth
of 25 MBps confirms the same background copy relationship bandwidth limit; data is written to the
extents of the auxiliary volume on NAVY_SVC.


Background copy completed: Stop relationship with write access

Figure 10-76. Background copy completed: Stop relationship with write access

The relationship can be stopped from either cluster. Right-click the relationship entry and select
Stop from the pop-up list. As discussed previously, the write access can be selected so that the
content of the auxiliary volume can be verified.
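The CLI equivalent is a sketch like the following, again assuming the hypothetical relationship
name GM_REL:

# -access makes the auxiliary volume write accessible while the relationship is stopped
IBM_2145:NAVY_SVC:NAVYadmin>stoprcrelationship -access GM_REL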


Global Mirror link tolerance (without cycling)


IBM_2076:PINK_SWV7K:PINKadmin>lspartnership
id               name       location partnership      bandwidth
00000200A8402FAF PINK_SWV7K local
0000020062C17C56 OLIVE_SVC  remote   fully_configured 50
0000020063617C80 NAVY_SVC   remote   fully_configured 50
IBM_2076:PINK_SWV7K:PINKadmin>lspartnership NAVY_SVC
id 0000020063617C80
name NAVY_SVC
location remote
partnership fully_configured
bandwidth 50
code_level 7.1.0.2 (build 79.8.1307111000)
console_IP 10.6.76.60:443
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
relationship_bandwidth_limit 25
gm_max_host_delay 5

IBM_2145:NAVY_SVC:NAVYadmin>lspartnership
id               name       location partnership      bandwidth
0000020063617C80 NAVY_SVC   local
0000020062C17C56 OLIVE_SVC  remote   fully_configured 50
00000200A8402FAF PINK_SWV7K remote   fully_configured 50
IBM_2145:NAVY_SVC:NAVYadmin>lspartnership PINK_SWV7K
id 00000200A8402FAF
name PINK_SWV7K
location remote
partnership fully_configured
bandwidth 50
code_level 6.4.0.3 (build 65.2.1209010000)
console_IP 10.6.77.210:443
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
relationship_bandwidth_limit 25
gm_max_host_delay 5

GM: Bandwidth sized for peak write rates.
Link tolerance default: if insufficient bandwidth lasted for 300 seconds, stop the relationship with event code 1920.

Figure 10-77. Global Mirror link tolerance (without cycling)

Traditional Global Mirror (without change volumes) implements asynchronous continuous copy to
maintain a consistent image on the auxiliary volume that is within seconds of the master volume,
providing a low recovery point objective (RPO).
This requires a network that can support peak write workloads, as well as minimal resource
contention at both sites. Insufficient resources or network congestion might result in error code
1920 and thus stopped Global Mirror relationships.
The link tolerance value represents the number of seconds that the primary cluster tolerates slow
response time from its partner. The default is 300 seconds. When the poor response extends past
the specified tolerance, a 1920 error code is logged and one or more Global Mirror relationships
are stopped.
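The link tolerance is a system-wide setting. A minimal sketch of adjusting it with chsystem (the
400-second value is illustrative only):

# Raise the link tolerance from the default 300 seconds to 400 seconds;
# a value of 0 disables the 1920 link tolerance check entirely
IBM_2076:PINK_SWV7K:PINKadmin>chsystem -gmlinktolerance 400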
Global Mirror with change volumes (also referred to as cycling mode) uses FlashCopy as a means
to mitigate peak bandwidth requirements, but at the expense of higher recovery point objectives
(RPOs). It does enable the RPO to be configurable at the individual relationship level.


Change the copy type of a relationship


• The properties of a relationship can be edited while its state is idling or stopped
• v6.3 – Change between GM and GMCV without incurring initial copy
• v7 – Change between MM and GM without incurring initial copy

Figure 10-78. Change the copy type of a relationship

While a new Global Mirror relationship can be created with change volumes from the start, existing
Global Mirror relationships can also be changed to cycling mode to avoid the overhead of
resynchronizing volumes that already contain consistent and synchronized data.
The change to cycling mode is permitted as long as the relationship state is idling or stopped.
Changing from cycling mode back to non-cycling (or traditional) mode is also supported.
Beginning with v7, changing between Global Mirror and Metro Mirror (in either direction) is also
available. These options to change the copy type of existing relationships provide operational
flexibility, reduce complexity, and ensure continued availability.
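A sketch of the CLI steps, assuming change volumes named WISKEE_GM_CV and WISKEE_GA_CV
(hypothetical names) have already been created in the same I/O groups as their associated
volumes, and a hypothetical relationship name GM_REL:

# With the relationship stopped or idling, attach a change volume to each side
IBM_2076:PINK_SWV7K:PINKadmin>chrcrelationship -masterchange WISKEE_GM_CV GM_REL
IBM_2145:NAVY_SVC:NAVYadmin>chrcrelationship -auxchange WISKEE_GA_CV GM_REL
# Then switch the relationship into cycling mode (GMCV); -cyclingmode none reverts it
IBM_2076:PINK_SWV7K:PINKadmin>chrcrelationship -cyclingmode multi GM_REL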


Change between MM to GM requires v7 partners


To change from MM ↔ GM, both partners must be at v7.

IBM_2145:NAVY_SVC:NAVYadmin>lspartnership OLIVE_SVC
. . .
code_level 6.4.1.4 (build 75.3.1303080000)
. . .

Figure 10-79. Change between MM to GM requires v7 partners

Note that changing the relationship type between Metro Mirror and Global Mirror (or vice versa) is
a v7 enhancement and requires both partners to be at a minimum v7 code level.


Impact of disconnected clusters on mirroring


Diagram: The NAVY_SVC and OLIVE_SVC clusters share one standalone MM relationship, from
WINES_M on NAVY_SVC to WINES_A on OLIVE_SVC; ensuing pages examine this relationship.
The NAVY_SVC and PINK_SWV7K clusters share one standalone GM relationship, from WISKEE_GM
on PINK_SWV7K to WISKEE_GA on NAVY_SVC, which is changed to GMCV. The clusters are unable
to communicate due to link failures.


Figure 10-80. Impact of disconnected clusters on mirroring

If all the intercluster links fail between two clusters, then communication is no longer possible
between the two clusters in the partnership. This section examines the state of relationships when a
pair of SVC clusters is disconnected and no longer able to communicate.
To minimize the potential of link failures, it is best practice to have more than one physical link
between sites. These links need to have a different physical routing infrastructure such that the
failure of one link does not affect the other links.


Relationship state: Before link failures


IBM_2145:NAVY_SVC:NAVYadmin>lsrcrelationship -delim ,
id,name,master_cluster_id,master_cluster_name,master_vdisk_id,master_vdisk_name,aux_cluster_id,aux_cluster_name,aux_vdisk_id,aux_vdisk_name,primary,consistency_group_id,consistency_group_name,state,bg_copy_priority,progress,copy_type,cycling_mode,freeze_time
10,rcrel0,0000020063617C80,NAVY_SVC,10,WINE_M,0000020062C17C56,OLIVE_SVC,3,WINES_A,master,,,consistent_synchronized,50,,metro,,

IBM_2145:OLIVE_SVC:OLIVEadmin>lsrcrelationship -delim ,
id,name,master_cluster_id,master_cluster_name,master_vdisk_id,master_vdisk_name,aux_cluster_id,aux_cluster_name,aux_vdisk_id,aux_vdisk_name,primary,consistency_group_id,consistency_group_name,state,bg_copy_priority,progress,copy_type,cycling_mode,freeze_time
3,rcrel0,0000020063617C80,NAVY_SVC,10,WINE_M,0000020062C17C56,OLIVE_SVC,3,WINES_A,master,,,consistent_synchronized,50,,metro,none,

Figure 10-81. Relationship state: Before link failures

We will focus on the Metro Mirror relationship between the NAVY_SVC and OLIVE_SVC to study
the relationship behavior when the clusters of a partnership can no longer communicate.
The Metro Mirror relationship, rcrel0, exists between the master volume WINES_M and the
auxiliary volume WINES_A. The copy direction is from the master volume to the auxiliary volume
(primary=master).
Prior to the connectivity failure between the two clusters, the relationship is in the
consistent_synchronized state when viewed from both clusters.


Partnership states: Before and after link failures


IBM_2145:NAVY_SVC:NAVYadmin>lspartnership
id               name       location partnership      bandwidth
0000020063617C80 NAVY_SVC   local
0000020062C17C56 OLIVE_SVC  remote   fully_configured 50
00000200A8402FAF PINK_SWV7K remote   fully_configured 50
*** After
IBM_2145:NAVY_SVC:NAVYadmin>lspartnership
id               name       location partnership      bandwidth
0000020063617C80 NAVY_SVC   local
0000020062C17C56 OLIVE_SVC  remote   not_present      50
00000200A8402FAF PINK_SWV7K remote   fully_configured 50

IBM_2145:OLIVE_SVC:OLIVEadmin>lspartnership
id               name       location partnership      bandwidth
0000020062C17C56 OLIVE_SVC  local
0000020063617C80 NAVY_SVC   remote   fully_configured 75
00000200A8402FAF PINK_SWV7K remote   fully_configured 50
*** After
IBM_2145:OLIVE_SVC:OLIVEadmin>lspartnership
id               name       location partnership      bandwidth
0000020062C17C56 OLIVE_SVC  local
0000020063617C80 NAVY_SVC   remote   not_present      75
00000200A8402FAF PINK_SWV7K remote   fully_configured 50

Figure 10-82. Partnership states: Before and after link failures

A total link outage or connectivity failure between the NAVY_SVC and OLIVE_SVC causes the
cluster partnership to change from fully_configured to not_present. This is shown in the
lspartnership output for both clusters after connectivity was lost.


Relationship state is disconnected


IBM_2145:NAVY_SVC:NAVYadmin>lsrcrelationship -delim ,
id,name,master_cluster_id,master_cluster_name,master_vdisk_id,master_vdisk_name,aux_cluster_id,aux_cluster_name,aux_vdisk_id,aux_vdisk_name,primary,consistency_group_id,consistency_group_name,state,bg_copy_priority,progress,copy_type,cycling_mode,freeze_time
10,rcrel0,0000020063617C80,NAVY_SVC,10,WINE_M,0000020062C17C56,OLIVE_SVC,3,WINES_A,master,,,idling_disconnected,50,,metro,,

IBM_2145:OLIVE_SVC:OLIVEadmin>lsrcrelationship -delim ,
id,name,master_cluster_id,master_cluster_name,master_vdisk_id,master_vdisk_name,aux_cluster_id,aux_cluster_name,aux_vdisk_id,aux_vdisk_name,primary,consistency_group_id,consistency_group_name,state,bg_copy_priority,progress,copy_type,cycling_mode,freeze_time
3,rcrel0,0000020063617C80,NAVY_SVC,10,WINE_M,0000020062C17C56,OLIVE_SVC,3,WINES_A,master,,,consistent_disconnected,50,,metro,none,2013/08/09/11/00/29

Figure 10-83. Relationship state is disconnected

After a total link failure between the two clusters, the copy direction of the relationship does not
change (primary=master) but changed data of the master volume can no longer be sent to the
auxiliary volume.
Examine the rcrel0 relationship state on each cluster:
• On the NAVY_SVC it is in the idling_disconnected state. Mirroring activity for the volumes is
no longer active because changes can no longer be sent to the auxiliary volume.
• On the OLIVE_SVC it is in the consistent_disconnected state. At the time of the disconnect,
the auxiliary volume was consistent but it is no longer able to receive updates.
Even though updates are no longer being sent to the auxiliary volume, the changes are tracked by
the mirroring relationship bitmap so that the two volumes can be resynchronized once the
connectivity between the clusters is recovered.


Details after disconnect: Master view


IBM_2145:NAVY_SVC:NAVYadmin>lsrcrelationship -delim , 10
id,10
name,rcrel0
master_cluster_id,0000020063617C80
master_cluster_name,NAVY_SVC
master_vdisk_id,10
master_vdisk_name,WINE_M
aux_cluster_id,0000020062C17C56
aux_cluster_name,OLIVE_SVC
aux_vdisk_id,3
aux_vdisk_name,WINES_A
primary,master
consistency_group_id,
consistency_group_name,
state,idling_disconnected
bg_copy_priority,50
progress,
freeze_time,
status,
sync,
copy_type,metro
cycle_period_seconds,300
cycling_mode,
master_change_vdisk_id,
master_change_vdisk_name,
aux_change_vdisk_id,
aux_change_vdisk_name,

Figure 10-84. Details after disconnect: Master view

After the link outage, the host using the WINES_M volume continues to operate normally, reading
and writing data on the volume.
Remote copy is an SVC internal function and is totally transparent to the host application.


Details after disconnect: Auxiliary view


IBM_2145:OLIVE_SVC:OLIVEadmin>lsrcrelationship -delim , 3
id,3
name,rcrel0
master_cluster_id,0000020063617C80
master_cluster_name,NAVY_SVC
master_vdisk_id,10
master_vdisk_name,WINE_M
aux_cluster_id,0000020062C17C56
aux_cluster_name,OLIVE_SVC
aux_vdisk_id,3
aux_vdisk_name,WINES_A
primary,master
consistency_group_id,
consistency_group_name,
state,consistent_disconnected
bg_copy_priority,50
progress,
freeze_time,2013/08/09/11/00/29
status,
sync,
copy_type,metro
cycle_period_seconds,300
cycling_mode,none
master_change_vdisk_id,
master_change_vdisk_name,
aux_change_vdisk_id,
aux_change_vdisk_name,

Auxiliary volume no longer able to obtain updates.

Figure 10-85. Details after disconnect: Auxiliary view

The WINES_A auxiliary volume is no longer able to obtain updates occurring on the WINES_M
master volume.
The relationship on the OLIVE_SVC captures the date and time of the connectivity failure as
freeze_time when its state changed to the consistent_disconnected state. The freeze time is the
recovery point of the WINES_A volume content - it is the last known time when data was consistent
with the master volume.
The progress value is unknown from this cluster; it has no indication as to how much write activity
has occurred on the master volume.
It is recommended that at this time (or some time prior to restarting the relationship), a FlashCopy
of the auxiliary volume be taken to a thin-provisioned target volume with a copy rate of zero, and
kept until the relationship state is consistent_synchronized again. This approach avoids a “rolling
disaster” in which a second outage during the resynchronization would leave the auxiliary volume
in a corrupted state.
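A sketch of that protective FlashCopy, assuming a thin-provisioned target volume named
WINES_A_FC has already been created (the name and the mapping ID are hypothetical):

# Copy rate 0: no background copy; grains are preserved only when overwritten
IBM_2145:OLIVE_SVC:OLIVEadmin>mkfcmap -source WINES_A -target WINES_A_FC -copyrate 0
FlashCopy Mapping, id [0], successfully created
IBM_2145:OLIVE_SVC:OLIVEadmin>startfcmap -prep 0
# Keep the mapping until the relationship is consistent_synchronized again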


Master after link restore: consistent_stopped


IBM_2145:NAVY_SVC:NAVYadmin>lsrcrelationship -delim , 10
id,10
name,rcrel0
master_cluster_id,0000020063617C80
master_cluster_name,NAVY_SVC
master_vdisk_id,10
master_vdisk_name,WINE_M
aux_cluster_id,0000020062C17C56
aux_cluster_name,OLIVE_SVC
aux_vdisk_id,3
aux_vdisk_name,WINES_A
primary,master
consistency_group_id,
consistency_group_name,
state,consistent_stopped
bg_copy_priority,50
progress,61
freeze_time,2013/08/09/11/29/09
status,online
sync,out_of_sync
copy_type,metro
cycle_period_seconds,300
cycling_mode,
master_change_vdisk_id,
master_change_vdisk_name,
aux_change_vdisk_id,
aux_change_vdisk_name,

Figure 10-86. Master after link restore: consistent_stopped

Examine the relationship detail after connectivity between the two clusters has been restored.
From the master cluster, note that:
• The relationship is in the consistent_stopped state. It is not automatically restarted.
• The progress of 61 percent indicates that 39 percent of the grains on the WINES_M master
volume need to be copied to the WINES_A auxiliary volume.
• The freeze time recorded when the auxiliary volume entered the consistent_disconnected state
has been obtained from the relationship on the auxiliary cluster (connectivity between the
clusters has been restored, enabling this freeze time value to be transmitted).
• The relationship is out of sync (sync=out_of_sync).


Auxiliary after link restore: consistent_stopped


IBM_2145:OLIVE_SVC:OLIVEadmin>lsrcrelationship -delim , 3
id,3
name,rcrel0
master_cluster_id,0000020063617C80
master_cluster_name,NAVY_SVC
master_vdisk_id,10
master_vdisk_name,WINE_M
aux_cluster_id,0000020062C17C56
aux_cluster_name,OLIVE_SVC
aux_vdisk_id,3
aux_vdisk_name,WINES_A
primary,master
consistency_group_id,
consistency_group_name,
state,consistent_stopped
bg_copy_priority,50
progress,61
freeze_time,2013/08/09/11/29/09
status,online
sync,out_of_sync
copy_type,metro
cycle_period_seconds,300
cycling_mode,none
master_change_vdisk_id,
master_change_vdisk_name,
aux_change_vdisk_id,
aux_change_vdisk_name,

Best practice: Obtain a FlashCopy of the WINES_A volume prior to restart of the mirroring relationship.

Figure 10-87. Auxiliary after link restore: consistent_stopped

Examine the relationship detail after connectivity between the two clusters has been restored.
From the auxiliary cluster, note that:
• The relationship is in the consistent_stopped state. It is not automatically restarted.
• The progress of 61 percent indicates that 39 percent of the grains on the WINES_M master
volume need to be copied to the WINES_A auxiliary volume.
• The stopped state allows a FlashCopy to be used at the OLIVE_SVC site to capture the data on
the WINES_A volume as of the freeze time, before restarting the mirroring relationship.


Issue restart relationship after link restore


IBM_2145:NAVY_SVC:NAVYadmin>startrcrelationship -force 10

IBM_2145:NAVY_SVC:NAVYadmin>lsrcrelationship -delim , 10
id,10
name,rcrel0
master_cluster_id,0000020063617C80
master_cluster_name,NAVY_SVC
master_vdisk_id,10
master_vdisk_name,WINE_M
aux_cluster_id,0000020062C17C56
aux_cluster_name,OLIVE_SVC
aux_vdisk_id,3
aux_vdisk_name,WINES_A
primary,master
consistency_group_id,
consistency_group_name,
state,inconsistent_copying
bg_copy_priority,50
progress,62
freeze_time,
status,online
sync,
........

IBM_2145:OLIVE_SVC:OLIVEadmin>lsrcrelationship -delim , 3
id,3
name,rcrel0
master_cluster_id,0000020063617C80
master_cluster_name,NAVY_SVC
master_vdisk_id,10
master_vdisk_name,WINE_M
aux_cluster_id,0000020062C17C56
aux_cluster_name,OLIVE_SVC
aux_vdisk_id,3
aux_vdisk_name,WINES_A
primary,master
consistency_group_id,
consistency_group_name,
state,inconsistent_copying
bg_copy_priority,50
progress,64
freeze_time,
status,online
sync,
........


Figure 10-88. Issue restart relationship after link restore

Because I/O activity has occurred, the relationship is out of sync and must be started with -force.
Restarting the relationship causes the changed content (39% of the grains) of the primary/master
volume to be copied to the auxiliary volume.
During this background copy, the relationship is in the inconsistent_copying state. Recall that the
auxiliary volume is taken offline during this state and will not be brought online until the copy
completes and the relationship returns to the consistent_synchronized state.


Relationship consistent and synchronized again


IBM_2145:NAVY_SVC:NAVYadmin>lsrcrelationship -delim , 10
id,10
name,rcrel0
master_cluster_id,0000020063617C80
master_cluster_name,NAVY_SVC
master_vdisk_id,10
master_vdisk_name,WINE_M
aux_cluster_id,0000020062C17C56
aux_cluster_name,OLIVE_SVC
aux_vdisk_id,3
aux_vdisk_name,WINES_A
primary,master
consistency_group_id,
consistency_group_name,
state,consistent_synchronized
bg_copy_priority,50
progress,
freeze_time,
status,online
sync,
copy_type,metro
cycle_period_seconds,300
cycling_mode,
master_change_vdisk_id,
master_change_vdisk_name,
aux_change_vdisk_id,
aux_change_vdisk_name,


Figure 10-89. Relationship consistent and synchronized again

Once the state of the volumes is consistent_synchronized, any FlashCopy taken of the
WINES_A volume can now be discarded.


Auxiliary volume: After link restore and sync


IBM_2145:NAVY_SVC:NAVYadmin>stoprcrelationship -access 10

IBM_2145:OLIVE_SVC:OLIVEadmin>lshostvdiskmap -delim , 1
id,name,SCSI_id,vdisk_id,vdisk_name,vdisk_UID,IO_group_id,IO_group_name
1,OLIVEWIN1,0,3,WINES_A,60050768018B05F15800000000000003,0,io_grp0

WINES_M = WINES_A


Figure 10-90. Auxiliary volume: After link restore and sync

To verify the content of the auxiliary volume, the relationship is stopped again with write access.
The WINES_A volume is mapped to the OLIVEWIN1 host and an inspection of the drive content
confirms the data is identical to the WINES_M master volume.
It might be worthwhile to reiterate that all SVC background copy operations (FlashCopy, Remote
Copy, and Volume Mirroring) are based on block copies controlled by the grains of the owning
bitmaps. SVC is a block level solution, so by design (and actually per industry standards) these
copy operations have no knowledge of OS logical file structures. Folders used in these examples
facilitate easier before/after comparisons and are for illustrative purposes only.


Copy Services: Supported features


                                          Mirroring Primary   Mirroring Secondary
FlashCopy Source                          Yes                 Yes
FlashCopy Target                          Yes                 Yes

Virtualization Feature                    FlashCopy           Mirroring
                                          Source or Target    Primary or Secondary
Image type volume                         Supported           Supported
Extent migration within storage pool      Supported           Supported
Volume migration between storage pools    Supported           Supported
Change volume between I/O groups          Not Supported       Not Supported
Volume size change                        Not Supported       Not Supported


Figure 10-91. Copy Services: Supported features

As shown in the first table above, a FlashCopy target volume can also participate in a Metro/Global
Mirror relationship. Constraints as to how these functions can be used together are:
• A FlashCopy mapping cannot be manipulated to change the contents of its target volume when
that target volume is the primary volume of a Metro Mirror or Global Mirror relationship that is
actively mirroring.
• A FlashCopy mapping must be in the idle_copied state when its target volume is the
secondary volume of a Metro Mirror or Global Mirror relationship.
• The two volumes of a given FlashCopy mapping must be in the same I/O group when the
target volume is also participating in a Metro/Global Mirror relationship.
For details, refer to Storwize V7000 InfoCenter > Product overview > Technical overview >
Copy Services features.


Summary of Copy Services configurations


• Remote Copy (Metro Mirror and Global Mirror) relationships per clustered system: 4096. This
configuration can be any mix of Metro Mirror and Global Mirror relationships.
• Remote Copy relationships per consistency group: 4096. No additional limit is imposed beyond
the Remote Copy relationships per clustered system limit.
• Remote Copy consistency groups per clustered system: 256.
• Total Metro Mirror and Global Mirror volume capacity per I/O group: 1024 TB. This limit is the
total capacity for all master and auxiliary volumes in the I/O group.
• FlashCopy mappings per clustered system: 4096.
• FlashCopy targets per source: 256.
• Cascaded Incremental FlashCopy maps: 4. A volume can be the source of up to four
incremental FlashCopy maps. If this number of maps is exceeded, the FlashCopy behavior for
that cascade becomes non-incremental.
• FlashCopy mappings per consistency group: 512.
• FlashCopy consistency groups per clustered system: 255.
• Total FlashCopy volume capacity per I/O group: 1024 TB (4096 TB for a full clustered system
with four I/O groups).


Figure 10-92. Summary of Copy Services configurations

Listed here are the most up-to-date (as of this publication) Copy Services configuration limits. The
configuration limits and restrictions specific to IBM Spectrum Virtualize software version 7.6 are
available at the following website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004369


Keywords
• FlashCopy
• FlashCopy mapping
• Consistency groups
• Copy rate
• Master
• Auxiliary
• Remote Copy
• Metro Mirror
• Global Mirror
• Global Mirror without cycling
• Global Mirror with cycling and change volume
• Synchronization
• Freeze time
• Cycle process
• Partnership
• Relationship
• Synchronous
• Asynchronous


Figure 10-93. Keywords

Listed are keywords that were used in this unit.


Review questions (1 of 2)
1. True or False: Upon the restart of a Remote Copy relationship, a 100% background copy is
performed to ensure the master and auxiliary volumes contain the same content.

2. True or False: The Remote Copy auxiliary volume is write accessible by default when its
relationship has been stopped.


Figure 10-94. Review questions (1 of 2)


Review answers (1 of 2)
1. True or False: Upon the restart of a Remote Copy relationship, a 100% background copy is
performed to ensure the master and auxiliary volumes contain the same content.
The answer is false.

2. True or False: The Remote Copy auxiliary volume is write accessible by default when its
relationship has been stopped.
The answer is false.



Review questions (2 of 2)
3. True or False: Metro Mirror is a synchronous copy environment which provides for a recovery
point objective of zero.

4. True or False: The maximum background copy rate when editing a FlashCopy Mapping is 150.


Figure 10-95. Review questions (2 of 2)


Review answers (2 of 2)
3. True or False: Metro Mirror is a synchronous copy environment which provides for a recovery
point objective of zero.
The answer is true.

4. True or False: The maximum background copy rate when editing a FlashCopy Mapping is 150.
The answer is false.



Unit summary
• Summarize the use of the GUI/CLI to establish a cluster partnership,
create a relationship, start remote mirroring, monitor progress, and
switch the copy direction
• Differentiate among the functions provided with Metro Mirror, Global
Mirror, and Global Mirror with change volumes


Figure 10-96. Unit summary


Unit 11. Storwize V7000 administrative management
Estimated time
01:15

Overview
This unit examines administrative management options that assist you in monitoring,
troubleshooting, and servicing a Storwize V7000 environment. This unit also highlights the
importance of the Service Assistant Tool and introduces IBM Spectrum storage offerings.

How you will check your progress


• Checkpoint questions
• Machine exercises

References
IBM Storwize V7000 Implementation Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html


Unit objectives
• Recognize system monitoring features to help maintain nodes and
components availability
• Evaluate and filter administrative task command entries that are
captured in the audit log
• Employ system configuration backup and extract the backup files from
the system using the CLI or GUI
• Summarize the benefits of an SNMP, syslog, and email server for
forwarding alerts and events
• Recall procedures to upgrade the system software and drive microcode
firmware to a higher code level
• Identify the functions of Service Assistant tool for management access
• List the benefits of IBM Spectrum storage offerings


Figure 11-1. Unit objectives


Administrative management topics


• System Monitoring
ƒ System Health and Alert Status
ƒ Event Log: Messages and Alerts
ƒ Directed Maintenance Procedure
ƒ Performance Monitoring

• Access
• Settings
• Service Assistant (SA)
• IBM Spectrum Storage


Figure 11-2. Administrative management topics

This topic discusses the system monitoring, event log detection, and performance monitoring of
the Storwize V7000 environment.


System Details
• System Details option has been removed from the Monitoring menu.
ƒ Modified information is still available directly from the System (dynamic) panel.
• Monitoring > System
ƒ Determine system status
ƒ Dynamic view of system capacity and operating state
í Monitor individual nodes and attached enclosures
í Monitor individual hardware components


Figure 11-3. System Details

Although the System Details option is no longer part of the latest Storwize V7000 management GUI
software code, you can still view that information on control and expansion enclosures and various
hardware components of the system through its dynamic display. From the System panel, you can
monitor capacity and view node details to determine whether the nodes in your system are online.
In addition, you can view individual hardware components and monitor their operating state.
To monitor nodes in the management GUI, select Monitoring > System. Select the node that you
want to monitor to view its status. For systems with multiple expansion enclosures, the number
indicates the total of detected expansion enclosures that are attached to the control enclosure.
Select the expansion enclosure to display the entire rack view of these enclosures.


Health Status and Status Alerts indicators


• Health Status indicator
ƒ Green = healthy
ƒ Red = critical issue; immediate attention required
ƒ Yellow = degraded or warning action occurred

• Status Alerts indicated by the X icon
ƒ Tracks the number of events, non-critical and critical alerts that occurred
ƒ Each alert provides a link to Monitoring > Events:
í Migration process (yellow)
í Node or drive offline (red)
í System upgrade (red)
í Not all actions issued will change the Health Status indicator


Figure 11-4. Health Status and Status Alerts indicators

When an issue or warning occurs on the system, the management GUI Health Status indicator
(rightmost area of the control panel) changes color. The health status indicator can be green
(healthy), yellow (degraded or warning), or red (critical). Depending on the type of event that
occurred, a status alert provides message information or alerts about internal and external system
events, or remote partnerships.
If there is a critical system error, the Health Status bar turns red and alerts the system administrator
for immediate action. The Health Status indicator view does not change for non-critical errors. A
status alert in the form of an X widget icon can appear next to the Health Status. The status alert
provides a time stamp and a brief description of the event that occurred. Each alert is a hyperlink
that redirects you to the Monitoring > Events panel for actions.


System event log


• Two types of events: messages and alerts
ƒ Alerts indicate that action is required
í Some alerts contain an error code (fix procedures)
ƒ Messages indicate a warning or a completed action
• Event log panel provides a concise view of system event log entries
ƒ All events provide a brief description
ƒ All events are time stamped
í Management GUI time stamps are local time to the web browser
í CLI uses system time
ƒ Messages are fixed when acknowledged and marked as Fixed
• Event log can be exported to a .csv file for analysis


Figure 11-5. System event log

The IBM Storwize V7000 dynamic view not only allows you to monitor capacity and component
details, it also provides a visible indication of whether the system is operating in a healthy state or
an issue has occurred. The system reports all informational messages, warnings, and errors related
to any changes it detects to the event log.
Events added to the log are classified as either alerts or messages based on the following criteria:
• An alert is logged when the event requires an action. These errors can include hardware errors
in the system itself as well as errors about other components of the entire system. Certain alerts
have an associated error code, which defines the service action that is required. The service
actions are automated through the fix procedures. If configured, a call home to IBM by way of
email is generated to request assistance or replacement parts. If the alert does not have an
error code, the alert represents an unexpected change in state. This situation must be
investigated to determine whether it represents a failure. Investigate the cause of an alert and
resolve it as soon as it is reported.
• A message is logged when an expected change is reported, for instance, when an array build
completes. Messages are fixed when you acknowledge reading them and mark them as fixed.

Each event recorded in the event log includes fields with information that can be used to diagnose
problems. Each event has a time stamp that indicates when the action occurred or the command
was submitted on the system.
When logs are displayed in the command-line interface, the time stamps for the logs in CLI are the
system time. However, when logs are displayed in the management GUI, the time stamps are
translated to the local time where the web browser is running.
Events can be filtered to sort them according to the need or export them to the external
comma-separated values (CSV) file.


System event log access


• Event log can be accessed by way of the management GUI or CLI.
ƒ Management GUI: Monitoring > Events
ƒ CLI: lseventlog command
• You can change the default view by right-clicking on any column area and selecting options to
be added to the display.

Events can be filtered:
• Recommended actions (default)
• Unfixed messages and alerts
• Show all

Figure 11-6. System event log access

The primary debug tool for the Storwize V7000 is the event log, which can be accessed from the
management GUI at Monitoring > Events or from the CLI by issuing the lseventlog command.
Like the other menu options, the Events window allows you to filter and add many other
parameters related to events.
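A sketch of common lseventlog filters, assuming the filtering parameters described for the
command in the CLI reference:

# Show only unfixed alerts, most severe first
IBM Storwize:V009B:superuser>lseventlog -alert yes -fixed no -order severity
# Show the full log, including informational messages
IBM Storwize:V009B:superuser>lseventlog -message yes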


Maintenance mode
• Maintenance mode is a mechanism for preventing unnecessary messages from being sent to
IBM and administrators.
ƒ Maintenance mode is designed to be used by the directed maintenance procedures (DMPs)
rather than the administrator directly.
• The DMPs might direct the administrator to perform hardware actions which will look like an
error to the system.
ƒ For example, removing a drive from an enclosure.
• Under these scenarios it is not helpful to send Call Home emails to IBM and event notifications
to administrators.
ƒ To address this issue, Storwize V7000 has the concept of a maintenance mode which can be
set by modifying the I/O group properties.
í svctask chiogrp -maintenance yes
ƒ Maintenance mode only applies to errors in the SAS domain and the Storwize V7000
hardware.
ƒ The DMPs will control maintenance mode without any need for administrator action.


Figure 11-7. Maintenance mode

Many events or problems that occur in your Storwize V7000 system environment require little to no
user action. Maintenance mode is a directed maintenance procedures (DMPs) mechanism for
preventing unnecessary messages, such as sending call home emails to IBM and event notifications
to administrators. Users have the option to indicate which I/O group needs to be placed in
maintenance mode while carrying out service procedures on a storage enclosure by issuing
svctask chiogrp -maintenance yes.
Once entered, maintenance mode continues until otherwise specified. The mode can be switched
off by using the same command with no so that normal event and problem reporting resumes.
The DMPs control maintenance mode without any need for administrator action. In any case,
maintenance mode is switched off automatically after 30 minutes.
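A sketch of the maintenance mode commands, assuming I/O group io_grp0:

# Suppress call home and notifications while servicing enclosures in io_grp0
IBM Storwize:V009B:superuser>chiogrp -maintenance yes io_grp0
# ... carry out the hardware service actions ...
# Resume normal event reporting (also happens automatically after 30 minutes)
IBM Storwize:V009B:superuser>chiogrp -maintenance no io_grp0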


Alerts with error code


• Right-click an alert and select Properties to view error code or message detail.
ƒ Error code generates a Recommended Action, which contains a Run Fix recommended fix
procedure.


Figure 11-8. Alerts with error code

You can right-click an event and select Properties to view more specific information. In this
example, an error code 1690 is generated with the alert message, which indicates that the
Flash_Mdisk_02 RAID array is not protected by sufficient spares. This type of error generates a
Recommended Action that requires attention and has an associated fix procedure. Alerts are listed
in priority order and should be fixed sequentially by using the available fix procedures.


Directed Maintenance Procedure


• The Directed Maintenance Procedure (DMP) checks whether the problem still exists and fixes
the issue, if possible.
• The DMP takes the administrator through resolving the error event with suggestions,
recommendations, and rechecks.
• Click Run Fix to initiate the fix procedure.
ƒ If any unresolved issues exist, the Recommended Actions section displays those unresolved
errors.


Figure 11-9. Directed Maintenance Procedure

Errors with an error code might direct you to carry out certain service procedures, such as
replacing a hardware component, using the directed maintenance procedure (DMP) step-by-step
guidance, while ensuring that sufficient redundancy is maintained in the system environment.
A Run Fix procedure is a wizard that helps you troubleshoot and correct the cause of an error.
Certain fix procedures reconfigure the system based on your responses, ensure that actions are
carried out in the correct sequence, and prevent or mitigate the loss of data. For this reason, you
must always run the fix procedure to fix an error, even if the fix might seem obvious. The fix
procedure might bring the system out of a Degraded state and into a Healthy state.
In a normal situation during daily administration of the Storwize V7000, you are unlikely to see
error events, although there might be a continuing flow of informational messages. Therefore, the
Events panel typically displays only recommended actions.


Example (1 of 3):
Scenario of a Directed Maintenance Procedure
• Troubleshooting scenario:
ƒ Ambient temperature is greater than warning threshold.
í System will overheat and eventually shut down if the error situation is not fixed.
ƒ To address the event, use the status alert link or select Monitoring > Events.
ƒ Select the event message and run the Recommended Action: Run Fix procedure.


Figure 11-10. Example (1 of 3): Scenario of a Directed Maintenance Procedure

In this scenario, a status alert indicates an unresolved event caused by a room temperature that is
too high, which might cause the system to overheat and eventually shut down if the error situation
is not fixed.
To run the fix procedure for the error with the highest priority, click Recommended Action at the top
of the Events page and click Run Fix Procedure. When you fix higher priority events first, the system
can often automatically mark lower priority events as fixed.
While the Recommended Actions filter is active, the event list shows only alerts for errors that have
not been fixed, sorted in order of priority. The first event in this list is the same as the event
displayed in the Recommended Action panel at the top of the Events page of the management GUI.
If it is necessary to fix errors in a different order, select an error alert in the event log and then click
Action > Run Fix Procedure.
Selecting the Run Fix procedure brings up the first window of the DMP, which shows the first step of
the procedure. In this example, the system reports that drive 2 (flash module 2) in slot 5 is
measuring a temperature that is too high. In addition, the system reports that all four fans in both
canisters are operational and online.


Example (2 of 3):
Scenario of a Directed Maintenance Procedure
• The next step in the DMP procedure is for the administrator:
ƒ Measure the room temperature.
ƒ Make sure the ambient temperature is within the system specifications.

• In the third step of the DMP procedure:


ƒ Suggestions about potential causes of overheating are provided.


Figure 11-11. Example (2 of 3): Scenario of a Directed Maintenance Procedure

In the next phase of the DMP procedure, the user is asked to verify the reported event with a few
more related inputs. In this case, it is the room temperature that needs verification.
Suggestions are provided that could be probable indications or solutions to the event. Overheating
might be caused by blocked air vents, incorrectly mounted blank carriers in a flash module slot, or a
room temperature that is too high.


Example (3 of 3):
Scenario of a Directed Maintenance Procedure
• In this DMP procedure step, the system:
ƒ Checks whether the error condition is resolved.
ƒ Verifies that all events of the same type are marked as fixed, if possible.

• The events indicating an error condition relating to temperature are now gone and the system
is back in a Healthy state.


Figure 11-12. Example (3 of 3): Scenario of a Directed Maintenance Procedure

Once the error is fixed, the system returns to a healthy status from the earlier degraded status, and
the event log is updated.


Run error code fix procedure solution


• Error 1690: Insufficient spares protection
• The Run Fix Procedure wizard reviews and diagnoses the event condition.
ƒ Provides recommendations.
ƒ The wizard might display instructions on replacing parts or performing other repair activities.
ƒ In this case, there are no available drives, so we need to add additional SSD drives to the
system configuration.
ƒ Once flash/SSD capacity has been added and the error is fixed, the wizard instructs you to
click OK to mark the error as fixed.


Figure 11-13. Run error code fix procedure solution

You can use fix procedures to diagnose and resolve the event error code alerts. Fix procedures
help simplify these tasks by automating as many of the tasks as possible. One or more panels
might be displayed with instructions for you to replace parts or perform other repair activity. When
the last repair action is completed, the procedures might attempt to restore failed devices to the
system. After you complete the fix, you see the statement Click OK to mark the error as fixed. Click
OK. This action marks the error as fixed in the event log and prevents this instance of the error from
being listed again.
When fixing hardware faults, the fix procedures might direct you to perform hardware actions that
look like an error to the system, for example, replacing a drive. In these situations, the fix
procedures enter maintenance mode automatically. New events are entered into the event log
when they occur; however, notifications for a specific set of events are not sent unless those events
are still unfixed when exiting maintenance mode. Events that were recorded in maintenance mode
are marked as fixed automatically when the issue is resolved. Maintenance mode prevents
unnecessary messages from being sent.


Alerts without error code


• System event log captured threshold information
ƒ No error code is generated, but action by the administrator is still required.
• Solution
ƒ In this case, the alert can be fixed by first verifying available storage capacity and then
expanding the volume capacity.
ƒ Once the volume capacity is expanded, mark the event as fixed.


Figure 11-14. Alerts without error code

In this example, the system event log has captured the threshold warning for a thin-provisioned
volume. If a solution is not applied, the storage devices might run out of physical space. Therefore,
the administrator needs to verify that the storage device has physical storage space available, and
add more physical storage as needed.
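A sketch of such a fix from the CLI, assuming a hypothetical thin-provisioned volume named
TP_VOL01 and a hypothetical event sequence number of 402:

# Grow the volume's real (allocated) capacity by 10 GB
IBM Storwize:V009B:superuser>expandvdisksize -rsize 10 -unit gb TP_VOL01
# Then mark the alert as fixed in the event log
IBM Storwize:V009B:superuser>cheventlog -fix 402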


Administrative management topics


• System Monitoring
• Access
ƒ Audit Log

• Settings
• Service Assistant (SA)
• IBM Spectrum Storage


Figure 11-15. Administrative management topics

This topic discusses the audit log, which tracks the administrative actions performed on the system.


System audit log entries


• View audit log entries using Access > Audit Log or the CLI catauditlog command.
ƒ Take advantage of filtering and searching among the audit log entries.

Logs executed action commands


Figure 11-16. System audit log entries

The system maintains an audit log of successfully executed commands, indicating which users
performed particular actions at certain times. The audit log tracks actions that are issued through
the management GUI or the CLI. You can view the audit log entries by selecting Access > Audit
Log, or display them with the CLI catauditlog command.
The audit log entries can be customized to display the following types of information:
• Time and date when the action or command was issued on the system
• Name of the user who performed the action or command
• IP address of the system where the action or command was issued
• Parameters that were issued with the command
• Result or return code of the command or action
• Sequence number
• Object identifier that is associated with the command or action
The GUI provides the advantage of filtering and searching among the audit log entries to reduce
the quantity of output.


System audit log /dumps/audit directory


IBM Storwize:V009B:V009B1-admin>catauditlog -first 5
audit_seq_no timestamp    cluster_user ssh_ip_address result res_obj_id action_cmd
23           150411010003 superuser    10.6.9.210     0                 svctask detectmdisk
24           150412010002 superuser    10.6.9.210     0                 svctask detectmdisk
25           150413010003 superuser    10.6.9.210     0                 svctask detectmdisk
26           150413180002 superuser    10.6.9.211     0                 svctask chcurrentuser -keyfile /tmp/superuser.pub-8035843563636449081.pub
27           150414010003 superuser    10.6.9.210     0                 svctask detectmdisk

IBM Storwize:V009B:superuser>lsdumps
id filename
0 snap.single.78N10WD-1.121221.164843.tgz
1 78N10WD-1.trc.old
2 dump.78N10WD-1.140826.170838
.....
32 svc.config.backup.bak_78N10WD-1
33 svc.config.backup.xml_78N10WD-1
34 svc.config.backup.sh_78N10WD-1
35 svc.config.backup.log_78N10WD-1
36 dpa_heat.78N10WD-1.141107.130950.data
IBM Storwize:V009B:superuser>

Files can be downloaded and used to analyze problems.

Figure 11-17. System audit log /dumps/audit directory

The in-memory portion of the audit log has a capacity of 1 MB and can store about 6000 commands on
average (affected by the length of commands and parameters issued). When the in-memory log is full, its
content is automatically written to a local file on the configuration node in the /dumps/audit directory.
The catauditlog CLI command, when used with the -first parameter, returns the requested
number of most recent entries. In this example, the command returns a list of five in-memory audit
log entries.
The lsdumps command with -prefix /dumps/audit is used to list the audit files on disk. These
files can be downloaded from the system for later analysis should it be required for problem
determination (a download sketch follows the list below). The file entries are in readable text
format. The following commands are not recorded in the audit log:
• All commands that failed
• dumpconfig
• cpdumps
• cleardumps
• finderr
• dumperrlog
• dumpinternallog
• svcservicetask dumperrlog
• svcservicetask finderr


Administrative management topics


• System Monitoring
• Access
• Settings
ƒ System support
ƒ Network
ƒ Notifications
ƒ Security
ƒ System Licensing
ƒ Update System
ƒ GUI Preferences

• Service Assistant (SA)


• IBM Spectrum Storage

Figure 11-18. Administrative management topics

This topic discusses the system monitoring and event log detection capabilities of the Storwize
V7000 environment.


Download support data


• Settings > Support > Download Support Package
ƒ Additional Storwize V7000 information can be analyzed by IBM Support personnel.
ƒ Support data can also be collected using:
í CLI svc_snap command
í Service Assistant (SA)

• Download radio buttons:


í svc_snap
í svc_snap dump
í svc_snap dumpall
í svc_livedump --nodes all; svc_snap dumpall


Figure 11-19. Download support data

A problem determination process might require additional information from the Storwize V7000 for
analysis by IBM Support personnel. This data collection can be performed using the svc_snap
command or the GUI. The GUI provides for a simpler download of the support information.
Click Settings > Support, then click Download Support Package, select the type of support
package advised by IBM Support personnel to download and click Download.
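As a rough sketch, collecting the fullest support package from the CLI and confirming the resulting
file might look like the following; the dumpall option corresponds to the radio buttons listed
above, and the snap file name shown is illustrative:

IBM Storwize:V009B:superuser>svc_snap dumpall
IBM Storwize:V009B:superuser>lsdumps
id filename
0 snap.single.78N10WD-1.121221.164843.tgz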
An alternative to the management GUI is to use the Service Assistant GUI to download support
information. This path might be necessary if, due to an error condition, the management GUI is
unavailable.


Additional debug information capture


• Three additional data types are available.
ƒ Drivedumps
í A collection of information from a drive. Identical to mdiskdumps for SSDs in
Storwize V7000 v5.1.0

ƒ MDiskdumps
í Information about all bad blocks on a managed disk. These include migrated
medium errors and RAID kill sectors

ƒ Enclosuredumps
í A collection of debug information relevant to the hardware in the enclosure.


Figure 11-20. Additional debug information capture

There are some additional commands to trigger special dumps that are more relevant to the V7000.
These dumps are created using trigger commands such as triggerdrivedump or
triggerenclosuredump, and the resulting files can be found in the /dumps folder. IBM Support
guides you on how to create and where to find these files if required.
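
As a sketch, triggering a drive dump and then locating the resulting file might look like the
following; the drive ID 0 is a placeholder, and IBM Support supplies the exact procedure for your
situation:

IBM Storwize:V009B:superuser>triggerdrivedump 0
IBM Storwize:V009B:superuser>lsdumps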


Backup Storwize V7000 system metadata


• Issue command as superuser or ID with security admin authority.
ƒ svcconfig clear -all command deletes existing copies of backup files.
IBM Storwize:V009B:superuser>svcconfig clear -all
CMMVC6155I SVCCONFIG processing completed successfully
IBM Storwize:V009B:superuser>svcconfig backup
..................................................................................
..................................................................................
...........................................................................
CMMVC6155I SVCCONFIG processing completed successfully
IBM Storwize:V009B:superuser>

ƒ svcconfig backup command creates these files in the system /tmp directory.

File Name Description


svc.config.backup.xml This file contains your system configuration data.

svc.config.backup.sh This file contains the names of the commands that were issued to
create the backup of the system.
svc.config.backup.log This file contains details about the backup, including any error
information that might have been reported.


Figure 11-21. Backup Storwize V7000 system metadata

The Storwize V7000 system configuration data is stored on all nodes in the system and is internally
hardened so that in normal circumstances the Storwize V7000 should never lose its configuration
settings. However, in exceptional circumstances this metadata might become corrupted or lost.
You can use the CLI to trigger a configuration backup, either manually on an ad hoc basis or
through a regularly scheduled automatic process. The svcconfig backup command generates a new
backup file. Triggering a backup from the GUI is not possible, but you can download the backup
files using the GUI.
The CLI command svcconfig backup backs up the system configuration metadata in the
configuration node /tmp directory. These files are typically downloaded or copied from the system
for safekeeping. It might be a good practice to first issue the svcconfig clear -all command to
delete existing copies of the backup files and then perform the configuration backup.
The application user data is not backed up as part of this process.
The IBM Support Center should be consulted before any configuration data restore activity is
attempted.


Download config backup file from system using the GUI


• svcconfig backup command also creates these files in the system
/dumps directory.
ƒ These backup files are created by the system automatically every day at 1 a.m.
local time.

(Screen capture: the /dumps directory on the Storwize V7000 system configuration node; a .bak
suffix in the file name indicates a prior level of the backup file.)


Figure 11-22. Download config backup file from system using the GUI

In addition to the /tmp directory, a copy of the config.backup.xml from the svcconfig backup
command is also kept in the /dumps directory.
The system actually creates its own set of configuration metadata backup files automatically each
day at 1 a.m. local time. These files are created in the /dumps directory and contain 'cron' in the
file names.
Right-clicking the file entry provides another method to download backup files.
The content of the configuration backup files can be viewed using a web browser or a text
processing tool such as WordPad.
This output is from the copy of the backup file extracted from the /dumps directory using the GUI. It
contains the same data as the file in the /tmp directory.


Example of CLI: PSCP Storwize V7000 config backup file


• Extract backup files from the Storwize V7000 system using PuTTY Secure Copy (PSCP).

• Extracted backup files from Storwize V7000 system:


Figure 11-23. Example of CLI: PSCP Storwize V7000 config backup file

The backup files can be downloaded from the system using pscp and archived in concert with
installation asset protection procedures.
Run configuration backup and download for archiving on a regularly scheduled basis or at a
minimum after each major change to the Storwize V7000 configuration (such as defining or
changing volumes, storage pool, or host object mappings).
The content of the configuration backup files can be viewed using a web browser or text processing
tool such as WordPad.
This output is from the backup file extracted from the /tmp directory. It contains a listing of all the
objects defined in the system.
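A minimal sketch of pulling the backup files to a Windows workstation with PSCP; the IP address
and target folder are placeholders, and the -unsafe option permits the server-side wildcard:

C:\> pscp -unsafe superuser@10.6.9.210:/tmp/svc.config.backup.* C:\V7000backup\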


Network: Managing system, service IP addresses, ports and connectivity

• Can be used to configure ports for the system's management IP addresses; only port 1 is used
• Can be used to unconfigure a service IP address by clearing the IPv4 or IPv6 setting. Set the
service IP address for each node in the system. Only the superuser ID is authorized for access
• Can be used to modify iSCSI connections, host attachment, and remote copy
• Use to configure settings for the system to attach to iSCSI-attached hosts
• Display the connectivity between nodes and other storage systems and hosts that are attached
through the Fibre Channel network
• Use to specify specific ports to prevent communication between nodes in the local system or
between nodes in a remote-copy partnership


Figure 11-24. Network: Managing system, service IP addresses, ports and connectivity

Use the Network panel to manage the management IP addresses for the system, service IP
addresses for the nodes, and iSCSI and Fibre Channel configurations. The system must support
Fibre Channel or Fibre Channel over Ethernet connections to your storage area network (SAN).
• Management IP addresses can be defined for the system. The system supports one to four IP
addresses. You can assign these addresses to two Ethernet ports and their backup ports.
Multiple ports and IP addresses provide redundancy for the system in the event of connection
interruptions.
• The service IP addresses are used to access the service assistant tool, which you can use to
complete service-related actions on the node. All nodes in the system have different service
addresses. A node that is operating in service state does not operate as a member of the
system.
• Use the Ethernet ports panel to display and change how Ethernet ports on the system are being
used.
• From the iSCSI panel, you can configure settings for the system to attach to iSCSI-attached
hosts.
• You can use the Fibre Channel ports panel in addition to SAN fabric zoning to restrict
node-to-node communication. You can specify specific ports to prevent communication between
nodes in the local system or between nodes in a remote-copy partnership. This port specification
is called Fibre Channel port masking.


Management IP address redundancy


• Configure node backup:
ƒ Ethernet port 2 (optional) can be used as an alternate system management
interface.
• The chsystemip command can be used to set or change the IP
address of either Ethernet port.
ƒ Use the lssystemip command to list the system IP addresses.


Figure 11-25. Management IP address redundancy

Storwize V7000 system management occurs across Ethernet connections using the system
management IP address owned by the configuration node. Each node has two Ethernet ports and
both can be used for system management. Ethernet port 1 must be configured. Ethernet port 2 is
optional and can be used as an alternate system management interface.
The configuration node is the only node that activates the system management IP address and the
only node that receives system management requests. If the configuration node fails, another node
in the system becomes the configuration node automatically and the system management IP
addresses are transferred during configuration node failover.
If the Ethernet link to the configuration node fails (or some other component failure related to
the Ethernet network occurs), the event is unknown to the Storwize V7000, so no configuration
node failover is triggered. Therefore, configuring Ethernet port 2 as a management interface
allows access to the system using an alternate IP address.
Use Settings > Network > Management IP Addresses to configure port 2 as the backup system
IP management address. You can use the alternate IP address to access the Storwize V7000
management GUI and CLI.
The chsystemip command is used to set or change the IP address of either Ethernet port. Most of
the commands containing 'cluster' have been replaced with 'system' equivalents; for example, the
svcinfo lsclusterip command has been replaced with the lssystemip command, which lists the
system IP addresses.
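
A minimal sketch of configuring port 2 as the alternate management interface and then listing the
result; the addresses are placeholders:

IBM Storwize:V009B:superuser>chsystemip -clusterip 10.6.9.211 -gw 10.6.9.1 -mask 255.255.255.0 -port 2
IBM Storwize:V009B:superuser>lssystemip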


Notifications
• Configure the Storwize V7000 to alert the user and IBM when new events
are added to the system.
• Can choose to receive alerts about:
ƒ Errors (for example, hardware faults inside the system)
ƒ Warnings (errors detected in the environment)
ƒ Info (for example, asynchronous progress messages)
ƒ Inventory (email only)
• Alerting methods are:
ƒ SNMP traps
ƒ Syslog messages
ƒ Email call home
• Call Home to IBM is performed using email.
ƒ Will send Errors and Inventory back to an IBM email address to automatically open PMRs.
ƒ IBM will call the customer.


Figure 11-26. Notifications

The Storwize V7000 uses Simple Network Management Protocol (SNMP) traps, syslog messages,
and Call Home email to notify you and the IBM Support Center when significant events are
detected. Any combination of these notification methods can be used simultaneously.
Notifications are normally sent immediately after an event is raised. However, some events can
occur because of service actions that are being performed. If a recommended service action is
active, notifications for these events are sent only if the events are still unfixed when the
service action completes.


Notifications: Email
• Call Home support is
initiated for the following
reasons or types of data:
ƒ Problem or event
notification: Data is sent
when there is a problem or
event that might require the
attention of IBM service
personnel.
ƒ Inventory information: A
notification is sent to provide
the necessary status and
hardware information to IBM
service personnel.


Figure 11-27. Notifications: Email

The Call Home feature enables electronic transmission of operational and error-related data to
IBM and other users through a Simple Mail Transfer Protocol (SMTP) server connection, in the form
of an event notification e-mail. Call Home automatically notifies IBM service personnel when
errors occur in the hardware components of the system, or sends data for error analysis and
resolution. Configuring Call Home reduces the response time for IBM Support to address issues.
Configure an SMTP server to be able to send e-mails. The SMTP server must allow the relaying of
e-mails from the Storwize V7000 system IP address.
Click Settings > Notifications, then select Email and Enable Notifications to configure the email
settings, including contact information and email recipients. A test function can be invoked to verify
communication infrastructure.
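
As a sketch, the equivalent CLI configuration might look like the following; the server address,
contact details, and recipient address are placeholders, and parameter names should be verified
against your code level:

IBM Storwize:V009B:superuser>mkemailserver -ip 10.6.9.50
IBM Storwize:V009B:superuser>chemail -reply storadmin@example.com -contact "Jane Doe" -primary 5551234 -location "Lab B"
IBM Storwize:V009B:superuser>mkemailuser -address callhome@example.com -err on -inventory on
IBM Storwize:V009B:superuser>startemail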


Notifications: SNMP
• Standard protocol for managing networks and exchanging messages.
• Identify servers or managers using Settings > Notifications > SNMP.
ƒ SNMP server can be configured to receive all or a subset of event types.
í Up to six SNMP servers can be configured.
ƒ Use the MIB (Management Information Base) to read and interpret these Storwize V7000 events.
í Available from the Storwize V7000 support website.

Figure 11-28. Notifications: SNMP

The Simple Network Management Protocol (SNMP) is a standard protocol for managing networks
and exchanging messages. The system can send SNMP messages that notify personnel about an
event. You can use an SNMP manager to view the SNMP messages that the system sends. Up to
six SNMP servers can be configured.
You can also use the Management Information Base (MIB) file for SNMP to configure a network
management program to receive SNMP messages that are sent by the system. This file can be
used with SNMP messages from all versions of the software to read and interpret these Storwize
V7000 events.
To configure the SNMP server, identify the management server IP address, remote server port
number, and community name so that the Storwize V7000 generated SNMP messages can be viewed
from the identified SNMP server. Each Storwize V7000 detected event is assigned a notification
type of either error, warning, or information. The SNMP server can be configured to receive all or
a subset of these types of events.
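
A minimal sketch of defining an SNMP server from the CLI; the IP address and community name are
placeholders, and parameter names should be verified against your code level:

IBM Storwize:V009B:superuser>mksnmpserver -ip 10.6.9.60 -community public -error on -warning on -info off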


Notifications: Syslog messages


• Standard protocol for forwarding log messages from a sender to a receiver on an IP network
(IPv4 or IPv6)
• Identify servers or managers using Settings > Notifications > Syslog
ƒ Up to a maximum of six syslog servers
ƒ Send syslog messages that notify personnel about an event
ƒ System uses the User Datagram Protocol (UDP) to transmit the syslog message


Figure 11-29. Notifications: Syslog messages

The syslog protocol is a standard protocol for forwarding log messages from a sender to a receiver
on an IP network. Click Settings > Notifications, then select Syslog to identify a syslog server.
The IP network can be either IPv4 or IPv6. The system can send syslog messages that notify
personnel about an event. Syslog error event logging is available to enable the integration of
Storwize V7000 events with an enterprise's central management repository.
The system can transmit syslog messages in either expanded or concise format. You can use a
syslog manager to view the syslog messages that the system sends. The system uses the User
Datagram Protocol (UDP) to transmit the syslog message. You can specify up to a maximum of six
syslog servers.
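
A minimal sketch of defining a syslog server from the CLI; the IP address is a placeholder, and
options should be verified against your code level:

IBM Storwize:V009B:superuser>mksyslogserver -ip 10.6.9.61 -error on -warning on -info on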


Security: Remote Authentication and encryption


• Facilitates centralized management of users at the domain controller
ƒ Only needs to define the LDAP server and user group (not users) on each system
ƒ Select Settings > Security > Remote Authentication to configure and
manage remote authentication services
í Uses existing passwords and user groups that are defined on the remote service
• System supports encryption of data at rest
ƒ Optional feature that provides protection against the potential exposure of
sensitive user data and user metadata that is stored on discarded, lost, or
stolen storage devices


Figure 11-30. Security: Remote Authentication and encryption

Use the Security panel to configure and manage remote authentication services and encryption
settings on the system.
The system supports two methods of enhanced security for the system. With remote authentication
services, an external authentication server can be used to authenticate users to system data and
resources. User credentials are managed externally through various supported authentication
services, such as LDAP.
When you configure remote authentication, you do not need to configure users on the system or
assign additional passwords. Instead you can use your existing passwords and user groups that
are defined on the remote service to simplify user management and access, to enforce password
policies more efficiently, and to separate user management from storage management.
For availability, multiple LDAP servers can be defined. These LDAP servers must all be the same
type (for example MS AD). Authentication requests are routed to those LDAP servers marked as
Preferred unless the connection fails or a user name isn’t found. Requests are distributed across all
the defined preferred LDAP servers in round robin fashion for load balancing.
Additionally the system supports encryption of data stored on drives that are attached to the
system. To use encryption, you must obtain an encryption license and configure encryption to be
used on the system. Only Storwize V7000 Gen2 systems support encryption.

The system provides optional encryption of data at rest, which protects against the potential
exposure of sensitive user data and user metadata that is stored on discarded, lost, or stolen
storage devices. Encryption can only be enabled and configured on enclosures that support
encryption. Encryption of system data and system metadata is not required, so system data and
metadata are not encrypted.


Update licensed capacities


• Base software license provided with your system includes the use of its basic
functions
• Storwize V7000 supports two licensing models
ƒ Standard Edition
í Based on the total number of terabytes (TB) that the system is licensed for virtualization,
FlashCopy and remote-copy functions, and IBM Real-time Compression. The Real-time
Compression limit is the total virtual capacity of all the compressed volumes in the
system
ƒ Entry Edition
í Based on the number of physical disks that are licensed for virtualization and whether
remote-copy and FlashCopy functions are licensed


Figure 11-31. Update licensed capacities

Click Settings > System > Licensing to update Storwize V7000 licensed capacities. The base
software license provided with your system includes the use of its basic functions; however, the
following additional licenses can be purchased to expand the capabilities of your system.
Administrators are responsible for purchasing additional licenses and configuring the systems
within the license agreements, which includes configuring the settings of each licensed function on
the system.
The system supports capacity-based licensing that grants you a number of terabytes (TB) for
additional licensed functions. Administrators are responsible for managing use within the terms of
the existing licenses and for purchasing additional licenses when existing license settings are no
longer sufficient for the needs of their organization. In addition, the system also issues warning
messages if the capacity used for licensed functions is above 90% of the license settings that are
specified on the system.
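
As a sketch, the licensed capacities can also be set and verified from the CLI; the TB values are
placeholders:

IBM Storwize:V009B:superuser>chlicense -virtualization 50
IBM Storwize:V009B:superuser>chlicense -compression 20
IBM Storwize:V009B:superuser>lslicense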


System and drives software upgrade


• Upgrade system software level
ƒ Automatic
ƒ Manual
• Upgrade internal drives
ƒ Update individual drives or update all drives on the system
ƒ Does not support updating external drives that are attached
• Perform upgrades at the lowest utilization of the systems
• Download software package
ƒ Must be registered as an IBM user to be eligible for downloads (registration
requires Machine Type and serial number)
ƒ Have a current Storwize V7000 configuration backup on file


Figure 11-32. System and drives software upgrade

You can upgrade the system to the latest software and internal drive firmware levels by using the
management GUI System Status panel, Actions button. Select the code fix pack as well as the
software Upgrade Test utility. In addition to the code itself, additional documentation such as
release notes, flashes, and hints and tips can be accessed from this page.
The download information links to the code compatibility cross reference (or use web search engine
to directly access the page).
Review the cross reference as upgrading from older code levels to v7 might require an intermediate
upgrade to v6 first.
The site also provides links to information that is valuable for planning the upgrade. Read the
information carefully and act accordingly. It is recommended to perform the upgrade during off peak
load times. The updated node is unavailable during the upgrade and therefore the other node must
handle the complete load. All this information can be used to create a good plan for the upgrade.
It is recommended that you perform upgrades at the lowest utilization of the systems, such as over
the weekend, which most administrators already do. Because nodes are in cache write-through mode
during the upgrade, none of the nodes has its cache enabled and all writes go directly to the
actual disk drives. So be aware that there can be some performance impact with no available
cache.

Basically, the upgrade proceeds one I/O group at a time. However, once the system is halfway
through the process, the cache is turned off for the remainder of the upgrade.


Software packages download


• Download the latest software packages from the IBM Support website to your workstation
ƒ Spectrum Family software
ƒ Software Upgrade Test Utility
ƒ Drive firmware
• Download and review the Readme files


Figure 11-33. Software packages download



Software Upgrade Test Utility overview


• Software Upgrade Test Utility is used as a preventive measure.
ƒ Software checks for known issues that can prevent a storage device software
upgrade
ƒ Non-disruptive - does not require nodes to be restarted
ƒ No interruption to host I/O
• Download the utility software from Fix Central along with the system upgrade
code.

Best practice: To minimize potential issues that might arise during the installation of the upgrade package, resolve all
unfixed errors in the Storwize V7000 system event log prior to the upgrade activity.


Figure 11-34. Software Upgrade Test Utility overview

The Storwize V7000 Upgrade Test utility tests for known issues that might prevent a software
upgrade from completing successfully. The utility supports Storwize V7000, Storwize V5000, IBM
Flex System V7000, Storwize V3500, and Storwize V3700 software upgrades.
The installation and usage of this utility is non-disruptive and does not require any nodes to be
restarted, so there is no interruption to host I/O. The utility will only be installed on the current
configuration node. Download the utility along with the system upgrade code. The utility is installed
on the system in the same manner as the system upgrade package. After installation of the utility,
the Storwize V7000 GUI automatically runs it and displays its output.
The utility can be run as many times as necessary on the same system to perform a readiness
check in preparation for a software upgrade. We strongly recommend running this utility for a final
time immediately prior to applying the upgrade, making sure that there have not been any new
releases of the utility since it was previously downloaded.
To download the Software Upgrade Test Utility, navigate to Fix Central, choose your specific
product and download from the “Select fixes” page for your product.
Ensure that you have no unfixed errors in the log and that the system date and time are correctly
set. Start the fix procedures, and ensure that you fix any outstanding errors before you attempt to
concurrently update the code.
For systems running Storwize V7000 code levels prior to v6, the CLI is used to invoke the utility.


Software upgrade: Automatic versus manual

Automatic:
• Preferred method
• Upgrades each node in the system systematically
ƒ Configuration node is updated last
ƒ As each node is restarted, there might be some degradation in the maximum
I/O rate during the update

Manual:
• Provides more flexibility
• Remove node from system
• Upgrade software on node
• Return node to the system
ƒ Configuration node is updated last


Figure 11-35. Software upgrade: Automatic versus manual

There are two update methods in which to upgrade the system software code: Automatic or
Manual.
The automatic method is the preferred procedure for upgrading software on nodes. During the
automatic upgrade process, the system will upgrade each node in the system one at a time, and
the new code is staged on the nodes, before upgrading the configuration node.
While each node restarts, there might be some degradation in the maximum I/O rate that can be
sustained by the system. After all the nodes in the system are successfully restarted with the new
software level, the new software level is automatically committed.
To provide more flexibility in the upgrade process, you can also upgrade each node manually.
During this manual procedure, the upgrade is prepared, you remove a node from the system,
upgrade the software on the node, and return the node to the system. As with the automatic
upgrade, you must still upgrade all the nodes in the clustered system. Repeat all the steps in this
procedure for each node that you upgrade that is not a configuration node.


Launch Update System wizard


• Verify the current GUI software
level details - select Settings >
System > Upgrade System.
• To initiate CCL (Concurrent Code Load), click Update and browse to upload the
test utility and the update package.
• Click Update to proceed.


Figure 11-36. Launch Update System wizard

To view the current firmware level running, use the Settings menu and select System > Upgrade
Software. This action can also be performed from the Monitoring > System view, select Action
and select Update System.
The Update System software window also displays fetched information on the latest software
version available for update. The displayed version may not always be the recommended version,
so always refer to IBM Fix Central for the latest tested version available.
To initiate the upgrade, click the Update button and browse to the location of the downloaded
software update package. Once you have selected both the files, click the Update option to
proceed with the system install.
Select the method in which to perform the update: automatic or manual. The process begins with
uploading the software code package to the system.


Update System: Running Update Test Utility


• The Update Test Utility generates the svcupgradetest command, which invokes
the utility to assess the current system environment and report errors and
warnings of potential issues.
• If issues are detected, the system pauses and provides a Read more link to
help address the issues.
ƒ Once issues are fixed, click Resume to continue. Fixes cannot be applied
while upgrading.
• The Test Utility can be run multiple times using the CLI.
• Running the Test Utility individually is not supported from the GUI.


Figure 11-37. Update System: Running Update Test Utility

Once the Update Test Utility has been installed, the Update System wizard prompts for the code
version to be checked. It will then generate an svcupgradetest command that invokes the utility to
assess the current system environment.
The purpose of running the test utility is to verify that no errors or warnings of potential
issues are reported and that the system is ready to update. If any issue is discovered by the test
utility, the firmware update stops and provides a Read more link to help address the detected
issue. The Update Test Utility can be run from the CLI as many times as necessary on the same
system to perform a readiness check in preparation for a software upgrade. We suggest that you run
this utility a final time immediately prior to applying the upgrade. The Update Test Utility
cannot be run as an individual tool using the management GUI.
After the Upgrade Test utility output is reviewed and, if necessary, all issues have been
resolved, you can click Resume to continue with the update.
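
As a sketch, the utility can also be invoked directly from the CLI after it has been installed;
the target code version is a placeholder:

IBM Storwize:V009B:superuser>svcupgradetest -v 7.6.1.0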


Update System: Node is taken offline during upgrade


• Nodes can run new code concurrently with normal system activity.
• Process begins with the first node of each I/O group.
ƒ Configuration node is updated after half of the nodes are updated.
í Issue the CLI lsnode -delim , command to verify the configuration node of the system.
ƒ Each node is taken offline for upgrade (one node at a time).
í GUI generates the applysoftware command to apply the software code.
í Health Status flags the node condition with a status alert.
ƒ Once the update is complete, the node restarts and the new code is applied.
• Administrator can issue the svqueryclock command to view update duration time.

Figure 11-38. Update System: Node is taken offline during upgrade

During the update process, the first node in the IO group is taken offline. Once the code level is
verified by the system, the GUI generates the applysoftware command to apply the system
software code to the updating node. The system Health Status pod also flags the condition as a
node status alert. Although Storwize V7000 supports Concurrent Code Load, you can expect
performance degradation as each node is taken offline in turn while the software is being installed.
You can issue the CLI lsnode command to verify which node is the configuration node. In a
four-node system, the configuration node isn't typically upgraded until after half of the nodes of
the system have been upgraded.
The update process can take some time to complete. Once the node that was being updated has been
restarted with the upgraded software, it is placed back online with an updated code level. The
next node in line then repeats the process and is taken offline for the software upgrade.
If you are updating multiple systems, the software upgrade should be allowed to complete on one
system before it is started on the other system. Do not upgrade both systems concurrently.
The administrator can also issue the svqueryclock command to view a duration time reference for
the particular upgrade in process.
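
A minimal sketch of both checks from the CLI; the config_node column of lsnode identifies the
configuration node, and output columns are abbreviated here for illustration:

IBM Storwize:V009B:superuser>lsnode -delim ,
id,name,...,config_node,...
1,node1,...,no,...
2,node2,...,yes,...
IBM Storwize:V009B:superuser>svqueryclock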


Update System: Host path discovery


• Node being upgraded does not participate in I/O activity.
ƒ I/O activity is redirected to the other node in the I/O group by host multipath
software.
ƒ Each node in the I/O group is held in cache write-through mode until all
nodes in the system are updated.
ƒ SDD datapath query device command can be used to monitor path status.
• Example:
ƒ NODE1 is offline for update; paths are automatically routed to NODE2 for I/O
activities.
(Diagram: the host application's paths to the preferred node NODE1 are offline; alternate paths
to NODE2, the configuration node, are used to access the volume.)


Figure 11-39. Update System: Host path discovery

A node being updated can no longer participate in IO activity in the IO group. While the node being
upgraded is offline, the other node in the IO group operates in write-through mode. As a result, all
IO activity for the volumes in the IO group is directed to the other node in the IO group by the host
multipathing software. Ensure that hosts with IO activity have access to all configured paths (use
multipath driver interfaces such as the SDD datapath query device command for verification).
From the Windows host perspective, the SDD datapath query device command can be used to
monitor path status. In this example, NODE1 is offline. All paths to NODE1 are expected to be
unavailable, therefore any IO activities associated with these unavailable paths would be failed
over to other paths of the device. The SDD software automatically routes IO using paths to the
alternate node NODE2. SDD balances IO across all paths to the alternate node when there are no
paths to the preferred node.
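
As a sketch, the path check from a Windows host running SDD; device output is omitted here, and
path counts vary by configuration:

C:\> datapath query device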


Update System: Host path discovery


• Code pauses for a 30-minute (estimated) delay between each node upgrade.
ƒ Allows host multipath software to rediscover paths to the node being upgraded.
• Each node takes about 15 to 60 minutes to upgrade.
ƒ You cannot interrupt the upgrade until all nodes in the system are upgraded.
• After each node has been updated, the system is updated.

During the upgrade, the GUI might go offline temporarily as the V7000 Controllers
are restarted during the upgrade. Refresh your browser to reconnect.


Figure 11-40. Update System: Host path discovery

There is a thirty-minute delay or time-out built in between node upgrades. This delay allows time for
the host multipathing software to rediscover paths to the nodes that are upgraded, so that there is
no loss of access when another node in the IO group is upgraded.
You cannot interrupt the upgrade and switch to installing a different software level. Each node in
the system can take from 15 to 60 minutes to upgrade (typically 30 minutes). You cannot invoke the
new functions of the upgraded code until all member nodes are upgraded and the upgrade is
committed. The system takes a conservative approach to ensure that paths are stabilized before
proceeding.
After completing the update to the last node, a system upgrade process is performed.


Upgrade event log entries


• During the upgrade process, the event log records entries based on system,
node, and io_grp status.


Figure 11-41. Upgrade event log entries

During the upgrade process, entries are made in the event log that indicate the node status during
the upgrade and any failures that might have occurred during the upgrade process. The log also
indicates discovery of I/O ports.


Update System: Complete/All nodes upgraded


• After all nodes have been upgraded then the system advances to the
new code level - all paths to devices have been recovered.
• Reissue the CLI lsnode -delim , command to view the new
configuration node of the system.


Figure 11-42. Update System: Complete/All nodes upgraded

The new system code level is displayed in the Upgrade System view. The status indicates either
that the system is running the most up-to-date code level or that a new software update is
available.
You can reissue the CLI lsnode command to view the new configuration node of the system.
Because of the operational limitations that occur during the update process, the code update is a
user task. If you have problems with an update and must call for support, see the topic about how to
get information, help, and technical assistance.


Update drive firmware using GUI or CLI


• Depending on the system software version, you can update drives using:
ƒ Management GUI
ࡳ Select Pools > Internal Storage > Actions > Upgrade or right-click a drive and select
Upgrade.
ࡳ Select Monitoring > System > Actions > Update > Drive.
ࡳ Browse to the directory of the drive microcode file.

ƒ CLI command:
ࡳ Multiple drives can be upgraded per invocation.
lsdependentvdisks -drive drive_id
applydrivesoftware -file name -type firmware -drive drive_id

Figure 11-43. Update drive firmware using GUI or CLI

Depending on the version of software you are running on your system, you can upgrade a
solid-state drive (SSD) by downloading and applying firmware updates using the management GUI
or using the CLI.
The management GUI allows you to update individual drives or update all drives that have available
updates.
Depending on the number of drives and the size of the system, drive updates can take up to 10
hours to complete.
You can monitor the progress of the update, using the management GUI Running Tasks icon and
then click Drive Update Operations. You can also use the Monitoring > Events panel to view any
completion or error messages that are related to the update.
There are some codes in which the drive upgrade procedure is supported only by using the CLI.
Using scp or pscp, copy the firmware upgrade file and the Software Upgrade Test Utility package to
the /home/admin/upgrade directory by using the management IP address. Next, run the
applydrivesoftware command. You must specify the firmware upgrade file, the firmware type,
and the drive ID. To apply the upgrade even if it causes one or more volumes to go offline, specify
the -force option.
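
A sketch of the CLI path, combining the file copy and the firmware apply; the firmware file name
and drive ID are placeholders:

C:\> pscp IBM2076_DRIVE_20160801 superuser@10.6.9.210:/home/admin/upgrade/
IBM Storwize:V009B:superuser>lsdependentvdisks -drive 0
IBM Storwize:V009B:superuser>applydrivesoftware -file IBM2076_DRIVE_20160801 -type firmware -drive 0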


SSD drive conditions


• A drive is not updated if it meets any of the following conditions:
ƒ The drive is offline or the drive is failed.
ƒ The RAID array to which the drive belongs is not redundant.
ƒ The drive firmware is the same as the update firmware.
ƒ The drive has dependent volumes that will go offline during the update.
ƒ The drive is used as a boot drive for the system.


Figure 11-44. SSD drive conditions

Manufacturers sometimes need to release firmware updates later on to address technical issues and
bugs that are revealed once the SSDs are sold into the market. However, a firmware update can also
offer performance enhancements along with better host system compatibility and drive reliability.
All drives being updated must be in good standing. If you purchased drives some time ago, chances
are you will need to update the shipped firmware to a newer version, provided the drive does not
meet any of the conditions listed above.


List drive details: SAS_SSD drive example


IBM Storwize V009:V009B1-admin>lsdrive 0
id 0
status online
error_sequence_number
use member
UID 5000a7203002dedd
tech_type sas_ssd
capacity 278.9GB
block_size 512
vendor_id IBM-207x
product_id HK230041S (model and part numbers)
FRU_part_number 85Y5861
FRU_identity 11S49Y7422YXXXP109U02P
RPM
firmware_level 291E (installed firmware level)
FPGA_level 0B
mdisk_id 3
mdisk_name mdisk3
member_id 0
enclosure_id 1
slot_id 23
node_id
node_name
quorum_id
port_1_status online
port_2_status online
(Output from a previous release, for illustration purposes only)


Figure 11-45. List drive details: SAS_SSD drive example

Here is an example of the lsdrive command that displays SSD drive attributes.


GUI Preferences: Navigation and login message


• Animate the GUI navigation menu.
ƒ Must re-log in for changes to take effect.

Without animation

With animation

• Create a user message to be displayed at the time of login to the GUI or CLI.


Figure 11-46. GUI Preferences: Navigation and login message

A new code level typically includes GUI enhancements, and sometimes the paths to display some of
the objects are reorganized.
You can enable animated navigation menu selections, which are displayed larger. This requires the
user to log in again.
Create a login message to be displayed to anyone logging in to the GUI or to a CLI session.


GUI preferences: General


• Modify GUI preferences
ƒ Clear GUI objects to restore defaults
ƒ Change the default timeout
ƒ Specify a web address to be used as the Information Center
í Defaults to IBM latest version of the Information Center
• Access to in-depth documentation on the system and its capabilities
ƒ Refresh GUI cache synchronizes the GUI with the system and triggers an automatic
browser refresh
ƒ Low graphics mode provides non-dynamic menu options for slower connections
í Must re-log in for changes to take effect
ƒ Extent size selection can be enabled during pool creation (disabled by default)


Figure 11-47. GUI preferences: General

While not required, it might be worthwhile to refresh GUI objects using the General section to
change settings that are related to how information is displayed on the GUI.
• Use the Clear button to restore all default preferences.
• Change the default system timeout.
• Your management GUI provides, as a default, the web address of a locally installed version of
the information center. The information center provides in-depth documentation on the Storwize
V7000 system support and capabilities.
• The Refresh GUI Cache option synchronizes the GUI with the system and triggers an automatic
browser refresh.
▪ For those who do not prefer the dynamic menu icons, which can be a little challenging at
times, you can enable the management GUI to operate in low graphics mode. This mode
provides a non-dynamic version of the management GUI for slower connections and menu
selection. Before the management GUI can perform in low graphics mode, you must
re-log in to the management GUI.
▪ When creating a storage pool, the ability to change the extent size is disabled by default.
You must click the Advanced pool settings option to enable this feature.


VMware virtual volumes


• VMware virtual volumes (VVols) is a new feature introduced in IBM
Spectrum Virtualize 7.6.
ƒ Ability to create VVols on Storwize V7000 directly from the VMware vCenter
server
ƒ Must be enabled (default OFF)
ƒ Hosts must be running ESXi version 6.0 or higher to use VVols functionality


Figure 11-48. VMware virtual volumes

VVols is a new feature introduced in IBM Spectrum Virtualize 7.6. This new functionality allows
users to create volumes on IBM Spectrum Virtualize directly from the VMware vCenter server.
Hosts must be running ESXi version 6.0 or higher to use VVols functionality. In addition, the host
must be added to the storage system, with the host-type field set to VVol. You can also enable
VVols on existing hosts by changing the host type to VVol.
The Settings > System > VVols section allows to enable or disable the functionality.
Download the latest publication, Implementing VVOLS on the SVC and Storwize Family to learn
more about the supported feature.


Verify management GUI web browser settings


• The management GUI supports the following web browsers:
ƒ Mozilla Firefox 38
ƒ Mozilla Firefox Extended Support Release (ESR) 38
ƒ Microsoft Internet Explorer (IE) 10 and 11
ƒ Google Chrome 41
• Enable JavaScript in your web browser.
• Enable cookies in your web browser.
• Enable scripts to disable or replace context menus (Mozilla Firefox
only).


Figure 11-49. Verify management GUI web browser settings

To access the management GUI, you must ensure that your web browser is supported and has the
appropriate settings enabled. IBM supports higher versions of the browsers if the vendors do not
remove or disable function that the product relies upon. For browser levels higher than the versions
that are certified with the product, customer support accepts usage-related and defect-related
service requests. If the support center cannot re-create the issue, support might request the client
to re-create the problem on a certified browser version. Defects are not accepted for cosmetic
differences between browsers or browser versions that do not affect the functional behavior of the
product. If a problem is identified in the product, defects are accepted. If a problem is identified with
the browser, IBM might investigate potential solutions or work-arounds that the client can
implement until a permanent solution becomes available.


Clear web browser cache


• Use the web browser Tools option Clear Recent History > Everything to clear
web browser cache, images, buttons, icons, and web page history.


Figure 11-50. Clear web browser cache

There are some situations where refreshing GUI objects won't work: despite reloads, the webpage
might still be using old files from the cache. Therefore, you need to refresh your cache first.
Your browser has a folder in which certain items that have been downloaded are stored for future
use. Graphic images (such as buttons and icons), photos, and even entire web pages are examples of
items that are saved or cached. This method will not only refresh GUI objects, but also clear the
web browser history.


Administrative management topics


• System Monitoring
• Access
• Settings
• Service Assistant (SA)
• IBM Spectrum Storage


Figure 11-51. Administrative management topics

This topic discusses the Service Assistant (SA) interface, including the requirements and procedures to reset the system password.


When to use the Service Assistant


• Use the service assistant in the following situations:
ƒ When you cannot access the system from the management GUI and you cannot
access the Storwize V7000 to run the recommended actions
ƒ When the recommended action directs you to use the service assistant
ƒ Node hardware issues
ƒ Recovery when a node has corrupted data or has lost its configuration data
• SA does not provide support to service expansion enclosures.
• Inappropriate use can cause loss of access to data or even data loss.


Figure 11-52. When to use the Service Assistant

The Service Assistant (SA) interface's primary use is to perform service-related tasks when a node
is in service state or is not yet a member of a system. You should complete service actions on
node canisters only when directed to do so by the fix procedures.
The storage system management GUI operates only when there is an online system. Use the
service assistant if you are unable to create a system or if both node canisters in a control
enclosure are in service state. The node canister might also be in a service state because it has a
hardware issue, has corrupted data, or has lost its configuration data.
The service assistant does not provide any facilities to help you service expansion enclosures.
Always service the expansion enclosures by using the management GUI.
If used inappropriately, the service actions that are available through the service assistant can
cause loss of access to data or even data loss.

Service Assistant interface overview


• Used to service a node for hardware and boot issues, even if the node is not a system
  member
• Service Assistant access:
  ▪ https://<node SA IP>/service
  ▪ https://<cluster IP>/service
• Best practice: Set an SA IP address for each node

(Screen capture: the Service Assistant GUI, accessed at https://<node SA IP>/service; available to superuser only)

Figure 11-53. Service Assistant interface overview

SA can be accessed using the node service IP address with the superuser ID. The Service
Assistant can also be reached using the system IP address with /service appended.
An alternative to using the system GUI to set the service IP address is using the Change Service
IP option of the Service Assistant navigation tree.
To start the application, complete the following steps:
• Start a supported web browser and point it to <service address>/service for the node that
  you want to work on.
• Log on to the service assistant with the superuser password. If you are accessing a new node
  canister, the default password is passw0rd. If the node canister is, or has been, a member of a
  system, use that system's superuser password.
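As a minimal sketch, the service IP address can also be set from the service CLI with the satask chserviceip command; the addresses and the node panel name below are illustrative placeholders, not values from the course environment, and option availability can vary by code level:

   satask chserviceip -serviceip 10.10.10.11 -gw 10.10.10.1 -mask 255.255.255.0 01-1
   # Then browse to https://10.10.10.11/service and log on as superuser

Verify the syntax against the CLI guide for your release before using it on a live system.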

Service assistant basic management information


• The Node Detail section displays data that is associated with the selected node:
  ▪ Status information of all nodes
  ▪ Status of hardware components
  ▪ Identification data (code level, WWPNs)

Figure 11-54. Service assistant basic management information

The Node Detail section displays data that is associated with the selected node:
• The Node tab shows general information about the node canister that includes the node state
and whether it is a configuration node.
• The Hardware tab shows information about the hardware.
• The Access tab shows the management IP addresses and the service addresses for this node.
• The Location tab identifies the enclosure in which the node canister is located.
• The Ports tab shows information about the I/O ports.
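Much of this node detail can also be queried from the service CLI; a hedged sketch (the panel name is an example, and the exact output fields vary by code level):

   sainfo lsservicenodes          # list the nodes that the service assistant can see, with their status
   sainfo lsservicestatus 01-1    # detailed node, hardware, and port information for one node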

SA service-related actions
• Collect logs to create and download a package of files to send to
support personnel.
• Remove the data for the system from a node.
• Recover a system if it fails.
• Install a code package from the support site or rescue the code from
another node.
• Upgrade code on node canisters manually versus performing a
standard upgrade procedure.
• Configure a control enclosure chassis after replacement.
• Change the service IP address that is assigned to Ethernet port 1 for
the current node canister.
• Install a temporary SSH key if a key is not installed and CLI access is
required.
• Restart the services used by the system.

Figure 11-55. SA service-related actions

Listed are a number of service-related actions that can be performed using the Service Assistant interface. Several of these tasks cause the node canister to restart, and it is not possible to maintain the service assistant connection to a node canister while it restarts. If the task runs on the same node canister that the browser is connected to, you lose your connection; reconnect and log on to the service assistant again after the task completes.
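One of the listed actions, collecting logs, is also exposed on the service CLI; a minimal sketch, assuming the satask snap command is available at your code level (the panel name is an example):

   satask snap -dump 01-1    # collect diagnostic data, including the most recent dump, from that node

The resulting package can then be downloaded from the Collect Logs panel of the Service Assistant GUI.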

Recover lost data using the SA T3 recovery

Figure 11-56. Recover lost data using the SA T3 recovery

If system data has been lost from all nodes, an administrator might be directed to remove the system data from the node canisters and to perform a system recovery through the Service Assistant tool. The procedure to recover the entire system is known as T3 recovery. This procedure assumes that the system reported error code 550 or error code 578. To address the issue, perform a service action to place each node in a service state.
For a complete list of prerequisites and conditions for recovering the system, see the following
information:
• “Recover System Procedure” in the Troubleshooting, Recovery, and Maintenance Guide
• Recover System Procedure in the Information Center

Support and email notifications


• IBM Support website
• Sign up to receive IBM Notifications for Storwize storage products to stay current with
  product support information.
  ▪ Click the links in the email to download the latest code directly.
(Screen captures: a web search for the IBM Support Portal for Storwize V7000, and the My Notifications subscription page)

Figure 11-57. Support and email notifications

Visit the IBM Support website http://www.ibm.com/storage/support/2076 to view the latest storage
information. The Downloads page of the Storwize V7000 support website contains documentation
as well as the software code.
Tools such as the Easy Tier STAT utility and the IBM Comprestimator are available from this site. Third-party host software integrations, such as the device driver for VMware VAAI and the Microsoft Windows Volume Shadow Copy Service provider, are also available for download from this site.
You can also subscribe to IBM Notifications for the IBM storage products to stay current with
product support information.
An email containing the URL to the code package website is sent automatically to subscribers
when a new code level is released.

Best practices for proactive problem prevention


• Sign up to receive product technical notifications
  ▪ Enable using the My Notifications function on the web, as detailed at:
    ftp://ftp.software.ibm.com/systems/support/tools/mynotifications/overview.pdf
• Configure Call Home and inventory reporting:
  ▪ Part of the initial easy setup procedure
  ▪ Emails are sent to IBM and to the customer's administrators
  ▪ Error events alert IBM
    - IBM contacts the customer to help with problem resolution
  ▪ The inventory option emails vital product data and software license information to IBM
    - Includes the code levels running: enables proactive notification of customers about
      critical problems with that code level

Figure 11-58. Best practices for proactive problem prevention

To proactively prevent problems, stay informed through IBM's technical support resources for all IBM products and services:
• To receive technical notifications about new code releases, technical updates, or possible
  problems, it is highly recommended to use My Notifications on the IBM website. You can
  register your IBM devices and get the latest information about them.
• To allow a fast reaction to problems, configure the Call Home function to send emails to IBM
  support, and event, informational, inventory, and other emails to the administrators, as
  described in this unit.
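As a hedged sketch of the equivalent CLI configuration (all server addresses, contact details, and the destination address below are illustrative placeholders; the initial easy setup procedure performs these steps for you):

   svctask mkemailserver -ip 192.168.1.25                     # define the SMTP relay to use
   svctask chemail -reply admin@example.com -contact "Jane Doe" -primary 5550100 -location "Lab 1"
   svctask mkemailuser -address callhome@example.com -usertype support -err on -inventory on
   svctask startemail                                         # activate email notification
   svctask testemail -all                                     # send a test notification to all configured users

Use the IBM Call Home destination address that the product documentation gives for your geography rather than the placeholder shown here.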

Administrative management topics


• System Monitoring
• Access
• Settings
• Service Assistant (SA)
• IBM Spectrum Storage
  ▪ Family
  ▪ Solutions

Figure 11-59. Administrative management topics

This topic provides an overview of the IBM Spectrum Storage family and its offerings.

IBM Spectrum Storage: Introduction


• The IBM Spectrum Storage family defines IBM's approach to Software Defined Storage (SDS).
• Software built on proven storage technologies and expertise, with simplified management.

(Diagram: Spectrum Accelerate: agility, speed; Spectrum Control: control, insight; Spectrum Virtualize: efficiency, utilization; Spectrum Scale: elasticity; Spectrum Protect: governance; Spectrum Archive: placement)

Figure 11-60. IBM Spectrum Storage: Introduction

The IBM Spectrum Storage family is the industry’s first software family based on proven
technologies and designed specifically to simplify storage management, scale to keep up with data
growth, and optimize data economics. It represents a new, more agile way of storing data, and
helps organizations prepare themselves for new storage demands and workloads. The software
defined storage solutions included in the IBM Spectrum Storage family can help organizations
simplify their storage infrastructures, cut costs, and start gaining more business value from their
data.
IBM Spectrum Storage provides the following benefits:
▪ Simplify and integrate storage management and data protection across traditional and new
applications
▪ Deliver elastic scalability with high performance for analytics, big data, social, and mobile
▪ Unify siloed storage to deliver data without borders with built-in hybrid cloud support
▪ Optimize data economics with intelligent data tiering from flash to tape and cloud
▪ Build on open architectures that support industry standards that include OpenStack and
Hadoop

IBM Spectrum Storage family


A family of storage management and optimization software, based on technology from existing IBM products:

  IBM Spectrum Control:    Tivoli Storage Productivity Center (TPC) and the management layer of
                           Virtual Storage Center (VSC)
  IBM Spectrum Protect:    Tivoli Storage Manager (TSM)
  IBM Spectrum Archive:    Linear Tape File System (LTFS)
  IBM Spectrum Virtualize: SAN Volume Controller (SVC)
  IBM Spectrum Accelerate: software from the XIV System
  IBM Spectrum Scale:      Elastic Storage (GPFS)

(Diagram note: the family spans any storage, including flash systems, deployed on private, public, or hybrid cloud)
Figure 11-61. IBM Spectrum Storage family

The IBM Spectrum Storage family comprises the following products:


IBM Spectrum Control
IBM Spectrum Control provides efficient infrastructure management for virtualized, cloud, and
software-defined storage to simplify and automate storage provisioning, capacity management,
availability monitoring, and reporting.
The functionality of IBM Spectrum Control is provided by IBM Data and Storage Management
Solutions.
IBM Spectrum Protect
IBM Spectrum Protect enables reliable, efficient data protection and resiliency for software-defined,
virtual, physical, and cloud environments.
The functionality of IBM Spectrum Protect is provided by IBM Backup and Recovery Solutions.
IBM Spectrum Archive
IBM Spectrum Archive enables you to automatically move infrequently accessed data from disk to
tape so you can lower costs while retaining ease of use and without the need for proprietary tape
applications.
The functionality of IBM Spectrum Archive is provided by IBM Linear Tape File System™.

IBM Spectrum Virtualize
IBM Spectrum Virtualize is an industry-leading storage virtualization product that enhances existing
storage to improve resource utilization and productivity to achieve a simpler, more
scalable and cost-efficient IT infrastructure.
The functionality of IBM Spectrum Virtualize is provided by IBM SAN Volume Controller.
IBM Spectrum Accelerate
IBM Spectrum Accelerate is a software-defined storage solution born of the proven XIV integrated storage offering. It is designed to help speed the delivery of data across the organization and to add extreme flexibility to cloud deployments.
IBM Spectrum Accelerate delivers hotspot-free performance, easy management scaling, and proven enterprise functionality, such as advanced mirroring and flash caching, on different deployment platforms.
IBM Spectrum Scale
IBM Spectrum Scale is a proven high-performance data and file management solution that can
manage over one billion petabytes of unstructured data. Spectrum Scale redefines the economics
of data storage using policy-driven automation: as time passes and organizational needs change,
data can be moved back and forth between flash, disk and tape storage tiers without manual
intervention.
The functionality of IBM Spectrum Scale is delivered by IBM General Parallel File System (GPFS), code named Elastic Storage.

IBM Spectrum Storage solutions

• Proven technology, open standards, modular adoption


• Brings three key components required to begin the transformation of a
traditional datacenter to an agile, cloud-like environment
  ▪ Visibility
  ▪ Control
  ▪ Automation

Figure 11-62. IBM Spectrum Storage solutions

The benefits at a high level:

IBM Spectrum Control: analytics-driven data management to reduce costs by up to 50 percent
IBM Spectrum Protect: optimized data protection to reduce backup costs by up to 38 percent
IBM Spectrum Archive: fast data retention that reduces TCO for active archive data by up to 90 percent
IBM Spectrum Virtualize: virtualization of mixed environments that stores up to 5x more data
IBM Spectrum Accelerate: enterprise storage for cloud, deployed in minutes instead of months
IBM Spectrum Scale: high-performance, highly scalable storage for unstructured data

Help and technical assistance

Website: URL

Directory of worldwide contacts: http://www.ibm.com/planetwide
Support for Storwize V7000 (2076): www.ibm.com/storage/support/storwize/v7000
Support for IBM System Storage and IBM TotalStorage products: www.ibm.com/storage/support/

(Note from the slide: an IBM user ID is required.)

Figure 11-63. Help and technical assistance

The table lists websites where you can find help, technical assistance, and more information about IBM products. IBM maintains pages on the web where you can get information about IBM products and fee-based services, product implementation and usage assistance, break-and-fix service support, and the latest technical information.

PDF publications

Title: Order number

IBM Storwize V7000 Gen2 Quick Installation Guide: GI13-xxxx
IBM Storwize V7000 Quick Installation Guide: GC27-2290
IBM Storwize V7000 Expansion Enclosure Installation Guide, Machine type 2076: GC27-4234
IBM Storwize V7000 Troubleshooting, Recovery, and Maintenance Guide: GC27-2291
Storwize V7000 Gen2 Installation Poster: SC27-5923
IBM Systems Safety Notices: G229-9054
IBM Storwize V7000 Read First Flyer: GC27-2293
IBM System Storage SAN Volume Controller and IBM Storwize V7000 Command-Line Interface User's Guide: GC27-2287
IBM Statement of Limited Warranty (2145 and 2076): Part number 4377322
IBM License Agreement for Machine Code: SC28-6872 (contains Z125-5468)

Figure 11-64. PDF publications

This table lists PDF publications that are also available in the information center. Click the number
in the “Order number” column to be redirected.
• IBM Storwize V7000 Gen2 Quick Installation Guide: This guide provides instructions for
unpacking your shipping order and installing your system. The first of three chapters describes
verifying your order, becoming familiar with the hardware components, and meeting
environmental requirements. The second chapter describes installing the hardware and
attaching data cables and power cords. The last chapter describes accessing the management
GUI to initially configure your system.
• IBM Storwize V7000 Quick Installation Guide: This guide provides detailed instructions for
unpacking your shipping order and installing your system. The first of three chapters describes
verifying your order, becoming familiar with the hardware components, and meeting
environmental requirements. The second chapter describes installing the hardware and
attaching data cables and power cords. The last chapter describes accessing the management
GUI to initially configure your system.
• IBM Storwize V7000 Expansion Enclosure Installation Guide, Machine type 2076: This guide
provides instructions for unpacking your shipping order and installing the 2076 expansion
enclosure for the Storwize V7000 system.

• IBM Storwize V7000 Troubleshooting, Recovery, and Maintenance Guide: This guide describes
how to service, maintain, and troubleshoot the Storwize V7000 system.
• Storwize V7000 Gen2 Installation Poster: The installation poster provides an illustrated
sequence of steps for installing the enclosure in a rack and beginning the setup process.
• IBM Systems Safety Notices: This guide contains translated caution and danger statements.
Each caution and danger statement in the Storwize V7000 documentation has a number that
you can use to locate the corresponding statement in your language in the IBM Systems Safety
Notices document.
• IBM Storwize V7000 Read First Flyer: This document introduces the major components of the
Storwize V7000 system and describes how to get started with the IBM Storwize V7000 Quick
Installation Guide.
• IBM System Storage SAN Volume Controller and IBM Storwize V7000 Command-Line
Interface User's Guide: This guide describes the commands that you can use from the Storwize
V7000 command-line interface (CLI).
• IBM Statement of Limited Warranty (2145 and 2076): This multilingual document provides
information about the IBM warranty for machine types 2145 and 2076.
• IBM License Agreement for Machine Code: This multilingual guide contains the License
Agreement for Machine Code for the Storwize V7000 product.

Keywords
• Node hardware replacement
• Worldwide name (WWN)
• Event notifications
• Directory Services
• Email
• SNMP
• Syslog
• Remote authentication
• Support package
• Upgrade test utility
• Service assistant IP address
• User group
• Remote user
• System audit log entry
• IBM Spectrum Control
• IBM Spectrum Protect
• IBM Spectrum Archive
• IBM Spectrum Virtualize
• IBM Spectrum Accelerate
• IBM Spectrum Scale

Figure 11-65. Keywords

Listed are keywords that were used in this unit.

Review questions (1 of 3)
1. True or False: The system audit log contains both information and action commands issued
   for the system.

2. True or False: Only the superuser ID is authorized to use the Service Assistant interface.

3. True or False: The Storwize V7000 nondisruptive node hardware replacement can only be
   performed using the CLI.

4. True or False: Host application I/O operations are allowed during Storwize V7000 system
   code upgrades.

Figure 11-66. Review questions (1 of 3)

Write your answers here:

Review answers (1 of 3)
1. True or False: The system audit log contains both information and action commands issued
   for the system.
   The answer is false.

2. True or False: Only the superuser ID is authorized to use the Service Assistant interface.
   The answer is true.

3. True or False: The Storwize V7000 nondisruptive node hardware replacement can only be
   performed using the CLI.
   The answer is false. Storwize V7000 node hardware replacement can also be performed
   using the Service Assistant GUI.

4. True or False: Host application I/O operations are allowed during Storwize V7000 system
   code upgrades.
   The answer is true.
Review questions (2 of 3)
5. True or False: The Storwize V7000 system IP address can be accessed from either Ethernet
   port 1 or port 2 for system management.

6. True or False: Application data is backed up along with system metadata when the
   svcconfig backup command is executed.

7. True or False: The Storwize V7000 configuration backup file can be downloaded using the
   GUI or copied using PuTTY secure copy (PSCP).

8. True or False: The base software license provided with the Storwize V7000 system includes
   the use of its basic functions.

Figure 11-67. Review questions (2 of 3)

Write your answers here:

Review answers (2 of 3)
5. True or False: The Storwize V7000 system IP address can be accessed from either Ethernet
   port 1 or port 2 for system management.
   The answer is false.

6. True or False: Application data is backed up along with system metadata when the
   svcconfig backup command is executed.
   The answer is false.

7. True or False: The Storwize V7000 configuration backup file can be downloaded using the
   GUI or copied using PuTTY secure copy (PSCP).
   The answer is true (see the example after this list).

8. True or False: The base software license provided with the Storwize V7000 system includes
   the use of its basic functions.
   The answer is true.
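As a hedged illustration of the PSCP method from question 7 (the system IP address and target directory are placeholders; on recent code levels the backup files are listed under /dumps on the configuration node):

   pscp -unsafe superuser@192.168.1.50:/dumps/svc.config.backup.* c:\temp\

The -unsafe option is required because a wildcard is used in the remote file specification.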
Review questions (3 of 3)
9. Which IBM Spectrum Storage offering can scale out and support yottabytes of data?

10. (Blank) is an end-to-end enterprise backup solution offered in the IBM Spectrum Storage
    family.

11. What is the IBM Spectrum Storage offering that is delivered as a hybrid cloud offering?

Figure 11-68. Review questions (3 of 3)

Write your answers here:

Review answers (3 of 3)
9. Which IBM Spectrum Storage offering can scale out and support yottabytes of data?
   The answer is IBM Spectrum Scale.

10. (Blank) is an end-to-end enterprise backup solution offered in the IBM Spectrum Storage
    family.
    The answer is IBM Spectrum Protect.

11. What is the IBM Spectrum Storage offering that is delivered as a hybrid cloud offering?
    The answer is IBM Spectrum Control Storage.

Unit summary
• Recognize system monitoring features to help maintain nodes and
components availability
• Evaluate and filter administrative task commands entries that are
captured in the audit log
• Employ system configuration backup and extract the backup files from
the system using the CLI or GUI
• Summarize the benefits of an SNMP, syslog, and email server for
forwarding alerts and events
• Recall procedures to upgrade the system software and drive microcode
firmware to a higher code level
• Identify the functions of Service Assistant tool for management access
• List the benefits of the IBM Spectrum Storage offerings

Figure 11-69. Unit summary

© Copyright International Business Machines Corporation 2012, 2016.
