
Front cover

TotalStorage Productivity Center V3.3 Update Guide

Implement TPC V3.3 on supported platforms

Learn to effectively use new functions

Manage storage subsystems with TPC V3.3

Mary Lovelace
Mathias Defiebre
Harsha Gunatilaka
Curtis Neal
Yuan Xu

ibm.com/redbooks
International Technical Support Organization

TotalStorage Productivity Center V3.3 Update Guide

April 2008

SG24-7490-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page vii.

First Edition (April 2008)

This edition applies to Version 3, Release 3 of TotalStorage Productivity Center (product number 5608-VC0).

© Copyright International Business Machines Corporation 2008. All rights reserved.


Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents

Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
The team that wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

Chapter 1. What is new in TPC V3.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


1.1 TotalStorage Productivity Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Topology Viewer enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 Overview of Topology Viewer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.2 Data Path Explorer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.3 Context Sensitive reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.4 Pin list persistence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.5 Configuration historical analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Group reporting enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Configuration planning and performance management . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.1 Volume Planner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.2 Path Planner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.3 Zone Planner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5 Rollup Reporting for multiple servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.6 TotalStorage Productivity Center for Replication support . . . . . . . . . . . . . . . . . . . . . . . 11
1.6.1 Launching the TPC for Replication GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.6.2 FlashCopy volume relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.7 Extended support for additional host platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.7.1 TotalStorage Productivity Center V3.3 VMWare Extended support . . . . . . . . . . . 12
1.7.2 McData Intrepid 10000 Director (i10k) support . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.7.3 IBM TS3310 Tape Library support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

Chapter 2. Installation of TPC V3.3 on Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15


2.1 TotalStorage Productivity Center installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.1 Installation overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.2 Product code media layout and components . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2 Hardware prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3 Software prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4 Preinstallation steps for Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.1 Verify primary domain name systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4.2 Activate NetBIOS settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.3 Internet Information Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.4.4 Create Windows user ID to install Device server and Data server . . . . . . . . . . . . 21
2.4.5 User IDs and password to be used and defined . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.5 DB2 install for Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.5.1 DB2 installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.5.2 Agent Manager installation for Windows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.6 Installing TotalStorage Productivity Center components . . . . . . . . . . . . . . . . . . . . . . . 57
2.6.1 Verifying the installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.6.2 Installing Data Server, Device Server, GUI, and CLI . . . . . . . . . . . . . . . . . . . . . . 63
2.6.3 Agent deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72



Chapter 3. Installation of TPC V3.3 on Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.1 Preinstallation steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.1.1 Planning considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.1.2 Java SDK or runtime environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.2 Installation of TPC V3.3 on Linux platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.2.1 Installing DB2 Version 8 on Linux platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.2.2 Installation of the Agent Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.2.3 Installing TPC V3.3 database schema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
3.2.4 Installing TPC V3.3 Data Server, Device Server, CLI, and GUI . . . . . . . . . . . . . 118

Chapter 4. TPC V3.3 upgrade methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127


4.1 TPC V3.3 structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
4.2 TotalStorage Productivity Center pre–planning steps . . . . . . . . . . . . . . . . . . . . . . . . . 129
4.3 DB2 FixPak install for Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
4.3.1 Verify the DB2 FixPak installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.3.2 DB2 UDB Fixpak installation on Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4.4 Agent Manager upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
4.4.1 Steps for upgrading Agent Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.4.2 Verifying Agent Manager installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
4.5 TPC V3.3 pre-upgrade process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
4.6 Upgrade TPC V3.3 server components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148

Chapter 5. N Series and NetApp support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157


5.1 Overview of NAS support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
5.1.1 General NAS system requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
5.1.2 NAS monitoring options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
5.2 Windows Data agent configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
5.3 UNIX Data agent configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5.4 Retrieving and displaying data about NAS filer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5.4.1 View NAS Filer from the Topology Viewer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5.4.2 Navigation Tree-based asset reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
5.4.3 Filesystem reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
5.4.4 NAS device quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188

Chapter 6. Brocade Mi10k SMI-S Provider support . . . . . . . . . . . . . . . . . . . . . . . . . . . 189


6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
6.2 Planning the installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
6.2.1 Supported configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
6.2.2 Downloading SMI-S interface code and documentation . . . . . . . . . . . . . . . . . . . 193
6.3 Installing SMI-S Agent. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
6.3.1 Configuring the SMI-S interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
6.3.2 Verifying the connection with TPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
6.4 Monitoring Mi10k through TPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
6.4.1 Running CIMOM discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
6.4.2 Generating reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202

Chapter 7. TPC for Replication support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207


7.1 Verifying the connection with TPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
7.2 Monitoring FlashCopy Relationships through TPC . . . . . . . . . . . . . . . . . . . . . . . . . . . 211

Chapter 8. IBM TS3310 Tape Library support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215


8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
8.2 Adding the SMI-S Agent to TPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
8.2.1 Supported hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216



8.2.2 Verifying the connection with TPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
8.3 Monitoring TS3310 through TPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220

Chapter 9. Topology Viewer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223


9.1 What is new in Topology Viewer V3.3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
9.2 Feature descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
9.2.1 Pin list persistence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
9.2.2 Link to reports / alerts from Topology Viewer . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
9.2.3 Context Sensitive Reporting and Data Path Explorer . . . . . . . . . . . . . . . . . . . . . 229
9.2.4 Configuration Historical Analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
9.2.5 Configuration Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237

Chapter 10. SAN Planner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245


10.1 SAN Planner overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
10.1.1 Volume Planner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
10.1.2 Path Planner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
10.1.3 Zone Planner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
10.1.4 Requirements for SAN Planner. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
10.1.5 SAN Planner panels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
10.1.6 Usage combinations for the three planners . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
10.2 Invoking the planners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
10.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
10.2.2 The SAN Planner Configuration panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
10.3 Using the SAN Planner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
10.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
10.3.2 Invoking the SAN Planner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255

Chapter 11. Enterprise server rollup function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265


11.1 Enterprise server rollup overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
11.2 Preparing for enterprise server rollup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
11.2.1 TPC server probe process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
11.2.2 TotalStorage Productivity Center V3.3 generating rollup reports . . . . . . . . . . . 271

Chapter 12. VMware ESX Server support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277


12.1 VMWare ESX Server overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
12.2 Planning for VMWare configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
12.3 Configuring TPC communication to VMWare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280

Appendix A. Configuring X11 forwarding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295


Preparing the display export . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Preparation of the AIX server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Preparation of the Windows workstation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Setup for the AIX server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306

Appendix B. Worksheets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309


User IDs and passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
Server information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
User IDs and passwords for key files and installation. . . . . . . . . . . . . . . . . . . . . . . . . . 311
Storage device information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
IBM System Storage Enterprise Storage Server/DS6000/DS8000. . . . . . . . . . . . . . . . 312
IBM DS4000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
IBM SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314

Appendix C. Performance metrics in TPC Performance Reports. . . . . . . . . . . . . . . . 315
Performance metric collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
By Storage Subsystem report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
By Controller report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
By I/O Group report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
By Node report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
By Array report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
By Managed Disk Group report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
By Volume report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
By Managed Disk report. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
By Port report - Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
By Port report - Fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347

Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349


IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
How to get Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351



Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.



Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX®, AIX 5L™, DB2®, DB2 Universal Database™, developerWorks®, DS4000™, DS6000™, DS8000™, Enterprise Storage Server®, FlashCopy®, Hypervisor™, IBM®, NetView®, OS/390®, Passport Advantage®, Power PC®, Redbooks®, Redbooks (logo)®, System i™, System z™, System Storage™, Tivoli®, TotalStorage®, WebSphere®, and z/OS®

The following terms are trademarks of other companies:

Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation and/or
its affiliates.

Snapshot, Network Appliance, FilerView, Data ONTAP, NetApp, and the Network Appliance logo are
trademarks or registered trademarks of Network Appliance, Inc. in the U.S. and other countries.

Java, JVM, J2SE, Solaris, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the
United States, other countries, or both.

Active Directory, Microsoft, SQL Server, Windows, and the Windows logo are trademarks of Microsoft
Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.



Preface

IBM® TotalStorage® Productivity Center provides an integrated storage infrastructure


management solution that is designed to allow you to manage every point of your storage
infrastructure, between the hosts through the network and fabric, through to the physical
disks. It can help simplify and automate the management of devices, data, and storage
networks.

IBM TotalStorage Productivity Center V3.3 continues to build on the function provided in prior
releases. This IBM® Redbooks® publication will take you through what is new in TotalStorage
Productivity Center and explain how to implement and use the new function.

The team that wrote this book


This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, San Jose Center. Here we show the team that
wrote this book.

Yuan, Mathias, Mary, Curtis, Harsha

Mary Lovelace is a Consulting IT specialist at the International Technical Support


Organization. She has more than 20 years of experience with IBM in large systems, storage
and Storage Networking product education, system engineering and consultancy, and
systems support. She has written many Redbooks on TotalStorage Productivity Center and
z/OS® storage products.

Mathias Defiebre is an IT Specialist for Proof of Concepts in the ATS Customer Solutions
team in Mainz, Germany. His areas of expertise include setup and demonstration of IBM
System Storage™ and TotalStorage solutions in Open Systems environments. He has
worked at IBM for six years after graduating from the University of Cooperative Education
Mannheim in 2001 with a Bachelor of Science and a German Diploma in Information
Technology Management. Mathias is an IBM Certified Specialist - TotalStorage Networking
and Virtualization Architecture.



Harsha Gunatilaka is a Software Test Engineer in Tivoli® Software. He is currently part of
the IBM TotalStorage Productivity Center test team in Tucson, Arizona. He has experience
with IBM storage products and software. He holds a degree in Management Information
Systems from the University of Arizona.

Curtis Neal is a Senior IT Specialist working for the System Storage Group in San Jose,
California. He has over 25 years of experience in various technical capacities, including
mainframe and open system test, design, and implementation. For the past seven years, he
has led the Open Storage Competency Center, which helps customers and IBM Business
Partners with the planning, demonstration, and integration of IBM System Storage Solutions.

Yuan Xu is a Senior IT Specialist with Advanced Technical Support in China. He has 20 years
of IT experience in software development and open system solution design, implementation,
and support. He joined IBM China in 1999. His areas of expertise include post-sale and
pre-sale support of IBM storage products. He has extensive experience in performance
benchmark testing and proof-of-concept demonstrations with IBM enterprise and midrange
storage, as well as business continuity solutions. He holds a degree in Information
Management from the People's University of China.

Thanks to the following people for their contributions to this project:

Bob Haimowitz
Rich Conway
Sangam Racherla
Yvonne Lyon
International Technical Support Organization

Diana Duan
Paul Lee
Tivoli L2 Technical Support

Roland Cao
Harsha Gunatilaka
Thiha Than
Bill Tuminaro
Sarah Hovey
Tivoli Development

Doug Dunham
Tivoli Storage Management Consultant

Chris Katsura
Miki Walter
Tivoli Information Development

Eric Butler
Almaden Research

Silviano Gaona
Brian Steffler
Brocade



Become a published author
Join us for a two- to six-week residency program! Help write a book dealing with specific
products or solutions, while getting hands-on experience with leading-edge technologies. You
will have the opportunity to team with IBM technical professionals, Business Partners, and
Clients.

Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you
will develop a network of contacts in IBM development labs, and increase your productivity
and marketability.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us!

We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks® in one of the following ways:
򐂰 Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
򐂰 Send your comments in an e-mail to:
redbooks@us.ibm.com
򐂰 Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400


Chapter 1. What is new in TPC V3.3


IBM TotalStorage Productivity Center V3.3 (TPC V3.3) brings together storage device
management, fabric management, and storage resource management into a single
integrated offering and adds innovative new enhancements, including:
򐂰 Enterprise roll-up reports from multiple TPC instances throughout global corporate
environments
򐂰 New analytic capabilities for improved system availability with rapid discovery of
performance problems
򐂰 Comprehensive configuration guidance with intelligent contrast comparisons to help
reduce SAN outages caused by configuration changes
򐂰 New storage planning wizards, which offer intelligent configuration guidance via preset
best-practices policies
򐂰 Dynamic security and violation alerting
򐂰 Quick access to vital information through new “favorite grouping” features, which allow the
storage administrator to save a group of resources that are deemed mission-critical, and
then quickly recall and monitor that group.



1.1 TotalStorage Productivity Center
IBM has made improvements that deliver the kind of management-rich environment that has
made TotalStorage Productivity Center an industry leader, including these:
򐂰 Usability enhancements for the Improved Topology Viewer:
– Data Path Explorer view
– Persistent Pinning Enhancements
򐂰 Configuration Planning and Performance Management Wizard:
– Context Sensitive Performance Reports
– Storage Configuration Planners
򐂰 Rollup reporting for multiple TotalStorage Productivity Center servers
򐂰 TotalStorage Productivity Center for Replication support
򐂰 Extended support for additional host platforms and storage devices:
– VMWare support using TPC for Data
– IBM TS3310 Tape Library support
– Full TPC for Fabric support for McData Intrepid 10000 Director (i10k)

1.2 Topology Viewer enhancements


The Topology Viewer has been enhanced in TPC V3.3 with the following functions:
򐂰 Context Sensitive Reporting and Data Path Explorer
򐂰 Pin list persistence
򐂰 Topology Viewer link to reports / alert
򐂰 Historical Analysis
򐂰 Configuration Analysis

1.2.1 Overview of Topology Viewer


The Topology Viewer is designed to provide an extended graphical topology view; that is, a
graphical representation of the physical and logical resources (such as computers, fabrics,
and storage subsystems) that have been discovered in your storage environment. In addition,
the Topology Viewer depicts the relationships among resources — for example, the disks
comprising a particular storage subsystem. Detailed, tabular information, such as the
attributes of a disk, is also provided.

The overall goal of the Topology Viewer is to provide a central location to view a storage
environment, quickly monitor and troubleshoot problems, and gain access to additional tasks
and functions within the TotalStorage Productivity Center user interface without users losing
their orientation to the environment. This flexibility gives users a clearer mental map of the
entities within the environment, and provides data about those entities as well as access to
additional tasks and functions associated with the current view and the user's role.



The Topology Viewer uses the TotalStorage Productivity Center database as the central
repository for all data it displays. It reads data from the database at user-defined intervals and
automatically updates the displayed information when necessary. It can make a storage
manager's life easier, but as with every tool, you first have to understand the basics and the
concepts to get the most out of it. Figure 1-1 shows an overview of the Topology Viewer.

Figure 1-1 Topology Viewer overview

1.2.2 Data Path Explorer


A new view has been added to the Topology Viewer that allows you to view the paths between
servers and storage subsystems or between storage subsystems (for example, SVC to
back-end storage, or server to storage subsystem). Performance and health overlays on this
view help you assess how the performance or device state of components along the paths
affects connectivity between the systems. The view consists of three panes (host information,
fabric information, and subsystem information) that show the path through a fabric or set of
fabrics for the endpoint devices (host to subsystem or subsystem to subsystem), as shown in
Figure 1-2.



Figure 1-2 Data Path Explorer view from Topology Viewer

Consider a scenario where an application on a host is running slowly. The system
administrator wants to know the health of the I/O path associated with that application: Are all
components along that path healthy? Are there any component-level performance problems
that might be causing the slow application response?

The system administrator might also want to find out whether the I/O paths for two
applications (on separate host LUNs) are in conflict with each other. They could be sharing a
common component (such as a switch). After viewing the I/O paths for these two applications,
the administrator makes the required zoning or connectivity change to alleviate the problem.

1.2.3 Context Sensitive reporting


TPC V3.1 provides both the Topology Viewer and various kinds of reports and alerts to help
you manage your infrastructure. However, those functions are not correlated with each other:
when you are navigating in the Topology Viewer and want to see an alert or report for the
entity you are looking at, you have to go to the reports or alerts section and define it manually.

With TotalStorage Productivity Center V3.3, the reports and alerts are integrated into the GUI
console. It provides a mechanism to navigate from an entity shown on the Topology Viewer to
other areas of the console to show reports or perform management of the selected entity.
This enhancement will enable users to directly jump to the appropriate portions of the console
from the Topology Viewer. Figure 1-3 shows how TotalStorage Productivity Center V3.3 can
provide reports integrated into the console.



Figure 1-3 shows how to create a report from the Computers group graphical view. Right-click
on the title of the Computers group and click Reports from the pop-up context menu.

Figure 1-3 Context sensitive reporting

1.2.4 Pin list persistence


The pin list feature in the Topology Viewer in TotalStorage Productivity Center V3.1 allows
you to “pin” an entity, which puts a small flag next to the entity and propagates this flag to all
views that contain that entity. This is useful for marking entities for various reasons, such as
“to-look-at-later” type reminders and upward propagation.

In TPC V3.1, the pin list in the Topology Viewer is not persistent across sessions. If you close
the TPC GUI (or even just the Topology Viewer panel), you will lose the current pin list. With
TPC V3.3 the pin list is persistent across sessions and per user. This means that different
users (people using different usernames to log into TPC) will be able to have their own
persistent and private pin lists, which will not be lost when the TPC GUI is closed and which
will not be affected by pin list operations performed by other users.

In TotalStorage Productivity Center V3.3, when multiple users log onto the same TPC server
simultaneously using the same username, they share the same pin list. When one user pins
or unpins an entity, the other users do not see that update on their clients right away. They
see the changes only after they refresh the Overview panel or other affected views.

Figure 1-4 illustrates right-clicking on a pinned entity in the Topology Viewer Overview panel,
which brings up the context menu for that entity.



Figure 1-4 Pinned entity context menu

1.2.5 Configuration historical analysis


The configuration history feature takes and displays snapshots of changes that occurred in
your SAN configuration over a period of time that you specify. After you set the time period
(how often to take snapshots and how long to store them), in a page similar to the topology
viewer, you can manipulate a snapshot selection panel to show changes that occurred
between two or more points in the time period. System administrators can use the
configuration history feature to:
򐂰 Correlate performance statistics with configuration changes:
For example, during collection of performance statistics (including volume performance
statistics) on an ESS system, you might delete a volume. Although no new statistics are
reported on that volume, the TotalStorage Productivity Center Performance Manager
would have already collected partial statistical information prior to the deletion. At the end
of the data collection task, reporting of the partially collected statistics on the (now)
deleted volume would require access to its properties which would not be available. The
configuration history feature, with its ability to take and store snapshots of a system’s
configuration, could provide access to the volume’s properties.
򐂰 Analyze end-to-end performance:
To know why performance has changed on volume A during the last 24 hours, it would be
useful to know what changes were made to the storage subsystem’s configuration that
might affect the volume’s performance, even if performance statistics were not recorded
on some of those elements. For example, if performance statistics on a per-rank basis are
not collected, but the number of volumes allocated on a rank is increased from 1 to 100
over time, access to that configuration history information helps with analyzing the
volume’s degraded performance over time.
򐂰 Aid in planning and provisioning:
The availability of configuration history can enhance the quality of both provisioning and
planning. For example, historical data is useful when using the TotalStorage Productivity
Center Volume Placement Advisor to provision a volume or when using the TotalStorage
Productivity Center Version 3.3 Integrated Host, Security, and Subsystem Planner to plan
tasks.



Change Monitoring is the detection of changes in the environment over time and the
identification of those changes to the user.

Detection of changes is accomplished by manually or automatically initiated actions that
"snapshot" key tables in the database at periodic intervals for comparison with other
snapshots.

The following information is collected in the snapshot: Subsystem, Pool, Volume, Storage
Extent, Disk Group, Fabric, Switch, Port, Node, Zone, Zone Set, Host, and HBA.

The configuration history is displayed through a Topology Viewer-like graphical viewer, in
which you can identify the changes. The change overlay is a topology overlay that becomes
active in Change Rover mode. Its purpose is to show whether an entity visible in the Topology
Viewer has changed between two points in time, TimeA and TimeB. In essence, there are
only four possible states for any entity in this situation (see the sketch after this list):
򐂰 The entity has not been changed between TimeA and TimeB.
򐂰 The entity has been created since TimeA.
򐂰 The entity has been deleted since TimeA.
򐂰 The entity has been modified since TimeA.
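
As a conceptual illustration only, and not a description of how TPC implements configuration
history, the following Python sketch compares two hypothetical snapshots (represented here,
purely as an assumption for the example, as dictionaries mapping entity identifiers to
attributes) and classifies each entity into one of the four states listed above.

```python
# Conceptual sketch: classify entities between two configuration snapshots
# (TimeA and TimeB) into the four change states described above.
# The snapshot format (entity-id -> attribute dictionary) is assumed for
# illustration and is not TPC's internal representation.

def classify_changes(snapshot_a, snapshot_b):
    states = {}
    for entity_id in snapshot_a.keys() | snapshot_b.keys():
        if entity_id not in snapshot_a:
            states[entity_id] = "created since TimeA"
        elif entity_id not in snapshot_b:
            states[entity_id] = "deleted since TimeA"
        elif snapshot_a[entity_id] != snapshot_b[entity_id]:
            states[entity_id] = "modified since TimeA"
        else:
            states[entity_id] = "not changed between TimeA and TimeB"
    return states

time_a = {"vol1": {"size_gb": 10}, "vol2": {"size_gb": 20}}
time_b = {"vol1": {"size_gb": 15}, "vol3": {"size_gb": 5}}
print(classify_changes(time_a, time_b))
```

In this example, vol1 is reported as modified, vol2 as deleted, and vol3 as created since
TimeA.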

1.3 Group reporting enhancements


Currently TPC allows the user to create both monitoring and reporting groups for computers
and filesystems. These reporting groups can be used for creating reports in the Data
Manager → Reporting → Capacity and Data Manager → Reporting → Usage categories,
where they provide aggregate views of storage in the environment based on the computer or
filesystem group definitions created.

Now, with TotalStorage Productivity Center V3.3, you have the ability to probe storage
subsystems and report on their storage. This new capability allows you to create reporting
groups for storage subsystems, and then use these storage subsystem reporting groups to
create TPC reports presenting aggregate views of storage subsystem storage (based on
storage subsystem groups).

It also supports the ability to use the existing definitions of computer and filesystem reporting
groups to create TPC reports presenting aggregate views of storage subsystem storage in
the context of the filesystem and computer groups.

Finally, it provides the user the ability to use the definitions of storage subsystem reporting
groups, to create TPC reports presenting aggregate views of disk capacity in the context of
the storage subsystem groups. This is shown in Figure 1-5.



Figure 1-5 Subsystem Group reports

1.4 Configuration planning and performance management


TPC V3.3 provides several SAN planning tools: Volume Planner, Path Planner, and Zone
Planner. The SAN Planner assists you in end-to-end planning involving fabrics, hosts, storage
controllers, storage pools, volumes, paths, ports, zones, and zone sets. When a plan is made,
you can select to have the plan implemented by the SAN Planner.

You can use each planner separately, but you must use all three planners together (Volume
Planner, Path Planner, and Zone Planner) to use storage provisioning.

1.4.1 Volume Planner


The Volume Planner replaces the Volume Performance Advisor. This support is being
enhanced by supporting the DS8000™ and DS6000™ as well as the ESS 800.

The Volume Planner helps administrators plan for the provisioning of subsystem storage
based on capacity, storage controller type, number of volumes, volume size, performance
requirements, RAID level, performance utilization, and capacity utilization. This applies when
performance data exists for the underlying storage subsystem; if a prior performance monitor
has not been run for that storage subsystem, the Volume Planner makes its decisions based
on capacity only. The Volume Planner generates a plan that presents the storage controllers
and storage pools that can satisfy the request. If you explicitly specify the storage pool and
controller information, the Volume Planner checks whether the input performance and
capacity requirements can be satisfied.
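
To make the capacity-only case concrete, here is a minimal Python sketch of how candidate
pools could be filtered on capacity alone. The pool structure, field names, and the 80%
utilization ceiling are assumptions made for this example; this is not the Volume Planner's
actual algorithm.

```python
# Illustrative sketch: select storage pools that can absorb a capacity
# request without exceeding an assumed utilization ceiling, ignoring
# performance data (as the Volume Planner does when no performance
# monitor has been run for the subsystem).

def plan_by_capacity(pools, requested_gb, max_utilization=0.80):
    candidates = []
    for pool in pools:
        utilization = (pool["used_gb"] + requested_gb) / pool["total_gb"]
        if utilization <= max_utilization:
            candidates.append((pool["name"], round(utilization, 2)))
    # Prefer pools that remain least utilized after the allocation.
    return sorted(candidates, key=lambda entry: entry[1])

pools = [
    {"name": "DS8000_P0", "total_gb": 1000, "used_gb": 700},
    {"name": "DS8000_P1", "total_gb": 2000, "used_gb": 900},
]
print(plan_by_capacity(pools, requested_gb=200))
```

With these assumed numbers, only DS8000_P1 can take the 200 GB request without
exceeding the 80% ceiling, so it is the only candidate returned.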



1.4.2 Path Planner
The Path Planner enables system administrators to plan and implement storage provisioning
for hosts and storage subsystems with multi-path support in fabrics managed by IBM
TotalStorage Productivity Center. Planning the provisioning of storage to hosts with multi-path
drivers requires knowing which storage subsystems are supported by the host multi-path
driver and the supported multi-path nodes that both the driver and storage subsystem
support. Planning the paths between the hosts and storage controller requires designing
paths between hosts and storage subsystems that are implemented through zones in the
fabric.

The Path Planner supports only IBM SDD multipath drivers.

1.4.3 Zone Planner


The Zone Planner enables the administrator to plan for zoning and LUN masking
configuration based on the following information: host ports, storage controller ports, zones,
zone sets, switches, user zoning input, user LUN masking input, existing LUN masking or
mapping. If the user specifies the exact zoning and LUN masking information, then the Zone
Planner informs the user whether the user input is different from the selected input or the
existing configuration.

The goal of the planning framework is to allow users to select high-level policies for
performance, availability, and security. Based on the current resource utilization and the
selected policies, the underlying planning system determines the best course of action and
generates a plan.

You have the option of either accepting the advice and then triggering the associated actions
(such as creating a volume or performing zoning), or you can reject the advice and then
perform re-planning.

When you start a planning operation, you have to provide the following types of input:
򐂰 Users have the option of choosing and specifying values for a set of policies.
򐂰 Users have the option of selecting hosts, switches, and storage controllers and storage
pools that they want to consider as part of the planning process.
򐂰 Users can perform storage planning, host planning, or security planning in isolation or in
an integrated manner.

The planner wizard can operate in the following modes:

Plan Mode: In this mode, the user inputs which storage controller and storage pool to select,
and the wizard determines the number of required paths from the host to the storage
controller, and how the zoning and LUN masking/mapping should be performed.

Storage Planner: In this mode, the Storage Planner provides support for DS8000, DS6000,
and ESS 800 controllers; it takes:
򐂰 Input with respect to capacity and storage controller preference
򐂰 Input such as storage controller type, number of volumes, volume size, performance
requirements, use of unassigned volumes, RAID level.
򐂰 Performance utilization and capacity utilization input from the TPC database; it generates
a plan that presents the storage controllers and storage pools that can satisfy this request.



Host Planner: The Host Planner has these functions:
򐂰 Takes input with respect to the host ports and storage ports, and the required performance
and availability requirements
򐂰 Can decide to use multi-pathing when it is available.
򐂰 Can take host port performance and storage controller port performance utilization and
existing multi-pathing and zoning information from the TPC database as its input

1.5 Rollup Reporting for multiple servers


TotalStorage Productivity Center V3.3 can now consolidate asset and health information
collected from multiple TPC instances, providing scalable, enterprise-wide management of
storage environments. Figure 1-6 shows an overview of the Rollup Reporting function.

Figure 1-6 Rollup reporting

Figure 1-6 illustrates this strategic feature as you build up a hierarchy of TotalStorage
Productivity Center V3.3 servers. A TotalStorage Productivity Center V3.3 instance is
configured as a master. One or more master TotalStorage Productivity Center V3.3 servers
are configured to collect roll-up information from the subordinates. The master server will
communicate with the subordinate servers using the Device Server API.

A subordinate TotalStorage Productivity Center V3.3 server can have one or more master
servers that are configured to collect roll-up information. The health of the subordinate
servers is based on the result of the check operation and the status from the probe actions.
All TotalStorage Productivity Center V3.3 instances can be configured for discovery and
probing for enterprise reports.



1.6 TotalStorage Productivity Center for Replication support
The basic function of TotalStorage Productivity Center (TPC) for Replication is to provide
management of FlashCopy®, Metro Mirror, and Global Mirror capabilities for the ESS Model
800, DS6000, and DS8000. It also manages FlashCopy and Metro Mirror for IBM SAN Volume
Controller. TPC for Replication is designed to simplify the management of advanced copy
services by:
򐂰 Automating administration and configuration of these services with wizard-based session
and copy set definitions
򐂰 Providing simple operational control of copy services tasks, including starting,
suspending, and resuming
򐂰 Offering tools for monitoring and managing copy sessions

1.6.1 Launching the TPC for Replication GUI


TPC V3.3 supports the launching of external tools such as the TotalStorage Productivity
Center for Replication GUI. There is a new node in the TotalStorage Productivity Center GUI
Navigation Tree, External Tools. You must first define the IBM TotalStorage Productivity
Center for Replication URL to the TotalStorage Productivity Center (see Figure 1-7).

If the TotalStorage Productivity Center for Replication is installed and running on the same
host as the TotalStorage Productivity Center server, an external tool will automatically be
added to the TotalStorage Productivity Center during TotalStorage Productivity Center server
startup. Then you can launch the TotalStorage Productivity Center Replication GUI from the
TotalStorage Productivity Center using an external Web browser. This is similar to launching
an Element Manager, with the difference being that the external tool definitions are stored in
the TPC database and can therefore be shared between TPC users.

Figure 1-7 External Tools launch of TPC for Replication



1.6.2 FlashCopy volume relationships
TotalStorage Productivity Center V3.3 now has TPC for Replication support, allowing you to
identify FlashCopy volume relationships (copy sets) and report usable capacity in the GUI and
UI reports. Based on this additional information, the value reported for "usable LUN capacity"
(in subsystem reports) is updated to exclude the capacity of volumes that are the targets in a
FlashCopy relationship. In addition, volume reports are updated to display the new
information. The FlashCopy copy set is available for ESS, DS6000, DS8000, and SVC
systems. This feature does not cover Metro Mirror and Global Mirror volumes.

1.7 Extended support for additional host platforms


TotalStorage Productivity Center V3.3 provides support for additional host platforms and
storage devices. They include:
򐂰 VMWare Support using TPC for Data
򐂰 Full TPC for Fabric support for McData Intrepid 10000 Director (i10k)
򐂰 IBM TS3310 Tape Library support

1.7.1 TotalStorage Productivity Center V3.3 VMWare Extended support


VMware products enable virtualization of x86-based computer hardware. The product line
consists of VMWare player, VMWare workstation, VMWare GSX server, and VMWare ESX
server. The VMWare workstation, VMWare server, and ESX server provide a facility to create,
configure, and run virtual machines. VMWare player only allows running of pre-created and
configured virtual machines; as such, it is suited to product demonstrations and similar uses.
The VMWare workstation and VMWare server install and run inside an operating system
installed on a physical machine.

VMWare allows a single physical computer system to be divided into logical virtual machines
running various operating systems. To the applications running in a VM, the VM appears to
be a computer system with a unique IP address and access to storage that is virtualized by
the hosting system (Hypervisor™).

TotalStorage Productivity Center V3.3 will support ESX servers at level 3.0.2 and above. It will
collect hosted VM information. The VMs must be running an operating system in the TPC for
Data agent support list and be running a TPC for Data agent. GSX servers, VMWare servers,
and VMWare workstations are not supported. No support is provided for Virtual VMotion.

TPC will collect Hypervisor information from the system or Virtual Center using the proprietary
VMWare Virtual Infrastructure API of ESX 3 and Virtual Center 2.



1.7.2 McData Intrepid 10000 Director (i10k) support
The Intrepid 10000 Director is McData’s highest port count switch (256 ports). TotalStorage
Productivity Center V3.3 users can perform topology discovery, monitoring and zone control
through the SMI-S interface and topology discovery and monitoring through the SNMP
interface.

SNMP discovery of the topology is supported. However, McData recommends having the
switches on private networks, and therefore SNMP connectivity is generally not possible.
Because the CIMOM is generally on the public network, SMI-S communication for fabric
discovery, monitoring, and zone control is available and is the expected communication
channel.

1.7.3 IBM TS3310 Tape Library support


IBM TotalStorage Productivity Center now supports the IBM TS3310 Tape Library. The IBM
TS3310 has an embedded SMI-S agent, which TPC can monitor for tape manager reporting.




Chapter 2. Installation of TPC V3.3 on


Windows
In this chapter we show the step-by-step installation of the TotalStorage Productivity Center
(TPC) V3.3 on the Windows® platform. We discuss the installation wizards that are provided
to help you with the installation. Of the available installation paths, Typical and Custom, we
describe the Custom installation in our environment.



2.1 TotalStorage Productivity Center installation
TotalStorage Productivity Center uses several installation wizards that guide you through the
installation of the TotalStorage Productivity Center servers and agents. In this chapter we
describe the installation of the TotalStorage Productivity Center Standard Edition. The
prerequisite components are installed prior to invoking the installation wizard.

TotalStorage Productivity Center provides an integrated storage infrastructure management


solution that is designed to allow you to manage every point of your storage infrastructure,
between the hosts through the network and fabric through to the physical disks. It can help
simplify and automate the management of devices, data, and storage networks.

2.1.1 Installation overview


TotalStorage Productivity Center V3.3 offers a simple, easy-to-install package.

The default installation directory is:


򐂰 c:\Program Files\IBM\... (for Windows)
򐂰 /opt/IBM/... (for UNIX and Linux®)

You can change this path during installation setup. There are two types of installation: typical
and custom.

Typical installation
The Typical installation allows you to install all the components of the TotalStorage
Productivity Center on the local server by selecting the options Servers, Agents, and Clients.
We recommend that you do not use the Typical installation, because the Custom installation
method gives you much better control over the installation process.

Custom installation
The Custom installation allows you to install each component of the TotalStorage Productivity
Center separately and to deploy remote Fabric and/or Data agents on different computers. This
is the installation method that we recommend.

2.1.2 Product code media layout and components


In this section we describe the contents of the product media at the time of writing. The media
content will differ depending on whether you are using the Web images or the physical media
shipped with the TPC V3.3 package.

Passport Advantage and Web media content


The Web media consists of three disk images:
򐂰 Disk1:
– OS: Windows, AIX®, and Linux RH 3, Linux RH4
– Database Schema
– Data Server
– Device Server
– GUI
– CLI
– Local Data agent
– Local Fabric agent



򐂰 Disk2:
– OS: Windows, AIX, Linux RH 3, Linux RH 4
– Remote installation of Data agent
– Remote installation of Fabric agent
򐂰 Disk3:
– OS: Windows, AIX, and Linux RH 3, Linux RH 4, Linux Power, Linux s390 (zLinux),
Solaris™, HP-UX
– Local Data agent
– Local Fabric agent
– Data upgrade for all platforms

Note: When installing TPC server from Disk1, the installer will prompt you to insert Disk2
to copy files from it. Then it will copy the remote installation files for Data agent and Fabric
agent so that it can perform remote installation for those agents. If you are only installing
local agents, you can use Disk3.

Physical media
The physical media shipped with the TPC V3.3 product consists of a DVD and a CD. The DVD
contains the Disk1 and Disk2 content described in “Passport Advantage and Web media
content” on page 16. The physical media CD is the same as the Web Disk3 media.

2.2 Hardware prerequisites


For the hardware prerequisites, see the Web site at:

http://www-304.ibm.com/jct01004c/systems/support/supportsite.wss/supportresources?
taskind=3&brandind=5000033&familyind=5329731

2.3 Software prerequisites


For the software prerequisites, see the Web site at:

http://www-304.ibm.com/jct01004c/systems/support/supportsite.wss/supportresources?
taskind=3&brandind=5000033&familyind=5329731

2.4 Preinstallation steps for Windows


These are the prerequisite components for IBM TotalStorage Productivity Center V3.3:
򐂰 IBM DB2® UDB Enterprise Server Edition V8.2 FixPak 14
򐂰 Agent Manager 1.3.2

Order of component installation


The components are installed in the following order:
1. DB2
2. Agent Manager
3. TotalStorage Productivity Center components



Tip: We recommend that you install the Database Schema first. After that, install Data
Server and Device Server in a separate step.

If you install all the components in one step, if any part of the installation fails for any
reason (for example, space or passwords), the installation suspends and rolls back,
uninstalling all the previously installed components.

2.4.1 Verify primary domain name systems


Before you start your installation, we recommend that you verify that a primary domain name
system (DNS) suffix is set. Changing this setting can require a computer restart.

To verify the primary DNS name, follow these steps:


1. Right-click My Computer on your desktop.
2. Click Properties.
The System Properties panel is displayed as shown in Figure 2-1.
3. On the Computer Name tab, click Change.

Figure 2-1 system properties



4. Enter the host name in the Computer name field. Click More to continue (see Figure 2-2).

Figure 2-2 Computer name

5. In the next panel, verify that Primary DNS suffix field displays a domain name. Click OK
(see Figure 2-3).

Figure 2-3 DNS domain name

6. If you made any changes, you might need to restart your computer (see Figure 2-4).

Figure 2-4 You must restart the computer for changes to take effect
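 
If you prefer a quick check from a command prompt, the standard ipconfig and findstr commands
can display the suffix lines; the filter string shown here is just one possible choice and shows
any line containing the word suffix:
   ipconfig /all | findstr /i "suffix"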



2.4.2 Activate NetBIOS settings
If NetBIOS is not enabled on Microsoft® Windows 2003, then the GUID is not generated. You
must verify and activate the NetBIOS settings.

On your TotalStorage Productivity Center Server, go to Start → Control Panel → Network


Connections. Select your Local Area Connections. From the Local Area Connection
Properties panel, double-click Internet Protocol (TCP/IP). The next panel is the Internet
Protocol (TCP/IP) Properties. Click Advanced as shown in Figure 2-5.

Figure 2-5 TCP/IP properties

On the WINS tab, select Enable NetBIOS over TCP/IP and click OK (Figure 2-6).

Figure 2-6 Advanced TCP/IP properties
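 
If you want to confirm the setting afterwards from a command prompt, the following WMI query
(wmic is included with Windows 2003) lists the NetBIOS option for each adapter. As a general
guide, a value of 1 means NetBIOS over TCP/IP is enabled, 2 means it is disabled, and 0 means
the default (DHCP-supplied) setting is used:
   wmic nicconfig get Description,TcpipNetbiosOptions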



2.4.3 Internet Information Services
Port 80 is used by Internet Information Services (IIS). Port 80 is also used by the Agent
Manager for the recovery of agents that can no longer communicate with the manager,
because of lost passwords or certificates. If any service is using port 80, then Agent Recovery
Service installs, but it does not start.

Before beginning the installation of TotalStorage Productivity Center, you must do one of the
following actions:
򐂰 Change the IIS port to something other than 80, for example, 8080.
򐂰 Uninstall IIS.
򐂰 Disable IIS.

To uninstall IIS, use the following procedure:


1. Click Start → Control Panel → Add/Remove Programs.
2. In the Add or Remove Programs window, click Add/Remove Windows Components.
3. In the Windows Components panel, select Application Server and click Details...
4. In the Application Server panel, deselect Internet Information Services (IIS) and click OK.

In our installation, we disabled IIS to avoid any port conflicts.

To disable IIS, use the following procedure:


1. Click Start → Control Panel → Administrative Tools → Services.
2. Right-click World Wide Web Publishing Service, and choose Stop.
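 
The same can be done from a command prompt. These standard Windows commands stop the World Wide
Web Publishing Service (service name W3SVC) and keep it from starting again at the next reboot
(note the required space after start=):
   net stop w3svc
   sc config w3svc start= disabled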

2.4.4 Create Windows user ID to install Device server and Data server
To install the Device server and Data server, you must have a Windows user ID with the
required rights. We created a unique user ID, as described in Table 2-3 on
page 22.

It is a good practice to use the worksheets in Appendix B, “Worksheets” on page 309 to


record the user IDs and passwords used during the installation of TotalStorage Productivity
Center.
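 
If you prefer the command line, a user ID like our tpcadmin can be created and added to the
Administrators group with the standard Windows commands sketched below; the password value is a
placeholder that you replace with your own. The special user rights listed in Table 2-3 on
page 22 (for example, Log on as a service) still have to be granted through the Local Security
Policy:
   net user tpcadmin <password> /add
   net localgroup Administrators tpcadmin /add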

2.4.5 User IDs and password to be used and defined


In this section, we describe the user IDs and passwords you need to define or set up during
TotalStorage Productivity Center installation.

Table 2-1 points you to the appropriate table that contains the user IDs and passwords used
during the installation of TotalStorage Productivity Center.

Table 2-1 Index to tables describing required user IDs and passwords
Item                                                               Table
Installing DB2 and Agent Manager                                   Table 2-2 on page 22
Installing Device server or Data server                            Table 2-3 on page 22
Installing Data agent or Fabric agent                              Table 2-4 on page 23
DB2 administration server user                                     Table 2-5 on page 23
Certificate authority password                                     Table 2-6 on page 24
Common agent registration                                          Table 2-7 on page 24
Common agent service logon user ID and password                    Table 2-8 on page 25
Host authentication password                                       Table 2-9 on page 25
Data server account password                                       Table 2-10 on page 25
NAS filer login user ID and password                               Table 2-11 on page 25
Resource manager registration user ID and password                 Table 2-12 on page 26
WebSphere Application Server administrator user ID and password    Table 2-13 on page 26

Table 2-2 on page 22 through Table 2-13 on page 26 contain information about the user IDs
and passwords used during the installation of the TotalStorage Productivity Center
prerequisites and components.

Table 2-2 Installing DB2 and Agent Manager


Item: Installing DB2 and Agent Manager
OS: All
Description: Log on as a local Windows Administrator
Used when: Used to log on Windows to install DB2 and Agent Manager
Group: Administrators
User ID: User ID used to log on
Password: Used to log on
ITSO's user ID and password: Administrator / password

Table 2-3 Installing Device server or Data server


Item: Installing Device server or Data server
OS: All
Description: Add the user ID to the DB2 Admin group or assign the user rights: Log on as a
   service; Act as part of the operating system; Adjust memory quotas for a process; Create a
   token object; Debug programs; Replace a process level token. On Linux or UNIX, give root
   authority.
Created when: Has to be created before starting Device server and Data server installation
Used when: Used to log on Windows to install Device and Data server
Group: Administrators
User ID: New user ID used to log on Windows
Password: New password used to log on Windows (group Administrators)
ITSO's user ID and password: tpcadmin / tpcadmin



Table 2-4 Installing Data agent or Fabric agent
Item: Installing Data agent or Fabric agent
OS: All
Description: User rights: Act as part of the operating system; Log on as a service.
   On Linux or UNIX, give root authority.
Created when: Has to be created before starting Data agent or Fabric agent installation
Used when: Used to log on Windows to install Data agent or Fabric agent
Group: Administrators
User ID: New user ID used to log on Windows
Password: New password used to log on Windows (group Administrators)
ITSO's user ID and password: tpcadmin / tpcadmin

To install a GUI or CLI, you do not need any particular authority or special user ID.

Table 2-5 DB2 administration server


Item: DB2 administration server user
OS: All
Description: Used to run the DB2 administration server on your system. Used by the DB2 GUI
   tools to perform administration tasks. See rules below.
Created when: Specified when DB2 is installed
Used when: Used by the DB2 GUI tools to perform administration tasks
User ID: New user ID
Password: New password
ITSO's user ID and password: db2tpc / db2tpc

DB2 user ID and password rules


DB2 user IDs and passwords must follow these rules:
򐂰 UNIX® user names and passwords cannot be more than eight characters long.
They cannot start with a numeric digit or end with $.
򐂰 Windows 32-bit user IDs and passwords can contain 1 to 20 characters.
򐂰 Group and instance names can contain 1 to 8 characters.
򐂰 User IDs cannot be any of the following words:
– USERS
– ADMINS
– GUESTS
– PUBLIC
– LOCAL
򐂰 User IDs cannot begin with:
– IBM
– SQL
– SYS
򐂰 User IDs cannot include accented characters.
򐂰 UNIX users, groups, and instance names must be lowercase.
򐂰 Windows 32-bit users, groups, or instance names can be any case.



DB2 creates a user group with the following administrative rights:
򐂰 Act as a part of an operating system
򐂰 Create a token object
򐂰 Increase quotas
򐂰 Replace a process-level token
򐂰 Log on as a service.

Note: Adding the user ID used to install TotalStorage Productivity Center to the DB2 Admin
group gives the user ID the necessary administrative rights.

Table 2-6 Certificate authority password


Item: Certificate authority password
OS: All
Description: This password locks the file CARootKeyRing.jks. Specifying a value for this
   password is optional. You need to specify this password only if you want to be able to
   unlock the certificate authority files. We recommend that you create a password.
Created when: Specified when you install Agent Manager
Password: No default; if not specified, one is generated automatically
ITSO's user ID and password: tpctpc

Important: Do not change the Agent Registration password under any circumstances.
Changing this password will render the certificates unusable.

Table 2-7 Common agent registration passwords


Item: Common agent registration
OS: All
Description: This is the password required by the common agent to register with the Agent
   Manager
Created when: Specified when you install Agent Manager
Used when: Used during common agent, Data agent, and Fabric agent installation
Password: changeMe (is the default)
ITSO's user ID and password: changeMe



Table 2-8 Common agent service logon user ID and password
Item: Common agent service logon user ID and password
OS: Windows
Description: This creates a new service account for the common agent to run under.
Created when: Specified when you install Data agent or Fabric agent (only local)
Group: Administrators
User ID: If you do not specify anything, itcauser is created by default
ITSO's user ID and password: tpcadmin / tpcadmin

Table 2-9 Host authentication password


Item: Host authentication password
OS: All
Created when: Specified when you install the Device server
Used when: Used when you install the Fabric agent, to communicate with the Device server
Password: Must be provided
ITSO's user ID and password: tpctpc

Table 2-10 Data server account password


Item: Data server account password
OS: Windows
Description: This creates a new service account for the Data server to run under.
Created when: Specified when you install the Data server
Group: Administrators
User ID: TSRMsrv1
ITSO's user ID and password: tpctpc

Table 2-11 NAS filer login user ID and password


Item: NAS filer login user ID and password
OS: Windows
Created when: Specified when you run NAS discovery



Table 2-12 Resource manager registration user ID and password
Item: Resource manager registration user ID and password
OS: All
Created when: Specified when you install Device server and Data server
Used when: Used when Device server and Data server have to register to the Agent Manager
User ID: Manager (by default)
Password: Password (by default)
ITSO's user ID and password: Manager / password

Table 2-13 WebSphere Application Server administrator user ID and password

Item: WebSphere Application Server administrator user ID and password
OS: All
Description: You can use tpcadmin, in order to avoid creating a new one
Created when: Specified when you install the Device server
Used when: Used when the Device server has to communicate with WebSphere
User ID: If not provided, it will be created
Password: If not provided, it will be created
ITSO's user ID and password: tpcadmin / tpcadmin

2.5 DB2 install for Windows


In this section, we show a typical installation of DB2 8.2, and also the installation of FixPak 14
for DB2. Before beginning the installation, it is important that you log on to your system as a
local administrator with Administrator authority for Windows (see Table 2-2 on page 22).

Attention: If you update DB2 from an older version, for example, from DB2 7.2 to DB2 8.2,
the TotalStorage Productivity Center installer might not recognize the DB2 version.



2.5.1 DB2 installation

To begin the installation of DB2, follow these steps:


1. Insert the IBM DB2 Installer CD into the CD-ROM drive.
If Windows autorun is enabled, the installation program should start automatically. If it
does not, open Windows Explorer and go to the IBM TotalStorage Productivity Center
CD-ROM drive. Go to the DB2 Installation image path and double-click setup.exe. You will
see the first panel, as shown in Figure 2-7. Select Install Product to proceed with the
installation.

Figure 2-7 DB2 setup welcome



2. The next panel allows you to select the DB2 product to be installed. Click Next to proceed
as shown in Figure 2-8.

Figure 2-8 Select product

The InstallShield Wizard starts (see Figure 2-9).

Figure 2-9 Preparing to install



3. The DB2 Setup wizard panel is displayed, as shown in Figure 2-10. Click Next to proceed.

Figure 2-10 Setup wizard

4. The next panel is the license agreement; click I accept the terms in the license
agreement (Figure 2-11).

Figure 2-11 license agreement



5. To select the installation type, accept the default of Typical and click Next to continue
(see Figure 2-12).

Figure 2-12 Typical installation

6. Accept the defaults and proceed with Install DB2 Enterprise Server Edition on this
computer (see Figure 2-13). Click Next to continue.

Figure 2-13 Installation action



7. The panel shown in Figure 2-14 shows defaults for drive and directory to be used as the
installation folder. You can change these or accept the defaults, then click Next to
continue.

Figure 2-14 Installation folder

8. Set the user information for the DB2 Administration Server; choose the domain of this
user. If it is a local user, leave the field blank. Click Next to continue.
Type a user name and password of the DB2 user account that you want to create
(Figure 2-15). You can refer to Table 2-5 on page 23.
DB2 creates a user with the following administrative rights:
– Act as a part of an operating system.
– Create a token object.
– Increase quotas.
– Replace a process-level token.
– Log on as a service.



Figure 2-15 User Information

9. Accept the defaults in the panel shown in Figure 2-16, and click Next to continue.

Figure 2-16 Administration contact list



10.Click OK when the warning window shown in Figure 2-17 opens.

Figure 2-17 Warning

11.In the Configure DB2 instances panel, accept the default and click Next to continue
(see Figure 2-18).

Figure 2-18 Configure DB2 instances



12.Accept the defaults, as shown in Figure 2-19. Verify that Do not prepare the DB2 tools
catalog on this computer is selected. Click Next to continue.

Figure 2-19 Prepare db2 tools catalog

13.In the panel shown in Figure 2-20, click Defer the task until after installation is
complete and then click Next to continue.

Figure 2-20 Health Monitor



14.The Enable operating system security for DB2 objects panel shown in Figure 2-21 is only
presented if the DB2ADMNS and DB2USERS groups already exist; this is probably
because you installed and then uninstalled DB2 on the same server before. On a clean
install, you might not see this panel. We accept the defaults and click Next to proceed.

Figure 2-21 Enable operating system security for DB2 objects

15.The panel shown in Figure 2-22 is presented; click Install to continue.

Figure 2-22 Start copying files



The DB2 installation proceeds and you see a progress panel similar to the one shown in
Figure 2-23.

Figure 2-23 DB2 Enterprise Server Edition installation progress

16.When the installation completes, click Finish, as shown in Figure 2-24.

Figure 2-24 DB2 setup wizard



17.If you see the DB2 Product Updates panel shown in Figure 2-25, click No because you
have already verified that your DB2 version is at the latest recommended and supported
level for TotalStorage Productivity Center.

Figure 2-25 DB2 Product Updates

18. Click Exit First Steps (Figure 2-26) to complete the installation.

Figure 2-26 Universal Database First Steps panel



Verifying the installation
To verify the DB2 installation, check that the db2tpc user has been created and included in the
DB2ADMNS group.
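 
You can also confirm this from a command prompt; the following standard Windows command lists
the members of the DB2ADMNS group, and the db2tpc ID should appear in its output:
   net localgroup DB2ADMNS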

Open a Command Prompt window and enter the db2level command to check the version
installed as shown in Figure 2-27.

Figure 2-27 db2level command output

Figure 2-28 shows the DB2 Windows services created at the end of the installation.

Figure 2-28 Windows services showing DB2 services

In the system tray, there is a green DB2 icon indicating DB2 is up and running (Figure 2-29).

Figure 2-29 DB2 icon in system tray



2.5.2 Agent Manager installation for Windows
In this section, we describe a typical installation of Agent Manager 1.3.2.

When you install the Agent Manager, you will also be installing the Embedded version of IBM
WebSphere Application Server - Express, V6.0.2 (WebSphere Express).

To install the Agent Manager, follow these steps:


1. Run the appropriate program for your operating system from the Embedded Installer directory
(Table 2-14). You must have a Java™ Virtual Machine installed. If you want to designate an
alternate path to the JVM™, use the command in the third column of the table below.

Table 2-14 Embedded Installer directory commands


Operating system       Command               Alternate command (if Java error failure)
Microsoft Windows      setupwin32.exe        setupwin32.exe -is:javahome ..\jre\windows
AIX                    setupAix.bin          setupAix.bin -is:javahome ../
Linux                  setupLinux.bin        setupLinux.bin -is:javahome ../
Linux on Power PC®     setupLinuxPPC.bin     setupLinuxPPC.bin -is:javahome ../
Solaris                setupSolaris.bin      setupSolaris.bin -is:javahome ../

Note: Log on with a user ID that has administrative authority on Windows.

2. The Installation Wizard starts; you can see a panel similar to the one in Figure 2-30.

Figure 2-30 Install wizard panel



3. The Choose the runtime container for Agent Manager panel is displayed, as in Figure 2-31,
with the default option The WebSphere Application Server. Make sure that the
WebSphere Application Server is already installed selected. Do not select this option,
because we do not have WebSphere installed.
Choose The embedded version of the IBM WebSphere Application Server delivered
with the Agent Manager installer and click Next to continue.

Figure 2-31 Install wizard panel



4. Figure 2-32 shows the Directory Name for the installation. Click Next to accept the default
or click Browse to install to a different directory.

Figure 2-32 Directory Name panel



5. The Type and Location of Registry panel is displayed, as shown in Figure 2-33. Choose
DB2 database on this computer which is the default, and click Next to continue.

Figure 2-33 Type and Location of Registry panel



6. In the next DB2 Universal Database™ Connection Information panel shown in
Figure 2-34, enter the following database information:
– Database Software Directory:
Enter the directory where DB2 is installed on your system. The default directory is:
• C:\Program Files\IBM\SQLLIB
– Database Name:
A default database called IBMCDB will be created for the Agent Manager.
After entering the information, click Next to continue.

Figure 2-34 DB2 Universal Database Connection Information panel

7. The Database User Information panel is shown in Figure 2-35. Enter the database user
name and password; this is the DB2 administrator user ID that is in the DB2ADMNS group
and the Administrators group.
If you want to use a different user ID for the installation of Agent Manager, you can select
Use a different user ID during the installation and enter that user ID and password.
Note that if you do not select the check box, the following Database Administrator User ID
and Password will not be used.
We recommend that you use the DB2 ID and password from the DB2 installation if you
use the DB2 only for TPC.
You can refer to Table 2-5 on page 23 for the ID and password. In our installation, we use
the DB2 ID and password from the DB2 installation. Click Next to continue.



Figure 2-35 Database User Information panel

8. The WebSphere Application Server Connection Information panel is shown in Figure 2-36
on page 45. Enter the following information, and click Next to continue.
– Host Name or Alias of Agent Manager:
Review the preinstallation task mentioned in 2.4.1, “Verify primary domain name
systems” on page 18. Use the fully qualified host name. For example, specify
tpcsrv.itsosj.sanjose.ibm.com. This value is used for the URLs for all Agent Manager
services. It is preferable to use the fully qualified host name rather than an IP address.
If you specify an IP address, you will see the warning panel shown in Figure 2-37 on
page 46.
– Registration Port:
Use the default port of 9511 for the server-side SSL.
– Secure Port:
Use the default port of 9512 for client authentication, two-way SSL.
– Public Port and Alternate Port for the Agent Recovery Service:
Use the public communication port default of 9513.
– Do not use port 80 for the agent recovery service:
Accept the default and do not check this box.
Note: If you want to check this box, make sure that port 80 is not being used by another
application. To check for other applications that are using port 80, run this command:
netstat -an



Note: If you want Agent Recovery Service to run, you must stop any service using port 80.
If any service is using port 80, Agent Recovery Service installs, but does not start.
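 
As a quick way to see whether anything is already listening on port 80, you can filter the
netstat output from a Windows command prompt with the standard findstr command. Note that the
filter string ":80" also matches ports such as 8080, so review the lines it returns:
   netstat -an | findstr ":80"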

Figure 2-36 WebSphere Application Server Connection Information panel

9. If you specify an IP address instead of a fully qualified host name for the Host Name or
Alias of Agent Manager, you see the panel shown in Figure 2-37. We recommend that you
click the Back button and specify a fully qualified host name.



Figure 2-37 Warning IP specified panel

10.In the WebSphere Application Server Connection Information panel shown in Figure 2-38,
accept the defaults and click Next to continue.

Figure 2-38 WebSphere Application Server Connection Information for Application Server Name



11.In the Security Certificates panel (see Figure 2-39), we highly recommend that you accept
the defaults to generate new certificates for a secure environment.
Click Next to continue.

Figure 2-39 Create security certificates

12.In the panel shown in Figure 2-40, specify the Security Certificate settings. To create
Certificates, you must specify a Certificate Authority Password. You must specify this
password to look at the certificate files after they are generated. Make sure that you record
this password in the worksheets in Appendix B, “Worksheets” on page 309.
After entering the passwords, click Next to continue.

Figure 2-40 Define the Certificate Authority



13.In the Agent Manager Set Passwords panel shown in Figure 2-41, enter the following
information and click Next to continue.
– Agent Manager Password:
This is the resource manager registration password. This password is used to register
the Data Server or Device Server with the Agent Manager. Enter the password twice.

Note: Agent Manager 1.2 installer provided a default ID/password as


manager/password. For Agent Manager 1.3, there is no default password.

At the time of writing this book, entering a password other than “password” will
cause the Data Server and Device Server installation to fail. Check the TPC flashes to
verify whether this has been fixed before entering a password other than “password”.

We recommend that you record it in the worksheets provided in Appendix B,


“Worksheets” on page 309.
– Agent registration Password:
This is the password used to register the common agents (for Fabric agent and Data
agent). You must supply this password when you install the agents. This password
locks the agentTrust.jks file. Enter the password twice.

Note: Agent Manager 1.2 installer provided a default password as changeMe. For
Agent Manager 1.3, there is no default password.

Specify a unique password and record it in the worksheets provided in Appendix B,
“Worksheets” on page 309. You must provide a password here; otherwise, you cannot
continue the installation.



Figure 2-41 Agent Manager Set passwords panel

14.The User Input Summary panel is displayed (see Figure 2-42). If you want to change any
settings, click Back and return to the window where you set the value. If you do not need
to make any changes, click Next to continue.

Figure 2-42 Input summary



The next panel is the WebSphere Application Server installation panel shown in Figure 2-43.
The installation can take some time, so be patient.

Figure 2-43 WebSphere Application Server installation panel



15.When the WebSphere Application Server installation is complete, you will see the
summary information panel. Review the summary information panel (see Figure 2-44)
and click Next to continue.

Figure 2-44 Summary information panel



The Agent Manager installation starts and you see several messages indicating the
installation process. Wait until you get to the panel shown in Figure 2-46. This normally will
take about 5 minutes.

Figure 2-45 Agent Manager Installation progress



16.The Start the AgentManager Application Server panel is shown in Figure 2-46. Choose
Yes, start AgentManager now and click Next to continue.

Figure 2-46 Start the AgentManager Application Server



You will see the panel in Figure 2-47, indicating that the WebSphere server is starting the
Agent Manager.

Figure 2-47 Starting WebSphere of AgentManager

17.The Summary of Installation and Configuration Results panel is displayed in Figure 2-48.
Verify that the Agent Manager has successfully installed all of its components. Review the
panel and click Next to continue.



Figure 2-48 Summary of Agent Manager configuration options summary

18.The last panel (Figure 2-49) shows that the Agent Manager has been successfully
installed. Click Finish to complete the Agent Manager installation.

Figure 2-49 Finish the Agent Manager install



Verifying the installation
You can verify the installation by running the HealthCheck utility from a command-prompt.

From a command prompt, navigate to the directory,


<InstallDir>\Program Files\IBM\AgentManager\toolkit\bin and run HealthCheck.

Refer to the HealthCheck.readme file located in the directory,


<InstallDir>\Program Files\IBM\AgentManager\toolkit, for the usage of HealthCheck.

In our installation, we use changeMe as the agent registration password, so we specify it as part


of the RegistrationPW parameter. See Figure 2-50.

Figure 2-50 Healthcheck utility

Verify that the ARS.version field shows the level you have installed (in our case it is 1.3.2.13)
and that at the end you see the message, Health Check passed, as shown in Figure 2-51.

Figure 2-51 Healthcheck utility result



After the completion of the Agent Manager installation, you can verify the connection to the
database (see Figure 2-52). From a command-prompt, enter:
db2cmd
db2
connect to IBMCDB user db2tpc using db2tpc

Figure 2-52 DB2 command line CONNECT
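 
When you are finished checking, you can close the connection from the same DB2 command line
processor session with the standard CLP commands (entered at the db2 prompt):
   connect reset
   terminate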

2.6 Installing TotalStorage Productivity Center components


Now that the prerequisites have been installed, we install the TotalStorage Productivity Center
components.

Before starting the installation, verify that DB2 8.2 Enterprise Edition FixPak 14 has been
installed and has been started.

Important: Log on to your system as a local administrator with database authority, for
Windows.

1. For Windows, if Windows autorun is enabled, the installation program should start
automatically. If it does not, open Windows Explorer and go to the TotalStorage
Productivity Center CD–ROM drive or directory. Double-click setup.exe.
2. Choose your language and click OK (see Figure 2-53).

Figure 2-53 Language selection panel



3. The License Agreement panel is displayed. Read the terms and select I accept the terms
of the license agreement. Then click Next to continue (see Figure 2-54).

Figure 2-54 License panel

4. Figure 2-55 shows how to select typical or custom installation. You have the following
options:
– Typical installation:
This selection allows you to install all of the components on the same computer by
selecting Servers, Agents, and Clients.
– Custom installation:
This selection allows you to install each component separately.
– Installation licenses:
This selection installs the TotalStorage Productivity Center licenses. The TotalStorage
Productivity Center license is on the CD. You only need to run this option when you add
a license to a TotalStorage Productivity Center package that has already been installed
on your system.
For example, if you have installed TotalStorage Productivity Center for Data package,
the license will be installed automatically when you install the product. If you decide to
later enable TotalStorage Productivity Center for Fabric, run the installer and select
Installation licenses. This option will allow you to install the license key from the CD.
You do not have to install the IBM TotalStorage Productivity Center for Fabric product.
In this chapter, we document Custom Installation. Click Next to continue.



Figure 2-55 Custom installation

5. In the Custom installation, you can select all the components in the panel shown in
Figure 2-56. This is the recommended installation scenario. In our scenario, we show the
installation in stages. By default all components (except the Remote Data agent and
Remote Fabric agent) are checked. As the first step, we only select the option to Create
database schema. and click Next to proceed (see Figure 2-56).

Figure 2-56 Custom installation component selection



6. To start the Database creation, you must specify a DB2 user ID. We suggest that you use
the same DB2 user ID you created when you installed DB2 (see Table 2-5 on page 23).
Click Next, as shown in Figure 2-57.

Figure 2-57 DB2 user and password

7. Enter your DB2 user ID and password again (see Table 2-5 on page 23). Do not take the
default of Use Local Database. Click Create local database. By default, a database
named TPCDB is created. Click Schema creation details to continue (Figure 2-58).

Figure 2-58 DB2 user and create local database



The panel in Figure 2-59 allows you to change the default space assigned to the
database. Review the defaults and make any changes. In our installation we accepted the
defaults.
For better performance, we recommend that you:
– Allocate TEMP DB on a different physical disk than the TotalStorage Productivity Center
components.
– Create larger Key and Big Databases.
Select System managed (SMS) and click OK and then Next to proceed (Figure 2-59).
To understand the advantage of an SMS database versus a DMS database, refer to the
chapter entitled, “Selecting an SMS or DMS tablespace” in the Redbooks publication,
IBM TotalStorage Productivity Center: The Next Generation, SG24-7194.

Figure 2-59 DB schema space

8. You will see the TPC installation information you selected as shown in Figure 2-60;
click Install to continue.

Figure 2-60 TPC installation information



Figure 2-61 is the Database schema installation progress panel. Wait for the installation to
complete.

Figure 2-61 installing DB

9. Upon completion, the successfully installed panel is displayed. Click Finish to continue
(Figure 2-62).

Figure 2-62 Installation summary information



2.6.1 Verifying the installation
To check the installation, choose Start → All Programs → IBM DB2 → General
Administration Tools → Control Center to start the DB2 Control Center, and verify that you have
two DB2 instances in your environment, as shown in Figure 2-63.

Figure 2-63 Verifying DB2 installation

Attention: Do not edit or modify anything in DB2 Control Center. This could cause serious
damage to your tablespace. Simply use DB2 Control Center to browse your configuration.

Log files
Check for errors and java exceptions in the log files at the following locations:
򐂰 <InstallLocation>\TPC.log
򐂰 <InstallLocation>\log\dbSchema\install
For Windows, the default InstallLocation is c:\Program Files\IBM\TPC

Check for the success message at the end of the log files for successful installation.
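 
If you want to scan a log quickly from a command prompt, the standard findstr command can list
the lines that mention errors or exceptions; adjust the path if you chose a different install
location:
   findstr /i /n "error exception" "C:\Program Files\IBM\TPC\TPC.log"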

2.6.2 Installing Data Server, Device Server, GUI, and CLI


In our environment, we performed a custom installation of Data Server, Device Server, GUI,
and CLI.

Preinstallation tasks
To install Data Server and Device Server components, you must log on to Windows 2003 with
a User ID that has the following rights:
򐂰 Log on as a service.
򐂰 Act as part of the operating system.
򐂰 Adjust memory quotas for a process.
򐂰 Create a token object.
򐂰 Debug programs.
򐂰 Replace a process-level token.



Be certain that the following tasks are completed:
򐂰 We recommend that you create a user ID for installation. We created the user ID tpcadmin
(refer to Table 2-3 on page 22).
򐂰 The database schema must be installed successfully to start the Data server installation.
򐂰 An accessible Agent manager must be available to start the Device server installation.
򐂰 Data server must be successfully installed prior to installing the GUI.
򐂰 Device server must be successfully installed prior to installing the CLI.

Custom installation
To perform a custom installation, follow these steps:
1. Start the TotalStorage Productivity Center installer.
2. Choose the language to be used for installation.
3. Accept the terms of the License Agreement.
4. Select the Custom Installation.
5. Select the components you want to install. In our scenario, we select the four server
components, as shown in Figure 2-64. Notice that the field, Create database schema,
is grayed out.

Tip: We recommend that you install Data agent and Device agent in a separate step.

If you install all the components at the same time and one fails for any reason (for
example, space or passwords), the installation suspends and a rollback occurs,
uninstalling all the previously installed components.

Figure 2-64 Installation selection



The DB2 user ID and password are filled in because we used them to create the database
schema. See Figure 2-65 and click Next.

Figure 2-65 User ID and password

6. Click Use local database. We will use the database TPCDB we just created in the
previous step. Click Next to continue (Figure 2-66).

Figure 2-66 Use local database selection



7. In the panel in Figure 2-67, enter the following information:
– Data Server Name:
Enter the fully qualified host name of the Data Server.
– Data Server Port:
Enter the Data Server port. The default is 9549.
– Device Server Name:
Enter the fully qualified host name of the Device Server.
– Device Server Port:
Enter the Device Server port. The default is 9550.
– TPC Superuser:
Enter the Administrators Group for the TPC Superuser. We created the user ID
tpcadmin and added this to the existing Administrators group. See 2.4.5, “User IDs and
password to be used and defined” on page 21 for more details.
– Host Authentication Password:
This is the password used for the Fabric agents to communicate with the Device
Server. Remember to record this password. See Table 2-9 on page 25.

– Data Server Account Password:


For Windows only. TPC installer will create an ID called TSRMsrv1 with the password
you specified here to run the Data server service. The display name for the Data
Server in Windows Services panel is:
IBM TotalStorage Productivity Center - Data Server
– WebSphere Application Server admin ID and Password:
This is the user ID and password required by the Device Server to communicate with
the embedded WebSphere.
You can use the TPC Superuser here. In our case we used tpcadmin. See Table 2-13
on page 26 for further details.
If you click the Security roles... button, the Advanced security roles mapping panel is
displayed. You can assign a Windows OS group to a role group for each TPC role you
want to make an association with, so you can have different authority IDs perform different
TPC operations. The operating system group must exist before you can associate a TPC role
with it. You do not have to assign security roles at installation time; you can assign these
roles after you have installed TPC.
If you click the NAS discovery... button, the NAS discovery information panel is
displayed. You can enter the NAS filer login default user name and password and the
SNMP communities to be used for NAS discovery. You do not have to assign the NAS
discovery information at installation time; you can configure it after you have installed TPC.
Refer to Chapter 5, “N Series and NetApp support” on page 157 for details.
Click Next to continue.



Figure 2-67 Component information for installation

8. In the panel shown in Figure 2-68, enter the Agent Manager information. You must specify
the following information:
– Hostname or IP address:
Fully qualified name or IP address of the agent manager server. For further details
about the fully qualified name, refer to 2.4.1, “Verify primary domain name systems” on
page 18.
– Port (Secured):
Port number of the Agent Manager server. If acceptable (not in use by any other
application), use the default port 9511.
– Port (Public):
The public communication port. If acceptable (not in use by any other application), use
the default of 9513.
– User ID:
This is the user ID used to register the Data Server or Device Server with the Agent
Manager. You have to use the built-in ID manager since it is not allowed to specify it
during the Agent Manager 1.3.2 installation (see Figure 2-41 on page 49).
– Password:
This is the password used to register the Data Server or Device Server with the Agent
Manager. You previously specified this user ID during the Agent Manager install (see
Figure 2-41 on page 49).
– Password - common agent registration password:
This is the password used by the common agent to register with the agent manager, it
was specified when you installed the Agent Manager (see Figure 2-41 on page 49).
Click Next to continue.



Figure 2-68 Agent Manager Information panel

9. The Summary information panel is displayed. Review the information, then click Install to
continue (see Figure 2-69).

Figure 2-69 Summary of installation



The installation starts. You might see several messages related to Data Server installation
similar to Figure 2-70.

Figure 2-70 Installing Data Server

If you install from the electronic image, the installer will prompt you to change to the
directory of the second disk. Click Browse and choose the directory of Disk2, and click
OK to continue.

Tip: If you extract the files from disk1 and disk2 into directories named disk1 and disk2
under the same parent directory, the TotalStorage Productivity Center installation program
will be able to find the disk2 files and will not prompt you with the Insert Next Disk panel.
For example, on Windows (note that this is case-sensitive), you might do this:
C:\disk1
C:\disk2
or
C:\tpc33\disk1
C:\tpc33\disk2

Figure 2-71 shows the panel for multiple disk installation.

Figure 2-71 Multiple disk installation panel



You might also see several messages about the Device Server installation, as shown in
Figure 2-72.

Figure 2-72 Installing Device Server

10.After the GUI and CLI installation messages, you see the summary information panel
(Figure 2-73). Read and verify the information and click Finish to complete the
installation.

Figure 2-73 Component installation completion panel

Verifying the installation


At the end of the installation, the Windows Services panel shows that the Data Server and Device
Server services (see Figure 2-74) have been installed.



Figure 2-74 Windows service

Check that the Administrators group contains the newly created TPC user ID. The user ID
TSRMsrv1 is created by default by the install program.

Log files for data server


Check the logs for any errors/java exceptions. The log files for the data server are:
򐂰 <InstallLocation>\TPC.log
򐂰 <InstallLocation>\log\data\install
򐂰 <InstallLocation>\log\install
򐂰 <InstallLocation>\data\log
For Windows, the default InstallLocation is c:\Program Files\IBM\TPC

Log files for device server


Check the log files for any errors. The log files for the device server are:
򐂰 <InstallLocation>\TPC.log
򐂰 <InstallLocation>\log\device\install
򐂰 <InstallLocation>\device\log
For Windows, the default InstallLocation is c:\Program Files\IBM\TPC

Log files for GUI


Check the log files for any errors. The log files for the GUI are:
򐂰 <InstallLocation>\TPC.log
򐂰 <InstallLocation>\log\gui\install
򐂰 <InstallLocation>\gui\log
For Windows, the default InstallLocation is c:\Program Files\IBM\TPC

Log files for CLI


Check the log files for any errors. The log files for the CLI are:
򐂰 <InstallLocation>\TPC.log
򐂰 <InstallLocation>\log\cli\install
For Windows, the default InstallLocation is c:\Program Files\IBM\TPC



2.6.3 Agent deployment
We do not discuss the agent deployment in this chapter. This topic is well documented in
Chapter 6, “Agent deployment” of the Redbooks publication, IBM TotalStorage Productivity
Center: The Next Generation, SG24-7194.


Chapter 3. Installation of TPC V3.3 on Linux


In this chapter we show the step-by-step installation of TPC on the Linux platform. Assuming
that you have a newly installed Linux server, we cover preparation topics on the operating
system side as well as installation topics of the TPC prerequisite components: DB2 and
Agent Manager. Of the available installation paths, Typical and Custom, we describe the
Custom installation in our environment.



3.1 Preinstallation steps
In this section we discuss planning considerations, including the Java runtime environment.

3.1.1 Planning considerations


First, make sure that you do proper planning. We recommend that you refer to Chapter 3,
“Installation planning and considerations” of the Redbooks publication, IBM TotalStorage
Productivity Center: The Next Generation, SG24-7194. The information provided there is still
valid and provides a good reference. Also, in this same book, see Chapter 5, section 5.5.2,
“User IDs, passwords, and groups” from the AIX installation part; this is also very useful.

Regarding support and machine requirements, consult the TPC support Web site.

3.1.2 Java SDK or runtime environment


Make sure to have a proper version of Java installed. We recommend that you go to the
IBM developerWorks® Web site and download the most current release of the Java runtime
environment that fits your platform. In our case, that is J2SE™ 1.4.2 SR8. It is available at the
following Web site:
http://www.ibm.com/developerworks/java/jdk/linux/
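 
To confirm which Java runtime is currently on your path, and its service release, you can run
the standard version check from a shell:
   java -version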

3.2 Installation of TPC V3.3 on Linux platform


Assuming that you have not installed DB2 yet and you do not want to use a remote DB2
instance, you have to start with the installation of IBM DB2. After that, you must install the
Agent Manager, and finally, install the TPC Data and Device Servers, the GUI, and the CLI.



3.2.1 Installing DB2 Version 8 on Linux platform
First, access your DB2 installation media. You can install DB2 via the graphical installer; to
do this, run db2setup.

As an alternative, you can use the command line installer by running db2_install. This only
copies the DB2 files to your operating system; configuration then has to be done
manually.

Using the graphical installer to install DB2, you have three different options to select the type
of installation you want to make. These options are Typical, Compact, and Custom. We
describe the Custom option to install DB2.

Follow these steps to perform the installation of IBM DB2 UDB:


1. At the command prompt, enter the following command to start the installation:
./db2setup
The IBM DB2 Setup Launchpad opens, and appears similar to the one shown in
Figure 3-1. Click Install Products to begin with the installation of IBM DB2 UDB.

Figure 3-1 DB2 installation setup



2. The product selection panel appears and looks similar to Figure 3-2. Choose option DB2
UDB Enterprise Server Edition and click Next.

Figure 3-2 DB2 product selection



3. The DB2 Setup wizard starts loading. This will take a short period of time. During the
loading phase, you see a panel similar to Figure 3-3.

Figure 3-3 DB2 loading for install

When the DB2 Setup wizard is done loading, you see the DB2 Setup wizard welcome
panel, similar to Figure 3-4. Click Next to continue the installation of IBM DB2 UDB.

Figure 3-4 DB2 Setup wizard



4. The Software License Agreement is displayed similar to Figure 3-5. If you agree with the
software license agreement, select Accept and click Next. If you do not accept the license
agreement, you cannot continue with the installation.

Figure 3-5 DB2 installation license agreement



5. You are presented with the installation type selection dialog, as in Figure 3-6. Choose Custom
and click Next.

Figure 3-6 DB2 custom installation



6. The installation action selection panel is displayed, similar to Figure 3-7. Check Install
DB2 UDB Enterprise Server Edition on this computer and click Next.

Figure 3-7 DB2 Select the installation action

7. The features selection panel, similar to Figure 3-8, is displayed. Select the features to be
installed. From the Server support tab, choose the options shown in Figure 3-8. You
might want to deselect DB2 Data Source Support, because it is only needed to connect
to remote databases residing on the System i™ or System z™ server running z/OS or
OS/390®.



Figure 3-8 DB2 feature selection

8. Select the Client support tab. Make sure that the features shown in Figure 3-9 are
selected. This selection should reflect the default values.

Figure 3-9 DB2 client support features



9. Select the Administration tools tab. Select the features shown in Figure 3-10. This
selection should reflect the default values. Features from the tabs, Application
Development tools and Business Intelligence, are not needed.

Figure 3-10 DB2 Administration tools selection

10.Select the Getting started tab. Select the features shown in Figure 3-11. This selection
should reflect the default values. Click Next.



Figure 3-11 DB2 Getting started selection

11.The Languages panel enables you to install additional languages. Select the languages
you want to install and click Next.

Figure 3-12 DB2 languages selection



12.You advance to the Documentation panel. If you already have a DB2 Information Center
running, specify its location here. Otherwise, select Install the DB2 Information Center
separately after this DB2 product installation and click Next as shown in Figure 3-13.

Figure 3-13 DB2 Information Center installation selection

13.The DAS User panel is now presented. If you would like the installer to create a DB2
Administration Server (DAS) user ID, you must enter a unique username for the DAS user
in the User name field. You must also enter a password in both the Password and the
Confirm password fields.
If you leave the UID and GID fields blank and check the Use default boxes, the system will
assign a UID and GID for you. Alternatively, you can check the Existing user button and
select the name of an existing user ID, which will become the DAS user. When you have
completed this form, click Next as shown in Figure 3-14.



Figure 3-14 DB2 Administration Server user ID and password

14.The Set up a DB2 instance window opens and appears similar to the one shown in
Figure 3-15. Select Create a DB2 Instance and click Next.

Figure 3-15 DB2 instance window



15.You advance to the Instance use panel, similar to Figure 3-16. Select Single-partition
instance and click Next to continue.

Figure 3-16 DB2 Single-partition instance



16.You continue to the Instance owner panel, similar to Figure 3-17. If you would like the
installer to create a DB2 Instance owner user ID, you must enter a unique username for
the instance owner user in the User name field. You must also enter a password in both
the Password and the Confirm password fields.
If you leave the UID and GID fields blank and check the Use default boxes, the system will
assign a UID and GID for you. Alternatively, you can check the Existing user button and
select the name of an existing user ID which will become the DB2 instance owner. When
you have completed this form, click Next.

Figure 3-17 DB2 Single-partition instance



17.The fenced user configuration dialog is shown, similar to Figure 3-18. If you would like the
installer to create a fenced user ID, you must enter a unique username for the fenced
user in the User name field. You must also enter a password in both the Password
and the Confirm password fields.
If you leave the UID and GID fields blank and check the Use default boxes, the system will
assign a UID and GID for you. Alternatively, you can check the Existing user button and
select the name of an existing user ID which will become the fenced user. When you have
completed this form, click Next as shown in Figure 3-18.

Figure 3-18 DB2 Single-partition instance



18.The DB2 instance TCP/IP communication configuration dialog is shown, similar to
Figure 3-19. Select Configure and accept the default Service name and the default Port
number for the DB2 instance. Click Next.

Figure 3-19 DB2 instance TCP/IP communication configuration

19.You advance to the instance properties panel, similar to Figure 3-20. Select authentication
type Server, check the Autostart the instance at system startup checkbox and click
Next.



Figure 3-20 DB2 instance TCP/IP communication configuration

20.The DB2 catalog preparation panel is now shown, similar to Figure 3-21. Choose Do not
prepare the DB2 tools catalog on this computer and click Next.

Figure 3-21 DB2 catalog preparation



21.The administration contact setup dialog is shown, similar to Figure 3-22. Select Local as
the administration contact list location. If you do not have an infrastructure suitable for
SMTP notification, uncheck the Enable notification checkbox in the Notification SMTP
server section. Click Next.

Figure 3-22 DB2 SMTP notification

22.You proceed to the health monitoring notification setup, similar to Figure 3-24. If you
disabled the SMTP notification, you are presented with a warning message like the one in
Figure 3-23. Click OK to confirm that a setup without SMTP notification is intended.

Figure 3-23 DB2 health monitoring notification setup

23.The health monitoring notification setup is presented, similar to Figure 3-24. Select New
contact and specify a Name and an E-mail address for the administration contact for this
instance, or select Defer this task until after installation is complete, and click Next.



Figure 3-24 DB2 health monitoring notification setup

24.You get to the summary panel, similar to Figure 3-25. Verify that the summary information
fits your installation needs and click Finish to start the DB2 installation.

Figure 3-25 DB2 summary panel



25.During installation, you are presented with progress updates from time to time, similar to
Figure 3-26.

Figure 3-26 DB2 installation progress

26.When the installation is finished, you are presented with a completion message similar to
Figure 3-27. Check the Post-installation steps and click Status report to verify installation
success.

Figure 3-27 DB2 installation completion



27.The Status report looks similar to Figure 3-28. Verify that every installation step is
completed with the status of Success. Click Finish to end the installation program.

Figure 3-28 DB2 successful installation

28.Congratulations, you have successfully completed the DB2 installation, the first step of the TPC deployment.



3.2.2 Installation of the Agent Manager
Source the db2profile script of your DB2 instance so that your shell environment can
execute DB2 commands. For example, if the DB2 instance you want to use for TPC is owned by the user
db2inst1, you issue the following command (note that helium is the name of the server we are
installing TPC on):

[root@helium disk1]# . /home/db2inst1/sqllib/db2profile
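To confirm that the DB2 environment is now set in your shell, you can display the instance name and the DB2 level. This is a quick sanity check, not a required step; db2inst1 is the instance owner in our example environment:

[root@helium disk1]# echo $DB2INSTANCE
db2inst1
[root@helium disk1]# db2level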

To install the Agent Manager, make sure that you can run graphical installers from your
session; the installer requires a working X display. A quick way to verify this is shown below, followed by the installation steps.
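If you are installing over the network, SSH X forwarding is one possible way to obtain a graphical session. The following quick check is a sketch; the workstation and server names are examples from our environment:

workstation$ ssh -X root@helium
[root@helium root]# echo $DISPLAY
localhost:10.0

If echo $DISPLAY returns an empty value, the graphical installer will not be able to open its windows.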
1. Go to your Agent Manager install resource and start the installation by issuing the
following command:
[root@helium EmbeddedInstaller]# ./setupLinux.bin
2. The Agent Manager Installer opens up. The first selection you have to make is whether
you want to install the Agent Manager in combination with an already existing installation
of the WebSphere Application Server, or if you want to use the embedded WebSphere
Application Server, which is delivered with the Agent Manager. We do not have a
WebSphere Application Server installed already, so we use the embedded one.
3. Click Next when you are ready to continue the installation. See Figure 3-29.

Figure 3-29 Linux Agent Manager installation



4. This panel prompts you for the installation directory of the Tivoli Agent Manager. Choose a
location that has sufficient free space. We verified that /opt has enough free space and
stayed with the default installation directory, which is /opt/IBM/AgentManager, as shown in
Figure 3-30. Click Next to continue.

Figure 3-30 Linux Agent Manager installation directory

5. This panel prompts you for the type and location of the database used for the Tivoli Agent
Manager registry. You can choose among the following six options:
– DB2 database on this computer (this is the default)
– DB2 database on another computer (without DB2 Administration Client)
– Local alias to DB2 database on another computer (using DB2 Administration Client)
– Oracle® database on this computer
– Oracle database on another computer (using Oracle database Client)
– Derby database on this computer
We choose DB2 database on this computer, which is the default. It is suitable for the
local installation of DB2 that we have.



Make your decision and click Next to continue the installation, as shown in Figure 3-31.

Figure 3-31 Tivoli Agent Manager registry



6. Based on the selection we made on the previous panel, we are now asked to provide
additional information on the DB2 UDB connection. Specify the Home Directory of the
DB2 instance; in our case it is /home/db2inst1/sqllib, as shown in Figure 3-32. Also choose
a Database Name; in our case it is IBMCDB, which is also the default. Click Next to
continue the installation.

Figure 3-32 Linux DB2 UDB connection



7. Enter the Database User Information in the next panel. Specify a Database Runtime User
ID and a password for that ID. You can also use separate IDs for installation and runtime
purposes: specify a user ID with full administration privileges as the Database Administrator
User ID, and a second ID without the create object authority as the runtime user ID
(see Figure 3-33). This limits the authority that you give to the runtime DB2 user ID.
Click Next to continue.

Figure 3-33 Database user information



8. Specify the WebSphere Application Server connection information during this step of the
installation. Provide a fully qualified Host Name or Alias that can be resolved throughout
your whole environment. It is considered best practice to specify a fully qualified hostname
that can be resolved through your DNS system.
Working with the hosts file of your machines can leave you with a hard-to-manage setup
for long-term installations. Check whether the specified Agent Manager Registration Port,
the Secure Port, and the Public Port and Alternate Port for the agent recovery service are
suitable for your environment. You can also decide whether or not to use port 80 for the
agent recovery service.
If you do not want to run the Agent Manager as the root user, you have to check the box to
not use port 80. Using port 80 for the agent recovery service can be a good practice in
strongly firewalled environments because it enables agent communication with the Agent Manager.
Of course, using port 80 for the agent recovery service will conflict with any other service
that uses port 80, for example, an HTTP server. If you want to run an HTTP server on port
80 on the same machine, you have to change the port for the agent recovery service.
If you intend to use the TPC feature of accessing the GUI through a Web server, consider
changing the port for the agent recovery service; alternatively, you can change the port
of the HTTP server, which can then still be used to access the TPC GUI (see Figure 3-34).
Click Next to continue.

Figure 3-34 Linux WebSphere Application Server connection information
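Before deciding whether the agent recovery service should use port 80, you can check whether another service on the TPC server is already listening on that port. This is a quick check outside the installer, not a required installation step:

[root@helium root]# netstat -tln | grep ":80 "

If the command returns a line, something (for example, an HTTP server) already owns port 80 and you should not assign it to the agent recovery service.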



9. The next panel (see Figure 3-35) lets you specify the application server name and the
context root of the application server. You also can decide on whether or not to
automatically start the Agent Manager each time the system restarts. Click Next to
continue.

Figure 3-35 Application Server information



10.The next step lets you choose whether you want to use the demonstration certificates or
create your own certificates for this installation (see Figure 3-36). It is not only considered
best practice, but is also strongly recommended that you create your own certificates for
this installation, even if you only plan the installation to be for testing or demonstration
purposes. Not only do the demonstration certificates leave you with a low level of security,
they also make long-term installations hard to manage. Click Next to continue.

Figure 3-36 Security Certificates



11.Define the certificate authority by specifying a certificate authority name (for example,
TivoliAgentManagerCA) and a security domain (the DomainName). Also specify a
certificate authority password if your security policies require you to examine the contents
of the certificate authority truststore (see Figure 3-37). This is considered a best practice.
Click Next to continue.

Figure 3-37 Agent Manager security certificates



12.Finally, specify an Agent Manager password and an Agent Registration password.
The Agent Manager password locks the Agent Manager truststore file and keystore file.
It is internally used by the Agent Manager. The agent registration password is used by
common agents to register with the Agent Manager (see Figure 3-38). Click Next to
continue.

Figure 3-38 Agent Manager registration information



13.You then are presented with a user input summary that contains an overview of all the
decisions you just made (see Figure 3-39). Verify that everything is the way you want it to
be. Click Next to start the installation.

Figure 3-39 Agent Manager installation summary



14.First, according to our selections, the embedded version of the WebSphere application
server is installed. The installation looks similar to Figure 3-40 and periodically provides
you with progress updates.

Figure 3-40 Agent Manager installation progress



15.After successfully installing the embedded WebSphere Application Server, the Agent
Manager installation provides you with summary information. Check the settings and
click Next to continue.

Figure 3-41 Agent Manager installation summary



16.During the process, the Agent Manager installation provides you with sporadic progress
updates as shown in Figure 3-42.

Figure 3-42 Agent Manager installation progress



17.When the Agent Manager installation is almost complete, you are given the choice of
starting the Agent Manager now or deferring this task until later, as shown in Figure 3-43.
Choose Yes, start AgentManager now and click Next to continue.

Figure 3-43 Agent Manager start options



18.The Agent Manager Application Server is started immediately as shown in Figure 3-44.

Figure 3-44 Starting WebSphere Application Server Agent Manager



19.You are presented with a summary of your installation and configuration results. Verify
that every step completed with the status of Successful, as shown in Figure 3-45.
Click Next to continue.

Figure 3-45 Agent Manager installation summary

20.Finally, the installation displays a summary indicating that the installation of the
Agent Manager is complete and the Agent Manager has been started. Click Finish to end
the installation of the Agent Manager.



Figure 3-46 Agent Manager application server summary

Congratulations, you have successfully installed the Agent Manager.

3.2.3 Installing TPC V3.3 database schema


After the successful installation of the Agent Manager, continue with the installation of the
TPC DB2 database schema.

Before you continue with the installation, you have to enable the root user to install the
remaining TPC components. To do this, the root user must be added to the Linux group
of the DB2 instance owner.

Red Hat installation


On Red Hat, follow this procedure.

If your DB2 instance owner is db2inst1, log on as db2inst1 and issue the groups command:
[db2inst1@helium db2inst1]$ groups
db2grp1 dasadm1

You are presented with the groups of which db2inst1 is a member. We are interested in the group
db2grp1. Now we need to add root to the db2grp1 group.

Now log on as root and issue the groups command:


[root@helium root]# groups
root bin daemon sys adm disk wheel



Make sure that you note the groups of which root is a member. If something goes wrong
during the addition of the db2grp1 to root’s groups, you want to be able to remember the
groups that root was a member of.

For Red Hat, in this example, the following command adds root to the db2grp1 group.
Logged on as root, execute the following command.
usermod -G root,bin,daemon,sys,adm,disk,wheel,db2grp1 root
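If your version of usermod supports the -a (append) option, you can add the group without repeating the existing group list. This is an alternative to the command above, not the method we used in our environment:

usermod -a -G db2grp1 root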

After that, log off and log on as root again to see the changes you made. Execute the
command groups as root again:
[root@helium root]# groups
root bin daemon sys adm disk wheel db2grp1

Now you can continue with the installation of the TPC components.

We consider it a best practice to split the installation of TPC into the following three parts:
1. Install the TPC DB2 database schema.
2. Install the Data Server and the Device Server, the GUI, the CLI, the Data agent, and the
Fabric agent.
3. Optional: Install Remote Data agent and / or Remote Fabric agent.

First, install the TPC DB2 database schema. Start the installation of TPC by executing:
[root@helium disk1]# ./setup.sh

The graphical installer will prompt you with a selection window to choose the installer’s
language. Choose English as shown in Figure 3-47 and click OK to continue.

Figure 3-47 Choose installation language

4. The initialization of the installation wizard takes a brief moment. During this time you can
see the panel shown in Figure 3-48.

Figure 3-48 Initializing wizard... panel



5. The TPC installation program starts and prompts you with the license agreement.
Carefully read through the license agreement. To continue, you have to accept the license
agreement and click Next as shown in Figure 3-49.

Figure 3-49 TPC license agreement

6. The panel in Figure 3-50 prompts you for the type of installation you want to perform.
Choose Custom Installation to be able to perform the staged installation approach we
take. Choose an installation location for TPC with enough free space. We use the default
/opt/IBM/TPC. Click Next to continue.

Figure 3-50 Installer choice



7. The panel in Figure 3-51 lets you choose the components you want to install in this run.
Choose only the Create database schema checkbox and deselect all the other check
boxes so that you only create the database schema and nothing else. Click Next to
continue.

Figure 3-51 Create database schema

8. This panel allows you to enter the database administrator information that is used to
connect to the database during installation and uninstallation. In our environment, the
database administrator username is db2inst1 (see Figure 3-52). Click Next to continue.

Figure 3-52 Database administrator information



9. To create the new database schema, the information shown in Figure 3-53 is needed.
Enter the information the product will use when communicating with the DB2 instance and
creating the required repository tables. Be careful not to choose a local database that
already exists. IBMCDB is the Common agent database and TOOLSDB is the DB2 tools
database; neither can be used for the TPC installation. Choose Create new database and
choose a name for it. The default name for the newly created TPC database is TPCDB.
The schema creation details allow you to further specify the size and layout of the TPCDB.
If you are not a DB2 specialist, accept the defaults. Click Next to continue.

Figure 3-53 DB2 database schema information

10.The installer presents you with summary information. Check it carefully to ensure that it
represents what you want to do as shown in Figure 3-54. If everything is acceptable, click
Next to continue.



Figure 3-54 DB2 database schema information

11.The installation now begins. The installer gives you progress updates as shown in
Figure 3-55.

Figure 3-55 Database schema installation progress

12.After the installation is finished, you are presented with an installation summary. Make sure that
the installation was successful and click Finish to end the installation as shown in
Figure 3-56.



Figure 3-56 Database schema summary

Congratulations, you have successfully installed the TPC Database Schema.

3.2.4 Installing TPC V3.3 Data Server, Device Server, CLI, and GUI
The next step is to install the remaining TPC components.
1. To continue with the TPC installation, start the installer again by executing the following
command where helium is the name of the server on which we are installing the code.
[root@helium disk1]# ./setup.sh

The graphical installer will prompt you with a selection window to choose the installer’s
language. Choose English as shown in Figure 3-57 and click OK to continue.

Figure 3-57 Installer language

2. The initialization of the installation wizard takes a brief moment. During this time, you can
see the following panel shown in Figure 3-58.

Figure 3-58 Installer wizard



3. The TPC installation GUI starts and prompts you with the license agreement. Carefully read
through the license agreement as shown in Figure 3-59. To continue, you have to accept
the license agreement and click Next.

Figure 3-59 License agreement panel

4. This panel prompts you for the type of installation you want to perform. Choose Custom
Installation to be able to continue the staged installation approach we take. The
installation location for TPC is detected automatically. We use the default /opt/IBM/TPC as
shown in Figure 3-60. Click Next to continue.

Figure 3-60 Select Custom installation



5. This dialog lets you choose the components that you want to install in this run. The
Create database schema checkbox is grayed out because we already installed the schema. Now
select all the remaining check boxes so that you install the Data Server, the Device Server,
the GUI, the CLI, the Data agent, and the Fabric agent as shown in Figure 3-61. Click
Next to continue.

Figure 3-61 Component selection panel

6. Enter the database administrator information to be used during the installation in the panel
shown in Figure 3-62 and click Next to continue.

Figure 3-62 Database administrator information



7. Because we already installed the DB2 database schema, you now have to select it from
the list available under Use local database. Select the TPCDB database as shown in
Figure 3-63. Click Next to continue.

Figure 3-63 TPCDB database schema information

8. The next dialog, as shown in Figure 3-64, lets you specify Data Server, Device Server and
Data agent information. Enter a fully qualified hostname for the Data Server and the
Device Server. This hostname should be resolvable by your DNS system from all
machines you plan to use in combination with TPC. Specify a Data Server and a Device
Server port. Specify the TPC superuser.
Specify the host communication password, which the Fabric agents use to communicate
with the Device Server. You cannot specify a Data Server account password; that field
applies to Windows installations only. Specify a WebSphere Application Server Admin ID
and a password. Click Security roles... to advance to Figure 3-65, click NAS discovery
to advance to Figure 3-66 on page 123, or click Data agent options to advance to
Figure 3-67 on page 123. Click Next to continue with the installation.



Figure 3-64 TPC component information

You can create advanced security role mappings here. Keep the defaults and click OK,
as shown in Figure 3-65, to return.

Figure 3-65 Security roles mapping



Because the NAS discovery information is optional (see Figure 3-66), leave it as it is and
click OK to get back. The NAS discovery information can be entered at a later time.

Figure 3-66 NAS Discovery information

Configure the default options for Data agents in this dialog. Enable the checkbox to have
an agent run a scan when first installed. Enable the checkbox to allow an agent to run a
script sent by the server. Click OK to continue.

Figure 3-67 Data agent options

Click Next to continue.



9. Specify the Agent Manager information. Enter a fully qualified hostname, the Agent
Manager secured port, and the Agent Manager public port. Specify the Data Server and
Device Server registration information and the common agent registration password. Click
Next to continue.

Figure 3-68 Agent Manager information

10.The next panel, as shown in Figure 3-69, prompts you with the common agent selection.
Click Install a new common agent at the location listed below. The default location is
/opt/IBM/TPC/ca with an agent port of 9510. Click Next to continue.

Figure 3-69 Common Agent selection panel



11.The next dialog shown in Figure 3-70 shows you the installation summary information.
Check that everything is as expected and click Install to begin the actual installation.

Figure 3-70 Installation option summary panel

12.After the installation is finished, you are presented with a summary panel as shown in
Figure 3-71. Click Finish to end the installation.

Figure 3-71 Installer summary panel

Congratulations, you have successfully installed TPC. You can now start the TPC GUI by
issuing the following command:
[root@helium root]# /usr/local/bin/TPC




Chapter 4. TPC V3.3 upgrade methodology


IBM TotalStorage Productivity Center uses multiple techniques to collect data from the
various subsystems and devices in your environment. These data paths originate from
and pass through many components of TotalStorage Productivity Center.

In this chapter we guide you through a TotalStorage Productivity Center V3.3 upgrade and
describe the requirements necessary to accomplish this goal. We also discuss the
preparation steps required before you do an upgrade. We discuss the following topics:
򐂰 IBM upgrade strategy
򐂰 Upgrade technical considerations
򐂰 Step-by-step upgrade process



4.1 TPC V3.3 structure
IBM TotalStorage Productivity Center Standard Edition is composed of a modular, integrated
set of products that can be purchased individually or in different combinations of the following
elements:
򐂰 A data component, IBM TotalStorage Productivity Center for Data
򐂰 A fabric component, IBM TotalStorage Productivity Center for Fabric
򐂰 A disk component, IBM TotalStorage Productivity Center for Disk

TPC for Data and TPC for Fabric share a Common Agent to manage the fabric as well as
capacity utilization of file systems and databases. Figure 4-1 shows the TotalStorage
Productivity Center physical structure.

Figure 4-1 TotalStorage Productivity Center architecture

The Data server is the control point for product scheduling functions, configuration, event
information, reporting, and GUI support. It coordinates communication with agents and data
collection from agents that scan file systems and databases to gather storage demographics
and populate the database with results.

Automated actions can be defined to perform file system extension, data deletion, and Tivoli
Storage Manager backup or archiving or event reporting when defined thresholds are
encountered.



The Data server is the primary contact point for GUI user interface functions. It also includes
functions that schedule data collection and discovery for the Device server. The Device server
component discovers, gathers information from, analyzes performance of, and controls
storage subsystems and SAN fabrics. It coordinates communication with agents and data
collection from agents that scan SAN fabrics.

The single database instance serves as the repository for all TotalStorage Productivity Center
components. The Data agents and Fabric agents gather host, application, and SAN fabric
information and send this information to the Data server or Device server. The GUI allows you
to enter information or receive information for all TotalStorage Productivity Center
components.

You can upgrade previous releases of TotalStorage Productivity Center to V3.3. The
TotalStorage Productivity Center architecture flow is illustrated in Figure 4-1. You can upgrade
the following IBM TotalStorage Productivity Center releases to IBM TotalStorage Productivity
Center version 3.3:
򐂰 IBM TotalStorage Productivity Center for Data version 2.3 to 3.3
򐂰 IBM TotalStorage Productivity Center version 3.1.1, 3.1.2, or 3.1.3 to 3.3
򐂰 IBM TotalStorage Productivity Center version 3.2 or 3.2.1 to 3.3

If you have TPC V2.1 or V2.2, you must first upgrade to V2.3, then upgrade to IBM
TotalStorage Productivity Center V3.3. For information about upgrading IBM TotalStorage
Productivity Center for Data from V2.1 or 2.2 to 2.3, see the IBM TotalStorage Productivity
Center for Data Installation and Configuration Guide for V2.3, GC32-1727, found at:
http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp

Note: The Migration Utility allows you to migrate data from IBM TotalStorage Productivity
Center V2.3 to V3.3. You can migrate Disk, Performance, Fabric or TotalStorage Expert
data. You do not have to migrate data from IBM TotalStorage Productivity Center for Data
V2.3 because that data is migrated through the normal upgrade process to IBM
TotalStorage Productivity Center V3.3.

4.2 TotalStorage Productivity Center pre–planning steps


The typical upgrade process requires executing the following steps:
򐂰 Password gathering:
– Obtain the DB2 administration user and password.
– Obtain the DB2 and agent manager password.
– Obtain the Certificate authority password.
– Obtain the Common agent service logon userid and password.
– Obtain the Host authentication password.
– Obtain the Data and Device server registration login userid and password.
– Obtain WebSphere Application server administrator userid and password.
– Obtain the NAS filer login userid and password.
򐂰 Upgrade DB2 to FixPak 14.
򐂰 Upgrade Agent Manager to Agent Manager V1.3.2.
򐂰 Upgrade TotalStorage Productivity Center to TotalStorage Productivity Center V3.3.



4.3 DB2 FixPak install for Windows
TPC V3.3 requires DB2 Version 8 FixPak 14. The DB2 8.1 FixPak 14 is available at the
following Web site:
http://www-306.ibm.com/software/data/db2/udb/support/downloadv8_windows32bit.html

At this site, select DB2 Enterprise Server Edition for download.

To begin the upgrade of DB2 FixPak, follow these steps:


1. Download the electronic FixPak from the above Web site, and unzip the file into a
temporary working directory.
2. Open the Windows Services panel by selecting Start → Control Panel →
Administrative Tools → Services and stop all DB2 services (there are 7 of them, 6 of
which are started) as shown in Figure 4-2.

Figure 4-2 stopping DB2 services



3. In the system tray, you notice that the green DB2 icon now has a red box on it, indicating
the DB2 services are stopped. Click on the icon and select Exit, as shown in Figure 4-3.
You must do this, otherwise the FixPak installer will not be able to install the FixPak.

Figure 4-3 Exit DB2 icon in system tray

4. Double-click setup.exe from the temporary working directory you created in step 1 to start
the upgrade.
5. Click Run to proceed as shown in Figure 4-4.

Figure 4-4 DB2 FixPak install

6. You will see the first panel, as shown in Figure 4-5. Select Install Product to proceed.

Figure 4-5 DB2 FixPak Setup welcome



7. The next panel allows you to select the DB2 product to be installed. Click Next to continue,
as shown in Figure 4-6.

Figure 4-6 Select product

The InstallShield Wizard starts as shown in Figure 4-7.

Figure 4-7 Preparing to install



8. The DB2 setup panel is displayed, as shown in Figure 4-8. Click Next to continue.

Figure 4-8 Setup wizard

The DB2 installation proceeds, and you see a progress panel similar to the one shown in
Figure 4-9.

Figure 4-9 Installation progress



9. When the installation completes, click Finish, as shown in Figure 4-10.

Figure 4-10 DB2 setup completed

10.The installer reminds you to reboot your system after the DB2 upgrade, as shown in
Figure 4-11. Click Yes to reboot your server.

Figure 4-11 DB2 FixPak 14 installation requires system reboot



4.3.1 Verify the DB2 FixPak installation
To verify the DB2 FixPak installation, follow these steps:
1. Open the Windows Services panel by selecting
Start → Control Panel → Administrative Tools → Services and check that the DB2
services are started as shown in Figure 4-12.

Figure 4-12 Windows services showing DB2 services started

2. Open a Command Prompt window and enter the db2level command to check the DB2
version installed as shown in Figure 4-13.

Figure 4-13 db2level shows FixPak 14 installed



4.3.2 DB2 UDB FixPak installation on Linux
In this section we take you through the steps to upgrade DB2 on a Linux platform.
1. Issue the following command to check your DB2 Level:
db2level
The output will look similar to Figure 4-14.

Figure 4-14 db2 level check

2. It is considered a good idea to install the most current IBM DB2 FixPak. For TPC V3.3 we
install DB2 FixPak 14. Download it from the IBM Web page. Navigate through the path
Software → Information Management → DB2 Product Family → DB2 UDB Version 8
FixPaks and Clients Install
3. While the FixPak downloads, prepare your DB2 for the upgrade. Shut down DB2 by entering these
commands:
su - db2inst1
. $HOME/sqllib/db2profile
db2 force applications all
db2 terminate
db2stop
db2licd -end
$HOME/sqllib/bin/ipclean
exit



An example is provided in Figure 4-15. As you can see, you can issue the commands
even if DB2 is already partially stopped.

Figure 4-15 Shutdown of DB2

4. Continue with the shutdown of the Administration server by entering these commands as
shown in Figure 4-16:
su - db2tpc
. $HOME/das/dasprofile
db2admin stop
exit
/opt/IBM/db2/V8.1/bin/db2fmcu -d
/opt/IBM/db2/V8.1/bin/db2fm -i db2tpc -D

Figure 4-16 Shutdown of server



5. Copy the FixPak file to a directory with enough free space and unpack it:
tar -xvf FP14_MI00176.tar

6. Enter the FixPak directory and start the installation of the FixPak by entering this
command as shown in Figure 4-17.
installFixPak -y

Figure 4-17 Installation of FixPak

7. After successfully installing the FixPak, update the DB2 database instances. In
preparation for the instance update, you can list the database instances by entering this
command:
/opt/IBM/db2/V8.1/instance/db2ilist

Figure 4-18 Update DB2 instances



8. Identify the DB2 instance you want to update, and update it with this command as shown
in Figure 4-19.
/opt/IBM/db2/V8.1/instance/db2iupdt db2inst1

Figure 4-19 Update DB2 instance

9. List the DB2 database administration servers by entering this command as shown in
Figure 4-20 on page 139.
/opt/IBM/db2/V8.1/instance/daslist

Figure 4-20 List DB2 database administration servers



10.Identify the DB2 Administration Server you want to update and update it with the
command as shown in Figure 4-21.
/opt/IBM/db2/V8.1/instance/dasupdt db2tpc

Figure 4-21 Update Administration Server

11.Start up DB2 again by entering the following commands (see Figure 4-22).
su - db2inst1
. $HOME/sqllib/db2profile
db2start
exit
su - db2tpc
. $HOME/das/dasprofile
db2admin start
exit

Figure 4-22 Restart DB2



12.Check your DB2 level again by issuing the following commands (see Figure 4-23):
. /home/db2inst1/sqllib/db2profile
db2level

Figure 4-23 Verify DB2 level after FixPak install

Congratulations, you have successfully installed the DB2 FixPak.

4.4 Agent Manager upgrade


The Agent Manager is an embedded WebSphere application used to communicate with
registered common agents to collect data. This is a Storage Management Initiative -
Specification (SMI-S) standard.

Attention: A new version of Agent Manager (V1.3.2) is available for use with TPC V3.3.
It is mandatory that you upgrade to V1.3.2, because the embedded WebSphere shipped
with Agent Manager V1.2 is no longer supported.



4.4.1 Steps for upgrading Agent Manager
This topic provides information on how to upgrade Agent Manager on a Windows 2003
platform. Complete the following steps:
1. Untar or unzip the electronic Agent Manager image or insert the Agent Manager CD in the
CD-ROM drive.
2. Log on to the computer with administrative authority or root authority.
3. Navigate to the EmbeddedInstaller directory. Invoke the Agent Manager installation
program.
– For Windows, double-click on setupwin32.exe
– For AIX, invoke setupAix.bin
– For Linux, invoke setupLinux.bin
4. The Agent Manager installation program opens and discovers an existing instance of
Agent Manager installed as shown in Figure 4-24.

Figure 4-24 Previous Agent Manager installation discovery



5. The Database User Information panel is displayed next. Enter the database
user IDs and passwords for the DB2 database instance (see Figure 4-25).

Figure 4-25 Database User Information panel

Accept the information or enter the correct information. Click Next.


6. A series of installing panels for embedded WebSphere are displayed indicating the
progress of the installation. Wait for the upgrade to complete (see Figure 4-26).

Figure 4-26 WebSphere Application Server install progress



7. The summary information panel is displayed indicating where Agent Manager will be
installed and the size information. Review the information and click Next to continue
(see Figure 4-27).

Figure 4-27 Agent Manager installation summary

8. A series of installing panels for the Agent Manager are displayed indicating the progress of
the installation (see Figure 4-28). Wait for the installation to complete.

Figure 4-28 Agent Manager installation progress



9. The Stop and Restart the AgentManager Application Server panel is displayed (see
Figure 4-29). You have these options:
a. Yes, stop and then start AgentManager now.
b. No, I will stop and then start AgentManager later.
Select YES and click Next to continue.

Figure 4-29 Agent Manager server restart option

10.A summary of the Installation and Configuration Results panel is displayed (see
Figure 4-30). This panel indicates whether the Agent Manager has successfully validated and
installed all of its components. Review the information and click Next to continue.



Figure 4-30 Agent Manager installation summary

11.The summary information panel is displayed. For a successful upgrade, you will see the
panel shown in Figure 4-31. Click Finish to continue.

Figure 4-31 Agent Manager install summary



4.4.2 Verifying Agent Manager installation
You can use the GetAMInfo command to verify the Agent Manager version. To run the
GetAMInfo command, go to the following directory:
򐂰 <agent_manager_dir>/bin (for AIX and Linux)
򐂰 <agent_manager_dir>\bin (for Windows)

Run the following command:


򐂰 GetAMInfo.sh (for AIX and Linux)
򐂰 GetAMInfo.bat (for Windows)
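For example, on a Linux server with the default installation directory (an assumption; adjust the path if you installed the Agent Manager elsewhere), the invocation looks like this:

cd /opt/IBM/AgentManager/bin
./GetAMInfo.sh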

Figure 4-32 shows the output from the GetAMInfo command:

Figure 4-32 getaminfo command output

4.5 TPC V3.3 pre-upgrade process


When you upgrade TotalStorage Productivity Center, you will be upgrading all components
(if all components are installed). Before you upgrade TotalStorage Productivity Center,
complete the following steps:
1. Exit all instances of TotalStorage Productivity Center GUI.
2. Make sure that you have exclusive access to the server on which you are installing TPC
V3.3. If you are accessing the server remotely, make sure that there are no other remote
connections to the server. Multiple remote connections, such as Windows Remote
Desktop Connections, will cause the upgrade to fail and might render the server
unrecoverable. To log off other remote users on Windows, follow these steps:
a. Go to Start → Settings → Control Panel → Administrative Tools → Terminal
Services Manager.
b. On the Users tab, right-click the users that should not be logged in to the server and
select Logoff from the pop-up menu.
c. Close the Terminal Services Manager window.



3. Stop all three IBM TotalStorage Productivity Center services. Also make sure that you stop
any long running scan jobs. To stop the services on Windows, go to Start → Settings →
Control Panel → Administrative Tools → Services. Stop the following services:
– IBM WebSphere Application Server V5 - Device Server
– IBM TotalStorage Productivity Center - Data Server
– IBM Tivoli Common Agent <directory>, in which <directory> is where the common
agent is installed. The default is <TPC_install_dir>/ca.
4. Back up your current TPC V3.1.x server and TotalStorage Productivity Center database.
This is important in case of an upgrade failure; a minimal example of a repository database backup follows this list.
5. Restart the TotalStorage Productivity Center Server and all related services.
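As an illustration of the backup in step 4, an offline backup of the TPC repository database can be taken with the DB2 command line. The database name (TPCDB) and the target path are from our environment and might differ in yours; run the commands as the DB2 instance owner or another user with SYSADM authority, and on Windows use a DB2 Command Window and a Windows path:

db2 force applications all
db2 backup database TPCDB to /tpc_backup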

4.6 Upgrade TPC V3.3 server components


To upgrade TPC, use the same installation program that you used to install the product. Depending on
what components you have already installed on your system, there will be some differences
in the panels you see. The content of the TPC V3.3 code media is described in 2.1.2,
“Product code media layout and components” on page 16.

The following panels take you through an upgrade from TPC V3.1 to TPC V3.3 on a Windows
2003 server.
1. Start the IBM TotalStorage Productivity Center installation program.
2. The Select a language panel is displayed. Select a language and click OK as shown in
Figure 4-33, “Installer language selection” on page 148.

Figure 4-33 Installer language selection

3. The Software License Agreement panel is displayed as shown in Figure 4-34. Read the
terms of the license agreement. If you agree with the terms of the license agreement,
select I accept the terms of the license agreement. Click Next to continue.



Figure 4-34 License agreement

4. The default selection in the next panel is Typical installation. It is important
that you select Custom installation (see Figure 4-35). Then click Next to continue.

Figure 4-35 Select Custom installation



5. The panel in Figure 4-36 is displayed: Select one or more components to install on the
local or remote computer. It indicates the installed components and the product version
numbers. For example, for the Data Server, you might see this: Data Server 3.1.3.37.
When you have all of the IBM TotalStorage Productivity Center components installed, you will notice
that you cannot select a component to install. Click Next to continue.

Figure 4-36 Select components to install

6. The Database administrator information panel is displayed with information filled in by the
installation program (Figure 4-37). Click Next.

Figure 4-37 Specific database administrator information



7. The Existing database schema information panel is displayed. If the information is correct
on the panel, click Next (see Figure 4-38).

Figure 4-38 Database schema information

8. The Data server, Device server, Data agent, and Agent information panel is displayed.
Enter the Host authentication password, which the Fabric agent uses to communicate with
the Device server when it is installed. Also enter the WebSphere Application Server (WAS)
administration ID and password. All fields are case sensitive. Click Next to continue.



Figure 4-39 Installer password information panel

9. The Summary information panel is displayed (see Figure 4-40). The items listed will be the
components upgraded. Verify the list of components and click Next to continue.

Figure 4-40 Installation summary panel



10.At this point, you will see a series of installation panels displayed as TotalStorage
Productivity Center V3.3 upgrades the various components, indicating the progress of the
installation. Wait for the installation to complete.

Figure 4-41 Installation progress panels

11.If you are installing using the Web images, a panel is displayed asking for Disk2. Enter
the directory path where Disk2 is extracted (see Figure 4-42 on page 153). When Disk 2 is
located, click Next to continue.

Figure 4-42 Disk 2 location



12.The successfully installed panel is displayed as shown in Figure 4-43. Use Windows
services to ensure that the TotalStorage Productivity Center services are started. The
Windows services include but are not limited to the following services:
– DB2 - DB2- 0
– IBM Tivoli Common Agent
– IBM TotalStorage Productivity Center
– IBM WebSphere Application Server

Figure 4-43 Successful installation panel



13.Select the icon to start TotalStorage Productivity Center. Notice in Figure 4-44 that the
version now indicates Version 3.3.0.

Figure 4-44 TotalStorage Productivity Center sign-on panel




Chapter 5. N Series and NetApp support


IBM TotalStorage Productivity Center supports discovering, monitoring, and reporting on
IBM N Series and NetApp® Network Attached Storage (NAS).

This chapter provides a step-by-step guide to configuring TotalStorage Productivity Center to
monitor NAS devices.



5.1 Overview of NAS support
TPC for Data supports IBM N Series and Network Appliance™ filers for filer discovery,
probe/scan agent configuration, asset probe, file system scan, quota import/monitoring,
alerting and capacity reporting.

Unlike with other Data agents, there is no agent code to install on NAS devices. TPC issues
SNMP queries to the NAS device to collect summary information about the device, and it
also uses a proxy agent implementation to collect more detailed information. A Data agent
is designated as the “proxy agent” responsible for collecting asset/quota information from
assigned NAS devices via SNMP. TPC collects information about the file systems or shares
mounted via NFS or CIFS as seen by the system hosting the Data agent.

After collecting the NAS device information, the NAS devices are displayed in the Topology
Viewer as computers. You can check information about them just as you would for computers.

The collected information is also used for the following reports, which can be found in the
Navigation Tree under Reporting → Asset → By OS Type → Network Appliance.
򐂰 Controllers
򐂰 Disks
򐂰 File System or Logical Volumes
򐂰 Exports or Shares
򐂰 Monitored Directories

5.1.1 General NAS system requirements


In this section we describe the NAS requirements:
򐂰 A NAS device must support SNMP and be enabled for queries. Check the SNMP
configuration on the NAS from FilerView®: choose SNMP → Config to make sure that SNMP
is enabled on the NAS; it is enabled by default (see Figure 5-1). Also, note the SNMP
community name. It is required and is used when adding the NAS device to TPC.



Figure 5-1 Configure SNMP

򐂰 A NAS device must supply a unique sysName.
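Both requirements can be checked quickly from any host that has an SNMP client installed. In the following sketch, the community name (public) and the filer hostname (itsonas1) are examples; .1.3.6.1.2.1.1.5.0 is the standard sysName OID:

snmpget -v 1 -c public itsonas1 .1.3.6.1.2.1.1.5.0

If the query times out, SNMP is probably not enabled on the filer or the community name is wrong.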

Check the following Web site for the IBM N Series Models and NetApp filers supported by
TPC V3.3:
http://www-1.ibm.com/support/docview.wss?rs=597&uid=ssg1S1003019

5.1.2 NAS monitoring options


The NAS device can be monitored by proxy through either a Windows Data agent or a UNIX
Data agent. Windows Data agents are used to monitor CIFS shares on the NAS filer, whereas
UNIX Data agents are used to monitor NFS exports. Each option has different requirements
and setup steps. In this chapter, we go through both implementation methods.

UNIX proxy agent requirements


To monitor the NAS device through a UNIX Proxy Data agent, the IBM N Series or Network
Appliance Filers must meet the following criteria:
򐂰 The NAS device must support Network File System (NFS) queries.
– The NAS must have NFS licensed and proper NFS share configured.
– The root file-system from the NetApp filer should be mounted on the host where the
Productivity Center for Data agent will be deployed. This ensures that during
post-install discovery, the NetApp filer will be discovered automatically.



Windows proxy agent requirements
򐂰 The NAS device must support Common Internet File System (CIFS) queries.
– The NAS must have CIFS licensed and proper CIFS share configured.
– The NAS filers within your environment must be visible to the machines where you
install the agent or agents. If NAS filers are to be monitored by Windows machines,
those NAS filers must be configured to be members of the same Windows domain.
NAS in a Windows workgroup environment is not supported.
– The root file-system from the NetApp filer does not need to be mounted (on the target
Windows machine), but it has to be exported. The Productivity Center for Data agent
will pull a list of machines from the browsing service for the domain that the agent
machine is in.

The account used for scanning NAS for Windows must be a Domain account that can log into
both the Windows agent machine and the NAS device.

5.2 Windows Data agent configuration


In this section we document the procedure to configure TPC to manage IBM N Series or
NetApp Filer through a Windows server.

Note: The Windows server used as a proxy Data agent must be a member of the Windows
domain, and the NAS filer should also be added to the Windows domain.

Configure the NAS filer to be a member of a Windows domain


You must add the NAS filer to your Windows domain. You can verify this by logging on to the
Windows domain controller using a user ID with administrator privileges and clicking
Start → Settings → Control Panel → Administrative Tools → Active Directory® Users
and Computers. You will see the panel as shown in Figure 5-2; make sure that the NAS
device is under the Computers tree.

Figure 5-2 NAS is a member of a Windows Domain



Configure CIFS share on NAS filer
On NAS FilerView, go to CIFS → Share → Add to add a new CIFS share as shown in
Figure 5-3.

Figure 5-3 Add a CIFS share on NAS
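If you prefer the filer command line to FilerView, Data ONTAP typically lets you create the share directly on the filer. The share name and path below are examples, and the exact syntax can vary by ONTAP release, so check your filer documentation:

itsonas1> cifs shares -add tpcshare /vol/vol0/tpcdata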



Map NAS CIFS share to Windows server and run read and write I/O
The account used for scanning the NAS from Windows must be a Domain account that can log
in to both the Windows agent machine and the NAS device. In our lab, we logged in to the Windows
server using such a Domain account and mapped the NAS CIFS share that we defined
previously to the Windows server. We then performed read and write I/O on the mapped network
drive (for example, copying a file to the mapped drive) to make sure that the NAS share is working
correctly (see Figure 5-4).

Note: You must use a Domain User that has Domain Administrator privileges.

Figure 5-4 Map the NAS CIFS share
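The same mapping can also be done from a command prompt on the Windows agent machine. The drive letter, share name, and domain account below are examples; the * makes the command prompt for the password:

net use Z: \\itsonas1\tpcshare * /user:ITSODOMAIN\tpcadmin

Copying a small file to the mapped drive afterwards verifies read and write access.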

Installing Data agent on Windows server


Install a TPC Data agent on the Windows server. We will use it as a proxy agent to collect
information from the NAS. This Windows server must be a member of the same Windows domain
as the NAS (or of a trusted domain); it can be a domain controller or a member server.

Refer to Chapter 6, “Agent deployment” in the Redbooks publication, IBM TotalStorage
Productivity Center: The Next Generation, SG24-7194, for detailed information on agent
deployment.

Manage NAS devices in TPC


There are two ways to configure which NAS devices are managed by TotalStorage Productivity
Center: you can either run a discovery to discover the NAS devices in your environment, or
you can manually add the devices to TotalStorage Productivity Center. We demonstrate both
of these methods.



Manage NAS Devices through discovery

Important: Successful discovery depends on the configuration mentioned in the previous
section being done correctly.

Follow these steps:


1. Launch the IBM TotalStorage Productivity Center GUI.
2. You must set a default domain login/password in TotalStorage Productivity Center for it to
discover the NetApp devices.
Select the Administrative Services → Configuration → License Keys, and double-click
on the TPC for Data entry (see Figure 5-5).

Figure 5-5 TotalStorage Productivity Center Licensing Panel



Click on the Filer Logins tab and click the button Update default login and password.
Enter the appropriate domain User ID and Password (see Figure 5-6).

Note: This user must have Domain Administrator privileges.

Figure 5-6 Update Default Filer Login Information

3. Expand the Administrative Services in the Navigation Tree and select Administrative
Services → Discovery.

Note: You must verify that the correct SNMP community name is defined in the
Windows Domain, NAS and SAN FS job. To verify, go to Administrative Services →
Discovery → Windows Domain, NAS and SAN FS and select the Options panel.
Add the correct SNMP community name for the filer.

To run a NAS/NetApp discovery job, right-click Windows Domain, NAS and SAN FS and
select Run Now as shown in Figure 5-7.

Note: The discovery will discover all entities in the Windows Domain. To shorten the
time taken for discovery, you can put a check mark on Skip Workstations in the
Discovery properties panel (see Figure 5-5).



Figure 5-7 NAS discovery job

To check the running job status, right-click Windows Domain, NAS and SAN FS and
select Update Job Status. Wait until the discovery job finishes.

4. After the discovery job completes, and before TotalStorage Productivity Center can perform
operations such as Probe and Scan against NetWare/NAS/NetApp filers, they must
be licensed. In a very large environment, you might not want to automatically license all
the discovered NAS devices, so you can choose which servers to license.

Note: If you manually add a NAS device, you do not need to do this, the filer will be
licensed automatically.

Select the Administrative Services → Configuration → License Keys, and double-click
on the TPC for Data entry (see Figure 5-8).

Figure 5-8 License Key panel



The Licensing tab is displayed. Locate the desired NetWare/NAS/NetApp filer. Select the
associated check box for the NAS filer in the Licensed column, and select the Disk icon
on the toolbar to save changes (see Figure 5-9).

Important: You must save the changes after you license the NAS filer before you leave this
panel, otherwise TPC will not save the change to its repository.

Figure 5-9 Licensing tab panel

5. Set Filer Login ID/Password as follows:


For Windows-attached NAS, you have to specify the login ID and password for the NAS.
Click the Filer Logins tab and select the NAS filer, then click the Set login per row
button. Enter the Logon Id and Password in the Filer Login Editor panel that opens, and click
Save to save the changes (see Figure 5-10). This ID and password must be a Domain
account that can log in to both the Windows agent machine and the NAS device.

Note: Set Filer Login/Password is not required if it is a UNIX attached NAS filer.



Figure 5-10 Logon ID and password for NAS filer for Windows

6. Run a discovery again:


After licensing the NAS filer and setting the login ID/password, we need to run a discovery
job again to get further information on the NAS filer. Refer to Figure 5-7 on page 165.

Manually add NAS device to TPC


If the filer you wish to monitor was not discovered, you can add it manually to TotalStorage
Productivity Center:
1. Launch the TotalStorage Productivity Center GUI.
2. Expand the Administrative Services in the navigation tree and select Administrative
Services → Configuration → Manual NAS/Netware Server entry. You will see the panel
shown in Figure 5-11.

Figure 5-11 Manual NAS Server Entry Panel



3. Add NAS Server:
Click the Add NAS Server button, enter the following information in the panel shown in
Figure 5-12 and click OK to continue:
– Network name:
Enter the fully qualified host name or IP address of the NAS filer.
– Data Manager Agent OS Type:
Select the operating system of the computer that contains the agent that will gather
information about the NAS filer. In our scenario we select Windows here.
– Accessible from:
Select from the drop-down list the agent host that you want to use to discover the NAS filer.
The drop-down list will only display agents that are:
• Running under the operating system selected in the Data Manager Agent OS Type
field.
• Located on Windows or UNIX computers that are accessible to the NAS filers.
Data agents are not located on the NAS filers themselves.
The Windows agents are located on Windows computers within the same domain
as the NAS filers.
– SNMP Community:
Enter the SNMP community name (the default is PUBLIC). This is used to get
information from the NAS filer. Refer to “General NAS system requirements” on
page 158.
– Login ID and Password:
This is for Windows only. Enter the Login ID and password; it must be a Domain
account that can log in to both the Windows agent machine and the NAS filer.

Figure 5-12 Add NAS Server



If all the information you have provided is correct, you will see the NAS filer added to the
panel as shown in Figure 5-13.

Figure 5-13 Manual Added NAS Entry

Setting the Scan/Probe agent


When you have either discovered the NAS device or added it manually, you must set the
Scan/Probe agent before probing or scanning the filer. This is not an obvious step. Because
there is no Data agent installed on the NAS filer, a Data agent on another system must perform the
Scan/Probe job as a proxy agent. When this task is done, TPC will treat these devices as
normal servers with attached storage. An agent can scan multiple NAS Servers, and a NAS
Server can be scanned by multiple agents, so we can set up parallel scans.

Note: Keep in mind that you are assigning work to computers, and that these scans go
over the IP network, so it is better to pick a proxy agent on the same LAN. These scans will
be slower than a local scan, and it might take a very long time to run if the scan is also
doing a Proxy scan on a large device. For large NAS devices, use multiple proxy agents.
Normally the default scan of once per day is really not required, so look into weekly scans.



Select Administrative Services → Configuration → Scan/Probe Agent Administration.
Click the NAS filer for which you want to set the Scan/Probe agent. In our lab, we did multiple
selections by holding the Ctrl key and clicking the NAS filer entries. Click Set agent for all
selected rows, and in the pop-up panel, choose the Windows Data agent that has the NAS
filer attached (see Figure 5-14).

Figure 5-14 Scan/Probe agent administration

Now you can see that the NAS filer filesystems have been assigned the Scan/Probe agent.
Make sure you click the Save button in the toolbar to save the changes (see Figure 5-15).

Figure 5-15 Save Scan/Probe agent



Running a Probe job
When the Scan/Probe agent has been set, a probe to collect the device's hardware
configuration and filesystem information runs automatically. If you wish to create and
run an additional probe, you can do so. In the TPC GUI Navigation Tree, select IBM
TotalStorage Productivity Center → Monitoring → Probes, right-click and select Create
Probe. Click the What to PROBE tab and add only the NAS filer as shown in Figure 5-16.

Figure 5-16 Define Probe job

Click the When to Run tab, select the Run Now radio button, and save the Probe job
by clicking the Save button on the toolbar (see Figure 5-17). The Probe job starts. You can
right-click on the Probe job and select Update Job Status to check the running job status.

Figure 5-17 Save Probe job

When the Probe job has been successfully completed, you can verify that TPC has the filer
data by viewing the TPC dashboard. On the Monitored Server Summary Panel, you should
see the total number of Network Appliance devices monitored and the total Filesystem/Disk
Capacities (see Figure 5-18).

Figure 5-18 TotalStorage Productivity Center Dashboard showing NAS Filer Information

Running a Scan job


To collect more detailed information about filesystems, files, and directories, you can run a Scan job.
In TPC GUI Navigation Tree, expand Data Manager → Monitoring, right-click Scans, and
click Create Scan. In the Filesystems tab, remove all other entries from the Current
Selections, and add only the NAS filer to it as shown in Figure 5-19.

Figure 5-19 Define Scan job

In the Profiles tab, we select all of the default profiles and apply them to file systems and
directories by clicking the >> button as shown in Figure 5-20. Profiles allow us to specify what
statistical information is gathered and to fine-tune and control what files are scanned.

Figure 5-20 Apply Profiles

Click the When to Run tab and select the Run Now radio button. Save the Scan job by
clicking the Save button on the toolbar (see Figure 5-21). You will be prompted for a name for
the Scan job; enter a job name and click OK to start the Scan job. You can right-click the job
name and select Update Job Status to check the running job status.

Figure 5-21 Save Scan job

5.3 UNIX Data agent configuration
In this section we document the procedure to configure TPC to manage an IBM N Series or
NetApp filer through a UNIX proxy Data agent.

Check NFS config on NAS filer


On NAS FilerView, go to NFS → Manage Exports to check the NFS configuration. When the
NFS license is enabled, the root file-system /vol/vol0 is exported as shown in Figure 5-22.

Figure 5-22 NAS NFS exports
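
If you prefer the command line over FilerView, the same information can be checked from the
Data ONTAP console. This is a sketch that assumes ssh access to the filer; the filer name
itsonas1 is an example:

   # List the file systems currently exported by the filer over NFS
   ssh root@itsonas1 exportfs

The output should include /vol/vol0. If it does not, export the root volume before the UNIX
proxy agent tries to mount it.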

Install TPC for Data agent on UNIX host


Install a TPC Data agent on the UNIX server. We will use this as a proxy agent to collect
information from the NAS.

Refer to Chapter 6, “Agent Deployment” in the Redbooks publication, IBM TotalStorage
Productivity Center: The Next Generation, SG24-7194, for detailed information.

Mount the NAS root file-system to UNIX host (Data agent)
The root file-system from the NetApp filer should be mounted on the UNIX host where the TPC
Data agent is deployed. This ensures that during post-install discovery, the NetApp filer is
discovered automatically (see Figure 5-23).

Figure 5-23 Mount root file-system from NAS filer
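
As a sketch, on a Linux proxy agent the mount might look like the following; the filer name
(itsonas1) and the mount point are examples, and the exact syntax differs slightly on AIX or
Solaris:

   # Create a mount point and mount the filer's root volume over NFS
   mkdir -p /mnt/itsonas1
   mount itsonas1:/vol/vol0 /mnt/itsonas1
   # Optionally add an equivalent entry to /etc/fstab so that the mount survives reboots

After the mount is in place, the post-install discovery on the UNIX Data agent can detect the
filer automatically.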

Discover NAS devices in TPC


Depending on your environment, there are two ways to have NAS devices managed:
1. You can run a discovery to discover the NAS devices in your environment.
2. You can manually add the devices to TotalStorage Productivity Center.

We will go through both methods.

Manage NAS devices through discovery


Follow these steps:
1. Launch the IBM TotalStorage Productivity Center GUI.
2. Expand the Administrative Services in the navigation tree and select Administrative
Services → Discovery.
To run a NAS/NetApp discovery job, right-click Windows Domain, NAS and SAN FS and
select Run Now as shown in Figure 5-7.
3. After the discovery job completes but before TotalStorage Productivity Center can perform
operations such as Probe and Scan against NetWare/NAS/NetApp filers, first they must
be licensed. In a very large environment you might not want to automatically license all the
discovered NAS devices, so you have a choice as to which servers to license.

Select Administrative Services → Configuration → License Keys, and double-click
the TPC for Data entry (see Figure 5-8).
The Licensing tab is displayed. Locate the desired NetWare/NAS/NetApp filer. Select the
associated check box for the NAS filer in the Licensed column, and select the Disk icon
on the toolbar to save the changes (see Figure 5-24).

Important: You must issue a save after you license the NAS filer, but before you leave this
panel. Otherwise TPC will not save the change to its repository.

Figure 5-24 NAS Licensing tab

You do not need to enter the Filer Login tab, because it is only required for a Windows
environment.
4. Run a discovery again.
We need to run a discovery job again to get further information on the NAS filer. Refer to
Figure 5-7 on page 165.

Manually add NAS device to TPC


Follow these steps:
1. Launch the IBM TotalStorage Productivity Center GUI.
2. Expand the Administrative Services in the navigation tree and select Administrative
Services → Configuration → Manual NAS/Netware Server entry. You will see the panel
shown in Figure 5-25.
Click the Add NAS Server button and enter the following information, then click OK.
– Network name:
Enter the fully qualified host name or IP address of the NAS filer.
– Data Manager Agent OS Type:
Select the operating system of the computer that contains the agent that will gather
information about the NAS filer. In our case, we select Unix here.
– Accessible from:
Select from the drop-down list the UNIX agent host from which you want to discover the NAS filer.
– SNMP Community:
Enter the SNMP community name, the default is PUBLIC. This is used to get
information from the NAS filer.
– Login ID and Password:
Windows only. It is grayed out when we choose Unix under Data Manager Agent OS
Type.

Figure 5-25 Manual Add NAS server

If all the information you provided above is correct, you will see the NAS filer added to the
panel shown in Figure 5-26.

Figure 5-26 Manually Added NAS filer

Set Scan/Probe agent
When you have either discovered the NAS device or added it manually, you must set the
Scan/Probe agent. This is not an obvious step: because there is no Data agent installed on the
NAS filer, another Data agent must perform the Scan/Probe job as a proxy. When this task is
done, TPC treats these devices as normal servers with attached storage. An agent can scan
multiple NAS servers, and a NAS server can be scanned by multiple agents, so you can
set up parallel scans.

Note: Keep in mind that you are assigning work to computers, and that these scans go over the IP
network, so it is better to pick a proxy agent on the same LAN. These scans will be slower
than a local scan, and it might take a very long time to run if the scan is also doing a Proxy
scan on a large device. For large NAS devices, use multiple proxy agents. Normally the
default scan of once per day is really not required, so look into weekly scans.

Select Administrative Services → Configuration → Scan/Probe Agent Administration.
Click the NAS filer for which you want to define the Scan/Probe agent. Select the NAS filer
entries, click Set agent per row, and in the pop-up panel choose the UNIX Data agent that has the
NAS filer attached (see Figure 5-27).

Figure 5-27 Set Scan/Probe agent

You can see that the NAS filer filesystems have been assigned Scan/Probe agents. Make
sure you click the Save button in the toolbar to save the change (see Figure 5-28).

Figure 5-28 Save Scan/Probe agent

Running a Probe job


When the Scan/Probe agent has been set, a probe to collect the device's hardware
configuration and filesystem information will be run automatically. If you wish to create and
run an additional probe, you can do so. In the TPC GUI Navigation Tree, select IBM TotalStorage
Productivity Center → Monitoring → Probes, right-click and select Create Probe. Click
the What to PROBE tab, remove all other entries from the Current Selections, and add only the
NAS filer as shown in Figure 5-29.

Figure 5-29 Define a Probe job

Click the When to Run tab, select the Run Now radio button, and save the Probe job by
clicking the Save button on the toolbar (see Figure 5-17). The Probe job starts. You can
right-click on the Probe job and select Update Job Status to check the running job status.

When a probe has been successfully completed, you can verify that TPC has the filer data by
viewing the TPC dashboard. On the Monitored Server Summary Panel, you should see the
total number of Network Appliance devices monitored and the total Filesystem/Disk
Capacities (see Figure 5-18).

Figure 5-30 TotalStorage Productivity Center Dashboard showing NAS Filer Information

Running a Scan job


To collect more detailed information on filesystems, files, and directories, you can run a Scan
job. In TPC GUI navigation tree, expand Data Manager → Monitoring, right-click Scans, and
click Create Scan. In the Filesystems tab, remove all other entries from the Current
Selections and add only the NAS filer as shown in Figure 5-31.

Figure 5-31 Create Scan job

In the Profiles tab, we select all of the default profiles and apply them to file systems and
directories by clicking the >> button as shown in Figure 5-20. Profiles allow us to specify what
statistical information is gathered and to fine-tune and control what files are scanned.

Click the When to Run tab, select the Run Now radio button, and save the Scan job by
clicking the Save button on the toolbar (see Figure 5-32). TPC will prompt for a name for the
Scan job. Enter a job name and click OK to start the Scan job. You can right-click the job
name and select Update Job Status to check the running job status.

Figure 5-32 Save Scan job

5.4 Retrieving and displaying data about the NAS filer
We have now set up the basic NAS collection jobs and can start to view the NAS filer
information collected. After the Scan/Probe agent is set for the NAS filers, TPC treats
these devices as normal computers with attached storage. Refer to Chapter 8, “Getting
Started with TotalStorage Productivity Center”, sections 8.6 and 8.7 of the book IBM
TotalStorage Productivity Center: The Next Generation, SG24-7194, for detailed information.

In this section we will show you some examples of how to retrieve and display data for NAS
filers.

5.4.1 View NAS Filer from the Topology Viewer


We start viewing NAS filer information in the Topology Viewer by expanding IBM
TotalStorage Productivity Center → Topology → Computers and clicking the + sign in the
upper-right corner of the Computers (unknown) box. You will find the NAS filer there, as
shown in Figure 5-33. Click the NAS filer, and from the Topology Viewer you can see that
its OS type is NetApp Data ONTAP®.

Figure 5-33 Topology viewer for Computer

Double-click the NAS filer, and you can see more detailed information about this NAS filer
from the L2: Computer view (see Figure 5-34).

Figure 5-34 Topology viewer for NAS Filer

5.4.2 Navigation Tree-based asset reporting
We can start with the Navigation Tree-based asset reporting. Expand Data Manager →
Reporting → Asset → By OS Type → Network Appliance. You can see NAS filer asset
information as shown in Figure 5-35.

Figure 5-35 View Asset of Network Appliance

5.4.3 Filesystem reporting
In this section we show you how you can generate reports from the NAS filesystems.
Follow these steps:
1. Expand Data Manager → Reporting → Capacity → Filesystem Free Space → By
Computer. Click Generate Report as shown in Figure 5-36.

Figure 5-36 Generate Report of NAS Filesystem Free Space

2. From the following panel, highlight the NAS filer and click the magnifier icon in front of it
(see Figure 5-37).

Figure 5-37 Filesystem Free Space by Computer

3. In the next panel, highlight all the mount points you are interested in under the Mount
Point column and right-click them. Choose Chart space usage for selected as shown in
Figure 5-38.

Figure 5-38 Filesystem Free Space from the NAS filer

The Filesystem free space chart is presented in the next panel. This chart shows the current
free space on each volume on the NAS filer. You can right-click the chart and select the
Customize this chart button to customize the chart. On the pop up panel, we choose 4 in the
Maximum number of charts or series per screen drop-down menu (see Figure 5-39).

Figure 5-39 Chart of Filesystem Free Space by Computer

Now you can see the customized Chart of Filesystem Free Space by Computer in the panel
shown in Figure 5-40. You can click the Prev and Next buttons to see additional charts of
other NAS volumes.

Figure 5-40 Customized Chart of Filesystem Free Space by Computer

5.4.4 NAS device quotas
You can import quotas set up on your NAS device into TotalStorage Productivity Center.

To work with these quotas:


1. Run a Scan job on the filers on which you want to import quotas.
2. Expand Data Manager → Policy Management → Network Appliance Quotas
3. Right-click on Schedules and select Create NetApp Quota Job.
4. Select the filer you want to import the quotas from.
5. Under the Alert tab, you can define a condition that will trigger an alert if a certain
percentage of the quota hard limit is reached (see Figure 5-41).

Figure 5-41 Importing NetApp Quotas
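
If you want to cross-check the quotas that the NetApp Quota Job imports, you can list what is
defined on the filer itself. This is a minimal sketch that assumes ssh access to the filer
console; the filer name itsonas1 is an example:

   # Display the quotas currently enforced by Data ONTAP
   ssh root@itsonas1 quota report

The rows returned should correspond to the quotas that appear in TPC after the import job
completes.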

For further information regarding NetApp Quotas, refer to the latest version of the
IBM TotalStorage Productivity Center Version 3.3 User’s Guide, GC32-1775.

Chapter 6. Brocade Mi10k SMI-S Provider support

This chapter provides a step-by-step guide to configure the Common Information Model
Object Manager (CIMOM) for the Brocade Mi10k, which is required to monitor the device
through TotalStorage Productivity Center. We also discuss the reports that contain the Mi10k
information.

6.1 Introduction
After you have completed the installation of TotalStorage Productivity Center, you have to
install and configure the Common Information Model Object Manager (CIMOM) in order for
TPC to communicate with the Brocade Mi10k. In this section the CIMOM is also referred to as
the McDATA OPENconnectors SMI-S Interface.

The TotalStorage Productivity Center uses SLP as the method for CIM clients to locate
managed objects. When a CIM agent implementation is available for a supported device, the
device can be accessed and configured by management applications using industry-standard
XML-over-HTTP transactions.
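
As a quick check, once the SMI-S interface described in this chapter is installed and running,
you can confirm that it advertises itself through SLP. The following sketch assumes that the
slptool utility (shipped with OpenSLP) is available on a host in the same subnet as the CIMOM;
the output format varies:

   # List all WBEM services advertised through SLP
   slptool findsrvs service:wbem

The returned service URLs should include the address and port of the McDATA OPENconnectors
SMI-S Interface once it is configured.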

In order to manage a Brocade Mi10k fabric using the SMI-S Specification V1.1.0, we require the
software component mentioned on this site:
http://www.snia.org/ctp/conformingproviders/mcdata

At the time of writing this book, this is the McDATA OPENconnectors SMI-S Interface.

The McDATA OPENconnectors SMI-S Interface is a software product that provides support
for the SMI-S for Brocade director and switch products. McDATA OPENconnectors SMI-S
Interface provides an SMI-S interface for management of Brocade products. It exposes a
WBEM (specifically, CIM XML) interface for management.

6.2 Planning the installation


This section contains the planning considerations before installing the McDATA
OPENconnectors SMI-S Interface.

Prerequisites
The McDATA OPENconnectors SMI-S Interface requires the hardware and software
described here.

Table 6-1 lists the switch hardware supported for this release of McDATA OPENconnectors
SMI-S Interface and the minimum levels of firmware required by the hardware.

Table 6-1 Supported firmware


Manufacturer Product Minimum firmware supported

Brocade Intrepid 10000 9.02.04

Supported operating system platforms


The McDATA OPENconnectors SMI-S Interface, Version 1.5.1, implementation supports the
following operating system (OS) platforms:
򐂰 Windows XP with Service Pack 2
򐂰 Windows 2003 Server
򐂰 Windows 2000, with Service Pack 2 or higher
򐂰 Solaris 9
򐂰 Solaris 10

6.2.1 Supported configurations
The McDATA OPENconnectors SMI-S Interface communicates with a device using one of two
modes:
򐂰 Direct Connection mode: The McData Provider communicates directly with the SAN
Device.
򐂰 EFCM Proxy mode: The McData Provider and Device communicate through EFCM only.

Note: We recommend using the EFCM proxy mode.

Direct Connection mode


Figure 6-1 depicts the Direct Connection mode, in which a WBEM Server application
supports the standard CIM/WBEM interfaces to a client and accesses the managed device.
Communications take place in Direct Connection mode, in which the data passes directly
between the McData Provider and the device.

Figure 6-1 Direct Connection mode

EFCM Proxy mode


Figure 6-2 depicts the communications in EFCM Proxy mode. In this case, EFCM manages
the switch and communications between the device and the McData Provider flow through
EFCM. When EFCM is used to manage a device, EFCM assumes the management interface
with a switch or director. As a result, McDATA OPENconnectors SMI-S Interface
communications must use EFCM to communicate with the switch.

The EFCM Proxy mode has the following attributes that are different from Direct Connection
mode:
򐂰 The McData Provider always manages the same set of devices as those managed by
EFCM.
򐂰 The McData Provider cannot add or delete a device from its list of managed devices; this
must be done through EFCM. When EFCM adds a device, that device is automatically
added to the McData Provider. When EFCM deletes a device, the device is removed from
the McData Provider.
򐂰 If EFCM is running when the McData Provider starts, any devices being managed by
EFCM are automatically added to the McData Provider.

Figure 6-2 EFCM Proxy Mode


Our lab setup


We installed the McDATA OPENconnectors SMI-S Interface on a Windows 2003 server and
we followed the EFCM Proxy mode. Our McDATA OPENconnectors SMI-S Interface points to an
EFCM server that can be reached over IP.

6.2.2 Downloading SMI-S interface code and documentation
The SMI-S Interface code can be downloaded from the McData Web site. To do that, you
first have to log in to the McData File Center at the following URL:
http://www.mcdata.com/filecenter/template?page=index

After logging in to the McData File Center, the McData OPENconnectors SMI-S Interface can
be found by selecting DOCUMENTS from the top toolbar. In the Find documents where box,
check the box Category is one or more of the following and select SMI-S from the scroll-down
list on the right. Select SMI-S 1.5.2 with the description Brocade OPENconnectors SMI-S
interface version 1.5.2 for Windows. This is the same as the McDATA OPENconnectors
SMI-S Interface Version 1.5.1.

The McData OPENconnectors SMI-S Interface User Guide can be downloaded from the
Resource Library.
http://www.mcdata.com/wwwapp/resourcelibrary/jsp/navpages/index.jsp?mcdata_categor
y=www_resource&resource=index

6.3 Installing SMI-S Agent


In this section we guide you through an EFCM Proxy Model Installation.

Locate the installation image and launch the install.exe.

The installer will launch, displaying the License Agreement panel shown in Figure 6-3.
Select I Accept the terms of the License Agreement and click Next.

Figure 6-3 License Agreement panel

The next panel asks you whether you would like to upgrade an existing SMI-S interface, or
install a new one. Choose Install SMI-S Interface 1.5.1 as shown in Figure 6-4.

Figure 6-4 Installation selection panel

The installer gives you the option of installing Service Location Protocol (SLP) software. If SLP
is not installed, the post-install configuration will fail.

To install both components select SMI-S Interface and SLP as shown in Figure 6-5 and then
click Next.

Figure 6-5 Install options

Next you are prompted for the installation folder. You can either change to a different directory
or accept the default (see Figure 6-6). Click Next to continue.

Figure 6-6 Select Installation Folder

The next panel is the Pre-installation Summary panel (Figure 6-7) that allows you to start the
installation. Click Install to continue.

Figure 6-7 Pre-Installation Summary panel

You can see the installation progress as shown in Figure 6-8.

Figure 6-8 Installation progress

After the McDATA OPENconnectors SMI-S Interface is installed, the installer launches the
SMI-S interface server configuration tool, as detailed in the next section.

6.3.1 Configuring the SMI-S interface


As soon as the installation completes, the SMI-S Interface Server Configuration panel is
displayed, as shown in Figure 6-9. It prompts you to enter the following information:
򐂰 In the Network Address field, specify the network address of the EFCM server to use for
proxy connections.
򐂰 In the User ID field, specify the user name to use to log in to the EFCM server. This
account has to be previously defined on EFCM.
򐂰 In the Password field, specify the password to use to log in to the EFCM server.
򐂰 Select the Test Connection button to test the connection settings for the EFCM server.
򐂰 A message displays to verify whether the test succeeded or failed:
– If the user ID or password fails, a message explains that the user ID used could not be
validated.
– If the network address fails, a message explains that the server could not be found at
the address.
– If the test succeeds, a message explains that the server was found and user ID
validated.
򐂰 Click the OK button to save changes.

Figure 6-9 SMI-S Interface Server Configuration

When you click Test Connection, you are presented with the progress panel shown in
Figure 6-10.

Figure 6-10 Test progress

If you are not at the required EFCM 9.2.0 level, you receive the error message in Figure 6-11,
and the configuration will end.

Figure 6-11 Error message showing incorrect EFCM version

If you are at the required EFCM 9.2.0 level, when the connection is successfully tested, you
see the message shown in Figure 6-12. Click OK to continue.

Figure 6-12 Connection successful

The final panel shown in Figure 6-13 indicates the McData provider is successfully installed.
Click Finish to exit the installer.

Figure 6-13 SMI-S Interface Provider installation complete

You are now ready to work with the McData SMI-S provider through TotalStorage Productivity
Center.

6.3.2 Verifying the connection with TPC


At this point you are ready to test the TPC connection to the McData SMI-S Interface. You can
either have TPC discover it automatically or add it manually using the Add CIMOM task. We
recommend that you add the CIMOM manually to TPC.

In the TotalStorage Productivity Center GUI, select Administrative Services → Data


Sources → CIMOM Agents and click the Add CIMOM button.

You are prompted for the following data to be provided (see Figure 6-14):
򐂰 Host: Enter the IP address or the hostname of the SMI-S Interface Server.
򐂰 Port: By default, you can use port 5989, which is used by HTTPS (a quick reachability check follows Figure 6-14).
򐂰 Username: The default userid you can use is Administrator.
򐂰 Password: The password for Administrator is password.
򐂰 Interoperability namespace: The namespace is /interop.
򐂰 Protocol: Use either HTTP or HTTPS depending on your setting. We recommend HTTPS.
򐂰 Truststore Location: Leave blank.
򐂰 Truststore Passphrase: Leave blank.
򐂰 Display Name: You can enter any name you want to identify the CIM Agent.
򐂰 Description: You can enter a description for the CIM Agent.
򐂰 Test CIMOM connectivity: We suggest that you leave the check mark in the check box
and let TPC perform the connection test.

Figure 6-14 Add McData SMI-S Agent to TPC

Important: The Interoperability namespace for Brocade switches is /interop.
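
Before adding the CIMOM, or when the connection test fails, it can be useful to confirm from
the TPC server that the SMI-S interface port is reachable. This is a minimal sketch; the host
name mcdata-smis is an example:

   # Check basic TCP reachability of the HTTPS CIMOM port
   telnet mcdata-smis 5989

If the connection is refused or times out, check that the SMI-S interface service is running
and that no firewall blocks the port.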

6.4 Monitoring Mi10k through TPC


The McData SMI-S Provider allows TPC to collect performance data on the Mi10k. In order to
collect port connection information, an In-Band Fabric agent attached to the Mi10k SAN is
required.

Note: We recommend that you do not add an out-of-band Fabric agent for the Mi10k. This
will cause invalid information to be added to the TPC repository.

6.4.1 Running CIMOM discovery
After adding the CIMOM, run a CIMOM discovery by right clicking Administrative
Services → Discovery → CIMOM and selecting Run Now (see Figure 6-15).

Figure 6-15 Run CIMOM discovery

When the CIMOM discovery is complete, create a new probe for the Mi10k fabric as shown in
Figure 6-16.

Figure 6-16 Create new probe for Mi10K

6.4.2 Generating reports
When the McData SMI-S Provider and an In-Band Fabric agent have been configured on the
TPC Server, we can generate relevant reports and view the topology for the fabric.

To view information regarding the switches, generate the SAN Assets (Switches) report under
IBM TotalStorage Productivity Center → My Reports → System Reports → Fabric
(see Figure 6-17).

Figure 6-17 SAN Asset Report showing the Mi10k

To view the storage topology, open the Topology Viewer under IBM TotalStorage
Productivity Center → Topology → Switches (see Figure 6-18).

Important: In order to see connection information for the Mi10k, an in-band Fabric agent is
required on the fabric.

Figure 6-18 Topology Viewer showing the Brocade Mi10k switch and connected entities

Performance data collection
To collect switch performance information, you must create a switch performance monitor job
under Fabric Manager → Monitoring → Switch Performance Monitors. Right-click and
select Create Switch Performance Monitors (see Figure 6-19).

Figure 6-19 Create Switch Performance Monitor Job

When the Switch Performance Monitor Job has completed, you are able to view the switch
performance data in TotalStorage Productivity Center. To view the port performance data,
go to Fabric Manager → Reporting → Switch Performance → By Port and generate the
report. The data is shown in tabular form (see Figure 6-20).

Figure 6-20 Switch Performance Data

To view the switch performance data in graphical mode, you can click on the pie chart icon on
the upper left side and select the desired chart (see Figure 6-21).

Figure 6-21 Switch performance data in graphical form

Chapter 7. TPC for Replication support


TotalStorage Productivity Center now supports launching external tools, such as the IBM
TotalStorage Productivity Center for Replication GUI. If TotalStorage Productivity Center for
Replication is installed and running on the same server as TotalStorage Productivity Center, it
will be automatically discovered and an external tool will be added to the External Tools
node in the TPC Navigation Tree. If it is not on the same server, you can manually define
it under the External Tools node. When it has been added, you can launch the TotalStorage
Productivity Center for Replication GUI from TotalStorage Productivity Center using a Web
browser.

TotalStorage Productivity Center now reports on FlashCopy information on volumes. From
this information, TotalStorage Productivity Center can identify volumes as either FlashCopy
targets or sources. In addition, the “usable LUN capacity” will be adjusted to exclude the
capacity of volumes which are defined as FlashCopy targets.

7.1 Verifying the connection with TPC
After you have completed the installation of TotalStorage Productivity Center, you are able to
define and launch external tools.

If you have TPC for Replication already installed and running on the same host as
TotalStorage Productivity Center, an external tool will be automatically discovered and
available under IBM TotalStorage Productivity Center → External Tools (see Figure 7-1).

Figure 7-1 TPC for Replication External Tool Automatically Discovered by TPC

If you do not have TPC for Replication on the same server as your TPC instance, you can
manually add an external tool by right-clicking IBM TotalStorage Productivity Center →
External Tools and selecting Add TotalStorage Productivity Center for Replication GUI
(see Figure 7-2).

Figure 7-2 Add IBM TotalStorage Productivity Center for Replication GUI

You are prompted for the following data to be provided (see Figure 7-3):
򐂰 URL/Command: Enter the IP address or the hostname of the TPC for Replication Server,
along with the port number and location.
򐂰 Label: You can define a label to easily identify the external tool.
򐂰 Description: You can enter a description for the external tool.

Figure 7-3 Add TotalStorage Productivity Center for Replication GUI External Tool to TPC

After either discovering or manually adding the external tool, you can launch the TPC for
Replication GUI by right-clicking the external tool and selecting Launch Tool (see
Figure 7-4).

Figure 7-4 Launch IBM TotalStorage Productivity Center for Replication

7.2 Monitoring FlashCopy Relationships through TPC
TotalStorage Productivity Center now reports on volume FlashCopy relationships. To see this
information, you must discover and probe a storage subsystem with FlashCopy volumes
set up.

When you have the storage subsystem probed, you will see the FlashCopy Target Capacity
value on the TotalStorage Productivity Center dashboard populated with the total FlashCopy
Target Capacity of all storage subsystems probed (see Figure 7-5).

Figure 7-5 FlashCopy Target Capacity

Note: The usable LUN capacity now excludes the capacity of volumes that are defined as
FlashCopy targets.

To view the FlashCopy properties, you can also view the Data Manager reports. If you view
Data Manager → Reporting → Capacity → Disk Capacity → By Storage Subsystem, you
are able to view FlashCopy Target Capacity for each subsystem (see Figure 7-6).

Figure 7-6 FlashCopy Target Capacity by Storage Subsystem

To determine whether an individual volume is set up as a FlashCopy source or target, it is
possible to view the LUN information under Data Manager → Reporting → Asset → By
Storage Subsystem → Storage Subsystem → Virtual Disks or LUNs → Volume Name
(see Figure 7-7).

Figure 7-7 View FlashCopy Attributes for Individual Volume or Virtual Disk

Chapter 8. IBM TS3310 Tape Library support


IBM TotalStorage Productivity Center now supports the IBM TS3310 Tape Library. The IBM
TS3310 has an embedded SMI-S agent, which can be monitored by TPC for tape manager
reporting.

This chapter provides a step-by-step guide to configure TotalStorage Productivity Center to
monitor the TS3310 Tape Library.

8.1 Introduction
After you have completed the installation of TotalStorage Productivity Center, you have to add
an SMI-S Agent in order to monitor the IBM TS3310 Tape Library.

The IBM TS3310 has an embedded SMI-S agent, so no external agent setup is required.

The TotalStorage Productivity Center uses SLP as the method for CIM clients to locate
managed objects. When a CIM agent implementation is available for a supported device, the
device can be accessed and configured by management applications using industry-standard
XML-over-HTTP transactions.

8.2 Adding the SMI-S Agent to TPC


In this section we discuss some considerations regarding the SMI-S Agent.

8.2.1 Supported hardware


The SMI-S Agent is configured and enabled by default on the IBM TS3310 Tape Library.
Therefore, no additional configuration is required on the tape library. Supported hardware is
shown in Table 8-1.

Table 8-1 Supported hardware


Manufacturer    Product    Minimum firmware supported

IBM             TS3310     400G.GS010

You are now ready to work with the TS3310 SMI-S provider through the TotalStorage
Productivity Center.

8.2.2 Verifying the connection with TPC


At this point you are ready to test the TPC connection to the IBM TS3310 SMI-S Interface.
You can either have TPC discover it automatically or add it manually using the Add CIMOM
task. We recommend that you add the CIMOM manually to TPC.

In the TotalStorage Productivity Center GUI, select Administrative Services → Data
Sources → CIMOM Agents and click the Add CIMOM button.

You are prompted for the following data to be provided (see Figure 8-1):
򐂰 Host: Enter the IP address or the hostname of the TS3310 Tape Library.
򐂰 Port: By default, you can use port 5988, which is used by HTTP (a quick query sketch follows Figure 8-1).
򐂰 Username: The default userid you can use is admin.
򐂰 Password: The password for admin is secure.
򐂰 Interoperability namespace: The namespace for the TS3310 is root/cimv2.
򐂰 Protocol: Use either HTTP or HTTPS depending on your setting. We recommend HTTP.
򐂰 Truststore Location: Leave blank.
򐂰 Truststore Passphrase: Leave blank.
򐂰 Display Name: You can enter any name you want to identify the CIM Agent.
򐂰 Description: You can enter a description for the CIM Agent
򐂰 Test CIMOM connectivity: We suggest that you leave the check mark in the check box
and let TPC perform the connection test.

Figure 8-1 Add IBM TS3310 SMI-S Agent to TPC

Important: The Interoperability namespace for IBM TS3310 Tape Libraries is root/cimv2.
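
If the connection test fails, you can query the embedded agent directly from any host with a
CIM client installed. The following sketch uses the sblim wbemcli utility, which is an
assumption (any CIM browser works); the library host name tapelib and the class name are
examples, and the credentials are the defaults listed above:

   # Enumerate computer system instances from the TS3310 embedded CIMOM over HTTP
   wbemcli ei http://admin:secure@tapelib:5988/root/cimv2:CIM_ComputerSystem

A list of returned instances confirms that the embedded agent is reachable with the
credentials and namespace entered above.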

After adding the CIMOM, run a CIMOM discovery by right-clicking Administrative
Services → Discovery → CIMOM and selecting Run Now (see Figure 8-2).

Figure 8-2 Run CIMOM discovery

When the CIMOM discovery is complete, create a new probe for the tape library (see
Figure 8-3).

Figure 8-3 Create new probe for TS3310 Tape Library

When the probe has been completed, we can generate reports and view relevant information
about the TS3310 Tape Library.

8.3 Monitoring TS3310 through TPC
The TS3310 SMI-S Provider allows TPC to collect tape library, drive, media changer, I/O port,
cartridge, and frame information.

When the SMI-S Provider has been configured on the TPC Server, we can generate relevant
reports.

To view information regarding the tape libraries in your environment, view the Tape Libraries
panel under Tape Manager (see Figure 8-4).

Figure 8-4 Tape Manager panel

Using the selection buttons at the top of this panel, you can view detailed information about
your tape libraries. For example, you can view the tape library drive information by clicking the
Drive button (see Figure 8-5).

Figure 8-5 Tape library drive detail panel

To view the storage topology, open the Topology Viewer in the Navigation Tree under
IBM TotalStorage Productivity Center → Topology → Storage and select the TS3310
Tape Library (see Figure 8-6).

Figure 8-6 Topology View showing the IBM TS3310 Tape Library

Chapter 9. Topology Viewer


Within TotalStorage Productivity Center, the Topology Viewer is designed to provide an
extended graphical topology view; a graphical representation of the physical and logical
resources (for example, computers, fabrics, and storage subsystems) that have been
discovered in your storage environment. In addition, the Topology Viewer depicts the
relationships among resources (for example, the disks comprising a particular storage
subsystem). Detailed, tabular information (for example, attributes of a disk) is also provided.
With all the information that Topology Viewer provides, you can easily and more quickly
monitor and troubleshoot your storage environment.

The overall goal of the Topology Viewer is to provide a central location to view a storage
environment, quickly monitor and troubleshoot problems, and gain access to additional tasks
and functions within the TotalStorage Productivity Center graphical user interface (GUI)
without users losing their orientation to the environment. This kind of flexibility through the
Topology Viewer UI displays better cognitive mapping between the entities within the
environment, and provides data about entities and access to additional tasks and functionality
associated with the current environmental view and the user's role.

The Topology Viewer uses the TotalStorage Productivity Center database as the central
repository for all data it displays. It actually reads the data in user-defined intervals from the
database and, if necessary, updates the displayed information automatically.

The Topology Viewer is an easy-to-use and powerful tool within TotalStorage Productivity
Center. It can make a storage manager’s life easier. But as is true for every tool, to get the
most out of it, you first have to understand the basics, the concept, and the “dos and don’ts.”

In this chapter we look at the Topology Viewer enhancements in TPC V3.3.

9.1 What is new in Topology Viewer V3.3
TPC V3.3 has significant enhancements in terms of configuration planning, performance
management, and usability improvements. Here are the new features introduced in TPC V3.3
for the Topology Viewer:
򐂰 Pin list persistence
򐂰 Topology Viewer link to reports and alert
򐂰 Context Sensitive Reporting and Data Path Explorer
򐂰 Historical Analysis
򐂰 Configuration Analysis

9.2 Feature descriptions


In the following topics, we take a closer look at the new features in TPC V3.3.

9.2.1 Pin list persistence


The pin list feature in the Topology Viewer in TotalStorage Productivity Center V3.1 allows
you to “pin” an entity, which puts a small flag next to the entity and propagates this flag to all
views that contain that entity. This is useful for marking entities for various reasons, such as
“to-look-at-later” type reminders and upward propagation. Pinned entities are typically of high
interest to you, such as for monitoring purposes, and easy access is essential. Through
pinning, the Topology Viewer can provide functionality that is similar to NetView® SmartSets.

In TPC V3.1, the pin list in the Topology Viewer is not persistent across sessions. If you close
the TPC GUI (or even just the Topology Viewer panel), you will lose the current pin list. Now,
with TPC V3.3, the pin list is persistent across sessions and per user.

Pin lists for multiple users and different user names


This means that different users, that is, people using different user names to log into TPC, are
able to have their own persistent and private pin lists. When multiple users log onto the same
TPC server simultaneously using the same username, they share the same pin list. And
different usernames can have their own different pin lists. The pin list information is available
across closing and opening of the topology console and across stop/restarts of the TPC GUI.
The pin list is not lost when the TPC GUI is closed and is not affected by pin list operations
performed by other users. When using the same username login again or signing on from a
different TPC console location, the original pin list is still available.

This enhancement makes pin lists useful over longer durations. The pins remain visible until you
unpin them in the Topology Viewer, so they can be used to easily refer to a small number of
entities for your whole monitoring period.

To pin an entity in the graphical view, select the entity and right-click to open the context
menu, then select the Pin or Unpin menu option (see Figure 9-1).

Figure 9-1 Pinning an entity

Figure 9-2 shows the entities we pinned in the Topology Viewer Overview panel. Pinning
provides a direct path to look at an entity in detail by just double clicking on it. Note that in a
collapsed group, pinned entities are surfaced (see Figure 9-2), whereas in an expanded
group, pinning is shown on each pinned entity.

Figure 9-2 Pinned entities surfaced in a group

As you can see by this example, pinning in the Topology Viewer offers an easy-to-use and
powerful method to select single or multiple entities you want to monitor.

9.2.2 Link to reports / alerts from Topology Viewer


TPC V3.1 provides both the Topology Viewer and different kinds of reports and alerts to help
you manage your infrastructure. However, those functions are not correlated with each other.
When you are navigating in Topology Viewer and want to see the alert or report on the entity
you are looking at, you have to go to the reports or alerts section to manually define them.

With TotalStorage Productivity Center V3.3, the reports and alerts are integrated into the GUI
console. It provides a mechanism to navigate from an entity shown on the Topology Viewer to
other areas of the console to show reports or perform management of the selected entity.
This enhancement enables users to directly jump to the appropriate portions of the console
Navigation Tree from the Topology Viewer.

Link to alerts
From the Topology Viewer, you might notice that there is a red exclamation point (!) shown
beside an entity or in the group title indicating there are alerts for this entity or some of the
entities in the same group. You can hover over the red exclamation point; it will show you how
many alerts are now open for the selected entities.

Figure 9-3 Topology Viewer alert summary

Right-click the entity and choose Show <entity type> Alerts (in our example,
we are looking at the Switches group, so it is Show Switch Alerts), as shown in Figure 9-4.

Figure 9-4 Topology Viewer Show Alerts

The TPC GUI jumps to the related alert view as shown in Figure 9-5. You can look at the
alerts, and then choose to clear or delete them if you do not need the information any more.

Figure 9-5 Alert details

Link to reports
In the Topology View, from the L0-L2 graphic view of Computers and Storage classes, you
can link to reports that reflect the entities you choose. This is very convenient when you
are looking at an entity or a certain group of entities, and you want to get a report showing
more information.

Here is an example of how to create a report from the Computer group graphic view.
Right-click on the title of the Computers group, then click Reports from the pop-up context
menu as shown in Figure 9-6.

Note: Do not right-click an individual computer entity; if you do so, the report will be related only
to that computer.

Figure 9-6 Topology Viewer Report navigation

The pop-up panel shows the possible reports that can be generated related to the selected
entity. In our example, we choose Data Manager → Capacity → Filesystem Used Space →
By Computer, and click Create Report (see Figure 9-7).

Figure 9-7 Entity report selection

The TPC GUI will jump to the TPC reports console. Click Generate Report to continue
(see Figure 9-8).

Figure 9-8 Report navigation to Generate Report for selected entity

You will see the report shown in Figure 9-9 that contains filesystem used space information
for the selected computer group in Topology View.

Figure 9-9 Report from selected entity in Topology Viewer

9.2.3 Context Sensitive Reporting and Data Path Explorer


A new view is added to the Topology Viewer that allows you to view the paths between
servers and storage subsystems or between storage subsystems (for example, SVC to
back-end storage or two subsystems participating in a replication session). Performance and
health overlays on this view provide a mechanism to assess the impact of performance or
device state in the paths on the connectivity between the systems. The view consists of three
panes (host information, fabric information, subsystem information) that show the path
through a fabric or set of fabrics for the endpoint devices (host to subsystem or subsystem to
subsystem).

The goal of this improvement is to identify the impact of performance issues in a fabric or
devices connected to the fabric on the other components in the environment. To identify the
impact of performance issues, you should define performance monitors for the related subsystems
and switches.

Data Path Explorer prerequisites


The following prerequisites are necessary:
򐂰 TPC Agents are required to view data paths from a host to a subsystem:
– CIM agent for the subsystem
– Data agent on the host
– In-band or out-of-band Fabric agent
– Note: In an environment with out-of-band Fabric agents only, the out-of-band agents
must be configured for all the switches in a fabric.
– A CIM agent for SVC is required to view data paths involving SVC.
򐂰 No data paths are shown if any one of the above agents is not configured.
򐂰 Zone Overlay: You must have an in-band Fabric agent or Brocade API agent.
򐂰 Performance Overlay: You have to configure switch and subsystem performance monitors.
򐂰 Multipath driver support: Only the SDD driver is supported. On Windows, SDD 1.6.1 or
greater is required.
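
If you are not sure whether SDD is installed on a host, or at which level, a quick check from the
host command line is sketched below. The command names are the standard SDD datapath commands,
but the output format varies by platform and SDD level:

   # Display the installed SDD level
   datapath query version
   # List the vpath devices and the state of each path
   datapath query device

If these commands are not found, SDD is not installed and Data Path Explorer cannot correlate the
host's multipath information.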

To launch the Data Path Explorer, in Topology Viewer, select one or more hosts, subsystems,
disks, volumes, or Mdisks, and click the Data Path Explorer shortcut in the MiniMap tool
(Figure 9-10).

Figure 9-10 MiniMap with Data Path Explorer shortcut

Note: In order to view the Data Path Explorer for a host, you must either have an in-band
Fabric agent installed or both a Data agent and an out-of-band Fabric agent on the SAN on
which the host is connected.

The Data Path Explorer view for hosts is shown in Figure 9-11. From this view, you can see
how the host volumes connect to the storage subsystem through the fabric.

Figure 9-11 Data Path Explorer view for hosts

The Data Path Explorer view for a subsystem is shown in Figure 9-12. If the subsystem is an SVC,
from this view, you can see how the SVC Mdisks connect to its back-end storage subsystem
through the fabric. If the subsystem is not an SVC, from this view, you can see how the
subsystem connects to its hosts.

Figure 9-12 Data Path Explorer view for subsystem

The Data Path Explorer view for an Mdisk is shown in Figure 9-13. From this view, you can see how
the Mdisk is provided by its back-end subsystem.

Figure 9-13 Data Path Explorer view for Mdisk

The Data Path Explorer view for volume is shown in Figure 9-14. From this view, you can see
how the volume from the subsystem connects to SVC or hosts.

Figure 9-14 Data Path Explorer view for volume

9.2.4 Configuration Historical Analysis
The configuration history feature takes and displays snapshots of changes that occurred in
your SAN configuration over a period of time that you specify. After you set the time period
(how often to take snapshots and how long to store them), in a page similar to the Topology
Viewer, you can manipulate a snapshot selection panel to show changes that occurred
between two or more points in the time period. System administrators can use the
configuration history feature to accomplish the following tasks:
򐂰 Correlate performance statistics with configuration changes:
For example, during collection of performance statistics (including volume performance
statistics) on an ESS system, you might delete a volume. While no new statistics are
reported on that volume, the TotalStorage Productivity Center Performance Manager
would have already collected partial statistical information prior to the deletion. At the end
of the data collection task, reporting of the partially collected statistics on the (now)
deleted volume would require access to its properties, which would not be available. The
configuration history feature, with its ability to take and store snapshots of a system’s
configuration, could provide access to the volume’s properties.
򐂰 Analyze end-to-end performance:
To know why performance has changed on volume A during the last 24 hours, it would be
useful to know what changes were made to the storage subsystem’s configuration that
might affect the volume’s performance, even if performance statistics were not recorded
on some of those elements. For example, if performance statistics on a per-rank basis are
not collected, but the number of volumes allocated on a rank is increased from 1 to 100
over time, access to that configuration history information helps with analyzing the
volume’s degraded performance over time.
򐂰 Aid in planning and provisioning:
The availability of configuration history can enhance the quality of both provisioning and
planning. For example, historical data is useful when using the TotalStorage Productivity
Center Volume Placement Advisor to provision a volume or when using the TotalStorage
Productivity Center Version 3.3 Integrated Host, Security and Subsystem Planner to plan
tasks.

Change Monitoring is the detection of changes in the environment over time and the
identification of the changes to the user.

Detection of changes is accomplished by a manually or automatically initiated action that
“snapshots” key tables in the database at periodic intervals for comparison with other snapshots.

The following information is collected in the snapshot: Subsystem, Pool, Volume, Storage Extent,
Disk Group, Fabric, Switch, Port, Node, Zone, Zone Set, Host, and HBA.

The configuration history is displayed through the Topology Viewer, like a graphic viewer.
You can identify the changes in this viewer. The change overlay is a topology overlay that
becomes active in Change Rover mode. The purpose of this overlay is to show whether an
entity visible in the Topology Viewer has been changed between point TimeA and TimeB in
time. In essence, there are only four possible states of any entity in this situation:
򐂰 The entity has not been changed between TimeA and TimeB.
򐂰 The entity has been created since TimeA.
򐂰 The entity has been deleted since TimeA.
򐂰 The entity has been modified since TimeA.

Setting up the configuration history
In the Navigation tree pane, expand Administrative Services → Configuration →
Configuration History Settings. The Configuration History Settings page displays for you
to set how often to capture SAN configuration data and how long to retain it (see Figure 9-15).
By default, TPC does not take a configuration history snapshot. You can perform the following
action to enable the snapshot:
򐂰 In the Create snapshot every field, click the check box to enable this option and type how
often (in hours) you want the system to take snapshot views of the configuration.
򐂰 In the Delete snapshots older than field, click the check box to enable this option and
type how long (in days) you want the snapshots to be stored.
򐂰 The page displays the total number of snapshots in the database and the date and time of
when the latest snapshot was taken. To refresh this information, click Update.
򐂰 To optionally create and title a snapshot on demand, in the Title this snapshot (optional)
field, type a name for the on demand snapshot and click Create Snapshot Now. If you do
not want to title the on demand snapshot, simply click Create Snapshot Now.
򐂰 To return to the default settings (the default settings are Create snapshots every 12 hours
and Delete snapshots older than 14 days), click Reset to defaults.
򐂰 To save your settings, click File → Save.
After you save the changes, TPC takes snapshots at the intervals you designated.

Figure 9-15 Configuration History Settings

Viewing configuration changes


In the Navigation tree pane, expand IBM TotalStorage Productivity Center → Analytics →
Configuration History. The floating panel (Figure 9-16) allows you to define the time periods
against which the configuration is compared to determine whether changes have occurred.

The left Time Range slider covers the range of time from the oldest snapshot in the system to
the current time. The right Snapshots in Range slider allows you to select any two snapshots
from the time interval specified by the Time Range slider. The value in parentheses beside
the Snapshots in Range slider indicates the total snapshots in the currently selected time
range. The Snapshots in Range slider has one check mark for each snapshot from the time
interval that you specified in the Time Range slider.

Each snapshot in the Snapshots in Range slider is represented as time stamp, mm/dd/yyyy
hh:mm, where the first mm equals the month, dd equals the day, yyyy equals the year, hh
equals the hour, and the second mm equals the minute. The value in parentheses beside
each snapshot indicates the number of changes that have occurred between this and the
previous snapshot. Snapshots with zero changes are referred to as empty snapshots. If you
provided a title while creating an on demand snapshot, the title displays instead of the time
stamp. If you want to remove empty snapshots, click the check box in Hide Empty
Snapshots. After you selected the Time Range and Snapshots in Range, click Apply, the
Displaying Now box will indicate the two snapshots that are currently active.

Figure 9-16 Configuration History Time Range slider

The Configuration History page (a variation of the Topology Viewer, see Figure 9-17) displays
the configuration’s entities. To distinguish them from tabs in the Topology Viewer page, tabs in
the Configuration History page (Overview, Computers, Fabrics, Storage, and Other) have a
light gray background and are outlined in orange. The minimap in the Configuration History
page uses the following colors to indicate the aggregated change status of groups:
򐂰 Blue: One or more entities in the group have changed. Note that the addition or removal of
an entity is considered a change.
򐂰 Gray: All of the entities in the group are unchanged.

Figure 9-17 Topology Viewer Configuration History

A change overlay presents icons and colors in the graphical view that indicate changes in the
configuration between the time that a snapshot was taken and the time that a later snapshot
was taken, as shown in Figure 9-18. The tabular view also displays the change status in
addition to the graphical view.

Note: In the Configuration History view, the performance and alert overlays are disabled
and the minimap's shortcut to the Data Path Explorer is not available.

Figure 9-18 Topology Viewer Configuration History detail view

Table 9-1 describes the colors and icons of the change overlay.

Table 9-1 Colors and icons of the change overlay


Light gray entities without an overlay icon (background: light gray): Entities that did not exist
at the time that the snapshot was taken or at the time that a later snapshot was taken.

Yellow pencil (background: blue/turquoise): Entities that changed between the time that the
snapshot was taken and the time that a later snapshot was taken.

Light gray circle (background: dark gray): Entities that did not change between the time that the
snapshot was taken and the time that a later snapshot was taken.

Green cross (background: green): Entities that were created between the time that the snapshot
was taken and the time that a later snapshot was taken.

Red minus sign (background: red): Entities that were deleted between the time that the snapshot
was taken and the time that a later snapshot was taken.

9.2.5 Configuration Analysis


SAN environments are getting larger and more complex. Customers are looking for
mechanisms to manage changes in the environment. One of the mechanisms is the ability to
compare the configuration against best practices policies and identify non-compliant areas of
the fabrics.

A policy is a rule that is applied to the storage environment, for example: “All the HBAs from
the same vendor and model type should have the same firmware level.” You are notified of
violations of these policies.

You can specify the configuration check for all fabrics, one fabric, or one zone set. This
provides some flexibility when you want to check only a certain scope of the fabric environment.
The configuration check job can be run as needed or scheduled just like other probe or scan
jobs.

You are notified of violations of these policies through Alerts and logfiles. The entities violating
these SAN Configuration policies will be flagged and displayed in the Topology Viewer.
Table 9-2 lists and explains the best practice policies in TPC V3.3.

Table 9-2 Policies used by the Configuration Analysis feature


1. Each connected computer and storage subsystem port must be in at least one zone in the
specified zone sets.
   Determines whether an administrator forgot to zone a connected port. Putting connected ports
   into zones is useful for security and performance reasons. Ports are usually grouped into zones
   based on applications, server operating systems, or HBA vendors. The Fabric scope is not
   supported by this policy.

2. Each HBA accesses storage subsystem ports or tape ports, but not both.
   Determines whether an HBA accesses both storage subsystem and tape ports. Because HBA
   buffer management is configured differently for storage subsystems and tape, it is not desirable
   to use the same HBA for both disk and tape traffic. The Fabric and Zone Set scopes are not
   supported by this policy because an HBA can be connected to multiple fabrics.

3. Each volume is accessed only by computers running the same type and version of operating
system.
   Determines whether computers that run different operating systems access the same storage
   volumes. Use of the same volumes by computers that run different operating systems can
   corrupt the data that is stored on the volumes. This applies regardless of whether the computers
   are in the same zone.

4. Each zone contains only HBAs from a single vendor.
   Determines whether HBAs from different vendor types are in their own zone. Receiving a
   registered state change notification (RSCN) can cause an HBA to lose a zoned device,
   preventing the HBA from seeing or communicating with other devices in the zone. To avoid
   losing a zoned device, keep HBAs from different vendor types in their own zone.

5. Each zone contains only a single model of storage subsystem.
   Determines whether different storage subsystems are in the same zone. While no technical
   problem is associated with storage subsystems from different vendors and of different models
   being in the same zone, an administrator might find them more difficult to organize. When
   similar storage systems are in the same zone, an administrator can easily group them for
   different applications.

6. Each zone is part of a zone set.
   Determines the presence of orphan zones. Orphan zones are not associated with any zone set.
   They are not useful because their definitions are not used and they take up switch resources.

7. Each host must be zoned so that it can access all of its assigned volumes.
   Determines whether the zones that were configured by the storage administrator allow each
   computer to access all of the storage volumes that are assigned to it. The administrator
   specifies the storage subsystem ports through which the computer port accesses volumes, but
   might forget to configure zones that enable the ports to communicate during volume
   assignment. This policy also determines whether zoning makes assigned volumes inaccessible
   to the computer ports. The Fabric scope is not supported by this policy.

8. Each computer has only HBAs of the same model and firmware version.
   Checks whether there is only one type of HBA in each computer. Using only one type of HBA
   minimizes configuration problems. The policy also checks whether firmware upgrades have
   been done properly for all HBAs in a computer. The Zone Set scope is not supported by this
   policy.

9. For each host type and operating system, every HBA of a given model must have the same
firmware version.
   Determines whether all firmware upgrades have been done for the HBAs in the operating
   system. The Zone Set scope is not supported by this policy.

10. Every SAN switch of a given model must have the same firmware version.
   Determines whether firmware upgrades have been done for all switches of the same type. For
   example, if you have four identical models of SAN switches from the same vendor and you
   perform a firmware upgrade on one, it is best to perform the upgrade on all of the others. The
   Zone Set scope is not supported by this policy.

11. Every storage subsystem of a given model must have the same firmware version.
   Determines whether firmware upgrades have been done for all storage subsystems of the same
   type. For example, if you have four identical storage subsystems from the same vendor and you
   perform a firmware upgrade on one, it is best to perform the upgrade on all of the others. The
   Zone Set scope is not supported by this policy.

12. Each fabric can have a maximum of x zones.
   Checks whether the number of zone definitions in the fabric is larger than the number that you
   entered. In large fabrics, too large a number of zone definitions can become a problem. Fabric
   zone definitions are controlled by one of the switches in that fabric, and limiting their number
   ensures that the switch's zoning tables do not run out of space. The Zone Set scope is not
   supported by this policy. You can enter up to a maximum of 9999 zones.

13. Each zone can have a maximum of x zone members.
   Checks whether the number of zone members in a zone is larger than the number that you
   entered. In large fabrics, too large a number of zone members can become a problem. Fabric
   zone members are controlled by one of the switches in that fabric, and limiting their number
   ensures that the switch's zoning tables do not run out of space. You can enter up to a maximum
   of 9999 zones.

Setting up the configuration analysis


Follow these steps:
1. Ensure that you have previously run discovery and probe jobs for the computers, fabrics,
switches, storage, and other objects of interest, and make sure that the IBM TotalStorage
Productivity Center Data Server and Device Server are running. If you have a Brocade
fabric, make sure that you have enabled Advanced Brocade Discovery in the Out of Band
Fabric Agents definition and provided a user name and password, so that TPC can discover
the zoning information of the fabric.
2. In the Navigation Tree pane, expand IBM TotalStorage Productivity Center →
Analytics. Right-click Configuration Analysis and choose Create Analyzer. The Create
Analyzer window opens, as shown in Figure 9-19. Perform the following steps to define the
analysis:
a. Click the check box to display a check mark beside Enabled.
b. In the Description field, type a brief description of the analysis job.
c. To specify the scope of the SAN data to be checked, make a selection from the
Configuration Analysis Scope list. Choices include:
• All Fabrics
• One Fabric
• One Zoneset
If you select One fabric or One Zoneset, an adjacent list displays for you to click a
specific fabric or zone set.
d. Check the SAN data against up to 13 policies by performing one of these selections:
• Choose all of the policies by clicking the check box to display a check mark beside
Select All/UnSelect All.
• Choose one or more individual policies by clicking the check box to display a check
mark beside each policy. For Policies 12 and 13, type values up to a maximum of
9999.

Note: If you have a large fabric environment, we recommend choosing a small
subset of the policies to run at a time. The benefit is that you can focus the analysis on
specific violations and also shorten the time needed for the analysis job. The analysis job
stores and displays only the first 50 policy violations. You must resolve these
violations and run the job again to view any remaining violations.



Figure 9-19 Configuration Analysis Create Analyzer pane

3. Scroll down to Scheduling to specify how often to run the analysis. In our case, we choose
Run Now. Click File → Save and give the analysis job a name in the pop-up box (see
Figure 9-20) to start the job.

Figure 9-20 Create Analyzer - specify name



4. When the analysis job completes, you see the panel shown in Figure 9-21, which provides
information about the job. To see detailed information, click the magnifying glass icon
in front of the entry.

Figure 9-21 Analysis job finish

The Job log file page displays the following information (see Figure 9-22):
– Scope of the run (all fabrics, one fabric, or one zone set)
– Policies that were checked
– Total violations
– Breakdown of violations per policy
– Whether the run completed successfully or the errors it encountered

Figure 9-22 Analyzer Job log file



Checking a policy violation
To view one or more alerts generated by policy violations, you can check from the Alert view
or from the Topology Viewer tabular view, which can provide more detailed information:
򐂰 Check from Alert View:
In the Navigation tree, expand IBM TotalStorage Productivity Center → Alerting →
Alert Log → Configuration Analysis. The Alert History - All Policy Violations page
(Figure 9-23) displays a log of job runs that generated alerts for policy violations. A policy
violation alert is generated for each policy that was violated during a run. A policy might be
violated several times, but only one alert is generated. Click the Magnifier icon in front of
each log; the text in the alert indicates the number of times the policy was violated.

Figure 9-23 Alert History - All Policy Violations page

򐂰 Check from Topology Viewer tabular View:


In the Navigation Tree, expand IBM TotalStorage Productivity Center → Topology. In
the Overview tab, click Fabrics, and then in the tabular view, click the Alert tab. Scroll
up and down to find the Policy xx Violated alerts. Expand an alert to see the number of
times the policy was violated, together with the enclosing entities and affected
entities (see Figure 9-24). This information provides clues to help you fix the violation.

Note: The Alert tab in the tabular view of the Overview shows all violations. If you choose
Computers, Fabrics, Storage, or Other, the Alert tab shows only the alerts related to that
device group.



Figure 9-24 Policy violation alert tab

As an example, Policy 6 (each zone is part of a zone set) was violated, so we check the affected
entities. We found that in fabric 100000051E35D514, the zones COLORADO_HBA_ITSOSVC02
and GALLIUM_HBA_ITSOSVC02 are defined but are not part of a zone set, and that in fabric
100000051E34E895, the zone COLORADO_HBA_ITSOSVC02 is defined but is not part of a
zone set. To fix the violation, we can add the zones to the zone set in each of the fabrics or, if
the zones are not needed, delete them from the fabrics, and then run the configuration
analysis again.
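As an illustrative sketch of the first option only: on Brocade switches, zones can be added to an
existing zone set (called a configuration in Brocade terminology) from the Fabric OS CLI. The
zone set name ITSO_CFG below is an assumed example; substitute the name of the zone set in
your fabric:

cfgadd "ITSO_CFG", "COLORADO_HBA_ITSOSVC02; GALLIUM_HBA_ITSOSVC02"
cfgsave
cfgenable "ITSO_CFG"

The cfgsave command saves the modified zoning database, and cfgenable activates the updated
configuration in the fabric. Commands and behavior vary by switch vendor and firmware level, so
consult your switch documentation; the same change can also be made through the zoning
function of Fabric Manager.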

We also have Policy 10 violated (every SAN switch of a given model must have the same
firmware version), so we check the switch configuration. We found that the two switches in our
lab are at different firmware levels (see Figure 9-25). To fix the violation, we can upgrade the
firmware to the same level and run the configuration analysis again.

Figure 9-25 Switch firmware level check
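If the switches are Brocade models, one quick way to confirm the installed firmware level directly
on each switch is the Fabric OS CLI (command names differ on other vendors' switches):

firmwareshow

Comparing the output on each switch of the same model shows whether they are at the same
level; TPC also reports the firmware version in the switch properties, as shown in Figure 9-25.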


Chapter 10. SAN Planner


IBM TotalStorage Productivity Center now has an integrated SAN Planner, which allows you
to create a plan based on fabric, host, volume, path and zone requirements and to execute
this plan to modify the system configuration.

In this chapter, we will explain the SAN Planner functionality and go through a step-by-step
creation and implementation of a plan.



10.1 SAN Planner overview
The SAN Planner allows you to perform end-to-end planning involving fabrics, hosts, storage
controllers, storage pools, volumes, paths, ports, zones, and zone sets. When a plan has
been generated, you can select to have the plan implemented by TotalStorage Productivity
Center.

10.1.1 Volume Planner


The Volume Planner was known as the Volume Performance Advisor (VPA) in
previous versions of TotalStorage Productivity Center. It plans and selects appropriate
storage controllers, storage pools, and storage volumes (when using unassigned volumes)
that satisfy the user's inputs. It allows you to select a controller type preference, whether the
storage request can be satisfied by multiple controller types, and the RAID level. The Volume
Planner uses the current performance utilization of storage resources to determine whether a
new volume should be allocated in a particular pool on a particular storage controller.

In TotalStorage Productivity Center V3.3, if multiple storage pools from different controllers
can potentially satisfy your provisioning request, the Volume Planner uses rated utilization of
the pools (the sum of the previous provisioning performance requirements, which might be
greater than the current utilization) to break the ties and select a candidate storage pool.

Note: Before using the Volume Planner to allocate storage based on performance
characteristics, a Performance Monitor job must be run on the target subsystem.

10.1.2 Path Planner


The Path Planner allows setup of multipath options. The Path Planner enables system
administrators and SAN administrators to plan and implement storage provisioning for hosts
and storage subsystems with multipath support in fabrics managed by TotalStorage
Productivity Center. Planning the paths between the host(s) and storage controller requires
designing paths between hosts and storage subsystems which will be implemented through
zones in the fabric.

Note: IBM Subsystem Device Driver (SDD) must be installed on a host in order to invoke
the Path Planner.

The Path Planner is used for specifying multiple paths options between selected hosts and
storage subsystems. Path Planner assists the administrator in the multipath tuning process
through the selection of these policies:
򐂰 The Multipath option specifies how the driver uses the paths between the host and the
storage subsystem (see the SDD example after this list). The options are as follows:
– Load Balancing sends I/O on all paths.
– Round Robin sends I/O on one path until a time interval expires (set in an SDD
setting at the host) or I/O stops, and then uses another path.
– Fail-Over sends I/O on one path until a failure occurs, and then fails over (switches)
to another path.
򐂰 The Specify number of paths option specifies the number of paths between each host and
the storage subsystem.



򐂰 Use fully redundant paths. The Path Planner will check for redundant fabrics between
each host and storage subsystem and create paths in each fabric. This requires at least
two fabrics.
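These multipath policies correspond to the path-selection policy of the SDD driver on the host.
As an illustration only (exact syntax varies by SDD version and platform), the SDD datapath
command can be used on the host to display the configured paths and to change the
path-selection policy of a device:

datapath query device
datapath set device 0 policy lb

In this sketch, device 0 and the lb (load balancing) policy are assumed values; rr selects round
robin and fo selects failover. The planner applies the chosen settings when the plan is executed,
but the datapath command is useful for verifying the result on the host.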

Workload profiles provide the Path Planner with estimates of required I/O traffic. These
estimates are used to determine the number of paths required from each host to the storage
subsystem and select the multipath driver path utilization. The number of paths and the driver
multipath mode can be adjusted by the performance requirements specified through the
selection of a workload profile.

The Path planner does not directly interact with the Zone planner. It provides the path
information which the Zone planner uses. Each path is represented by a host port WWPN,
target port WWPN, and a Volume ID for the volume on the target which is mapped to the host
port. These paths are created when the Zone planner is implemented.

10.1.3 Zone Planner


The Zone Planner allows you to implement automatic zoning between ports on the selected
hosts and subsystems in a fabric. All zoning is based on WWPNs. The Zone Planner will plan
zoning configuration for new storage that is provisioned for use by a host. For example, it can
be used when a new storage volume is created and assigned to a host. It can also be used
with a volume that has already been created and is assigned to a host needing more storage.
In these cases, the Path Planner and Volume Planner determine which host and storage
need to be zoned together, providing the Zone planner the exact set of ports which need to be
zoned together. The Zone planner then uses the zoning inputs for the planning. If the Volume
and Path Planners are not used, you can select the host and storage ports and then invoke
the Zone Planner.

The Zone Planner expects a list of host port and storage port pairs as input. If the Path
Planner has been invoked prior to the Zone Planner, its output is used as input to the Zone
Planner. If the subsystem and host are within the same fabric and the Zone Planner is not
checked, then existing zones or zone sets are used. If the Zone Planner is checked, a new
zone or zone set is created.

Note: For the Zone Planner to create zones, the host and subsystem must be within the
same fabric.

For the case where the host and subsystem reside together in more than one fabric, you are
given two options. The first option is to create identical zones in all of the fabrics. The
second option is to select the specific fabrics in which to create identical zones. The guidance
policies used for zone planning are as follows:
򐂰 One zone per host bus adapter (HBA)
򐂰 One zone per host
򐂰 One zone per host port
򐂰 Auto zone: largest zone that satisfies the validation policies

The following validation policies are used for zone planning (where N is specified by the user):
򐂰 No two controllers of different types should be in the same zone.
򐂰 Maximum number of zone members in a zone = N.
򐂰 Maximum number of zones in a fabric = N.



10.1.4 Requirements for SAN Planner

Note: SAN Planner supports the ESS, DS6000, and DS8000 subsystems. Unsupported
subsystems cannot be selected by the user.

򐂰 For planning, TotalStorage Productivity Center must be managing the host system, the
subsystem, and the fabric interconnecting them. If the host, subsystem, or fabric
information is not available in the repository, then the planner will not be able to generate
the plan or execute portions of the plan, and it will issue an error message.
򐂰 For volume creation, TotalStorage Productivity Center must be managing the subsystem.
You need an active CIMOM and must have completed a subsystem probe.
򐂰 For volume assignment, TotalStorage Productivity Center must be managing the host,
fabric, and subsystem. The host and subsystem must be in the same fabric.
򐂰 For zoning configuration, TotalStorage Productivity Center must be managing the fabric.
For Brocade fabrics, an out-of-band Fabric agent must be configured with the user ID and
password for the switch. For other fabrics, an in-band Fabric agent must be connected to
the fabric for zone control operations.
򐂰 A subsystem performance monitor must be run in order to select the Workload Profile
options. If a performance monitor has not been run, the Space Only workload profile
option is allowed for planning on capacity only.
򐂰 The IBM Subsystem Device Driver (SDD) is required on each host for multipath planning.

10.1.5 SAN Planner panels


The SAN Planner consists of three panels, as follows:
1. The configuration panel (see Figure 10-1) contains:
– Create Plan pane
– Planner Selection pane
– Volume Planner, Path Planner, and Zone Planner
– Get Recommendation button



Figure 10-1 SAN Planner Configuration Panel

2. The Planner Selection Topology Viewer panel (see Figure 10-2) contains:
– Topology Viewer in the graphical and tabular view of the configuration, from which
configuration elements can be selected.
– Select Elements pane, into which the selected elements are moved. These elements
are used later by the planners during the planning and implementation stages.

Figure 10-2 SAN Planner Selection Topology Pane



3. The recommendation panel (see Figure 10-3) contains:
򐂰 The Create Plan pane
򐂰 The Plan involves the following changes pane (displays the planner recommendations)
򐂰 The When to run pane (task scheduler)
򐂰 The How to handle time zones pane
򐂰 The following four buttons:
– The Return to Planner Input button returns you to the initial planner configuration
panel.
– The Save button saves the plan for later modification.
– The Execute Plan button creates a task to execute the plan.
– The Cancel button cancels the SAN planner and exits.

Figure 10-3 SAN Planner Recommendation Panel



10.1.6 Usage combinations for the three planners
The three planners can be invoked in an integrated manner or independently of one another.
The following examples show the combinations in which the three planners can be used:
򐂰 Using all three planners provides end-to-end planning on fabrics (one or more) that will
create the storage, zone the new storage to the host, and configure the multipath settings
on the host.
򐂰 Using Path and Zone Planners for cases such as adding a new path between the host and
volume. Select a host and volume to be used as input to the Path and Zone planners.
򐂰 Using Volume and Path Planners for cases where the host is already zoned to the storage
subsystem and you want to add more volumes.
򐂰 Using the Volume and Zone Planners for cases where additional paths are added between
the switch and storage subsystem. Select a host to be used as input to Volume and Zone
planners.
򐂰 Using the Volume Planner by itself to create a volume for later use.

10.2 Invoking the planners


In this section we explain how to create a SAN Plan.

10.2.1 Introduction
You can create a SAN Plan by right-clicking the IBM TotalStorage Productivity Center →
Analytics → SAN Planner node and selecting Create Plan (see Figure 10-4).

Figure 10-4 Creating a new SAN Plan through Analytics



You can also create a SAN plan from the IBM TotalStorage Productivity Center → Disk
Manager → SAN Planner node and selecting Create Plan (see Figure 10-5).

Figure 10-5 Creating a new SAN Plan through Disk Manager



10.2.2 The SAN Planner Configuration panel
The Planner Selection pane displays a tree view of the hosts, fabrics, and storage
subsystems which are used during the planning process. This pane is populated in two ways:
1. By clicking the Add button on the Planner Selection pane. This brings you into the Planner
Selection Topology Viewer panel (see Figure 10-6). This viewer displays the configuration
and what elements have been selected for the planning. The selections are moved from
the graphical view into the Select Elements pane. Click OK to return to the Planner
Selection pane on the configuration panel. The pane displays the selected elements.

Figure 10-6 SAN Planner Selection Panel

2. Through the Launch Planner selection from the TotalStorage Productivity Center Topology
Viewer. In the graphical view of the Topology Viewer, highlight one or more supported
elements, right-click, and select Launch Planner from the context menu (see
Figure 10-7). The selection is placed into the Planner Selection pane of the SAN Planner
configuration panel.



Figure 10-7 Launching SAN Planner through the Topology Viewer

Both methods provide a Topology Viewer from which you can select a variety of elements.
The elements allowed for selection are fabrics, storage subsystems, storage pools, volumes,
and computers. You can select a storage subsystem, not select any storage subsystems, or
select a storage subsystem and one or more storage pools. You might also select one or
more fabrics for use in path and zone planning.

10.3 Using the SAN Planner


This topic describes how to use the SAN Planner to select configuration information for a plan
and modify a configuration.

10.3.1 Introduction
You must run a fabric discovery and complete subsystem and fabric probes in order to use
the SAN Planner.

Note: The SAN Planner is available for ESS, DS6000, and DS8000 systems. Unsupported
subsystems will not be allowed to be selected.

A Data agent is required to obtain host information used in volume assignments. An in-band
Fabric agent is optional if the fabric information is available through another in-band agent in
the same fabric, an out-of-band agent, or through a CIMOM-managed switch. Run the in-band
host agent probe to get the host operating system information. If the in-band host agent has
not been run and the storage controller also does not know about the host operating system,
an asterisk is displayed next to the host port. Ensure that the operating system-imposed limits
on the number of volumes that can be accessed are not exceeded.



10.3.2 Invoking the SAN Planner
In the Navigation Tree under IBM TotalStorage Productivity Center, SAN Planner can be
accessed through Analytics or Disk Manager. Follow these steps:
1. Expand the Analytics or Disk Manager node and right-click on SAN Planner and select
Create Plan. The configuration panel opens. This contains five panes: Create Plan,
Planner Selection, Volume Planner, Path Planner, and Zone Planner (see Figure 10-8).
If a plan has already been configured and saved, clicking the saved plan (located under
SAN Planner) opens the configuration panel with the same five panes. Instead of a Create
Plan pane, the saved plan starts with a pane named plan_name, where plan_name is the name
that the plan was given when it was saved previously.

Figure 10-8 Invoking the SAN Planner

2. In the Create Plan pane, enter a short description of the plan into the Description field.
3. The Planner Selection pane will be empty if no input has been made.
4. Using the Add button to select elements for the Planner Selection pane is described in the
following steps:
a. Click the Add button. This brings up the Planner Selection Topology Viewer panel.
Select the storage controllers, hosts, fabrics, and storage pools to be used for volume
provisioning considerations.
The topology viewer displays the actual configuration and the plan selections. The
Topology Viewer pane displays the configuration from which selections are made. The
Select Elements pane displays the selection of elements to be used in the plan.
b. Expand the Graphical Storage view by double-clicking on the box title for a subsystem.
This expands the storage view into the L0 level view of the available subsystems.
c. Select the elements by clicking on any of the supported elements. For multiple element
selections, press the Ctrl key while you click on each element icon (see Figure 10-9).



Figure 10-9 SAN Planner Selection Pane

d. Click >> to move the selected elements into the Select Elements pane. If you decide to
remove the selected elements from the Select Elements pane, click <<.
The Select Elements pane lists the fabrics, computers, subsystems, pools, and
volumes if selected. All selections must be located within the same fabric for SAN
Planner to work. The Fabrics section lists the fabrics (by WWN) and the corresponding
selected subsystems and selected hosts within each fabric. A subsystem might appear
multiple times if it has been configured in different fabrics. The Subsystems section will
contain the selected storage subsystems. Next to each subsystem, the number in parentheses
indicates how many fabrics the subsystem belongs to. Selected pools will appear
under their respective storage subsystem. Selected volumes will appear under their
respective pools. The Computers section lists the selected hosts; the number in parentheses
indicates how many fabrics the host belongs to. The host(s) will be used in path
planning and zone planning if Path Planner and Zone Planner are selected in the
configuration panel.
e. Click OK when you are satisfied with all your selections. This will return to the SAN
Planner Configuration panel with the selections displayed in the Planner Selection
pane.
5. When you have returned to the configuration panel, the Planner Selection pane will
contain the selected elements (see Figure 10-10). If you decide to remove an element
from the Planner Selection pane, click on the item to highlight it and click the Remove
button. The element will disappear from the Planner Selection pane and will not be used
for the SAN plan.
You might decide to save your configuration inputs to review at a later time. This is done by
clicking the Save icon on the TotalStorage Productivity Center toolbar. If hardware
configurations are modified and the saved plan is reactivated, the Planner Selection pane
contains all the elements as they were at the time the plan was saved.



a. If an element appearing in the Planner Selection pane has been physically removed,
clicking the Get Recommendation button will cause an error. Remove the element by
highlighting it in the Planner Selection pane and click the Remove button.
b. If more fabrics, storage subsystems, storage pools, volumes, or computers have been
added, they will be taken into consideration when the Get Recommendation button is
clicked. If they meet your input requirements, they will be included in the
recommendation panel.

Figure 10-10 SAN Planner Creation Pane

6. You are ready to start the volume planning. Click on + to expand the Volume Planner
options. If you do not want to do any volume planning, uncheck the Volume Planner box to
disable the fields within the pane.

Note: Based on the user inputs, the Volume Planner selects appropriate storage
controllers, storage pools, and storage volumes when using unassigned volumes.

a. Enter the total capacity of the volumes to be used for provisioning in the Total Capacity
field.
b. Click on Divide capacity between if you want to divide the Total Capacity among one
to several volumes. For example, enter 1 and 1 volumes if you want the total capacity to
be on 1 volume. Enter 1 and 5 volumes if you want the total capacity to be divided
among 1 to 5 volumes.
c. Click on Divide capacity among volumes of size if you want to keep the capacity of
each volume between x.x GB and y.y GB.
d. On Workload Profile, select the workload profile that represents how the new volumes
will be used.



Note: For a workload profile selection, a performance monitor must have been run
on the storage controller unless the Space Only workload profile is selected.

The predefined workload profiles are as follows:


• OLTP Standard: For typical online transaction processing.
• OLTP High: For very active online transaction processing.
• Data Warehouse: For applications with inquiries into large data warehouses.
• Batch Sequential: For batch applications involving large volumes of data.
• Document Archival: For document archival applications.
• Space only: Use this option for a storage subsystem that has not been monitored
for a performance data collection.
You can create your own Workload Profile under the Disk Manager → Policy
Management → Workload Profiles node. The selection list is the same as listed above.
e. On RAID Level, select the possible Raid levels on the available storage subsystems.
The predefined RAID levels are as follows:
• <system selected> This is the default value. The best possible RAID level of the
volume will be selected based on user input and the available performance and
capacity of the underlying storage subsystems.
• RAID 1: The format type of the volume will be RAID 1 format.
• RAID 5: The format type of the volume will be RAID 5 format.
• RAID 10: The format type of the volume will be RAID 10 format.
f. For Volume Name Prefix, enter a name which will be used as the prefix in the volume
name.
g. Check the box for Use existing unassigned volumes (if available) if you want to use
existing volumes which are unassigned.

Note: An unassigned volume cannot be selected if it does not meet the performance
requirements selected in Workload Profiles and RAID Level.

h. Click the Suggest Storage Pools button to obtain a selection of storage pools.

Note: When the user has specified storage controllers in the Planner Selection, the
planner lists the storage pools from those storage controllers which have the storage
capacity as requested by the user.

Suggest Storage Pools selects a set of storage pools from which to create the volumes.
Note that Suggest Storage Pools cannot be used if volumes were selected during the
selection process and are visible in the Planner Selection pane. Storage pools are not
used if they are full or if they are not visible from all the hosts. They also have to be fixed
block to be used by the planners.
7. Click the Get Recommendation button if you want the SAN planner to select one or more
volumes depending on the inputs you provided.



Note: If no storage pools were selected, then clicking on Get Recommendation will
cause the planner to consider all the pools existing on the selected storage
subsystems.

If storage controllers and storage pools were selected, then clicking on Get
Recommendation will cause the planner to consider only those pools visible in the
Planner Selection pane.

Figure 10-11 SAN Planner Creation Pane

8. You are ready to start the path planning. Click on + to expand the Path Planner options
(see Figure 10-12). If you do not want to do any path planning, uncheck the Path Planner
box to disable the inputs to the Path Planner pane.
a. Select the Multipath option to determine how I/O will be distributed across all paths.
The predefined I/O options are as follows:
• Load Balancing: Sends I/O on all paths.
• Round Robin: Sends I/O on one path until a time interval expires (set in an SDD
setting at the host) or I/O stops, and then uses another path.
• Fail-Over: Sends I/O on one path until a failure occurs and fails over (switches) to
another path.
b. Check the box for Specify number of paths to input the number of paths you wish to
configure. Enter the number of paths in the box on the right.
c. Check the box for Use Fully redundant paths to use the paths from host to storage
subsystem(s) through a minimum of 2 fabrics. Note that this requires 2 fabrics.



Figure 10-12 Path Planner Pane

9. You are ready to start the zone planning. Click on + to expand the Zone Planner options
(see Figure 10-13). If you do not want to do any zone planning, uncheck the Zone Planner
box to disable the inputs to the Zone Planner pane.

Note: Based on the user inputs, the Zone Planner allows the zoning to be changed to
ensure that hosts can see the new storage.

a. Select the Automatically create zone to indicate where zoning will be done. The
predefined options are as follows:
• <auto-zone> This is the default. The plan will be generated using the maximum
number of zones without grouping the data paths based on host, host ports, or host
bus adapters (HBAs).
• ...for each host: Create a zone for each host.
• ...for each HBA: Create a zone for each HBA.
• ...for each host port: Create a zone for each host port.
b. Check the box for Specify maximum number of zones. Enter the maximum number of
zones in the box to the right.
c. Check the box for Specify maximum zone members per zone. Enter maximum
number of zone members (per zone) in the box to the right.
d. Check the box for Use active zone set if you want any zone set that is available to be
selected.
e. Check the box for Append zone name prefix and enter a zone name prefix in the box
to the right if you want to set a prefix name for each zone.



Figure 10-13 Zone Planner Pane

10.When the settings are specified, click the Get Recommendation button. All the settings
are validated and any errors are surfaced. If there are no errors, a plan is generated and the
recommended configuration is displayed in a new panel (see Figure 10-14).
a. In the Create Plan pane, enter a description in the Description field.
b. If you are satisfied with your selections, go to the When to run pane and either click
Run Now to start the task immediately when it is submitted, or click Run Once to
start the task at the date and time that you select.
c. The TotalStorage Productivity Center server and the GUI can be located in different
time zones. The How to handle time zones pane lets you control which time zone is
used when running the task. Click Use the time zone that the server runs in if you
want to use the time zone where the TotalStorage Productivity Center server resides.
Alternatively, click Use this time zone and select the time zone from the drop-down list box.
This starts the task at the date and time entered in the When to run pane, using the
time zone selected here.
d. Click the Execute Plan button to save the plan and to execute it. This starts the job at
the date, time, and time zone specified in the When to run and the How to handle time
zones panes. The executed task will have a job status with the ability to view the job
log(s).



e. Click the Save button to save the plan for later inspection. A dialog box will appear.
Enter a plan name in the Specify Plan name field. This creates and saves a task under
SAN Planner. To return into this plan, go to SAN Planner and click on the task. This will
be displayed with all the selections that were made prior to saving.
If hardware configurations have been modified and the saved plan is reactivated, the
Plan involves the following changes pane will contain all the elements as they were at
the time the plan was saved.
• If an element appearing in the Plan involves the following changes pane has been
physically removed, continuing with Execute Plan will cause an error. Click the
Return to Planner Input button to return to the configuration input panel. Remove
the element by highlighting it in the Planner Selection pane and click the Remove
button. Click the Get Recommendation button to refresh the recommendation
panel with the latest selections.
• If more fabrics, storage subsystems, storage pools, volumes, or computers have
been added, they will be taken into consideration when the Get Recommendation
button is clicked. If they meet your input requirements, they will be included in the
recommendation panel.
f. Click the Return to Planner Input button to make more adjustments to the plan inputs.
This returns you to the configuration panel.
g. If you decide to completely stop planning, click the Cancel button. Any inputs made will
not be saved. You will be taken completely out of the SAN Planner section.

Figure 10-14 SAN Planner recommendation pane

11.When you execute the plan, a pop-up window appears asking you to confirm the action (Figure 10-15).



Figure 10-15 SAN Planner Confirmation Window

12.After the job has been submitted, a job entry will appear under IBM TotalStorage
Productivity Center → Analytics → SAN Planner → [SAN Planner Name] (see
Figure 10-16).

Figure 10-16 SAN Planner Job in the Running State



13.When the job is complete, the entry will turn green, meaning that the job has completed
successfully (see Figure 10-17).

Figure 10-17 SAN Planner Job in the Completed State


Chapter 11. Enterprise server rollup function


In this chapter we introduce the TotalStorage Productivity Center V3.3 enterprise rollup
feature and its benefits. Many customers are currently deploying multiple TPC servers in their
environments because of multiple physical infrastructures. Also, the limitation in the number
of agents that one server can have is addressed by enabling the customer to partition large
numbers of agents across multiple TPC Data Servers.

Customers require a way to roll up network wide summary metrics from multiple TPC servers
and report on their storage environment from a network wide perspective. In this chapter we
describe:
򐂰 How to define subordinate TPC servers to the master TPC server.
򐂰 How to combine reports for multiple TPC servers into an enterprise-wide rollup report that
can give the customer a full view of their environment.



11.1 Enterprise server rollup overview
IBM TotalStorage Productivity Center V3.3, illustrated in Figure 11-1, provides an enterprise
rollup of reporting across multiple TPC servers in an environment. A recommended
configuration is to have, at most, one TPC server acting as a "master" or "enterprise" server
that gathers enterprise-wide data for reports, and all other servers acting as "subordinate"
servers that provide data about the entities they manage. Note that the master and subordinate
roles can change over time. A master TPC server can also manage entities just like
any ordinary TPC server and report on these entities.

Rolled-up data from TPC subordinate servers is stored in new tables on the master TPC
server. If the rollup data were put into the existing T_RES_*** tables, all of the existing TPC
reports would have to be modified to exclude it. The new table definitions in TPC V3.3 allow
you to create, store, and retrieve the rolled-up data.

TPC subordinate servers should have no more than 1200 unique data sources. This number
includes Data agents, Fabric agents (in-band and out-of-band), CIM agents, and VM agents
(VMWare). When this threshold has been met, a new TPC subordinate server should be
deployed and all new agents should be pointed to it. If the same storage entity is managed by
multiple subordinate servers, rollup reports reflect the storage information from the
subordinate server that most recently probed that entity.

Figure 11-1 TotalStorage Productivity Center rollup overview



The TotalStorage Productivity Center V3.3 rollup feature allows you to navigate a hierarchical
view of assets by drilling down through the following report categories:
򐂰 Agents
򐂰 Computers
򐂰 Storage Subsystems
򐂰 Disk/volume Groups
򐂰 Disks
򐂰 File systems or logical volumes
򐂰 LUNs
򐂰 Fabrics

In the Navigation Tree, you can use the Rollup Reports → Asset → Agents report to view
information about Data agents and Device agents that are associated with subordinate
servers in your environment. You can generate and sort the data within this report by clauses,
as follows:
򐂰 By Agent: Shows agents sorted according to the name of the machine on which they are
installed.
򐂰 By OS Type: Shows agents sorted according to the operating system under which they
run. Click the magnifying glass icon next to an operating system type to view more
detailed information about each of the agents running under that operating system.
򐂰 By TPC Server: Shows agents sorted according to the subordinate server that manages
them.

For the TPC server rollup feature, all servers must be at TPC V3.3. Versions older than V3.3
(for example, TPC V2.3 and TPC V3.1) do not support the TPC server rollup function.

11.2 Preparing for enterprise server rollup


The following subsections describe the definition, execution, and report generation process of
the TotalStorage Productivity Center V3.3 rollup feature. The first step is to identify the server
that will be the master. Launch the TPC server GUI and log in as a TPC superuser
or administrator.

Perform the following steps on the master TPC server to add a subordinate TPC server:
1. Expand the Administrative Services → Data Sources portion of the Navigation Tree.
2. Click on TPC Servers
3. Click the Add TPC Server button.

Figure 11-2 shows the result of these steps.



Figure 11-2 TPC Subordinate Server Definition

When the Add TPC Server dialog appears, enter the following items:
򐂰 Host Name or IP address: We recommend that you enter a fully qualified DNS name.
򐂰 Host Device or Server Port: The default is port 9550.
򐂰 Host Authentication Password: This is the TPC Host Authentication password of the
Device Server.
򐂰 Optionally, input a Display Name.
򐂰 Optionally, input a Description.
򐂰 Select the Test TPC Server connectivity before adding check box to verify, before the
definition is saved, that the subordinate TPC server is up and running and that its
authentication data matches the data provided in the Add TPC Server dialog (a basic
reachability check from the command line is shown after this list).
򐂰 Click the Save button on the Add TPC Server dialog.
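If the connectivity test fails, a quick way to confirm from the master server that the subordinate's
Device Server port is reachable over the network is a plain TCP connection attempt, for example
with telnet. The host name below is a placeholder; substitute the host name and port of your
subordinate server (9550 is only the default):

telnet subordinate-tpc.example.com 9550

If the connection is refused or times out, verify that the Device Server is running on the
subordinate and that no firewall is blocking the port before you retry the Add TPC Server dialog.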

The TPC GUI sends the request to the TPC master server to add the TPC subordinate server
and perform a check. The TPC master server verifies that the subordinate TPC server is not
already being monitored, and it verifies that the login information for the subordinate TPC
server is correct by connecting to the remote TPC subordinate server. When these verification
steps are completed, the TPC master server adds the subordinate TPC server to its database
and returns a success code to the TPC GUI. The panel in Figure 11-3 illustrates a successful
add of a TPC subordinate server.



Figure 11-3 Successful ADD of a TPC subordinate server

11.2.1 TPC server probe process


The information from TPC subordinate servers is extracted by using the TPC Device Server
probe. Probing is the process of gathering detailed information, such as storage subsystem
usage statistics and asset information about computers, clusters, and fabrics.

When a probe is run, the data server on the master TPC server authenticates with the device
servers on the subordinate TPC servers by using the host authentication passwords provided
by the user. It then makes Web service calls to the TPC Device Server on the subordinate.
Batch reports can be generated on any TPC Data agent that is being directly monitored by
the TPC master server from which the enterprise-wide rollup reports are generated. Adding
capabilities to the device server services allows other users of TPC to gain access to these
functions. For instance, workflows and the TPC CLI could very easily use the new capabilities
provided by the rollup component.

To probe subordinate TPC servers, you have to create a probe definition. By default, all the
newly registered subordinate servers are added in the default TPC server probe definition.
Prior to scheduling, a TPC Server probe verifies that at least one subordinate TPC server with
data is being monitored by the master TPC server.

To schedule a TPC server probe, perform these steps from the master TPC server:
1. Expand the IBM TotalStorage Productivity Center → Monitoring → TPC Server Probes
portion of the Navigation Tree.
2. Select All TPC Servers, as shown in Figure 11-4, and all the resources will be probed.
3. You also have the choice of selecting specific TPC servers to probe, as well as the class
of information to pull from the probed servers (subsystem, fabric, computers, clusters,
tapes, databases). This information is stored in the master's local database.



Figure 11-4 Create TPC Server Probe - What to PROBE

The results of a successful TPC subordinate probe should be similar to what is depicted in
Figure 11-5, depending on the number of TPC subordinates selected and the data gathered.

Figure 11-5 Job log for TPC Server Probe



11.2.2 TotalStorage Productivity Center V3.3 generating rollup reports
TotalStorage Productivity Center includes different rollup reporting categories that provide
you with the flexibility to view data about your storage resources according to the needs of
your environment. The following rollup reporting categories are available:
򐂰 Asset: These reports provide detailed statistics about agents (TPC), computers, storage
subsystems, disk and volume groups, disks, filesystems, logical volumes, LUNs, and
fabrics.
򐂰 Database Asset: These reports provide detailed statistics about the RDBMSs in your
environment, including Oracle, SQL Server®, Sybase, and UDB.
򐂰 Capacity: These reports provide storage metrics related to the disk capacity, filesystem
capacity, filesystem used space, and filesystem freespace of the storage entities in your
environment.
򐂰 Database Capacity: These reports provide storage metrics related to the storage capacity
of the RDBMSs in your environment, including Oracle, SQL Server, Sybase, and UDB.

How to generate rollup reports


To generate an enterprise rollup report, expand the navigation tree to display the type of
report you want to generate. For example, if you want to generate a disk capacity report,
expand Data Manager → Rollup Reports → Capacity → Disk Capacity.

To view rollup capacity information according to storage subsystem, click Rollup Reports →
Capacity → Disk Capacity → By Storage System. The Selection page in Figure 11-6 is
displayed. Use the Available Columns and Included Columns list boxes to determine what
columns are displayed in a generated report as shown in Figure 11-6.

Figure 11-6 Storage Subsystem Capacity Rollup Report



Use the Selection page to select the profile to use when generating a report and determine
what columns appear within a report. Click Selection... to select the objects that you want to
report upon from the Select Resources window (see Figure 11-7).

Figure 11-7 Storage Subsystem Selection window for Rollup Reporting

Click Filter to further filter the objects that appear in a report. Filters enable you to apply
general rules to the report based on the rows in that report. Figure 11-8 illustrates the Filter
panel.

Figure 11-8 Rollup Reporting Filter bypass



Click Generate Report and a new tab will be added to the tab dialog representing the report
that you generated. Reports are tabular in format and are comprised of rows and columns.
You can scroll the report up and down and to the right and left to view the entire report. Use
the View menu to hide/show the Navigation Tree to increase the viewable area of the report or
drag the divider bar in the middle of the panel back and forth to reallocate the amount of
space that is given each pane. Figure 11-9 is an example of a Storage Subsystem Capacity
Rollup report.

Figure 11-9 Storage Subsystem Capacity Rollup report

In the next example we guide you through the generation of an Asset information report by
computer. To generate an enterprise rollup report, launch the TPC GUI for the master TPC
server and log on as a TPC superuser or administrator:
1. In the main Navigation Tree, expand the IBM TotalStorage Productivity Center →
My Reports → Rollup Reports node in the TPC GUI.
2. Expand either Asset or Capacity
3. Expand and select one of the reports. (The Report Selection Panel is displayed). Notice
the information in the right hand column: Rollup Reports - Asset. Use these reports to view
detailed statistics about computers.
4. You have the ability to modify the selection and filter criteria for the report. Click the
Generate Report button.
5. The report is generated and displayed.
6. The device server on the subordinate server must be running in order for the master
server to pull data.



Figure 11-10 illustrates the panel used to generate Asset information reports by computer.
Notice the information in the right hand column, entitled Included Columns.

Figure 11-10 Report filter specifications

Figure 11-11 shows the agents that are installed and used to provide data for rollup reports.

Figure 11-11 Rollup reports showing installed Agents and their versions



In these two examples we illustrated how you can roll up network-wide summary metrics from
multiple TPC servers and report on your storage environment from a network-wide
perspective.




Chapter 12. VMware ESX Server support


TotalStorage Productivity Center now has improved support of the VMWare Virtual
Infrastructure, which consists of the VMWare ESX Server and the VMWare VirtualCenter.
The supported guest operating systems that can be run on a virtual machine are those that
are supported both by the Data agent and the ESX Server. For full functionality, both the Data
agent and Virtual Infrastructure must be up and running. If one of the items is not present in a
given environment, only a limited picture is presented and some virtual machines might not
be recognized.



12.1 VMWare ESX Server overview
The VMWare ESX Server is a hypervisor product which can host multiple virtual machines
that run independently of each other while sharing hardware resources. VMWare allows a
single physical computer system to be divided into logical virtual machines running various
operating systems. To the applications running inside the VM, it is a computer system with a
unique IP address and access to storage that is virtualized by the hosting system, also known
as a hypervisor.

The VMWare VirtualCenter is the management application that is the central entry point for
the management and monitoring of multiple ESX Servers in a data center. To utilize the
improved VMWare support, two data sources are required: a VMWare ESX Server or VMWare
VirtualCenter (Virtual Infrastructure) data source, and a TPC Data agent on each virtual
machine that you plan to monitor.

For more information about the ESX Server or VMWare VirtualCenter, see:
http://www.vmware.com

For a list of supported VMWare products and support guest operating systems, consult the
IBM TotalStorage Productivity Center support page:
http://www.ibm.com/servers/storage/support/software/tpc/

As shown in Figure 12-1, the data flow between the VMWare environment and TPC consists
of two different connections: the connection of TPC to the VMWare Host Agent of the VMWare
ESX server through the new VMWare VI data source, and the connection of the TPC Data
agents residing in the virtual machines inside the VMWare ESX server.

Attention: There is no need to install a Data agent on the VMWare ESX server itself.
This feature is not supported.

Figure 12-1 VMWare and TPC environment flow



No HBA virtualization is available for the VMWare virtual machines at the current time.
Therefore, if you install a Fabric agent on a VMWare virtual machine, the Fabric agent will not
be functional.

12.2 Planning for VMWare configuration


Refer to Appendix A, “Configuring X11 forwarding” on page 295 for instructions to enable you
to use the graphical installers of the UNIX/Linux distributions of TPC V3.3 from your Windows
workstation.

This topic provides information on planning for VMWare Virtual Infrastructure configuration.
Before you can display reports or see the topology for VMWare Virtual Infrastructure, you
must complete the following general steps:
1. Import certificate. If the VMWare Virtual Infrastructure uses SSL certificates for
communication, you will have to use keytool to manually import the SSL certificates into a
truststore. Each Virtual Infrastructure data source provides an individual certificate. There
will be a default truststore registered in the Device Server’s system properties file. keytool
is a tool shipped with the Java run-time environment.
2. Add the VMWare VI data source. The data source can be a hypervisor (ESX Server or
VirtualCenter). This is the first step in getting information from VMWare Virtual
Infrastructure. Adding a VMWare data source is similar to adding a CIM agent or Data
agent.
3. Test the connection to the VMWare VI data source. This ensures that you can access
information from the VMWare data source.
4. Run a discovery job for the VMWare environment. The discovery is needed to retrieve
every ESX Server instance that is part of the Virtual Infrastructure that has been added.
The discovery mechanism is similar to a discovery for storage subsystems. Discovery jobs
can be scheduled and are performed on the complete list of known VMWare data sources.
5. Run a probe job for the ESX Server, hypervisor, and virtual machines. This step will get
the detailed information from the hypervisors and virtual machines for IBM TotalStorage
Productivity Center.
6. Configure alerts for VMWare. You can create alerts for the following alert conditions:
– Hypervisor discovered
– Hypervisor missing
– Virtual Machine added
– Virtual Machine deleted
7. Install the Data agent on each of the virtual machines you wish to monitor. For full
functionality, you need two data sources.
8. You will now be able to view VMWare reports and VMWare topology.
For reports, note the following considerations: You must probe both the ESX Server and
the Data agent in the virtual machines before you can generate accurate reports for disk
and file system capacity. For example, suppose you have an ESX Server with 100 GB of
storage, of which 60 GB is allocated to a virtual machine, and the virtual machine uses
5 GB of that space. Both the ESX Server (H1) and the virtual machine (VC1) have been
probed. You also have a physical computer (PC1) that has been probed. The TOTAL
capacity for the file system or disk capacity row includes everything: virtualized disks and
virtual machines as well as non-virtualized disks and machines.



12.3 Configuring TPC communication to VMWare
Based on the individual planning steps, this is the actual configuration of TPC for
communication with a VMWare ESX server:
1. Download the VMWare SSL certificate:
For communication, the VMWare ESX Server and the VMWare VirtualCenter Server use
self-generated SSL certificates called:
rui.crt
located in the following directories:
For the VMWare ESX Server the certificate is located in the directory:
/etc/vmware/ssl
For the VirtualCenter Server the certificate is located in:
C:\Documents and Settings\All Users\Application Data\VMware\VMware
VirtualCenter\SSL
Copy the certificate files from the VMWare components to a directory on your local client
machine.
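For example, if ssh/scp access to the ESX Server service console is enabled in your
environment, the certificate can be copied with a single command (this is only a sketch;
faroe is the ESX Server from our example environment, so substitute your own host name):
scp root@faroe:/etc/vmware/ssl/rui.crt .
The file then resides in the current directory of your local client machine, ready for the
import in the next step.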
2. Install these VMWare certificates into a certificate store. This can be done on your local
workstation. Afterwards, copy the truststore to your TPC server. Use keytool on your local
workstation to generate the certificate store / truststore.
The keytool command is part of the Java runtime environment. If you work on Windows,
locate keytool by running a search. Click:
Start → Search → All files and folders
and search for:
keytool.*



The search results will look something like Figure 12-2.

Figure 12-2 keytool search

Use a current version of keytool; with respect to Figure 12-2, the keytool.exe marked at the
top qualifies.



3. Create certificate store / truststore:
Use the keytool command to create the truststore. The command syntax of the keytool
command is shown in Figure 12-3.

Figure 12-3 keytool command syntax

The syntax to create the truststore for the TPC server is:
keytool -import -file <certificate-filename> -alias <server-name> -keystore
vmware.jks



In our environment, for example, the command looks like the one shown in Figure 12-4:
keytool -import -file rui.crt -alias faroe -keystore vmware.jks
The file from the VMWare ESX server is named rui.crt. The VMWare ESX server in our
environment is named Faroe. The truststore will be called vmware.jks. Enter a password
for the keystore and enter yes to trust the certificate as shown in Figure 12-4.

Figure 12-4 Truststore command syntax
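To double-check that the certificate really made it into the new truststore, you can list its
contents; keytool prompts for the keystore password you just set (shown here as an optional
verification step, not as a required part of the procedure):
keytool -list -keystore vmware.jks
The output lists one trustedCertEntry per imported certificate, identified by the alias you
specified (faroe in our example).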

4. Truststores are located in the Device Server configuration directory of your TPC server:
<TPC_install_directory>/device/conf
Copy the newly created certificate store / truststore to this Device Server configuration
directory. At service startup time, the truststore will automatically be defined in the Device
Server JVM as the system property:
javax.net.ssl.trustStore

5. Add the VMWare VI data source. The data source can be a hypervisor (ESX Server or
VirtualCenter). This is the first step in getting information from VMWare Virtual
Infrastructure. Adding a VMWare data source is similar to adding a CIM agent or Data
agent. Go to Administrative Services → Data Sources → VMWare VI Data Source in
your TPC GUI and click Add VMWare VI Data Source (see Figure 12-5).

Figure 12-5 VMWare VI Data Source



Test the connection to the VMWare VI data source by clicking Test VMWare VI Data
Source connectivity. This ensures that you can access information from the VMWare
data source (see Figure 12-6).

Figure 12-6 VMWare VI data source connectivity

If the connection to your data source is successful, the information window will look similar
to Figure 12-7.

Figure 12-7 Successful connection



6. Run a discovery job for the VMWare environment. The discovery is needed to retrieve
every ESX Server instance that is part of the Virtual Infrastructure that has been added.
The discovery mechanism is similar to a discovery for storage subsystems. Discovery jobs
can be scheduled and are performed on the complete list of known VMWare data sources.
Go to Administrative Services → Discovery → VMWare VI Data Source and configure
your VMWare VI Data Source discovery (see Figure 12-8). Run the discovery job now.

Figure 12-8 VMWare discovery job



7. Run a probe job for the ESX Server, hypervisor, and virtual machines (see Figure 12-9).
This step will get the detailed information from the hypervisors and virtual machines for
IBM TotalStorage Productivity Center. Go to IBM TotalStorage Productivity Center →
Monitoring → Probes and create a Probe for your VMWare Hypervisor and Computers.
For a total view of your VMWare VI environment, you need the VMWare VI Data Source
and the Data agents running on the Virtual machines.

Figure 12-9 Probe job creation



8. Configure alerts for VMWare (see Figure 12-10). You can create alerts for the following
alert conditions:
– Hypervisor discovered
– Hypervisor missing
– Virtual Machine added
– Virtual Machine deleted
Go to Data Manager → Alerting → Hypervisor Alerts and right-click Hypervisor
Alerts. Click Create Alert from the pop-up menu. Specify the alert details
and click Save to save your alert.

Figure 12-10 Hypervisor alert creation



9. Install the Data agent on each of the virtual machines you wish to monitor. For full
functionality, you need two data sources. The installation of a Data agent inside a VMWare
virtual machine is done the same way as the installation of a Data agent on a physical
server. Make sure you have a platform that is supported by VMWare and TPC.
10.The general Topology Viewer overview will contain special information about Hypervisors
and VMWare virtual machines. Go to IBM TotalStorage Productivity Center →
Topology and look at the overview. It will look similar to Figure 12-11. Notice the Virtual
Computers and Hypervisors in the Computers box. Double-click Computers to open
the L0 overview.

Figure 12-11 Topology Viewer Overview panel showing Hypervisor



11.The L0 overview of Computers shows all computers, virtual computers and hypervisors.
Notice the computer type column on the bottom. Double-click on your Hypervisor to get to
the L2 overview of your Hypervisor computers (see Figure 12-12).

Figure 12-12 L0 Computers view showing tabular list of entities



12.The L2 Computer panel will show detailed information about your hypervisor. The
information is separated into the Device, Virtual Computers, and Connectivity tabs. Go back
to the previous panel (Figure 12-12) and double-click one of your virtual computers to open
the L2 view of a virtual computer.

Figure 12-13 L2 Computer Topology Viewer



13.The virtual computer L2 overview shows detailed information about Device and
Connectivity characteristics. The Device tab shows additional controller and VM disks
information. The Connectivity tab shows information about additional switches.

Figure 12-14 L2 Overview information about Device and Connectivity characteristics



For reports, go to Data Manager → Reporting → Capacity → Filesystem Capacity →
By Computer (see Figure 12-15). Choose the columns you want to include in the report
and click Generate report.

Figure 12-15 Generate reports



A Filesystem Capacity: By Computer report has been generated that looks similar to
Figure 12-16.

Figure 12-16 Filesystem Capacity: By Computer report

You must probe both the ESX Server and the Data agent in the virtual machines before
you can generate accurate reports for disk and file system capacity. For example, suppose
that you have an ESX Server with 100 GB of storage, of which 60 GB is allocated to the
virtual machine, and the virtual machine uses 5 GB of that space. Both the ESX Server (H1)
and the virtual machine (VC1) have been probed. You also have a physical computer (PC1)
that has been probed. The TOTAL capacity for the file system or disk capacity row includes
everything: virtualized disks and virtual machines as well as non-virtualized disks and
machines.




Appendix A. Configuring X11 forwarding


In this appendix we show the step-by-step installation and configuration of tools used to
achieve X11 forwarding in a firewalled environment. This includes the installation of the
prerequisite components on the UNIX/Linux side as well as on the Windows workstation side.
By following the instructions, you can use the graphical installers of the UNIX/Linux
distributions of TPC V3.3 from your Windows workstation.



Preparing the display export
The different installers used to install the various products described in this book use a
graphical user interface by default. If you do not want to use a GUI to install a product, you
should check whether it qualifies for a silent installation or an installation via console. Some of
the installers can be invoked with the -console option for operation without GUI.

The solution described here to achieve a display export is one of many possible ways to do it.
Our servers and the environment we use are behind a firewall, which does not allow
connections to be made from the AIX server behind the firewall to the machines in front of the
firewall. Therefore we decided to implement the following solution, which is based on the use
of ssh, ssl, rpm, cygwin, and PuTTY. The solution is described utilizing an AIX server and a
Windows workstation. It will also work with other UNIX distributions and Linux if the involved
tools are applied properly.

Preparation of the AIX server


To install some of the tools used on AIX, we utilize the tool rpm. Most Linux distributions
already have rpm preinstalled; for AIX you have to install it separately.
򐂰 rpm
The tool rpm for AIX is part of the AIX Toolbox for Linux Applications. It contains open
source packages available for installation on AIX 5L™. You can find the rpm tool and more
information about the AIX Toolbox for Linux Applications on:
http://www.ibm.com/servers/aix/products/aixos/linux/download.html
or directly download the rpm tool from:
ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/INSTALLP/ppc/rpm.rte
After download, install the rpm tool using smitty / installp.
򐂰 openssl
Also part of the AIX Toolbox for Linux Applications but within the section Cryptographic
Content for AIX is the openssl package for AIX. To download cryptographic content, you
have to log on to the download site with your IBM ID. If you do not have an IBM ID already,
you can apply for one on the site; it is free.
Access OpenSSL -- SSL Cryptographic Libraries → openssl - Secure Sockets Layer
and cryptography libraries and tools → openssl-0.9.7l-1.aix5.1.ppc.rpm
and download it.
After download, install the openssl package for AIX using rpm (a sample rpm command is shown after this list).
򐂰 openssh
The third component that is used on AIX in this solution is the OpenSSH on AIX package.
It is available as part of the AIX 5L Expansion Pack and Web Download Pack or on the
open source software Web site, sourceforge.
The AIX 5L Expansion Pack and Web Download Pack is a collection of extra software that
extends the base operating system capabilities. It is available at:
http://www.ibm.com/servers/aix/expansionpack/index.html
The open source software project OpenSSH on AIX can be reached via:
https://sourceforge.net/projects/openssh-aix/
Access OpenSSH on AIX → openssh-4.3p2-r2 and download the package.
After download, install the OpenSSH on AIX package using smitty / installp.
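As referenced in the openssl item above, a typical rpm invocation to install the downloaded
package looks like the following; this is only a sketch, and the file name must match the
version you actually downloaded:
rpm -Uvh openssl-0.9.7l-1.aix5.1.ppc.rpm
The -U option installs or upgrades the package, and -vh prints verbose output with a
progress indicator.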



Preparation of the Windows workstation
The tool PuTTY will be used to connect to the AIX server utilizing ssh and enabling X11
forwarding. This tool is optional, as you can also achieve the a successful X11 forwarding with
the use of only cygwin.
򐂰 PuTTY
PuTTY is a client program for the SSH, Telnet and Rlogin network protocols. It is available
for download at:
http://www.chiark.greenend.org.uk/~sgtatham/putty/
Download and install the tool by either executing the installer or simply by extracting /
copying the files to a folder of your choice, depending on your download selection.
򐂰 cygwin
cygwin is a Linux-like environment for Windows. It is available for download at:
http://www.cygwin.com/

cygwin installation
Download setup.exe from the cygwin Web site, and then install it as follows:
1. On the Windows workstation you want to use to receive the X11 forwarding, double-click
setup.exe. The cygwin setup will start and welcome you with a panel similar to the one
shown in Figure A-1.

Figure A-1 Cygwin Setup

2. Click Next to continue with the installation. A new window similar to Figure A-2 will display
and ask you to choose your installation type. Assuming that this is the first time you are
installing cygwin, you select the option Install from Internet to access the product source
files from an online repository. Click Next.



Figure A-2 Cygwin source install

3. The installation will ask you to choose an installation directory. Best practice is to leave the
options at their default values, as shown in Figure A-3. Try to avoid adding
additional directory levels, different drive locations, or spaces in the directory name and
path. Click Next to continue.

Figure A-3 Cygwin root directory install



4. This dialog will prompt you for a local package directory (see Figure A-4). It is used to
save the downloaded files to the hard disk for the current installation and can also be
reused for further installations and local network distribution. Choose a folder that has
enough free space and click Next to continue.

Figure A-4 Cygwin local package directory

This dialog lets you specify your type of Internet connection. If you use a proxy or other
special settings to connect to the Internet, you can specify them here (see Figure A-5).
Click Next to continue.

Figure A-5 Cygwin Setup Internet connection type



The installation starts to connect to the Internet and downloads a list of possible mirror sites
from which you will be able to choose (see Figure A-6).

Figure A-6 Cygwin Setup download progress

If you use a firewall product, make sure that the installer is allowed to connect to the Internet
(see Figure A-7).

Figure A-7 Symantec Client Firewall settings



The installer prompts you to choose a download site (see Figure A-8). Besides choosing one
from the list that is shown, you can also specify your own, which you can look up on the
cygwin Web page. To speed up your installation, you should choose a mirror site close to the
location of your installation. Select a mirror site and click Next to continue.

Figure A-8 Cygwin download site

This dialog lets you select the packages to install. Scroll through the packages and open
All → X11 → xwinwm (see Figure A-9). Make sure that it is selected for installation.
Additional required packages will automatically be installed. Click Next to continue.

Figure A-9 Packages to install



The installation starts downloading data from the Internet and installing it to your hard drive
as shown in Figure A-10.

Figure A-10 Cygwin Setup progress

After the installation is finished, you are prompted to choose whether to create a desktop icon
and a start menu icon. Select what fits you best and click Finish to end the installation
(see Figure A-11).

Figure A-11 Cygwin Setup -Create Icons panel

Congratulations, you have successfully installed cygwin. Click OK to end the installation
(see Figure A-12).



Figure A-12 Cygwin Installation Complete

The second part of this solution is optional. To achieve X11 forwarding from the AIX system to
your Windows workstation, you need to open an ssh connection to the AIX system. This can
be done using cygwin or using PuTTY.

To open an ssh connection to the AIX server using cygwin, open a cygwin bash shell:
Start → All Programs → cygwin → Cygwin Bash Shell

From within the newly opened cygwin bash shell window, issue the following command to
start the X11 server on your Windows workstation:
Xwin -multiwindow

The -multiwindow option to the XWin command will make all X11 windows forwarded from
any server appear in their own, separate Windows window. If you prefer to have all the
forwarded X11 windows appear in one single Windows window, just start XWin without any
options.

Now open a local xterm window. Either click:

Start → All Programs → Cygwin-X → xterm

or click:
Start → Run...

and enter:
C:\cygwin\bin\run.exe -p /usr/X11R6/bin xterm -display 127.0.0.1:0.0 -ls

From within the newly created xterm window, enter the following commands:
xhost +

This command disables X server access control and allows all external hosts to forward X11
to your machine.
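If you prefer not to open the display to every host, you can restrict access to the AIX server
only (a more limited variant; substitute your server's host name or IP address):
xhost +<AIX-server-hostname-or-IP>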
ssh -X root@9.43.86.101

This command will create an ssh connection to the remote machine 9.43.86.101 as root user
and enable X11 forwarding through that tunnel.
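As a quick sanity check (optional, and assuming the sshd_config settings shown later in this
appendix), you can verify that the forwarded display is set on the remote machine:
echo $DISPLAY
With X11UseLocalhost yes and X11DisplayOffset 10, the output typically looks like
localhost:10.0.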

After successfully connecting to the remote machine, start a graphical terminal window from
the remote host to verify that X11 forwarding is working by issuing the following command
(see Figure A-13):
xterm &



Figure A-13 Start a graphical terminal window

This will open a graphical terminal window from the remote machine exported to your local
workstation. Congratulations, you have successfully configured X11 forwarding.

PuTTY installation
If you want to use PuTTY to achieve X11 forwarding, you first need to install it. Download the
executable from:
http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

and put it into a directory on your workstation or download and use the PuTTY installer.

Either way, you will be able to start PuTTY by simply executing the command:
putty.exe
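As a shortcut, PuTTY also accepts the connection details and the X11 forwarding option
directly on the command line; the host name below is only a placeholder:
putty.exe -ssh -X root@<AIX-server>
The -ssh option selects the SSH protocol and -X enables X11 forwarding for that session,
which matches the GUI settings described next.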



Enter the connection information for the AIX machine and make sure to choose SSH as the
protocol, as shown in Figure A-14.

Figure A-14 PuTTY session options

On the left side of the PuTTY window, browse to Connection → SSH → X11 and check the
Enable X11 forwarding check box (see Figure A-15).

Figure A-15 PuTTY configuration options



You are now done. It is considered a best practice to browse back to Session and specify a
name for the session to save it for further reuse. Click Save to save the session. Then click
Open to initiate the connection to the AIX server.

Setup for the AIX server


Edit the /etc/ssh/sshd_config file and make sure the following lines are present and not
commented out:

X11Forwarding yes

X11DisplayOffset 10

X11UseLocalhost yes
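
A quick way to verify the three entries (a simple check, assuming the standard configuration
file location) is:

grep -E "^(X11Forwarding|X11DisplayOffset|X11UseLocalhost)" /etc/ssh/sshd_config

Only lines that are present and not commented out will be printed.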

Restart the ssh daemon by issuing this command:

kill -HUP <process number of sshd>

To find out the process number of the sshd, issue:

ps -ef | grep sshd

Carefully browse through the results of the command and identify the ssh daemon.

root@azov.itsosj.sanjose.ibm.com:/>ps -ef | grep sshd

root 225430 237866 0 13:57:59 pts/2 0:00 grep sshd

root 180584 197108 0 13:43:04 - 0:00 sshd: root@pts/2

root 197108 102788 0 May 02 - 0:05 /usr/sbin/sshd

root 225712 197108 0 10:03:22 - 0:03 sshd: root@pts/0

In our example the ssh daemon has the process number 197108. To restart our ssh daemon
issue:
kill -HUP 197108
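
If your sshd maintains a PID file (the OpenSSH default location is /var/run/sshd.pid, but
verify the path on your system), the lookup and the restart can be combined into one command:
kill -HUP $(cat /var/run/sshd.pid)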

You might have to disconnect your ssh session now and reconnect to enable the new settings
for your session.

Then, on the Windows workstation, start cygwin by clicking Start → All Programs →
cygwin → Cygwin Bash Shell

From within the newly opened cygwin bash shell window, issue Xwin -multiwindow to start the
X11 server on your Windows workstation. The -multiwindow option to the XWin command will
make all X11 windows forwarded from any server to appear in their own, separate Windows
window. If you prefer to have all the X11 forwardings appear in one single Windows window,
just start XWin without any options.

With the cygwin bash shell window still open and the X11 server on your Windows
workstation running, go back to the PuTTY window which should still be open and issue:
xterm &



A remote xterm window will open on your local workstation as shown in Figure A-16.

Figure A-16 Remote Xterm window

Congratulations, you have successfully configured X11 forwarding.




Appendix B. Worksheets
This appendix contains worksheets for you to use during the planning and installation of
the TotalStorage Productivity Center. The worksheets are meant to be examples, so you can
decide whether you need to use them; for example, you might already have all or most of the
information collected somewhere else.

If the tables are too small for your handwriting, or you want to store the information in an
electronic format, simply use a word processor or spreadsheet application, with our
examples as a guide, to create your own installation worksheets.

This appendix contains the following worksheets:


򐂰 User IDs and passwords
򐂰 Storage device information:
– IBM TotalStorage Enterprise Storage Server® (ESS)
– IBM Fibre Array Storage Technology (FAStT)
– IBM SAN Volume Controller



User IDs and passwords
We created a table to help you write down the user IDs and passwords that you will use
during the installation of IBM TotalStorage Productivity Center, for reference during the
installation of the components and for future add-ons and agent deployment. Use this table
for planning purposes.

You need one of the worksheets in the following sections for each machine where at least one
of the components or agents of Productivity Center will be installed. This is because you can
have multiple DB2 databases or logon accounts and you need to remember the IDs of each
DB2 individually.

Server information
Table B-1 contains detailed information about the servers that comprise the TotalStorage
Productivity Center environment.

Table B-1 Productivity Center server


Server Configuration information

Machine

Hostname

IP address ____.____.____.____

In Table B-2, simply mark whether a manager or a component will be installed on this
machine.

Table B-2 Managers/components installed


Manager/component Installed (y/n)?

Productivity Center for Disk

Productivity Center for Replication

Productivity Center for Fabric

Productivity Center for Data

Tivoli Agent Manager

DB2



User IDs and passwords for key files and installation
Use Table B-3 to note the password that you used to lock the key file.

Table B-3 Password used to lock the key files


Default key file name Key file name Password

agentTrust.jks

Enter the user IDs and password that you used during the installation in Table B-4. Depending
on the selected managers and components, some of the lines are not used for this machine.

Table B-4 User IDs used on this machine

Element                                           Default/recommended user ID   Enter user ID   Enter password
DB2 DAS User                                      db2admin (a)
DB2 Instance Owner                                db2inst1
DB2 Fenced User                                   db2fenc1
Resource Manager                                  manager (b)
Common Agent                                      AgentMgr (b)
Common Agent                                      itcauser (b)
TotalStorage Productivity Center universal user   tpcsuid (a)
IBM WebSphere (c)
Host Authentication (c)

a. This account can have any name you choose.
b. This account name cannot be changed during the installation.



Storage device information
This section contains worksheets which you can use to gather important information about
the storage devices that will be managed by TotalStorage Productivity Center. You need to
have this information during the configuration of the Productivity Center. You need some of
the information before you install the device specific Common Object Model (CIM) Agent,
because this sometimes depends on a specific code level.

Determine if there are firewalls in the IP path between the TotalStorage Productivity Center
server or servers and the devices, which might not allow the necessary communication. In the
first column of each table, enter as much information as possible to identify the devices later.

IBM System Storage Enterprise Storage Server/DS6000/DS8000


Use Table B-5 to collect the information about your IBM System Storage devices.

Important: Check the device support matrix for the associated CIM Agent.

Table B-5 Enterprise Storage Server/DS6000/DS8000


Subsystem type, Name, location, organization | Both IP addresses | LIC level | User name | Password | CIM Agent host name and protocol



IBM DS4000
Use the following table to collect the information about your DS4000™ devices.

Name, location, organization | Firmware level | IP address | CIM Agent host name and protocol



IBM SAN Volume Controller
Use Table B-6 to collect the information about your SVC devices.

Table B-6 SAN Volume Controller devices


Name, location, organization | Firmware level | IP address | User ID | Password | CIM Agent host name and protocol




Appendix C. Performance metrics in TPC Performance Reports

This appendix contains a list of performance metrics for IBM TotalStorage Productivity Center
Performance Reports, with explanations of their meanings.



Performance metric collection
We begin with a high-level discussion of the way the IBM TotalStorage Productivity Center
Performance Manager collects performance metrics from storage devices and switches. The
performance counters are usually kept in device firmware, pulled out for processing by CIM
agents, and forwarded to TPC for final calculations and insertion into the TPC database. For
most devices, the counters kept in firmware are monotonically increasing values. Over time,
these values go up and up and only up. Consequently, it is necessary to pull two samples of
the counters, separated by some number of seconds, in order to take the difference in the
counters and calculate metrics like I/O rates using the known time between samples.

For example, each time an I/O (a read or write) is issued to a volume, several counters (I/O
count, Bytes transferred) are incremented. If the counters are pulled at times T1 and T2, the
number of I/Os in the sample interval is obtained by subtracting the counters at time T1 from
the counters at time T2 (T2-T1). When this count is divided by the number of seconds
between T1 and T2, we obtain the I/O rate in I/Os/second for the sample interval (T1 to T2).
This is the technique, and is pretty simple for metrics like I/O rate, data rate, average transfer
size, and so forth. Other metrics, like Read hit ratios or Disk Utilization involve other
calculations involving sampled counters and times T1 and T2.
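
As a concrete illustration of this delta arithmetic (a sketch only, with invented counter values
rather than output from a real subsystem), the following command performs the T2 minus T1
calculation for a read I/O counter over a 300-second interval:

echo "120000 123000 300" | awk '{printf "Read I/O Rate = %.1f ops/sec\n", ($2 - $1) / $3}'

This prints Read I/O Rate = 10.0 ops/sec. The same pattern, counter delta divided by interval
length, yields the data rates, and dividing the byte delta by the I/O delta gives the average
transfer size.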

The counters in the firmware are usually unsigned 32 or 64 bit counters. Eventually, these
counters will “wrap,” meaning that the difference between the counters at T2 and T1 might be
difficult to interpret. The TPC Performance Manager attempts to adjust for such wraps during
its delta computations, but there might be unexpected wraps which can confuse the CIM
agent or the TPC Performance Manager. The TPC Performance Manger stores the deltas in
the database. Some counters are also stored in the TPC database, but the performance data
is mostly comprised of rates and other calculated metrics that depend on the counter deltas
and the sample interval, that is, the time between T1 and T2.
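
The following sketch (again with invented values) shows the kind of adjustment that is needed
when an unsigned 32-bit counter wraps between samples, so that the raw T2 minus T1
difference would otherwise come out negative:

echo "4294967000 1000" | awk '{d = $2 - $1; if (d < 0) d += 4294967296; print "adjusted delta =", d}'

Here the counter was just below 2^32 at T1 and had wrapped past zero by T2, so the corrected
delta is 1296 rather than a large negative number.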

The primary and essential performance metrics are few and simple - for example, Read I/O
Rate, Write I/O Rate, Read Response Time and Write Response Time. Also important are
data rates and transfer sizes. Then come the cache behaviors, in the form of Read Hit Ratio
and Write Cache delays (percentages and rates). There are a myriad of additional metrics in
the TPC performance reports, but they should be used as adjuncts to the primary metrics,
sometimes helping to understand why the primary metrics have the values they do.

There are a very few metrics which measure other kinds of values. For example, the SVC
storage subsystem also reports the maximum read and write response times which occur
between times T1 and T2. Each time a sample of the counters is pulled, this kind of counter is
set back to zero. But the vast majority of counters are monotonically increasing, reset to zero
only by very particular circumstances (like hardware, software, or firmware resets).

The design of the TPC Performance Manager is such that several storage subsystems can be
included in a report (or individual subsystems by selection or filtering). But not all the metrics
apply to every subsystem or component. In these cases, a "-1" appears, indicating that no
data is expected for the metric in this particular case.

In the remainder of this section we look at the metrics that can be selected for each report.
We examine the reports in the order in which they appear in the TPC Navigation Tree.

Reports under Disk Manager


Storage Subsystem Performance
򐂰 By Storage Subsystem
򐂰 By Controller
򐂰 By I/O Group



򐂰 By Node
򐂰 By Array
򐂰 By Managed Disk Group
򐂰 By Volume
򐂰 By Managed Disk
򐂰 By Port

Reports under Fabric Manager


Switch Performance
򐂰 By Port

By Storage Subsystem report


Critical and universal metrics are in bold. Less important metrics are in normal font, and
difficult ones to interpret are in italics.

Subsystem Component id

Time Start time of the sample interval

Interval Length of sample interval in seconds

Read I/O Rate (normal) Average number of normal read operations per second for
the sample interval. Normal operations are not sequential,
hence random. This metric is only for ESS, DS8000,
DS6000

Read I/O Rate (sequential) Average number of sequential read operations per second
for the sample interval. Sequential I/O is detected by the
subsystem. This metric is only for ESS, DS8000, DS6000

Read I/O Rate (overall) Average number of read operations per second for the
sample interval. Applies to most subsystems.

Write I/O Rate (normal) Average number of normal write operations per second for
the sample interval. Normal operations are not sequential,
hence random. This metric is only for ESS, DS8000,
DS6000

Write I/O Rate (sequential) Average number of sequential write operations per second
for the sample interval. Sequential I/O is detected by the
subsystem. This metric is only for ESS, DS8000, DS6000

Write I/O Rate (overall) Average number of write operations per second for the
sample interval. Applies to most subsystems.

Total I/O Rate (normal) Average number of normal reads and writes per second for
the sample interval. Normal operations are not sequential,
hence random. This metric is only for ESS, DS8000,
DS6000

Total I/O Rate (sequential) Average number of sequential reads and writes per second
for the sample interval. Sequential I/O is detected by the
subsystem. This metric is only for ESS, DS8000, DS6000

Total I/O Rate (overall) Average number of reads and writes per second for the
sample interval. Applies to most subsystems.



Read Cache Hit Percentage (normal) Percentage of normal (random) reads that are cache hits
during the sample interval. Only for ESS, DS8000, DS6000

Read Cache Hits Percentage Percentage of sequential reads that are cache hits in the
(sequential) sample interval. Only for ESS, DS8000, DS6000.

Read Cache Hits Percentage Percentage of reads during the sample interval that are
(overall) found in cache. This is an important metric.

Write Cache Hits Percentage Percentage of normal (random) writes that are handled in
(normal) cache. This number should be 100%. Only for ESS, DS8K,
DS6K

Write Cache Hits Percentage Percentage of sequential writes that are handled in cache.
(sequential) This number should be 100%. Only for ESS, DS8K, DS6K

Write Cache Hits Percentage (overall) Percentage of writes that are handled in cache. This number
should be 100% for most enterprise storage.

Total Cache Hits Percentage (normal) Percentage of normal reads and writes that are cache hits
during the sample interval.

Total Cache Hits Percentage Percentage of sequential reads and writes that are cache
(sequential) hits during the sample interval.

Total Cache Hits Percentage (overall) Weighted average of read cache hits and write cache hits.

Read Data Rate Average read data rate in megabytes per second during the
sample interval

Write Data Rate Average write data rate in megabytes per second during the
sample interval

Total Data Rate Average total (read + write) data rate in megabytes per
second during the sample interval.

Read Response Time Average response time in milliseconds for reads during the
sample interval. For this report, this is an average of read
hits in cache as well as read misses.

Write Response Time Average response time in milliseconds for writes during the
sample interval.

Overall Response Time Average response time in milliseconds for all I/O in the
sample interval, including both cache hits as well as misses
to backing storage if required.

Read Transfer Size Average transfer size in kilobytes for reads during the
sample interval

Write Transfer Size Average transfer size in kilobytes for writes during the
sample interval

Overall Transfer Size Average transfer size in kilobytes for all I/O during the
sample interval.

Record Mode Read I/O Rate This is the rate in I/O/sec for a special kind of read activity
detected by ESS, DS8K and DS6K. Only the requested data
is managed in cache rather than a full track or most of a track
of data.

Record Mode Read Cache Hit Read Hit percentage for the special class of reads
Percentage mentioned above. ESS, DS8k, DS6K only.



Disk to Cache Transfer Rate Average number of track transfers per second from disk to
cache during the sample interval.

Cache to Disk Transfer Rate Average number of track transfers per second from cache to
disk during the sample interval.

Write-cache Delay Percentage Percentage of all I/O operations that were delayed due to
write-cache space constraints or other conditions during the
sample interval. Only writes can be delayed, but the
percentage is of all I/O.

Write-cache Delay I/O Rate The rate of I/O (actually writes) that are delayed during the
sample interval because of write cache.

Cache Holding Time The average number of seconds a piece of data will stay in
cache. This value is calculated using Little's Law, only for
DS8K, DS6K, and ESS.

Backend Read I/O Rate The average read rate in reads per second caused by read
misses. This is the read rate to the back-end storage for the
sample interval.

Backend Write I/O Rate The average write rate in writes per second caused by
front-end write activity. This is the write rate to the back-end
storage for the sample interval. These are logical writes and
the actual number of physical I/O operations depends on
whether the storage is RAID 5, RAID 10, or some other
architecture.

Total Backend I/O Rate The sum of Backend Read I/O Rate and Backend Write I/O
Rate over the sample interval.

Backend Read Data Rate Average number of megabytes per second read from
back-end storage during the sample interval.

Backend Write Data Rate Average number of megabytes per second written to
back-end storage during the sample interval.

Total Backend Data Rate Sum of the Backend Read and Write Data Rates for the
sample interval.

Backend Read Response Time Average response time in milliseconds for read operations to
the back-end storage.

Backend Write Response Time Average response time in milliseconds for write operations
to the back-end storage. This time can include several
physical I/O operations, depending on the type of RAID
architecture.

Overall Backend Response Time The weighted average of Backend read and write response
times during the sample interval.

Backend Read Transfer Size The average transfer size in kilobytes for reads to the
back-end storage during the sample interval.

Backend Write Transfer Size The average transfer size in kilobytes for data written to the
back-end storage during the sample interval.

Overall Backend Transfer Size Weighted average transfer size in kilobytes for backend
reads and writes during the sample interval.

Port Send I/O Rate The average rate per second for operations that send data
from an I/O port, typically to a server. This is typically a read
from the server's perspective.



Port Receive I/O Rate The average rate per second for operations where the
storage port receives data, typically from a server. This is
typically a write from the server's perspective.

Total Port I/O Rate Average read plus write I/O rate per second at the storage
port during the sample interval.

Port Send Data Rate The average data rate in megabytes per second for
operations that send data from an I/O port, typically to a
server.

Port Receive Data Rate The average data rate in megabytes per second for
operations where the storage port receives data, typically
from a server.

Total Port Data Rate Average read plus write data rate in megabytes per second
at the storage port during the sample interval

Port Send Response Time Average number of milliseconds that it took to service each
port send (server read) operation, for a particular port over
the sample interval.

Port Receive Response Time Average number of milliseconds that it took to service each
port receive (server write) operation, for a particular port
over the sample interval.

Total Port Response Time Weighted average port send and port receive time over the
sample interval.

Port Send Transfer Size Average size in kilobytes per Port Send operation during the
sample interval.

Port Receive Transfer Size Average size in kilobytes per Port Receive operation during
the sample interval.

Total Port Transfer Size Average size in kilobytes per port transfer during the sample
interval.

Read Queue Time For SVC, the average number of milliseconds that each
read operation during the sample interval spent on the
queue before being issued to the back-end storage device

Write Queue Time For SVC, the average number of milliseconds that each
write operation during the sample interval spent on the
queue before being issued to the back-end storage device

Overall Queue Time For SVC, the weighted average of Read Queue Time and
Write Queue Time during the sample interval.

Readahead Percentage of Cache For SVC, an obscure measurement of cache hits involving
Hits data that has been prestaged for one reason or another.

Dirty Write Percentage of Cache Hits For SVC, the percentage of write cache hits which modified
only data that was already marked "dirty" in the cache;
re-written data. This is an obscure measurement of how
effectively writes are coalesced before destaging.

Write Cache Overflow Percentage For SVC the percentage of write operations that were
delayed due to lack of write-cache space during the sample
interval.

Write Cache Overflow I/O Rate For SVC, the average rate per second of write operations
that were delayed due to lack of write-cache space during
the sample interval.



Write Cache Flush-through For SVC, the percentage of write operations that were
Percentage processed in Flush-through write mode during the sample
interval.

Write Cache Flush-through I/O Rate For SVC, the average rate per second of tracks processed in
Flush-through write mode during the sample interval.

Write Cache Write-through For SVC the percentage of write operations that were
Percentage processed in Write-through write mode during the sample
interval.

Write Cache Write-through I/O Rate For SVC, the average number of tracks per second that were
processed in Write-through write mode during the sample
interval.

CPU Utilization Percentage For SVC the average utilization of the cluster node
controllers during the sample interval.

Port to Host Send I/O Rate For SVC, the rate per second of port send to host (server)
during the sample interval.

Port to Host Receive I/O Rate For SVC, the rate per second of port receive operations from
host (server) during the sample interval.

Total Port to Host I/O Rate For SVC, total of port send and receive IO rate during the
sample interval.

Port to Disk Send I/O Rate For SVC, the rate per second of port send to back-end
storage during the sample interval.

Port to Disk Receive I/O Rate For SVC, the rate per second of port receive operations from
back-end storage during the sample interval.

Total Port to Disk I/O Rate For SVC, the sum of port to disk send and port to disk
receive rates during the sample interval.

Port to Local Node Send I/O Rate For SVC, the rate per second at which a port sends I/O to
other nodes in the local cluster during the sample interval.

Port to Local Node Receive I/O Rate For SVC, the rate at which a port receives I/O from other
nodes in the local cluster during the sample interval

Total Port to Local Node I/O Rate For SVC, the sum of port to local node send and receive
rates during the sample interval.

Port to Remote Node Send I/O Rate For SVC, the average number of exchanges (I/Os) per
second sent to nodes in the remote SVC cluster during the
sample interval. Typically some form of remote mirroring.

Port to Remote Node Receive I/O For SVC, the average number of exchanges (I/Os) per
Rate second received from nodes in the remote SVC cluster
during the sample interval. Typically some form of remote
mirroring.

Total Port to Remote Node I/O Rate For SVC, the sum of port to remote node send and receive
I/O per second during the sample interval.

Port to Host Send Data Rate For SVC, the megabytes per second of port send to host
(server) during the sample interval.

Port to Host Receive Data Rate For SVC, the megabytes per second of port receive
operations from host (server) during the sample interval.

Total Port to Host Data Rate For SVC, total of port send and receive megabytes per
second during the sample interval.



Port to Disk Send Data Rate For SVC, the megabytes per second of port send to
back-end storage during the sample interval.

Port to Disk Receive Data Rate For SVC, the megabytes per second of port receive
operations from back-end storage during the sample interval

Total Port to Disk Data Rate For SVC, the sum of port to disk send and port to disk
receive megabytes per second during the sample interval.

Port to Local Node Send Data Rate For SVC, the megabytes per second at which a port sends
I/O to other nodes in the local cluster during the sample
interval.

Port to Local Node Receive Data For SVC, the megabytes per second at which a port receives
Rate I/O from other nodes in the local cluster during the sample
interval

Total Port to Local Node Data Rate For SVC, the sum of port to local node send and receive
megabytes per second during the sample interval.

Port to Remote Node Send Data Rate For SVC, the average number of megabytes per second
sent to nodes in the remote SVC cluster during the sample
interval. Typically some form of remote mirroring.

Port to Remote Node Receive Data For SVC, the average number of megabytes per second
Rate received from nodes in the remote SVC cluster during the
sample interval. Typically some form of remote mirroring.

Total Port to Remote Node Data Rate For SVC, the sum of port to remote node send and receive
megabytes per second during the sample interval.

Port to Local Node Send Response For SVC the average port service time in milliseconds for
Time these operations during the sample interval.

Port to Local Node Receive For SVC the average port service time in milliseconds for
Response Time this operation during the sample interval.

Overall Port to Local Node Response For SVC the average port service time in milliseconds for
Time these operations during the sample interval.

Port to Local Node Send Queue Time For SVC, the average time in msec waiting in queue before
these send operations are executed.

Port to Local Node Receive Queue For SVC, the average time in msec waiting in queue before
Time these receive operations are executed.

Overall Port to Local Node Queue For SVC, the average time in msec. waiting before these
Time port send or port receive operations are executed.

Port to Remote Node Send Response For SVC the average port service time in milliseconds for
Time these operations during the sample interval.

Port to Remote Node Receive For SVC the average port service time in milliseconds for
Response Time these operations during the sample interval.

Overall Port to Remote Node For SVC the average port service time in milliseconds for
Response Time these operations during the sample interval.

Port to Remote Node Send Queue For SVC, the average time in msec waiting in queue before
Time these send operations are executed.

Port to Remote Node Receive Queue For SVC, the average time in msec waiting in queue before
Time these receive operations are executed.



Overall Port to Remote Node Queue For SVC, the average time in msec waiting in queue before
Time these port send or port receive operations are executed.

Global Mirror Write I/O Rate For SVC, the rate in writes per second issued to the
secondary site for Global Mirror during the sample interval

Global Mirror Overlapping Write For SVC, the percentage of writes during the sample
Percentage interval, for which the write operations at the primary site for
Global Mirror have overlapping write domains.

Global Mirror Overlapping Write I/O For SVC, the average rate in writes per second during the
Rate sample interval, for which the write operations at the primary
site for Global Mirror have overlapping write domains.

Peak Read Response Time For SVC, the peak read response time in msec observed
during the sample interval. At the end of each sample
interval, this value is reset to zero.

Peak Write Response Time For SVC, the peak write response time in msec observed
during the sample interval. At the end of each sample
interval, this value is reset to zero.

Global Mirror Secondary Write Lag For SVC, the number of additional milliseconds it took to
service each secondary write operation for Global Mirror,
over and above the time needed to service the primary
writes during the sample interval.

By Controller report
This report is only for DS8000, DS6000, and ESS.

Critical and universal metrics are in bold. Less important metrics are in normal font, and
difficult metrics to interpret are in italics.

Subsystem Component id

Controller Component id

Time Start time of the sample interval

Interval Length of sample interval in seconds

Read I/O Rate (normal) Average number of normal read operations per second for
the sample interval. Normal operations are not sequential,
hence random.

Read I/O Rate (sequential) Average number of sequential read operations per second
for the sample interval. Sequential IO is detected by the
subsystem.

Read I/O Rate (overall) Average number of read operations per second for the
sample interval.

Write I/O Rate (normal) Average number of normal write operations per second for
the sample interval. Normal operations are not sequential,
hence random.

Write I/O Rate (sequential) Average number of sequential write operations per second
for the sample interval. Sequential I/O is detected by the
subsystem.



Write I/O Rate (overall) Average number of write operations per second for the
sample interval.

Total I/O Rate (normal) Average number of normal reads and writes per second for
the sample interval. Normal operations are not sequential,
hence random.

Total I/O Rate (sequential) Average number of sequential reads and writes per second
for the sample interval. Sequential I/O is detected by the
subsystem.

Total I/O Rate (overall) Average number of reads and writes per second for the
sample interval

Read Cache Hit Percentage (normal) Percentage of normal (random) reads that are cache hits
during the sample interval.

Read Cache Hits Percentage Percentage of sequential reads that are cache hits in the
(sequential) sample interval.

Read Cache Hits Percentage Percentage of reads during the sample interval that are
(overall) found in cache. This is an important metric.

Write Cache Hits Percentage (normal) Percentage of normal (random) writes that are handled in
cache. This number should be 100%.

Write Cache Hits Percentage Percentage of sequential writes that are handled in cache.
(sequential) This number should be 100%.

Write Cache Hits Percentage (overall) Percentage of writes that are handled in cache. This number
should be 100% for most enterprise storage.

Total Cache Hits Percentage (normal) Percentage of normal (random) reads and writes that are
cache hits during the sample interval.

Total Cache Hits Percentage Percentage of sequential reads and writes that are cache
(sequential) hits during the sample interval.

Total Cache Hits Percentage (overall) Weighted average of read cache hits and write cache hits.

Read Data Rate Average read data rate in megabytes per second during the
sample interval

Write Data Rate Average write data rate in megabytes per second during the
sample interval

Total Data Rate Average total (read + write) data rate in megabytes per
second during the sample interval.

Read Response Time Average response time in milliseconds for reads during the
sample interval. For this report, this is an average of read
hits in cache as well as read misses.

Write Response Time Average response time in milliseconds for writes during the
sample interval.

Overall Response Time Average response time in milliseconds for all IO in the
sample interval, including both cache hits as well as misses
to backing storage if required.

Read Transfer Size Average transfer size in kilobytes for reads during the
sample interval

Write Transfer Size Average transfer size in kilobytes for writes during the
sample interval



Overall Transfer Size Average transfer size in kilobytes for all I/O during the
sample interval.

Record Mode Read I/O Rate This is the rate in I/O/sec for a special kind of read activity
detected by ESS, DS8K and DS6K. Only the requested data
is managed in cache rather than a full track or most of a track
of data.

Record Mode Read Cache Hit Read Hit percentage for the special class of reads
Percentage mentioned above.

Disk to Cache Transfer Rate Average number of track transfers per second from disk to
cache during the sample interval.

Cache to Disk Transfer Rate Average number of track transfers per second from cache to
disk during the sample interval.

Write-cache Delay Percentage Percentage of all I/O operations that were delayed due to
write-cache space constraints or other conditions during the
sample interval. Only writes can be delayed, but the
percentage is of all I/O. This is sometimes called NVS Full.

Write-cache Delay I/O Rate The rate of I/O (actually writes) that are delayed during the
sample interval because of write cache, sometimes called
NVS Full.

Cache Holding Time The average number of seconds a piece of data will stay in
cache. This value is calculated using Little's Law.

Backend Read I/O Rate The average read rate in reads per second caused by read
misses. This is the read rate to the back-end RAID arrays for
the sample interval.

Backend Write I/O Rate The average write rate in writes per second caused by
front-end write activity. This is the write rate to the back-end
storage for the sample interval. These are logical writes and
the actual number of physical I/O operations depends on
whether the storage is RAID 5, RAID 10, or some other
architecture.

Total Backend I/O Rate The sum of Backend Read I/O Rate and Backend Write I/O
Rate over the sample interval.

Backend Read Data Rate Average number of megabytes per second read from
back-end storage during the sample interval.

Backend Write Data Rate Average number of megabytes per second written to
back-end storage during the sample interval.

Total Backend Data Rate Sum of the Backend Read and Write Data Rates for the
sample interval.

Backend Read Response Time Average response time in milliseconds for read operations
to the back-end storage.

Backend Write Response Time Average response time in milliseconds for write operations
to the back-end storage. This time can include several
physical I/O operations, depending on the type of RAID
architecture.

Overall Backend Response Time The weighted average of Backend read and write response
times during the sample interval.



Backend Read Transfer Size The average transfer size in kilobytes for reads to the
back-end storage during the sample interval.

Backend Write Transfer Size The average transfer size in kilobytes for data written to the
back-end storage during the sample interval.

Overall Backend Transfer Size Weighted average transfer size in kilobytes for backend
reads and writes during the sample interval.

By I/O Group report


This is an SVC-specific report.

Critical and universal metrics are in bold. Less important metrics are in normal font, and
difficult metrics to interpret are in italics.

Subsystem Name of the SVC Cluster

I/O Group The I/O Group id

Time Start time of the sample interval

Interval Length of sample interval in seconds

Read I/O Rate (overall) Average number of read operations per second for the sample
interval. Applies to most subsystems.

Write I/O Rate (overall) Average number of write operations per second for the sample
interval. Applies to most subsystems.

Total I/O Rate (overall) Average number of reads and writes per second for the sample
interval. Applies to most subsystems.

Read Cache Hits Percentage Percentage of reads during the sample interval that are found
(overall) in cache. This is an important metric.

Write Cache Hits Percentage Percentage of writes that are handled in cache during the
(overall) sample interval.

Total Cache Hits Percentage Weighted average of read cache hits and write cache hits
(overall) during the sample interval.

Read Data Rate Average read data rate in megabytes per second during the
sample interval

Write Data Rate Average write data rate in megabytes per second during the
sample interval

Total Data Rate Average total (read + write) data rate in megabytes per second
during the sample interval.

Read Response Time Average response time in milliseconds for reads during the
sample interval. For this report, this is an average of read hits
in cache as well as read misses.

Write Response Time Average response time in milliseconds for writes during the
sample interval.

Overall Response Time Average response time in milliseconds for all I/O in the sample
interval, including both cache hits as well as misses to backing
storage if required.

Read Transfer Size Average transfer size in kilobytes for reads during the sample
interval

Write Transfer Size Average transfer size in kilobytes for writes during the sample
interval

Overall Transfer Size Average transfer size in kilobytes for all I/O during the sample
interval.
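
The transfer-size metrics can be cross-checked against the rate metrics in the same report, since transfer size is essentially data rate divided by I/O rate. A sketch with assumed sample values:

total_data_rate_mb_per_sec = 100.0            # assumed
total_io_rate_per_sec = 6400.0                # assumed I/O per second
overall_transfer_size_kb = total_data_rate_mb_per_sec * 1024 / total_io_rate_per_sec
print(overall_transfer_size_kb)               # prints 16.0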

Disk to Cache Transfer Rate Average number of track transfers per second from disk to
cache during the sample interval.

Cache to Disk Transfer Rate Average number of track transfers per second from cache to
disk during the sample interval.

Write-cache Delay Percentage Percentage of all I/O operations that were delayed due to
write-cache space constraints or other conditions during the
sample interval. Only writes can be delayed, but the
percentage is of all I/O.
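
The denominator is easy to misread: the percentage is taken over all I/O, not over writes alone. A sketch with assumed counts for one sample interval:

reads, writes, delayed_writes = 7000, 3000, 150              # assumed counts
pct_of_all_io = 100.0 * delayed_writes / (reads + writes)    # 1.5 - what this metric reports
pct_of_writes = 100.0 * delayed_writes / writes              # 5.0 - a different, larger figure
print(pct_of_all_io, pct_of_writes)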

Write-cache Delay I/O Rate The rate of I/O (actually writes) that are delayed during the
sample interval because of write cache.

Backend Read I/O Rate The average read rate in reads per second caused by read
misses. This is the read rate to the back-end storage for the
sample interval.

Backend Write I/O Rate The average write rate in writes per second caused by
front-end write activity. This is the write rate to the back-end
storage for the sample interval. These are logical writes

Total Backend I/O Rate The sum of Backend Read I/O Rate and Backend Write I/O
Rate over the sample interval.

Backend Read Data Rate Average number of megabytes per second read from back-end
storage during the sample interval.

Backend Write Data Rate Average number of megabytes per second written to back-end
storage during the sample interval.

Total Backend Data Rate Sum of the Backend Read and Write Data Rates for the sample
interval.

Backend Read Response Time Average response time in milliseconds for read operations to
the back-end storage.

Backend Write Response Time Average response time in milliseconds for write operations to
the back-end storage. This time can include several physical
I/O operations, depending on the type of RAID architecture.

Overall Backend Response Time The weighted average of Backend read and write response
times during the sample interval.

Read Queue Time The average number of milliseconds that each read operation
during the sample interval spent on the queue before being
issued to the back-end storage device

Write Queue Time The average number of milliseconds that each write operation
during the sample interval spent on the queue before being
issued to the back-end storage device

Overall Queue Time The weighted average of Read Queue Time and Write Queue
Time during the sample interval.

Backend Read Transfer Size The average transfer size in kilobytes for reads to the back-end
storage during the sample interval.

Backend Write Transfer Size The average transfer size in kilobytes for data written to the
back-end storage during the sample interval.

Overall Backend Transfer Size Weighted average transfer size in kilobytes for backend reads
and writes during the sample interval.

Port Send I/O Rate The average rate per second for operations that send data from
an I/O port, typically to a server. This is typically a read from the
server's perspective.

Port Receive I/O Rate The average rate per second for operations where the storage
port receives data, typically from a server. This is typically a
write from the server's perspective.

Total Port I/O Rate Average read plus write I/O rate per second at the storage port
during the sample interval.

Port Send Data Rate The average data rate in megabytes per second for operations
that send data from an I/O port, typically to a server.

Port Receive Data Rate The average data rate in megabytes per second for operations
where the storage port receives data, typically from a server.

Total Port Data Rate Average read plus write data rate in megabytes per second at
the storage port during the sample interval

Readahead Percentage of Cache An obscure measurement of cache hits involving data that has
Hits been prestaged for one reason or another.

Dirty Write Percentage of Cache The percentage of write cache hits which modified only data
Hits that was already marked "dirty" in the cache; re-written data.
This is an obscure measurement of how effectively writes are
coalesced before destaging.

Write Cache Overflow Percentage For SVC the percentage of write operations that were delayed
due to lack of write-cache space during the sample interval.

Write Cache Overflow I/O Rate For SVC, the average rate per second of write operations that
were delayed due to lack of write-cache space during the
sample interval.

Write Cache Flush-through For SVC, the percentage of write operations that were
Percentage processed in Flush-through write mode during the sample
interval.

Write Cache Flush-through I/O For SVC, the average rate per second of tracks processed in
Rate Flush-through write mode during the sample interval.

Write Cache Write-through For SVC the percentage of write operations that were
Percentage processed in Write-through write mode during the sample
interval.

Write Cache Write-through I/O Rate For SVC, the average number of tracks per second that were
processed in Write-through write mode during the sample
interval.

CPU Utilization Percentage The average utilization of the node controllers in this I/O group
during the sample interval.

Port to Host Send I/O Rate For SVC, the rate per second of port send to host (server)
during the sample interval.

Port to Host Receive I/O Rate For SVC, the rate per second of port receive operations from
host (server) during the sample interval.

Total Port to Host I/O Rate For SVC, total of port send and receive I/O rate during the
sample interval.

Port to Disk Send I/O Rate For SVC, the rate per second of port send to back-end storage
during the sample interval.

Port to Disk Receive I/O Rate For SVC, the rate per second of port receive operations from
back-end storage during the sample interval.

Total Port to Disk I/O Rate For SVC, the sum of port to disk send and port to disk receive
rates during the sample interval.

Port to Local Node Send I/O Rate For SVC, the rate per second at which a port sends I/O to other
nodes in the local cluster during the sample interval.

Port to Local Node Receive I/O For SVC, the rate at which a port receives I/O from other nodes
Rate in the local cluster during the sample interval

Total Port to Local Node I/O Rate For SVC, the sum of port to local node send and receive rates
during the sample interval.

Port to Remote Node Send I/O Rate For SVC, the average number of exchanges (I/Os) per second
sent to nodes in the remote SVC cluster during the sample
interval. Typically some form of remote mirroring.

Port to Remote Node Receive I/O For SVC, the average number of exchanges (I/Os) per second
Rate received from nodes in the remote SVC cluster during the
sample interval. Typically some form of remote mirroring.

Total Port to Remote Node I/O Rate For SVC, the sum of port to remote node send and receive I/O
per second during the sample interval.

Port to Host Send Data Rate For SVC, the megabytes per second of port send to host
(server) during the sample interval.

Port to Host Receive Data Rate For SVC, the megabytes per second of port receive operations
from host (server) during the sample interval.

Total Port to Host Data Rate For SVC, total of port send and receive megabytes per second
during the sample interval.

Port to Disk Send Data Rate For SVC, the megabytes per second of port send to back-end
storage during the sample interval.

Port to Disk Receive Data Rate For SVC, the megabytes per second of port receive operations
from back-end storage during the sample interval

Total Port to Disk Data Rate For SVC, the sum of port to disk send and port to disk receive
megabytes per second during the sample interval.

Port to Local Node Send Data Rate For SVC, the megabytes per second at which a port sends I/O
to other nodes in the local cluster during the sample interval.

Port to Local Node Receive Data For SVC, the megabytes per second at which a port receives
Rate I/O from other nodes in the local cluster during the sample
interval

Total Port to Local Node Data Rate For SVC, the sum of port to local node send and receive
megabytes per second during the sample interval.

Port to Remote Node Send Data For SVC, the average number of megabytes per second sent
Rate to nodes in the remote SVC cluster during the sample interval.
Typically some form of remote mirroring.

Port to Remote Node Receive Data For SVC, the average number of megabytes per second
Rate received from nodes in the remote SVC cluster during the
sample interval. Typically some form of remote mirroring.

Total Port to Remote Node Data For SVC, the sum of port to remote node send and receive
Rate megabytes per second during the sample interval.

Port to Local Node Send Response For SVC the average port service time in milliseconds for these
Time operations during the sample interval.

Port to Local Node Receive For SVC the average port service time in milliseconds for this
Response Time operation during the sample interval.

Overall Port to Local Node For SVC the average port service time in milliseconds for these
Response Time operations during the sample interval.

Port to Local Node Send Queue For SVC, the average time in msec waiting in queue before
Time these send operations are executed.

Port to Local Node Receive Queue For SVC, the average time in msec waiting in queue before
Time these receive operations are executed.

Overall Port to Local Node Queue For SVC, the average time in msec. waiting before these port
Time send or receive operations are executed.

Port to Remote Node Send For SVC the average port service time in milliseconds for these
Response Time operations during the sample interval.

Port to Remote Node Receive For SVC the average port service time in milliseconds for these
Response Time operations during the sample interval.

Overall Port to Remote Node For SVC the average port service time in milliseconds for these
Response Time operations during the sample interval.

Port to Remote Node Send Queue For SVC, the average time in msec waiting in queue before
Time these send operations are executed.

Port to Remote Node Receive For SVC, the average time in msec waiting in queue before
Queue Time these receive operations are executed.

Overall Port to Remote Node For SVC, the average time in msec waiting in queue before
Queue Time these send or receive operations are executed.

Global Mirror Write I/O Rate For SVC, the rate in writes per second issued to the secondary
site for Global Mirror during the sample interval

Global Mirror Overlapping Write For SVC, the percentage of writes during the sample interval,
Percentage for which the write operations at the primary site for Global
Mirror have overlapping write domains.

Global Mirror Overlapping Write I/O For SVC, the average rate in writes per second during the
Rate sample interval, for which the write operations at the primary
site for Global Mirror have overlapping write domains.

Peak Read Response Time For SVC, the peak read response time in msec observed
during the sample interval. At the end of each sample interval,
this value is reset to zero.

Peak Write Response Time For SVC, the peak write response time in msec observed
during the sample interval. At the end of each sample interval,
this value is reset to zero.

Global Mirror Secondary Write Lag For SVC, the number of additional milliseconds it took to
service each secondary write operation for Global Mirror, over
and above the time needed to service the primary writes during
the sample interval.

By Node report
This is an SVC-specific report.

Critical and universal metrics are in bold. Less important ones are in normal font, and metrics that are difficult to interpret are in italics.

Subsystem Name of the SVC Cluster

I/O Group The I/O Group id

Time Start time of the sample interval

Interval Length of sample interval in seconds

Read I/O Rate (overall) Average number of read operations per second for the
sample interval. Applies to most subsystems.

Write I/O Rate (overall) Average number of write operations per second for the
sample interval. Applies to most subsystems.

Total I/O Rate (overall) Average number of reads and writes per second for the
sample interval. Applies to most subsystems.

Read Cache Hits Percentage Percentage of reads during the sample interval that are
(overall) found in cache. This is an important metric.

Write Cache Hits Percentage (overall) Percentage of writes that are handled in cache. This
number should be very nearly 100%

Total Cache Hits Percentage (overall) Weighted average of read cache hits and write cache hits.

Read Data Rate Average read data rate in megabytes per second during the
sample interval

Write Data Rate Average write data rate in megabytes per second during the
sample interval

Total Data Rate Average total (read + write) data rate in megabytes per
second during the sample interval.

Read Response Time Average response time in milliseconds for reads during the
sample interval. For this report, this is an average of read
hits in cache as well as read misses.

Write Response Time Average response time in milliseconds for writes during the
sample interval.

Overall Response Time Average response time in milliseconds for all I/O in the
sample interval, including both cache hits as well as misses
to backing storage if required.

Read Transfer Size Average transfer size in kilobytes for reads during the
sample interval

Write Transfer Size Average transfer size in kilobytes for writes during the
sample interval

Overall Transfer Size Average transfer size in kilobytes for all I/O during the
sample interval.

Disk to Cache Transfer Rate Average number of track transfers per second from disk to
cache during the sample interval.

Cache to Disk Transfer Rate Average number of track transfers per second from cache
to disk during the sample interval.

Write-cache Delay Percentage Percentage of all I/O operations that were delayed due to
write-cache space constraints or other conditions during the
sample interval. Only writes can be delayed, but the
percentage is of all I/O.

Write-cache Delay I/O Rate The rate of I/O (actually writes) that are delayed during the
sample interval because of write cache.

Backend Read I/O Rate The average read rate in reads per second caused by read
misses. This is the read rate to the back-end storage for the
sample interval.

Backend Write I/O Rate The average write rate in writes per second caused by
front-end write activity. This is the write rate to the back-end
storage for the sample interval. These are logical writes

Total Backend I/O Rate The sum of Backend Read I/O Rate and Backend Write I/O
Rate over the sample interval.

Backend Read Data Rate Average number of megabytes per second read from
back-end storage during the sample interval.

Backend Write Data Rate Average number of megabytes per second written to
back-end storage during the sample interval.

Total Backend Data Rate Sum of the Backend Read and Write Data Rates for the
sample interval.

Backend Read Response Time Average response time in milliseconds for read operations
to the back-end storage.

Backend Write Response Time Average response time in milliseconds for write operations
to the back-end storage. This time can include several
physical I/O operations, depending on the type of RAID
architecture.

Overall Backend Response Time The weighted average of Backend read and write response
times during the sample interval.

Read Queue Time The average number of milliseconds that each read
operation during the sample interval spent on the queue
before being issued to the back-end storage device

Write Queue Time The average number of milliseconds that each write
operation during the sample interval spent on the queue
before being issued to the back-end storage device

Overall Queue Time The weighted average of Read Queue Time and Write
Queue Time during the sample interval.

Backend Read Transfer Size The average transfer size in kilobytes for reads to the
back-end storage during the sample interval.

Backend Write Transfer Size The average transfer size in kilobytes for data written to the
back-end storage during the sample interval.

Overall Backend Transfer Size Weighted average transfer size in kilobytes for backend
reads and writes during the sample interval.

Port Send I/O Rate The average rate per second for operations that send data
from an I/O port, typically to a server. This is typically a read
from the server's perspective.

Port Receive I/O Rate The average rate per second for operations where the
storage port receives data, typically from a server. This is
typically a write from the server's perspective.

Total Port I/O Rate Average read plus write I/O rate per second at the storage
port during the sample interval.

Port Send Data Rate The average data rate in megabytes per second for
operations that send data from an I/O port, typically to a
server.

Port Receive Data Rate The average data rate in megabytes per second for
operations where the storage port receives data, typically
from a server.

Total Port Data Rate Average read plus write data rate in megabytes per second
at the storage port during the sample interval

Readahead Percentage of Cache Hits An obscure measurement of cache hits involving data that
has been prestaged for one reason or another.

Dirty Write Percentage of Cache Hits The percentage of write cache hits which modified only data
that was already marked "dirty" in the cache; re-written
data. This is an obscure measurement of how effectively
writes are coalesced before destaging.

Write Cache Overflow Percentage For SVC the percentage of write operations that were
delayed due to lack of write-cache space during the sample
interval.

Write Cache Overflow I/O Rate For SVC, the average rate per second of write operations
that were delayed due to lack of write-cache space during
the sample interval.

Write Cache Flush-through For SVC, the percentage of write operations that were
Percentage processed in Flush-through write mode during the sample
interval.

Write Cache Flush-through I/O Rate For SVC, the average rate per second of tracks processed
in Flush-through write mode during the sample interval.

Write Cache Write-through For SVC the percentage of write operations that were
Percentage processed in Write-through write mode during the sample
interval.

Write Cache Write-through I/O Rate For SVC, the average number of tracks per second that
were processed in Write-through write mode during the
sample interval.

CPU Utilization Percentage The average utilization of the node controllers in this I/O
group during the sample interval.

Port to Host Send I/O Rate For SVC, the rate per second of port send to host (server)
during the sample interval.

Port to Host Receive I/O Rate For SVC, the rate per second of port receive operations
from host (server) during the sample interval.

Total Port to Host I/O Rate For SVC, total of port send and receive I/O rate during the
sample interval.

Port to Disk Send I/O Rate For SVC, the rate per second of port send to back-end
storage during the sample interval.

Port to Disk Receive I/O Rate For SVC, the rate per second of port receive operations
from back-end storage during the sample interval.

Total Port to Disk I/O Rate For SVC, the sum of port to disk send and port to disk
receive rates during the sample interval.

Port to Local Node Send I/O Rate For SVC, the rate per second at which a port sends I/O to
other nodes in the local cluster during the sample interval.

Port to Local Node Receive I/O Rate For SVC, the rate at which a port receives I/O from other
nodes in the local cluster during the sample interval

Total Port to Local Node I/O Rate For SVC, the sum of port to local node send and receive
rates during the sample interval.

Port to Remote Node Send I/O Rate For SVC, the average number of exchanges (I/Os) per
second sent to nodes in the remote SVC cluster during the
sample interval. Typically some form of remote mirroring.

Port to Remote Node Receive I/O For SVC, the average number of exchanges (I/Os) per
Rate second received from nodes in the remote SVC cluster
during the sample interval. Typically some form of remote
mirroring.

Total Port to Remote Node I/O Rate For SVC, the sum of port to remote node send and receive
I/O per second during the sample interval.

Port to Host Send Data Rate For SVC, the megabytes per second of port send to host
(server) during the sample interval.

Port to Host Receive Data Rate For SVC, the megabytes per second of port receive
operations from host (server) during the sample interval.

Total Port to Host Data Rate For SVC, total of port send and receive megabytes per
second during the sample interval.

Port to Disk Send Data Rate For SVC, the megabytes per second of port send to
back-end storage during the sample interval.

Port to Disk Receive Data Rate For SVC, the megabytes per second of port receive
operations from back-end storage during the sample
interval

Total Port to Disk Data Rate For SVC, the sum of port to disk send and port to disk
receive megabytes per second during the sample interval.

Port to Local Node Send Data Rate For SVC, the megabytes per second at which a port sends
I/O to other nodes in the local cluster during the sample
interval.

Port to Local Node Receive Data Rate For SVC, the megabytes per second at which a port
receives I/O from other nodes in the local cluster during the
sample interval

Total Port to Local Node Data Rate For SVC, the sum of port to local node send and receive
megabytes per second during the sample interval.

Port to Remote Node Send Data Rate For SVC, the average number of megabytes per second
sent to nodes in the remote SVC cluster during the sample
interval. Typically some form of remote mirroring.

Port to Remote Node Receive Data For SVC, the average number of megabytes per second
Rate received from nodes in the remote SVC cluster during the
sample interval. Typically some form of remote mirroring.

Total Port to Remote Node Data Rate For SVC, the sum of port to remote node send and receive
megabytes per second during the sample interval.

Port to Local Node Send Response For SVC the average port service time in milliseconds for
Time these operations during the sample interval.

Port to Local Node Receive Response For SVC the average port service time in milliseconds for
Time this operation during the sample interval.

Overall Port to Local Node Response For SVC the average port service time in milliseconds for
Time these operations during the sample interval.

Port to Local Node Send Queue Time For SVC, the average time in msec waiting in queue before
these send operations are executed.

Port to Local Node Receive Queue For SVC, the average time in msec waiting in queue before
Time these receive operations are executed.

Overall Port to Local Node Queue For SVC, the average time in msec. waiting before these
Time port send or receive operations are executed.

Port to Remote Node Send Response For SVC the average port service time in milliseconds for
Time these operations during the sample interval.

Port to Remote Node Receive For SVC the average port service time in milliseconds for
Response Time these operations during the sample interval.

Overall Port to Remote Node For SVC the average port service time in milliseconds for
Response Time these operations during the sample interval.

Port to Remote Node Send Queue For SVC, the average time in msec waiting in queue before
Time these send operations are executed.

Port to Remote Node Receive Queue For SVC, the average time in msec waiting in queue before
Time these receive operations are executed.

Overall Port to Remote Node Queue For SVC, the average time in msec waiting in queue before
Time these send or receive operations are executed.

Global Mirror Write I/O Rate For SVC, the rate in writes per second issued to the
secondary site for Global Mirror during the sample interval

Global Mirror Overlapping Write For SVC, the percentage of writes during the sample
Percentage interval, for which the write operations at the primary site for
Global Mirror have overlapping write domains.

Global Mirror Overlapping Write I/O For SVC, the average rate in writes per second during the
Rate sample interval, for which the write operations at the
primary site for Global Mirror have overlapping write
domains.

Peak Read Response Time For SVC, the peak read response time in msec observed
during the sample interval. At the end of each sample
interval, this value is reset to zero.

Peak Write Response Time For SVC, the peak write response time in msec observed
during the sample interval. At the end of each sample
interval, this value is reset to zero.

Global Mirror Secondary Write Lag For SVC, the number of additional milliseconds it took to
service each secondary write operation for Global Mirror,
over and above the time needed to service the primary
writes during the sample interval.

By Array report
This report is for DS8000, DS6000, and ESS only.

Critical and universal metrics are in bold. Less important ones are in normal font, and metrics that are difficult to interpret are in italics.

Subsystem Component id

Time Start time of the sample interval

Interval Length of sample interval in seconds

Read I/O Rate (normal) Average number of normal read operations per second for
the sample interval. Normal operations are not sequential,
hence random. This metric is only for ESS, DS8000,
DS6000

Read I/O Rate (sequential) Average number of sequential read operations per second
for the sample interval. Sequential I/O is detected by the
subsystem. This metric is only for ESS, DS8000, DS6000

Read I/O Rate (overall) Average number of read operations per second for the
sample interval. Applies to most subsystems.

Write I/O Rate (normal) Average number of normal write operations per second for
the sample interval. Normal operations are not sequential,
hence random. This metric is only for ESS, DS8000,
DS6000

Write I/O Rate (sequential) Average number of sequential write operations per
second for the sample interval. Sequential I/O is detected
by the subsystem. This metric is only for ESS, DS8000,
DS6000

Write I/O Rate (overall) Average number of write operations per second for the
sample interval. Applies to most subsystems.

Total I/O Rate (normal) Average number of normal reads and writes per second
for the sample interval. Normal operations are not
sequential, hence random. This metric is only for ESS,
DS8000, DS6000

Total I/O Rate (sequential) Average number of sequential reads and writes per
second for the sample interval. Sequential I/O is detected
by the subsystem. This metric is only for ESS, DS8000,
DS6000

Total I/O Rate (overall) Average number of reads and writes per second for the
sample interval. Applies to most subsystems.

Read Cache Hit Percentage (normal) Percentage of normal (random) reads that are cache hits
during the sample interval. Only for ESS, DS8000,
DS6000

Read Cache Hits Percentage Percentage of sequential reads that are cache hits in the
(sequential) sample interval. Only for ESS, DS8000, DS6000.

Read Cache Hits Percentage (overall) Percentage of reads during the sample interval that are
found in cache. This is an important metric.

Write Cache Hits Percentage (normal) Percentage of normal (random) writes that are handled in
cache. This number should be 100%. Only for ESS,
DS8K, DS6K

Write Cache Hits Percentage Percentage of sequential writes that are handled in cache.
(sequential) This number should be 100%. Only for ESS, DS8K, DS6K

Write Cache Hits Percentage (overall) Percentage of writes that are handled in cache. This
number should be 100% for most enterprise storage.

Total Cache Hits Percentage (normal) Percentage of normal reads and writes that are cache hits
during the sample interval.

Total Cache Hits Percentage Percentage of sequential reads and writes that are cache
(sequential) hits during the sample interval.

Total Cache Hits Percentage (overall) Weighted average of read cache hits and write cache hits.

Read Data Rate Average read data rate in megabytes per second during
the sample interval

Write Data Rate Average write data rate in megabytes per second during
the sample interval

Total Data Rate Average total (read + write) data rate in megabytes per
second during the sample interval.

Read Response Time Average response time in milliseconds for reads during
the sample interval. For this report, this is an average of
read hits in cache as well as read misses.

Write Response Time Average response time in milliseconds for writes during
the sample interval.

Overall Response Time Average response time in milliseconds for all I/O in the
sample interval, including both cache hits as well as
misses to backing storage if required.

Read Transfer Size Average transfer size in kilobytes for reads during the
sample interval

Write Transfer Size Average transfer size in kilobytes for writes during the
sample interval

Overall Transfer Size Average transfer size in kilobytes for all I/O during the
sample interval.

Record Mode Read I/O Rate This is the rate in I/O/sec for a special kind of read activity
detected by ESS, DS8K and DS6K. Only the requested
data is managed in cache rather than a full track or most
of a track of data.

Record Mode Read Cache Hit Read Hit percentage for the special class of reads
Percentage mentioned above. ESS, DS8k, DS6K only.

Disk to Cache Transfer Rate Average number of track transfers per second from disk to
cache during the sample interval.

Cache to Disk Transfer Rate Average number of track transfers per second from cache
to disk during the sample interval.

Write-cache Delay Percentage Percentage of all I/O operations that were delayed due to
write-cache space constraints or other conditions during
the sample interval. Only writes can be delayed, but the
percentage is of all I/O.

Write-cache Delay I/O Rate The rate of I/O (actually writes) that are delayed during the
sample interval because of write cache.

Backend Read I/O Rate The average read rate in reads per second caused by
read misses. This is the read rate to the back-end storage
for the sample interval.

Backend Write I/O Rate The average write rate in writes per second caused by
front-end write activity. This is the write rate to the
back-end storage for the sample interval. These are
logical writes and the actual number of physical I/O
operations depends on whether the storage is RAID 5,
RAID 10, or some other architecture.

Total Backend I/O Rate The sum of Backend Read I/O Rate and Backend Write
I/O Rate over the sample interval.

Backend Read Data Rate Average number of megabytes per second read from
back-end storage during the sample interval.

Backend Write Data Rate Average number of megabytes per second written to
back-end storage during the sample interval.

Total Backend Data Rate Sum of the Backend Read and Write Data Rates for the
sample interval.

Backend Read Response Time Average response time in milliseconds for read operations
to the back-end storage.

Backend Write Response Time Average response time in milliseconds for write operations
to the back-end storage. This time can include several
physical I/O operations, depending on the type of RAID
architecture.

Overall Backend Response Time The weighted average of Backend read and write
response times during the sample interval.

Backend Read Transfer Size The average transfer size in kilobytes for reads to the
back-end storage during the sample interval.

Backend Write Transfer Size The average transfer size in kilobytes for data written to
the back-end storage during the sample interval.

Overall Backend Transfer Size Weighted average transfer size in kilobytes for backend
reads and writes during the sample interval.

Disk Utilization Percentage Average disk utilization during the sample interval. This is
also the utilization of the RAID array, since the activity is
uniform across the array.

Sequential I/O Percentage Percentage of the I/O during the sample interval which the
storage believes to be sequential. This is detected by the
storage algorithms.

By Managed Disk Group report
This is an SVC-specific report.

Critical and universal metrics are in bold. Less important ones are in normal font, and metrics that are difficult to interpret are in italics.

Subsystem Name of the SVC Cluster

Managed Disk Group The Managed Disk Group id

Time Start time of the sample interval

Interval Length of sample interval in seconds

Read I/O Rate (overall) Average number of read operations per second for the
sample interval. Applies to most subsystems.

Write I/O Rate (overall) Average number of write operations per second for the
sample interval. Applies to most subsystems.

Total I/O Rate (overall) Average number of reads and writes per second for the
sample interval. Applies to most subsystems.

Read Data Rate Average read data rate in megabytes per second during
the sample interval

Write Data Rate Average write data rate in megabytes per second during
the sample interval

Total Data Rate Average total (read + write) data rate in megabytes per
second during the sample interval.

Read Response Time Average response time in milliseconds for reads during the
sample interval. For this report, this is an average of read
hits in cache as well as read misses.

Write Response Time Average response time in milliseconds for writes during the
sample interval.

Overall Response Time Average response time in milliseconds for all I/O in the
sample interval, including both cache hits as well as
misses to backing storage if required.

Read Transfer Size Average transfer size in kilobytes for reads during the
sample interval

Write Transfer Size Average transfer size in kilobytes for writes during the
sample interval

Overall Transfer Size Average transfer size in kilobytes for all I/O during the
sample interval.

Backend Read I/O Rate The average read rate in reads per second caused by read
misses. This is the read rate to the back-end storage for the
sample interval.

Backend Write I/O Rate The average write rate in writes per second caused by
front-end write activity. This is the write rate to the
back-end storage for the sample interval. These are logical
writes

Total Backend I/O Rate The sum of Backend Read I/O Rate and Backend Write I/O
Rate over the sample interval.

Backend Read Data Rate Average number of megabytes per second read from
back-end storage during the sample interval.

Backend Write Data Rate Average number of megabytes per second written to
back-end storage during the sample interval.

Total Backend Data Rate Sum of the Backend Read and Write Data Rates for the
sample interval.

Backend Read Response Time Average response time in milliseconds for read operations
to the back-end storage.

Backend Write Response Time Average response time in milliseconds for write operations
to the back-end storage. This time can include several
physical I/O operations, depending on the type of RAID
architecture.

Overall Backend Response Time The weighted average of Backend read and write response
times during the sample interval.

Read Queue Time The average number of milliseconds that each read
operation during the sample interval spent on the queue
before being issued to the back-end storage device

Write Queue Time The average number of milliseconds that each write
operation during the sample interval spent on the queue
before being issued to the back-end storage device

Overall Queue Time The weighted average of Read Queue Time and Write
Queue Time during the sample interval.

Backend Read Transfer Size The average transfer size in kilobytes for reads to the
back-end storage during the sample interval.

Backend Write Transfer Size The average transfer size in kilobytes for data written to the
back-end storage during the sample interval.

Overall Backend Transfer Size Weighted average transfer size in kilobytes for backend
reads and writes during the sample interval.

By Volume report
This is one of the most important reports. It is available for all SMI-S compliant subsystems, though not all metrics are applicable to all subsystems.

Critical and universal metrics are in bold. Less important ones are in normal font, and metrics that are difficult to interpret are in italics.

Subsystem Subsystem Name

Volume Volume Id

Time Start time of the sample interval

Interval Length of sample interval in seconds

Read I/O Rate (normal) Average number of normal read operations per second for
the sample interval. Normal operations are not sequential,
hence random. This metric is only for ESS, DS8000,
DS6000

Read I/O Rate (sequential) Average number of sequential read operations per second
for the sample interval. Sequential I/O is detected by the
subsystem. This metric is only for ESS, DS8000, DS6000

Read I/O Rate (overall) Average number of read operations per second for the
sample interval. Applies to most subsystems.

Write I/O Rate (normal) Average number of normal write operations per second for
the sample interval. Normal operations are not sequential,
hence random. This metric is only for ESS, DS8000,
DS6000

Write I/O Rate (sequential) Average number of sequential write operations per second
for the sample interval. Sequential I/O is detected by the
subsystem. This metric is only for ESS, DS8000, DS6000

Write I/O Rate (overall) Average number of write operations per second for the
sample interval. Applies to most subsystems.

Total I/O Rate (normal) Average number of normal reads and writes per second for
the sample interval. Normal operations are not sequential,
hence random. This metric is only for ESS, DS8000,
DS6000

Total I/O Rate (sequential) Average number of sequential reads and writes per
second for the sample interval. Sequential I/O is detected
by the subsystem. This metric is only for ESS, DS8000,
DS6000

Total I/O Rate (overall) Average number of reads and writes per second for the
sample interval. Applies to most subsystems.

Read Cache Hit Percentage (normal) Percentage of normal (random) reads that are cache hits
during the sample interval. Only for ESS, DS8000,
DS6000

Read Cache Hits Percentage Percentage of sequential reads that are cache hits in the
(sequential) sample interval. Only for ESS, DS8000, DS6000.

Read Cache Hits Percentage (overall) Percentage of reads during the sample interval that are
found in cache. This is an important metric.

Write Cache Hits Percentage (normal) Percentage of normal (random) writes that are handled in
cache. This number should be 100%. Only for ESS, DS8K,
DS6K

Write Cache Hits Percentage Percentage of sequential writes that are handled in cache.
(sequential) This number should be 100%. Only for ESS, DS8K, DS6K

Write Cache Hits Percentage (overall) Percentage of writes that are handled in cache. This
number should be 100% for most enterprise storage.

Total Cache Hits Percentage (normal) Percentage of normal reads and writes that are cache hits
during the sample interval.

Total Cache Hits Percentage Percentage of sequential reads and writes that are cache
(sequential) hits during the sample interval.

Total Cache Hits Percentage (overall) Weighted average of read cache hits and write cache hits.

Read Data Rate Average read data rate in megabytes per second during
the sample interval

Write Data Rate Average write data rate in megabytes per second during
the sample interval

Total Data Rate Average total (read + write) data rate in megabytes per
second during the sample interval.

Read Response Time Average response time in milliseconds for reads during the
sample interval. For this report, this is an average of read
hits in cache as well as read misses.

Write Response Time Average response time in milliseconds for writes during
the sample interval.

Overall Response Time Average response time in milliseconds for all I/O in the
sample interval, including both cache hits as well as
misses to backing storage if required.

Read Transfer Size Average transfer size in kilobytes for reads during the
sample interval

Write Transfer Size Average transfer size in kilobytes for writes during the
sample interval

Overall Transfer Size Average transfer size in kilobytes for all I/O during the
sample interval.

Record Mode Read I/O Rate This is the rate in I/O/sec for a special kind of read activity
detected by ESS, DS8K and DS6K. Only the requested
data is managed in cache rather than a full track or most of
a track of data.

Record Mode Read Cache Hit Read Hit percentage for the special class of reads
Percentage mentioned above. ESS, DS8k, DS6K only.

Disk to Cache Transfer Rate Average number of track transfers per second from disk to
cache during the sample interval.

Cache to Disk Transfer Rate Average number of track transfers per second from cache
to disk during the sample interval.

Write-cache Delay Percentage Percentage of all I/O operations that were delayed due to
write-cache space constraints or other conditions during
the sample interval. Only writes can be delayed, but the
percentage is of all I/O.

Write-cache Delay I/O Rate The rate of I/O (actually writes) that are delayed during the
sample interval because of write cache.

Readahead Percentage of Cache Hits For SVC, an obscure measurement of cache hits involving
data that has been prestaged for one reason or another.

Dirty Write Percentage of Cache Hits For SVC, the percentage of write cache hits which
modified only data that was already marked "dirty" in the
cache; re-written data. This is an obscure measurement of
how effectively writes are coalesced before destaging.

Write Cache Overflow Percentage For SVC the percentage of write operations that were
delayed due to lack of write-cache space during the
sample interval.

Write Cache Overflow I/O Rate For SVC, the average rate per second of write operations
that were delayed due to lack of write-cache space during
the sample interval.

Write Cache Flush-through Percentage For SVC, the percentage of write operations that were
processed in Flush-through write mode during the sample
interval.

Write Cache Flush-through I/O Rate For SVC, the average rate per second of tracks processed
in Flush-through write mode during the sample interval.

Write Cache Write-through Percentage For SVC the percentage of write operations that were
processed in Write-through write mode during the sample
interval.

Write Cache Write-through I/O Rate For SVC, the average number of tracks per second that
were processed in Write-through write mode during the
sample interval.

Global Mirror Write I/O Rate For SVC, the rate in writes per second issued to the
secondary site for Global Mirror during the sample interval

Global Mirror Overlapping Write For SVC, the percentage of writes during the sample
Percentage interval, for which the write operations at the primary site
for Global Mirror have overlapping write domains.

Global Mirror Overlapping Write I/O For SVC, the average rate in writes per second during the
Rate sample interval, for which the write operations at the
primary site for Global Mirror have overlapping write
domains.

Peak Read Response Time For SVC, the peak read response time in msec observed
during the sample interval. At the end of each sample
interval, this value is reset to zero.

Peak Write Response Time For SVC, the peak write response time in msec observed
during the sample interval. At the end of each sample
interval, this value is reset to zero.

Global Mirror Secondary Write Lag For SVC, the number of additional milliseconds it took to
service each secondary write operation for Global Mirror,
over and above the time needed to service the primary
writes during the sample interval.

By Managed Disk report
This is an SVC-specific report.

Critical and universal metrics are in bold. Less important ones are in normal font, and metrics that are difficult to interpret are in italics.

Subsystem Name of the SVC

Managed Disk Name of the Managed Disk

Time Start time of the sample interval

Interval Length of sample interval in seconds

Read I/O Rate (overall) Average number of read operations per second for the
sample interval. Applies to most subsystems.

Write I/O Rate (overall) Average number of write operations per second for the
sample interval. Applies to most subsystems.

Total I/O Rate (overall) Average number of reads and writes per second for the
sample interval. Applies to most subsystems.

Backend Read Data Rate Average number of megabytes per second read from
back-end storage during the sample interval.

Backend Write Data Rate Average number of megabytes per second written to
back-end storage during the sample interval.

Total Backend Data Rate Sum of the Backend Read and Write Data Rates for the
sample interval.

Backend Read Response Time Average response time in milliseconds for read operations
to the back-end storage.

Backend Write Response Time Average response time in milliseconds for write operations
to the back-end storage. This time can include several
physical I/O operations, depending on the type of RAID
architecture.

Overall Backend Response Time The weighted average of Backend read and write
response times during the sample interval.

Read Queue Time The average number of milliseconds that each read
operation during the sample interval spent on the queue
before being issued to the back-end storage device

Write Queue Time The average number of milliseconds that each write
operation during the sample interval spent on the queue
before being issued to the back-end storage device

Overall Queue Time The weighted average of Read Queue Time and Write
Queue Time during the sample interval.

Backend Read Transfer Size The average transfer size in kilobytes for reads to the
back-end storage during the sample interval.

Backend Write Transfer Size The average transfer size in kilobytes for data written to
the back-end storage during the sample interval.

Overall Backend Transfer Size Weighted average transfer size in kilobytes for backend
reads and writes during the sample interval.

By Port report - Storage

Subsystem Storage Subsystem

Port Port ID

WWPN WWPN for the port

Time Interval start time

Interval Number of seconds in the interval

Port Send I/O Rate Average number of I/O operations per second for send
operations, for a particular port during the sample
interval

Port Receive I/O Rate Average number of I/O operations per second for
receive operations, for a particular port during the
sample interval

Total Port I/O Rate Average number of I/O operations per second for send
and receive operations, for a particular port during the
sample interval

Port Send Data Rate Average number of megabytes per second that were
transferred for send (server read) operations, for a
particular port during the sample interval

Port Receive Data Rate Average number of megabytes per second that were
transferred for receive (server write) operations, for a
particular port during the sample interval

Total Port Data Rate Average number of megabytes per second for send and
receive operations during the sample interval.

Port Send Response Time Average number of milliseconds that it took to service
each send (server read) operation during the sample
interval

Port Receive Response Time Average number of milliseconds that it took to service
each receive (server write) operation during the sample
interval

Total Port Response Time Average number of milliseconds that it took to service
each send and receive operation during the sample
interval

Port Send Transfer Size Average number of KB sent per I/O by a particular port

Port Receive Transfer Size Average number of KB received per I/O by a particular
port during the sample interval

Total Port Transfer Size Average number of KB transferred per I/O by a particular
port during the sample interval

Port to Host Send I/O Rate The rate per second of port send to host (server) during
the sample interval.

Port to Host Receive I/O Rate The rate per second of port receive operations from host
(server) during the sample interval.

Total Port to Host I/O Rate The total of port send and receive I/O rate during the
sample interval.

Port to Disk Send I/O Rate For SVC, the rate per second of port send to back-end
storage during the sample interval.

Port to Disk Receive I/O Rate For SVC, the rate per second of port receive operations
from back-end storage during the sample interval.

Total Port to Disk I/O Rate For SVC, the sum of port to disk send and port to disk
receive rates during the sample interval.

Port to Local Node Send I/O Rate For SVC, the rate per second at which a port sends I/O
to other nodes in the local cluster during the sample
interval.

Port to Local Node Receive I/O Rate For SVC, the rate at which a port receives I/O from other
nodes in the local cluster during the sample interval

Total Port to Local Node I/O Rate For SVC, the sum of port to local node send and receive
rates during the sample interval.

Port to Remote Node Send I/O Rate For SVC, the average number of exchanges (I/Os) per
second sent to nodes in the remote SVC cluster during
the sample interval. Typically some form of remote
mirroring.

Port to Remote Node Receive I/O Rate For SVC, the average number of exchanges (I/Os) per
second received from nodes in the remote SVC cluster
during the sample interval. Typically some form of
remote mirroring.

Total Port to Remote Node I/O Rate For SVC, the sum of port to remote node send and
receive I/O per second during the sample interval.

Port to Host Send Data Rate For SVC, the megabytes per second of port send to host
(server) during the sample interval.

Port to Host Receive Data Rate For SVC, the megabytes per second of port receive
operations from host (server) during the sample
interval.

Total Port to Host Data Rate For SVC, total of port send and receive megabytes per
second during the sample interval.

Port to Disk Send Data Rate For SVC, the megabytes per second of port send to
back-end storage during the sample interval.

Port to Disk Receive Data Rate For SVC, the megabytes per second of port receive
operations from back-end storage during the sample
interval

Total Port to Disk Data Rate For SVC, the sum of port to disk send and port to disk
receive megabytes per second during the sample
interval.

Port to Local Node Send Data Rate For SVC, the megabytes per second at which a port
sends I/O to other nodes in the local cluster during the
sample interval.

Port to Local Node Receive Data Rate For SVC, the megabytes per second at which a port
receives I/O from other nodes in the local cluster during
the sample interval

Total Port to Local Node Data Rate For SVC, the sum of port to local node send and receive
megabytes per second during the sample interval.

Port to Remote Node Send Data Rate For SVC, the average number of megabytes per second
sent to nodes in the remote SVC cluster during the
sample interval. Typically some form of remote
mirroring.

Port to Remote Node Receive Data Rate For SVC, the average number of megabytes per second
received from nodes in the remote SVC cluster during
the sample interval. Typically some form of remote
mirroring.

Total Port to Remote Node Data Rate For SVC, the sum of port to remote node send and
receive megabytes per second during the sample
interval.

By Port report - Fabric


Note that not all the metrics are supported by all vendor CIM Agents.

Switch Switch Name

Port Port ID

WWPN WWPN for the port

Time Interval start time

Interval Number of seconds in the interval

Port Send Packet Rate Average number of packets per second for send
operations, for a particular port during the sample interval.

Port Receive Packet Rate Average number of packets per second for receive
operations, for a particular port during the sample interval.

Total Port Packet Rate Average number of packets per second for send and
receive operations, for a particular port during the sample
interval.

Port Send Data Rate Average number of megabytes (2^20 bytes) per second
that were transferred for send (write) operations, for a
particular port during the sample interval.

Port Receive Data Rate Average number of megabytes (2^20 bytes) per second
that were transferred for receive (read) operations, for a
particular port during the sample interval.

Total Port Data Rate Average number of megabytes (2^20 bytes) per second
that were transferred for send and receive operations, for a
particular port during the sample interval.

Port Peak Send Data Rate Peak number of megabytes (2^20 bytes) per second that
were sent by a particular port during the sample interval.

Port Peak Receive Data Rate Peak number of megabytes (2^20 bytes) per second that
were received by a particular port during the sample
interval.

Port Send Packet Size Average number of KB sent per packet by a particular port
during the sample interval.

Port Receive Packet Size Average number of KB received per packet by a particular
port during the sample interval.

Overall Port Packet Size Average number of KB transferred per packet by a particular
port during the sample interval.

Error Frame Rate The average number of frames per second that were
received in error during the sample interval.

Dumped Frame Rate The average number of frames per second that were lost
due to a lack of available host buffers during the sample
interval.

Link Failure Rate The average number of link errors per second during the
sample interval.

Loss of Sync Rate The average number of times per second that
synchronization was lost during the sample interval.

Loss of Signal Rate The average number of times per second that the signal
was lost during the sample interval.

CRC Error Rate The average number of frames received per second in
which the CRC in the frame did not match the CRC
computed by the receiver during the sample interval.

Short Frame Rate The average number of frames received per second that
were shorter than 28 octets (24 header + 4 CRC) not
including any SOF/EOF bytes during the sample interval.

Long Frame Rate The average number of frames received per second that
were longer than 2140 octets (24 header + 4 CRC + 2112
data) not including any SOF/EOF bytes during the sample
interval.

Encoding Disparity Error Rate The average number of disparity errors received per
second during the sample interval.

Discarded Class3 Frame Rate The average number of class-3 frames per second that
were discarded during the sample interval.

F-BSY Frame Rate The average number of F-BSY frames per second during
the sample interval.

F-RJT Frame Rate The average number of F-RJT frames per second during
the sample interval.

Related publications

The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.

IBM Redbooks
For information about ordering these publications, see “How to get Redbooks” on page 349.
Note that some of the documents referenced here might be available in softcopy only.
򐂰 IBM TotalStorage Productivity Center: The Next Generation, SG24-7194

Other publications
These publications are also relevant as further information sources:
- IBM TotalStorage Productivity Center for Data Installation and Configuration Guide for V2.3, GC32-1727
- IBM TotalStorage Productivity Center Version 3.3 User’s Guide, GC32-1775

Online resources
These Web sites are also relevant as further information sources:
- TPC V3.3 supported products list:
http://www-1.ibm.com/support/docview.wss?rs=597&uid=ssg1S1003019/
- Prerequisites:
http://www-304.ibm.com/jct01004c/systems/support/supportsite.wss/supportresources?taskind=3&brandind=5000033&familyind=5329731
- developerWorks Web site:
http://www.ibm.com/developerworks/java/jdk/linux/
- VMWare site:
http://www.vmware.com

How to get Redbooks


You can search for, view, or download Redbooks, Redpapers, Technotes, draft publications
and Additional materials, as well as order hardcopy Redbooks, at this Web site:
ibm.com/redbooks



Help from IBM
IBM Support and downloads:
ibm.com/support

IBM Global Services:
ibm.com/services



Index
MiniMap tool 230
A prerequisites 230
administrative rights Data Server
DB2 user 31 port 9549 66
Agent Manager Data server 128
Agent Manager Password 48 database instance 129
agent recovery service 44 DB2
Agent registration password 48 health monitoring notification 91
database connection 57 SMTP notification 91
default password 67 DB2 Administration Server 31
default user ID 67 DB2 database
fully qualified host name 45 performance 61
healthcheck utility 56 DB2 database schema 112
Host Name 44 DB2 database sizing 61
Registration Port 44 DB2 FixPak 130
security certificate 47 DB2 FixPak installation 135
server installation 67 DB2 install 26
server port 9511 67 DB2 installation
upgrade 141 verify installation 38
verify level 147 DB2 Instance owner screen 87
verifying the installation 56 DB2 instance properties screen 89
Agent Manager installation 39 DB2 log files 63
Agent Manager password 104 DB2 user account 31
Agent Manager truststore file 104 DB2 user IDs 23
Agent Recovery Service 21 DB2 user rights 24
agent recovery service 100 DB2 window services 38
agent registration password 104 DB2ADMNS group 35
architecture flow 129 db2inst1user name 115
ARS.version field 56 db2level command 38, 135
associated CIM Agent device support matrix 312 DB2USERS group 35
default installation directory 16
B Device Server
back-end storage view 231 port 9550. 66
Brocade Mi10k 190 DNS suffix 18
Brocade switches domain name system 18
Interoperability namespace 199
E
C EFCM 9.2.0 level 197
CD layout 16 EFCM Proxy Model
certificate authority truststore 103 installation 193
certificates 102 Enterprise server rollup reporting 266
CIFS share 161 ESX Server 279
configuration analysis 237 ESX Server probe job 286
configuration history 6, 233 external tools launch 207
Configuration History Settings 234 External Tools node 207
create a fenced user ID 88
Custom install 16 F
cygwin installation 297 FAStT device 313
fixpack
D startup DB2 140
DAS User screen 84 FixPak install 138
Data Path Explorer 3 FlashCopy information 207
back-end storage 231 FlashCopy relationships 211
fully qualified hostname 100



G McData SMI-S Interface
GetAMInfo command 147 add CIMOM task 198
GUID 20 McData SMI-S Provider
Mi10K reports 202
Mi10K
H switch performance 204
hardware prerequisites 17 Mi10k
healthcheck utility 56 in-band Fabric agent 203
Host Planner 10 Mi10k monitoring 199
Hypervisor Data agent 288 Migration Utility 129
Hypervisor information 12 multipath planning 248
hypervisor product 278

N
I N Series support 157
IBM FAStT 309 NAS
IBM TS3310 Tape Library 215 filer login ID 166
IBMCDB common agent database 116 monitoring options 159
install sequence 18 Unix proxy agent requirements 159
installation Windows proxy agent 160
Internet Information Services 21 NAS CIFS share 162
installation licenses 58 NAS device
Internet Information Services 21 license 165
Intrepid 10000 Director 13 Probe job 171
IP address 310 quotas 188
scan and probe 169
NAS devices
K Discovery 162
keytool command 280
manage through Discovery 175
TPC dashboard 171
L NAS filer
license NAS devices 165 asset information 184
Linux install 73 CIFS share 161
Agent Manger 95 member of Windows domain 160
database user information 99 SNMP community name 164
DB2 UDB connection 98 Topology Viewer 182
db2profile script 95 NAS filer data
IBMCDB database 98 TPC dashboard 180
Linux platform NAS filesystems reports 185
DB2 upgrade 136 NAS support
DB2 V8 75 SNMP queries 158
List DB2 databases 139 NetApp Data ONTAP 182
log files NetApp NAS support 157
CLI 71 NETBIOS 20
Device server 71
GUI 71
P
Password gathering 129
M Path Planner 9, 246
master TPC server 266 SDD 246
McDATA interface performance counters 316
configuring 196 performance metrics 316
Direct Connection mode 191 -1 value 316
EFCM Proxy mode 191 essential 316
TPC connection 198 pin list persistence 5, 224
McDATA OPENconnectors SMI-S Interface 190 pin lists
EFCM proxy mode 191 multiple users 224
supported platforms 190 pinning 224
McData OPENconnectors SMI-S Interface planner wizards 9
user guide 193 probe 269
McData provider 198 Probe job status 171



provisioning of storage 9 configuration analysis policies 237
public communication configuration history 6, 233
port 9513 67 context sensitive reporting 4
putty tool 297 integrated reports and alerts 226
launch SAN Planner 254
pin list persistence 224
R pinning 224
Redbooks Web site 349 reporting groups 7
Contact us xi topology viewer
Remote Desktop Connections.TPC upgrade enhancements 3
remote desktop connections 147 TotalStorage Productivity Center
Resource Manager 311 component install 57
Rollup reporting license 58
Asset information report 273 server 312
rollup reporting categories 271 TotalStorage Productivity Center (TPC) 309
rpm tool 296 TPC dashboard
FlashCopy volumes 211
S TPC for Replication 11, 208
SAN Planner 245 TPC server probe definition 269
configuration information 254 TPC subordinate server
Get Recommendation button 261 number data sources 266
Planner Selection pane 253, 256 TPC subordinate servers 266
requirements 248 TPC superuser 121
saved plan 255 TPC upgrade
Select Elements pane 256 services to stop 148
workload profile 257 TPCDB database 65
Zone Planner pane 260 Truststore command syntax 283
SAN Planners truststore password 103
invocation 251 TS3310 namespace 217
SAN planning tools 8 TS3310 SMI-S Provider
SAN Volume Controller (SVC) 314 monitoring 220
Scan/Probe Agent TS3310 Tape Library 13
NAS device 178 TSRMsrv1 user ID 71
schema log files 63 Typical install 16
schema name 61 typical install 30
SDD multipath drivers 9
Security Certificates panel 47 U
Service Location Profile 194 UNIX Proxy Agent 159
Single-partition instance 86 UNIX proxy Data agent 174
SMI-S Server Interface upgrade 128
configuration 196 upgrade path 129
Snapshots in Range slider 234 user ID 309, 311
Standard Edition 128
Standard Edition install 16
storage device 312 V
important information 312 VMWare alerts 287
Storage Planner 9 VMWare certificates 280
VMWare data source
adding 283
T VMWare environment discovery 285
Tape Library VMWare ESX Server 277
embedded CIMOM 215 VMWare ESX server 12
TPC connection 216 configuration 280
Tape Library TS3310 215 rui.crt file 283
TOOLSDB VMWare Support 12
DB2 tools database 116 VMWare truststore
Topology Viewer 2, 223 vmware.jks 283
Change Monitoring 233 VMWare VI Data Source
change monitoring 7 connectivity 284
Change Rover mode 7 VMWare VI data source 279, 283

VMWare Virtual Infrastructure
planning 279
VMWare virtual machine
Fabric agent 279
VMWare VirtualCenter 278
Volume Performance Advisor 8, 246
Volume Placement Advisor 233
Volume Planner 8, 246

W
Windows Proxy Agent 160
workload profile 257

X
X11 forwarding 295
putty installation 304

Z
Zone Planner 9, 247
zone planning 260



Back cover

TotalStorage Productivity Center V3.3 Update Guide
Implement TPC V3.3 on supported platforms
Learn to effectively use new functions
Manage storage subsystems with TPC V3.3

IBM TotalStorage Productivity Center provides an integrated storage infrastructure management solution that is designed to allow you to manage every point of your storage infrastructure, between the hosts through the network and fabric, through to the physical disks. It can help simplify and automate the management of devices, data, and storage networks.

IBM TotalStorage Productivity Center V3.3 continues to build on the function provided in prior releases. This book takes you through what is new in TotalStorage Productivity Center and explains how to implement and use the new function.

Enhancements include:
- Enterprise roll-up reports from multiple TPC instances throughout global corporate environments
- New analytic capabilities for improved system availability with rapid discovery of performance problems
- Comprehensive configuration guidance with intelligent contrast comparisons to help reduce SAN outages caused by configuration changes
- New storage planning wizards, which offer intelligent configuration guidance via preset best-practices policies
- Dynamic security and violation alerting
- Quick access to vital information through new “favorite grouping” features, which allow the storage administrator to save a group of resources that are deemed mission-critical, and then quickly recall and monitor that group.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information:
ibm.com/redbooks

SG24-7490-00 ISBN 0738485187
