
Student Guide for

Hitachi VSP Gx00 With NAS Modules


CS&S Preparation

THC2887

Courseware Version 1.0


Corporate Headquarters
2825 Lafayette Street
Santa Clara, California 95050-2639 USA
www.HDS.com

Regional Contact Information
Americas: +1 408 970 1000 or info@HDS.com
Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@HDS.com
Asia Pacific: +852 3189 7900 or hds.marketing.apac@HDS.com

© Hitachi Data Systems Corporation 2016. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Hitachi Content Platform Anywhere, Hitachi Live
Insight Solutions, ShadowImage, TrueCopy, Universal Storage Platform, Essential NAS Platform, Hi-Track, and Archivas are trademarks or registered trademarks of Hitachi Data
Systems Corporation. Pentaho is a trademark or registered trademark of Hitachi Data Systems Corporation and Pentaho. IBM, S/390, XRC, z/OS, VTF, ProtecTIER,
HyperFACTOR, and Flashcopy are trademarks or registered trademarks of International Business Machines Corporation. Microsoft, SQL Server, Hyper-V, PowerShell,
SharePoint, and Windows are trademarks or registered trademarks of Microsoft Corporation. All other trademarks, service marks, and company names are properties of their
respective owners.

ii
Table of Contents
1. Overview........................................................................................................... 1-1
Workshop Objectives ............................................................................................................................. 1-1
Acronyms ............................................................................................................................................. 1-2
Prerequisites ......................................................................................................................................... 1-3
Hands-On Labs Objectives ..................................................................................................................... 1-4

2. Terminology and Cross-Reference .................................................................... 2-1


Translation and Cross-Reference of Storage System Names ...................................................................... 2-1
Hitachi Midrange Storage System Configuration References ...................................................................... 2-3
MPC Definition ...................................................................................................................................... 2-4

3. VSP Gx00 and VSP Fx00 HNAS Platform Documentation ................................. 3-1
HM800/VSP Gx00/VSP Fx00 Documentation ............................................................................................ 3-2
One Person’s Approach to Navigating the HM800 Maintenance Manual....................................................... 3-3
HM800 to RAID800 Documentation Comparison ....................................................................................... 3-4
Getting the NAS Platform Documentation ................................................................................................ 3-5
HM800 to HNAS Documentation Comparison ........................................................................................... 3-6

4. User Interfaces Used by Field Support ............................................................. 4-1


User Interfaces and Their Uses ............................................................................................................... 4-2
Maintenance Interfaces and Network Connections .................................................................................... 4-3
The Maintenance PC and the MPC GUI .................................................................................................... 4-4
The Storage Device List on the Maintenance PC ....................................................................................... 4-5
Web Console on the Maintenance PC ...................................................................................................... 4-6
On the Maintenance PC, the Storage Device List Opens the MPC Software Screen ....................................... 4-7
Web Console – Version of BEM That Runs on the Maintenance PC ............................................................. 4-8
Multiple Paths to Access the Maintenance Utility GUI ................................................................................ 4-9
Accessing the Maintenance Utility From the Web Console or Storage Navigator ......................................... 4-10
Accessing the Maintenance Utility GUI From the MPC GUI ....................................................................... 4-11
The “GUM” GUI – Block-Only Configuration ........................................................................................... 4-12
When To Use Which User Interface....................................................................................................... 4-13
MPC GUI (Formerly the SVP Application) ............................................................................................... 4-14
Why Have an SVP in the Non-Japan Markets? ........................................................................................ 4-15
The Service Processor (SVP)................................................................................................................. 4-16
Service Processor Rear View ................................................................................................................. 4-17


Service Processor LAN Configuration ..................................................................................................... 4-18


Storage Device List on the SVP ............................................................................................................. 4-19
Browser Access to the SVP – User Management Only ............................................................................. 4-20
VSP Gx00 and VSP Fx00 Networks Diagram ........................................................................................... 4-21
Accessing the Maintenance Utility GUI for a Block-Only System ............................................................... 4-21
Storage Management GUIs for the Customer ......................................................................................... 4-22
Register the Hitachi VSP Gx00 or VSP Fx00 Module to HCS .................................................................... 4-23
Hitachi Storage Advisor (HSA) .............................................................................................................. 4-24

5. Initial System Assessment................................................................................ 5-1


Identify Installed Hardware Components ................................................................................................. 5-1
Checking Installed Software and Firmware .............................................................................................. 5-2

6. Standard CS&S Tasks ........................................................................................ 6-1


Storage Capacity Upgrade – Adding Disk Boxes........................................................................................ 6-2
Adding Components – Drives .................................................................................................................. 6-3
Create New Parity Groups – Storage Navigator ........................................................................................ 6-4
Create LDEVs ........................................................................................................................................ 6-5
Create a Cache Logical Partition (CLPR)................................................................................................... 6-6
Create a Resource Group (RSG or RG) .................................................................................................... 6-7
Set Host Mode Options .......................................................................................................................... 6-8
Set a System Option Mode (SOM) ........................................................................................................... 6-9
System Option Modes .......................................................................................................................... 6-10
Setting CEMD (Virtual) Jumper ............................................................................................................. 6-11
Checking Installed Firmware Versions With the Maintenance Utility GUI ................................................... 6-12
Collect Dump With the Dump Tool – .bat File on the SVP ........................................................................ 6-13
Do Not Collect Dump Through the GUI .................................................................................................. 6-14
Record a Block Environment Config Backup ........................................................................................... 6-15
Cross-Controller Internal Network ......................................................................................................... 6-16
Internal Networking – Block-Only.......................................................................................................... 6-17
Fan Chassis With Batteries (4U Controller) ............................................................................................. 6-18
Battery Naming Scheme....................................................................................................................... 6-19
Block Configuration With CHBs Only ...................................................................................................... 6-20
VSP G800 Only – FE Expansion Chassis in Unified Configuration .............................................................. 6-21
VSP G800 Only – CHBB Slot Locations (Rear) ......................................................................................... 6-22
Verify Block Licenses Are Installed ........................................................................................................ 6-23
Check Installed Firmware Versions ........................................................................................................ 6-24


7. Firmware Upgrade ............................................................................................ 7-1


Microcode Exchange Wizard Tool ............................................................................................................ 7-1
Firmware Upgrade Checklist ................................................................................................................... 7-2
Maintenance Interfaces and Network Connections .................................................................................... 7-3
Run the Firmware Upgrade From the SVP ................................................................................................ 7-4
Check Installed Firmware Versions Before Firmware Upgrade .................................................................... 7-5
Start Upgrade – Run setup.exe on the SVP .............................................................................................. 7-6
Firmware Upgrade: Confirm License Agreement ....................................................................................... 7-7
Firmware Upgrade: Start Installation ....................................................................................................... 7-7
Environmental Settings I ........................................................................................................................ 7-8
Select Update Objects............................................................................................................................ 7-8
Environmental Settings II ....................................................................................................................... 7-9
Select Update Objects............................................................................................................................ 7-9
Update Firmware Screen ...................................................................................................................... 7-10

8. Troubleshooting................................................................................................ 8-1
Always Check for Pinned Data ................................................................................................................ 8-1
Multiple Concurrent Failures Require Careful Planning ............................................................. 8-2
Replacing a Memory DIMM ..................................................................................................................... 8-3
Block Environment SIM Messages and RC Codes ...................................................................................... 8-4
Collect Dump Using the SVP ................................................................................................................... 8-5
Troubleshooting for the Maintenance Utility ............................................................................................. 8-5
Management Interface Connectivity Problems – Rebooting the GUM From the BEM ..................................... 8-6
Rebooting the GUM From the Maintenance Utility ..................................................................................... 8-7
Forceful Hardware Reset of the GUM....................................................................................................... 8-8

9. Hi-Track for Unified Systems ............................................................................ 9-1


Register the Storage Array With Hi-Track Agent ....................................................................................... 9-2
Register the NAS SMU to Hi-Track Monitor............................................................................................... 9-3
Workshop Prerequisite Materials Review Summary ................................................................................... 9-4

Training Course Glossary ........................................................................................ G-1

Evaluating This Course ............................................................................................ E-1

1. Overview
Welcome to the prerequisites training for VSP Gx00 and VSP Fx00 With NAS Modules Hands-On
Workshop for CS&S and Field Support Professionals.

This training is the basis for your participation in the on-site, hands-on workshops where you
will get the opportunity to directly interact with and perform maintenance tasks on a Hitachi
Virtual Storage Platform (VSP) Gx00 or VSP Fx00 system.

Workshop Objectives

 This presentation has been created to support, guide and supplement the hands-on
workshop for CS&S (field support) for VSP Gx00 and VSP Fx00 models and VSP Fx00 with
NAS modules*

 This presentation reviews the prerequisite knowledge and skills that workshop
participants should have at the time of the workshop

 This presentation also covers the knowledge and skills that workshop participants are
expected to learn

*At the time this material was created, HDS intended to offer VSP Fx00 with NAS Modules in the future.
As of October 2016, this line of storage models was not yet officially available.

This prerequisites guide will prepare you for the Hands-On Workshop for CS&S (Field Support
staff) for VSP Gx00 and VSP Fx00 With NAS Modules.

Page 1-1

When you are comfortable with the information in this prerequisites guide, you will be ready to
start immediately after you arrive for the workshop sessions. You will get the most out of the
limited hands-on time in the lab.

Most of the information in this prerequisites guide references other related or prerequisite
training courses. The course or courses in which the information can be found are identified
where applicable. References to locations in the documentation, particularly the storage
system maintenance manual, are also included.

Acronyms

 NAS: Network Attached Storage
 NATP: Network Address Translation Protocol
 EVS: Enterprise Virtual Server
 CTL: Controller Module
 SSH: Secure Shell
 SOAP: Simple Object Access Protocol
 REST: Representational State Transfer
 SMU: System Management Unit
 NDMP: Network Data Management Protocol
 BALI: BlueArc OS and Linux Incorporated
 AVN: Admin Vnode
 RBAC: Role Based Access Control
 NAT: Network Address Translation
 SFM: Server Farm Migration
 SCSI: Small Computer Systems Interface
 FC*: Fibre Channel (*this acronym will only be used where space is limited)
 GUM: Gateway for Unified Management

Here are some of the acronyms used in this module. Please familiarize yourself with them
before continuing.

Page 1-2

Prerequisites
 This guide covers how to:
• Access and use VSP Gx00 and VSP Fx00 Device Manager Storage Navigator GUI (Block Element
Manager or BEM)
• Connect and use the VSP Gx00 and VSP Fx00 Maintenance PC
• Access and use the MPC (formerly the SVP Application) GUI
• Obtain and use the VSP Gx00 and VSP Fx00 Maintenance Manual
• Obtain and use the Hitachi NAS Platform (HNAS) Platform Documentation
• Obtain and use the VSP Gx00 and VSP Fx00 Microcode Exchange Wizard Tool (83-MCTool)

 You should also have 20-50GB of free space on the C:\ drive of your laptop
• This is required in order to install the Maintenance PC software on your laptop

These hands-on workshop sessions are brief and have a very specific objective of providing a
“learning by doing” experience.

During the workshops, very little time will be spent on lectures or PowerPoint presentations.
This prerequisite web-based training is provided so that classroom lectures can be eliminated or
minimized during the workshops. This prerequisite training covers information that you can get
from other Hitachi Data Systems Academy courses including:

• THI2651 – Installing and Supporting Hitachi Virtual Storage Platform Midrange Family (3
day ILT)
• THC2794 – Hitachi Virtual Storage Platform Gx00 With NAS Modules Differences (3 day
ILT)
This prerequisites guide covers how to:

• Use the Device Manager Storage Navigator user interface. This is now frequently called
the Block Element Manager or BEM
• Connect the maintenance laptop to the VSP Gx00 and VSP Fx00 system for maintenance
activities
• Use the Maintenance PC GUI
• Use the Maintenance Manual documentation
• Use the VSP Gx00 and VSP Fx00 Microcode exchange wizard tool (83-MCTool)
Be aware that, if you want to install the Maintenance PC software on your laptop, you need 20
to 50 gigabytes of free disk space.
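As a quick way to verify that requirement before starting the install, the free space on the laptop's drive can be checked programmatically. This is only an illustrative sketch using Python's standard library; the helper name and the 20 GB threshold (the lower bound the guide gives) are our own choices, not part of any Hitachi tool.

```python
import shutil

def has_free_space(path, required_gb=20):
    """Return True if the drive holding `path` has at least
    `required_gb` gigabytes free (the guide asks for 20-50 GB)."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes >= required_gb * 1024**3

# Example (on the Windows maintenance laptop):
# if not has_free_space("C:\\", 20):
#     print("Not enough free space for the Maintenance PC software")
```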

Page 1-3

Hands-On Labs Objectives

 Connect and use the Maintenance PC

 Upgrade a VSP Gx00 to VSP Gx00 with NAS modules

 Upgrade a VSP Fx00 to VSP Fx00 with NAS modules

 Access and navigate the NAS Manager (SMU) GUI

 Upgrade the firmware of a VSP Gx00, VSP Fx00, VSP Gx00 with NAS modules,
or VSP Fx00 with NAS modules

 Configure your personal laptop as a Maintenance PC

 Develop an understanding of the VSP Gx00, VSP Fx00, VSP Gx00 with NAS modules,
and VSP Fx00 with NAS modules architecture in a way that will be useful to Field Support
personnel

As mentioned on the previous page, this workshop is designed to be a “learn by doing”
experience. Instructors and lab assistants will guide the activities, while other experienced field
support staff will be on hand to advise and assist. Participants are expected to work directly with
the system for the majority of the time spent in the workshop.

When you attend the hands-on workshop, you will connect and use the Maintenance PC. You
will perform a VSP Gx00/VSP Fx00 to VSP Gx00/VSP Fx00 with NAS modules upgrade. Based
on that experience, you should be able to understand and perform a VSP Fx00 to VSP Fx00
with NAS modules upgrade, too. You will access and navigate using the NAS Manager GUI.
This is also referred to and is comparable to the SMU of the NAS gateway configuration. You
will perform a unified firmware upgrade, including both the block system and NAS OS upgrade.
Finally, you will learn about the VSP Gx00, VSP Fx00, VSP Gx00 with NAS modules, and VSP
Fx00 with NAS modules architecture and operation in a way that should be helpful when you
are called on to perform maintenance on such a system.

Page 1-4
2. Terminology and Cross-Reference
In the next few slides, we will review some terminology related to the VSP Gx00 and VSP Fx00
storage systems.

Translation and Cross-Reference of Storage System Names

VSP Gx00/VSP Fx00 name | Controller name | GISD name | Controller height | Supports NAS modules (“unified”) | Supports CHBB
VSP G200 | CBSS, CBSL | HM800 | 2U | No | No
VSP G400/G600 and F400/F600 | CBLM | HM800 | 4U | Yes | No
VSP G800 and F800 | CBLH | HM800 | 4U (6U with CHBB) | Yes | Yes
VSP G1000 | DKC-0 | RAID800, R800, DKC810I | 10U | No | No

This table shows the different models in the VSP Gx00, VSP Fx00, VSP Gx00 with NAS modules,
and VSP Fx00 with NAS modules families.

Notice that the VSP G200 does not support the NAS modules option; it is available for
block-only storage support.

A customer could still configure a VSP G200 behind the Hitachi NAS (HNAS) Platform gateway,
but the VSP G200 itself does not support the unified storage configuration.

Page 2-1

This table also shows a terminology translation that you may encounter when using the product
technical documentation, particularly the maintenance manual.

The engineering organization refers to the family of storage systems as “HM800.” Extra letters
at the end of HM800 are used to differentiate the controller options – small, medium or large –
that correspond to the VSP G200 (HM800 S), the VSP G400/G600 and F400/F600 (HM800 M2/M3),
and the VSP G800 and F800 (HM800 H).

Other important information is that only the “large” models—the VSP G800 and VSP F800—
support the additional channel host expansion chassis (CHBB) component. The CHBB can be
added to the CBLH controller to increase the number of available front-end channel host ports.
Notice that VSP G400, VSP G600, VSP F400 and VSP F600 do not support the addition of the
CHBB.

Page 2-2

Hitachi Midrange Storage System Configuration References

 Approved unified storage system branding for Hitachi Virtual Storage


Platform (or VSP)
• VSP Gx00 for block-only and VSP Fx00 for block-only
• VSP Gx00 with NAS modules and VSP Fx00 with NAS modules
 Note: also referred to as VSP Gx00 unified and VSP Fx00 unified where
space is limited (informal)

 Legacy modular storage systems


• Hitachi Unified Storage (or HUS): HUS 110, HUS 130, HUS 150

The approved references or names for the new “unified” storage systems are “VSP Gx00 With
NAS Modules” and “VSP Fx00 With NAS Modules.”

It is never correct to use the descriptor “Unified storage” with a capital “U.”

In these training materials, we will mostly use the correct “VSP Gx00 with NAS modules” or
“VSP Fx00 with NAS modules.” Sometimes we may use the shortened, more informal “VSP
Gx00 and VSP Fx00 unified,” with lower case “u.”

When we want to indicate a VSP Gx00 or VSP Fx00 that is not configured with NAS modules,
we will include the descriptor “block-only.”

If you see the specific phrase, “Hitachi Unified Storage (HUS),” know that it refers to the older
modular, mid-range storage family that includes the HUS 110, 130 and 150. It is not correct to
use the phrase “Hitachi Unified Storage” when referring to VSP Gx00 or VSP Fx00 with NAS
modules. The VSP Gx00 and VSP Fx00 series are sometimes called “midrange” storage. The
VSP Fx00 series of storage systems is part of the Hitachi all flash array offerings.

Page 2-3

MPC Definition

(Diagram: the Maintenance PC, at the recommended address 10.0.0.99, running the MPC GUI)

“MPC” is sometimes used as the acronym for “Maintenance PC.” However, there is also a
special user interface that is labeled “MPC.” In these training materials, we will always refer to
the Maintenance PC as such, and will not use the acronym to refer to it. We will refer to the
MPC GUI as the MPC GUI to eliminate any confusion over the use of the MPC acronym.

Another note: In the Japanese market, the HM800 storage systems do not include a Service
Processor (or SVP). For those systems, the HM800 Maintenance Manual specifies that the IP
address of the Maintenance PC should be set to 10.0.0.15. On the slide, that address is shown
struck through with a red line because it does not apply when an SVP is present.

In the rest of world (ROW) markets – countries outside Japan – the VSP Gx00 and VSP Fx00
systems can be ordered with the SVP. When the SVP is provided with the system from the
Hitachi Distribution Center, the SVP maintenance network IP address is set to 10.0.0.15.
Therefore that IP address cannot also be used for the Maintenance PC.

A different IP address must be used for the Maintenance PC. The recommended address is
10.0.0.99, as shown.
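The addressing rule above can be sketched as a small check: the candidate Maintenance PC address must sit on the 10.0.0.0/24 maintenance subnet and must not collide with the SVP's preset 10.0.0.15. This is an illustrative sketch using Python's standard `ipaddress` module; the function name and the /24 mask are our own assumptions, not from the maintenance manual.

```python
import ipaddress

# Addresses taken from the text: the SVP ships preset to 10.0.0.15,
# so the Maintenance PC needs a different address on the same
# maintenance subnet (10.0.0.99 is the recommended choice).
SVP_ADDRESS = ipaddress.ip_address("10.0.0.15")
MAINTENANCE_NET = ipaddress.ip_network("10.0.0.0/24")  # assumed mask

def valid_maintenance_pc_ip(candidate: str) -> bool:
    """True if `candidate` is usable for the Maintenance PC:
    on the maintenance subnet and not colliding with the SVP."""
    ip = ipaddress.ip_address(candidate)
    return ip in MAINTENANCE_NET and ip != SVP_ADDRESS

print(valid_maintenance_pc_ip("10.0.0.99"))  # recommended address -> True
print(valid_maintenance_pc_ip("10.0.0.15"))  # reserved for the SVP -> False
```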

Page 2-4
3. VSP Gx00 and VSP Fx00 HNAS Platform
Documentation
In this section, we will identify the various sets of documentation that are used when working
with VSP Gx00 and VSP Fx00 storage systems.

Page 3-1

HM800/VSP Gx00/VSP Fx00 Documentation

There are two main documentation libraries for the VSP Gx00 and VSP Fx00 series and for VSP
Gx00 and VSP Fx00 with NAS modules.

These include the Maintenance Documentation Library (MDL) and the Product Documentation
Library (PDL).

For the unified configurations, you will also need the Hitachi NAS documentation.

The libraries and documents are available from TISC and Hitachi Data Systems Support Connect.

These workshops focus on the information contained in the MDL but occasionally also need to
reference the PDL and the NAS documentation.

Because the unified NAS Platform functionality is delivered by the same HNAS code that is used
in the HNAS gateway offerings, you will also need to become familiar with the content and
organization of the HNAS documentation.

Page 3-2

One Person’s Approach to Navigating the HM800 Maintenance Manual

Here is an easy way to use the HM800 Maintenance Manual.

• Download the .iso file. For the HM800 Maintenance Manual, this is file HM056-nn.iso.

• Put the .iso file into a folder on your laptop.

• Extract the .iso into that folder.

• This folder will then contain an index.html file. Use this index.html file to launch the
HM800 Maintenance Manual. This process opens a page with the list of all the separate
sections. You can then navigate and launch the section of the Maintenance Manual you
need.
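The "locate index.html and launch it" step above can be sketched programmatically. This is only an illustration using Python's standard library; the function name is ours, and the lookup one level down is an assumption about how the extractor may have laid out the folder.

```python
import pathlib
import webbrowser

def find_manual_index(extract_dir):
    """Locate the top-level index.html inside the folder where the
    HM800 Maintenance Manual .iso (HM056-nn.iso) was extracted."""
    root = pathlib.Path(extract_dir)
    candidate = root / "index.html"
    if candidate.is_file():
        return candidate
    # Fall back to one level down, in case the extraction tool
    # created a subfolder named after the .iso file
    matches = sorted(root.glob("*/index.html"))
    return matches[0] if matches else None

# Example (hypothetical path on the maintenance laptop):
# index = find_manual_index(r"C:\Manuals\HM800")
# if index:
#     webbrowser.open(index.as_uri())  # opens the section list
```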

In the example shown on this slide, the Firefox browser was used. A new browser tab opens
for each document selected, so you can keep multiple documents open in the same browser
session. Links within a PDF section of the Maintenance Manual work only inside that document.
If a link points to another PDF, make sure that PDF is open in another tab and navigate
between the different sections manually.

If you get confused or lost, you can close all browser tabs and return to the main index.html
page and start again.

The more familiar you become with the HM800 Maintenance Manual and where the information
is in the different sections, the easier it will be for you to use this important reference when you
need it for field tasks.

Page 3-3

HM800 to RAID800 Documentation Comparison

Here are the index.html pages from DW800 (VSP Gx00 and VSP Fx00) on the left and RAID800
(VSP G1500 and VSP F1500) on the right. You can see they are very similar, but there are some
differences.

If you are a field support professional who is already familiar with VSP G1500, VSP F1500 and
how to use the RAID800 Maintenance Manual, comparing the documentation differences to the
newer VSP Gx00 and VSP Fx00 Maintenance Manual may help your learning.

Here is a list of differences you can compare.

• Compare the contents of the two Start sections

• Review the contents of the DW800 HDS Rack Section

• Compare the contents of the DW800 HDS SVP section to the DKC810I SVP Section

• Notice that the DKC810I Maintenance Manual does not contain a Maintenance PC
section

Here is a challenge. Locate the procedure for recording a block environment configuration
backup in these two different types of storage systems. In which section of the maintenance
manual is this procedure found for the two different systems?

Page 3-4

Getting the NAS Platform Documentation

Many colleagues still rely on TISC, rather than Hitachi Data Systems Support Connect, to locate
and download the documentation.

To locate the relevant NAS Platform documentation that applies to the VSP Gx00 and VSP Fx00
with NAS modules unified NAS implementation, select HNAS 4000 series as the Hitachi Data
Systems Product Family in the TISC selection screen.

Page 3-5

HM800 to HNAS Documentation Comparison

HM800 – VSP Gx00 or VSP Fx00: offers the consolidated “libraries” in .iso format

NAS Platform: few “FE-” documents

The NAS functionality of the VSP Gx00 and VSP Fx00 unified storage systems is delivered by the
same NAS OS code that runs on the HNAS gateway implementations. Therefore, HNAS
documents that describe software features, functionality, GUIs and interfaces, and user
interaction apply both to the Hitachi NAS gateway offerings and also to the VSP Gx00 and VSP
Fx00 unified systems.

If you are an experienced Hitachi block storage professional, you will need to think outside the
traditional set of VSP Gx00 and VSP Fx00 documents. Some essential information is in NAS
documentation.

If you are an experienced NAS professional, you will need to learn to use the VSP Gx00 and VSP
Fx00 documentation.

People who are familiar with either the HM800 or HNAS can benefit from some information
about the differences in approach to documentation.

Because the block storage documentation was developed by the GISD (formerly ITPD) group in
Japan and the NAS Platform documentation originated with the underlying BlueArc NAS
products, they are organized quite differently.

Now that you will be dealing with “unified” systems, it will be very helpful for you to become
familiar with both the block storage and the NAS Platform documentation.

While the relevant documentation and instructions for field support personnel about storage
systems is found in “FE-” Hitachi Data Systems internal documents, there are very few “FE-”
documents for the NAS Platforms.

Page 3-6
4. User Interfaces Used by Field Support
In this section, we will identify and review the purpose and function of the various user
interfaces for VSP Gx00 and VSP Fx00 systems.

Page 4-1

User Interfaces and Their Uses

Columns, left to right: physical installation (rack-mounting, cabling, etc.); software setup
(Initial Startup; Initial Setup); block-to-unified upgrading (H/W addition, S/W installation
and setup); daily operation (provisioning, setting changes, etc.); maintenance (failure parts
replacement; hardware upgrading; software upgrading; downgrading).

# | Model | Config | User | Phys. | Startup | Setup | Blk-to-Uni | Daily | Failure | H/W upg. | S/W upg. | Downgrade
1 | VSP G800 (HM800 H) | Block | CE & Partner | BECK | IST | MU | -- | N/A | MU | MU | SDL | MU
  |                    |       | End-User | N/A | N/A | N/A | -- | HCS, HSA | MU | N/A | SDL | N/A
2 | VSP G800 (HM800 H) | Unified | CE & Partner | BECK | IST | MU, SMU | MU, SMU | N/A | MU | MU | SDL | MU
  |                    |         | End-User | N/A | N/A | N/A | N/A | HCS, HSA | MU | N/A | SDL | N/A
3 | VSP G600 (HM800 M3) | Block | CE & Partner | BECK | IST | MU | -- | N/A | MU | MU | SDL | MU
  |                     |       | End-User | N/A | N/A | N/A | -- | HCS, HSA | MU | N/A | SDL | N/A
4 | VSP G600 (HM800 M3) | Unified | CE & Partner | BECK | IST | MU, SMU | MU, SMU | N/A | MU | MU | SDL | MU
  |                     |         | End-User | N/A | N/A | N/A | N/A | HCS, HSA | MU (*) | N/A | SDL (*) | N/A
5 | VSP G400 (HM800 M2) | Block | CE & Partner | BECK | IST | MU | -- | N/A | MU | MU | SDL | MU
  |                     |       | End-User | N/A | N/A | N/A | -- | HCS, HSA | MU | N/A | SDL | N/A
6 | VSP G400 (HM800 M2) | Unified | CE & Partner | BECK | IST | MU, SMU | MU, SMU | N/A | MU | MU | SDL | MU
  |                     |         | End-User | N/A | N/A | N/A | N/A | HCS, HSA | MU | N/A | SDL | N/A
7 | VSP G200 (HM800 S) | Block | CE & Partner | BECK | IST | MU | -- | N/A | MU | MU | SDL | MU
  |                    |       | End-User | BECK | IST | MU, ISWR | -- | HCS, HSA | MU | MU | SDL | N/A

This table was created for the block-only configurations, so the NAS Platform interfaces are not
included. It shows various user interfaces and their intended uses.

Here are the definitions of the acronyms used in this table:

• BECK: Back-End Configuration Kit

• IST: Initial Startup Tool. This is installed in the VSP Gx00 and VSP Fx00 SVP

• MU: Maintenance Utility. This can be found in the Gateway for Unified Management
(or GUM) GUI

• SMU: System Management Unit

• ISWR: Initial Setup Wizard

• HSA: Hitachi Storage Advisor. This product name replaced Hitachi Infrastructure Director
(or HID)

• SDL: Storage Device List.

Page 4-2
Maintenance Interfaces and Network Connections

[Diagram: Maintenance PC running the MPC software, SVP, GUM, internal LAN, management LAN, and maintenance port]

This diagram is found in the HM800 Maintenance Manual Firmware section on page FIRM01-20.
It shows which user interfaces are available, on which platform they run, how they are
interconnected and how you access them.

The MPC GUI runs on the Maintenance PC platform. When it is used, the Maintenance PC must
be connected to the maintenance port on one of the storage system controllers. Normally, the
Maintenance PC is connected to the maintenance port on Controller 1. The Controller 1
maintenance port IP address is 10.0.0.16. Set the IP address of the Maintenance PC to
10.0.0.99 with subnet mask 255.255.255.0.
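The addressing rule above can be sanity-checked in a few lines. This is an illustrative sketch using Python's standard ipaddress module; the addresses are the defaults quoted in the text.

```python
import ipaddress

# Default maintenance-network addresses quoted above
ctl1_port = ipaddress.ip_address("10.0.0.16")       # Controller 1 maintenance port
mpc_iface = ipaddress.ip_interface("10.0.0.99/24")  # Maintenance PC, mask 255.255.255.0

# The Maintenance PC can reach the port only if both sit in the same subnet
assert ctl1_port in mpc_iface.network
print(f"{mpc_iface.ip} and {ctl1_port} share subnet {mpc_iface.network}")
```

The same check is worth running against any non-default addressing a site may use before blaming the cable.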

This diagram shows the SVP. In the documented maintenance procedures, the Maintenance PC
is normally shown as the way to access the Maintenance Utility. However, it is also possible to
access the Maintenance Utility through the SVP.

Page 4-3
The Maintenance PC and the MPC GUI

In the VSP Gx00 and VSP Fx00 architecture, the ability to access certain sensitive maintenance
operations has been removed from the Service Processor. Many maintenance tasks, including
replacement of failed hardware components, are now managed through the Maintenance Utility,
which runs on the controller.

A specially configured Maintenance PC must be used for certain installation and maintenance
tasks, including installing NAS code for the first time, setting System Option Modes (SOMs), and
recording a block system configuration backup. Please note that NAS configuration backup is
performed from the NAS Manager or SMU.

The Maintenance PC is always connected to the maintenance port on one of the storage system
controllers. You can revisit the previous slide which shows the network and connection points
for the different system components, including the Maintenance PC.

Because this training covers all three of these CS&S tasks, you must understand the
Maintenance PC and how to access the MPC GUI that runs there.

In the hands-on workshop, the lab environment includes a configured and working Maintenance
PC.

However, we have heard reports from CS&S early adopters that installing and configuring the
MPC software on the CE laptop has consistently been a problem.

One challenge is that the CE laptop must have sufficient hard disk free space. The specifications
indicate 50 gigabytes of free space. However, one colleague did successfully install the
Maintenance PC software with a little more than 20 gigabytes of free space.

Page 4-4

The Maintenance PC software also installs a range of utility software including Java, Flash,
PuTTY, and others. We have all faced the “Java” challenges when configuring our laptops and
servers.

Information about the Maintenance PC is found in the HM800 Maintenance Manual Maintenance
PC Section.

The Maintenance PC Specifications list shown here is found on page MPC01-10.

The Storage Device List on the Maintenance PC

Before you can use the Maintenance PC to communicate with the storage system, you must
register the storage system to the Maintenance PC Storage Device List.

In order to register a VSP Gx00 or VSP Fx00 to the Storage Device list on the Maintenance PC,
the Maintenance PC MUST be connected to the storage system’s controller 1 maintenance port.

If you look closely at the Storage Device List image shown here, you will see that the MPC
address is shown as 10.0.0.15.

The Storage Device List on the Maintenance PC can be used differently from the Storage Device
List on the SVP. The SVP is tightly integrated into the specific VSP Gx00 and VSP Fx00 storage
arrays. Therefore the Storage Device List on the SVP can communicate only with that one
specific array. Attempts to register any other storage array to the Storage Device List on an SVP
will fail.

Page 4-5
Web Console on the Maintenance PC

1. Click the [Start Service] button of the storage system icon.

Note: When Starting Service is [Auto], the service starts automatically after
starting the Maintenance PC. Go to Step 4.

The Web Console requires starting a set of services on the Maintenance PC. You can start, stop
and monitor the status of these services from the Storage Device List. This information is found
in the MPC section of the Maintenance Manual.

Page 4-6
On the Maintenance PC, the Storage Device List Opens the MPC Software Screen

The Storage Device List on the Maintenance PC operates differently from the Storage Device
List on the SVP. When you click the storage system icon in the Storage Device List on the
Maintenance PC, the MPC Software window is displayed.

Depending on how you interact with this window, you can navigate either to the Web Console
or to the MPC GUI.

To access either interface from the MPC software, you must enter the User Name and Password
credentials at the top of the screen.

After you enter the user name and password, the “Go to MPC” button becomes active.

If you want to access the MPC GUI, you must click the “Go to MPC” button quickly before the
Web Console opens.

Page 4-7
Web Console – Version of BEM That Runs on the Maintenance PC

The term “Web Console” has a very specific meaning. The Web Console is the version of the
Block Element Manager that runs on or from the Maintenance PC.

This can be confusing. The Web Console looks and behaves exactly like the Hitachi Device
Manager Storage Navigator interface that runs on the SVP. But notice that in the very top bar of
the GUI, you see the words “Web Console.”

In the VSP G1500 and VSP F1500 enterprise storage architecture, there are two separate
running versions of the Block Element Manager, and both run on the SVP.

The Web Console is displayed when the user connects to the SVP with remote desktop protocol
(RDP). In the newest VSP midrange systems, you can display the Device Manager Storage
Navigator GUI when using a browser to connect to the SVP IP address. However only a few
administrative functions are active. This configuration has been implemented to enforce the
use of other storage management software such as Hitachi Command Suite or Hitachi Storage
Advisor.

In the enterprise system architecture, there are operations you can perform with Storage
Navigator that are not supported from the Web Console.

However, again, be careful in your terminology and do not use “Web Console” and “Storage
Navigator” as synonyms because they are two different GUIs in VSP Gx00 and VSP Fx00
architecture, even though they look and behave the same.

Page 4-8
Multiple Paths to Access the Maintenance Utility GUI

There are three ways to access the Maintenance Utility GUI.

The Maintenance Utility GUI runs on the controller. Each of the two controllers runs an instance
of the Maintenance Utility GUI. The system architecture takes care of the communication to the
other controller when operations are performed through the Maintenance Utility GUI.

This diagram shows the two access paths to the Maintenance Utility GUI from the Maintenance
PC.

On the Maintenance PC, you can first access the MPC GUI and then access the Maintenance
Utility GUI from there.

Or you can access the Block Element Manager or Web Console (Storage Navigator instance)
that runs on the Maintenance PC and then access the Maintenance Utility GUI from there.

Although it is not shown in this diagram, the third option is to access the Block Element
Manager or Storage Navigator on the SVP and then access the Maintenance Utility GUI from
there.

Page 4-9
Accessing the Maintenance Utility From the Web Console or Storage Navigator

You can access the Maintenance Utility GUI from the Web Console (which runs on the
Maintenance PC) or from the Block Element Manager or Storage Navigator which runs on the
SVP. This slide shows screen images of the navigation path to open the Maintenance Utility GUI
from Storage Navigator.

Instructions for using the Maintenance Utility GUI tell you to connect the Maintenance PC to the
maintenance port on Controller 1.

You use the Maintenance Utility GUI to perform many CS&S tasks including the maintenance
replacement of failed hardware components.

The Web Console runs on the Maintenance PC which is connected to the Controller 1
maintenance port. Therefore the access to the Maintenance Utility GUI is across the
maintenance LAN.

Page 4-10
Accessing the Maintenance Utility GUI From the MPC GUI

You can also access the Maintenance Utility GUI from the MPC GUI.

This slide shows the navigation path to open the Maintenance Utility from the MPC GUI.

Because the MPC GUI runs on the Maintenance PC and the Maintenance PC is connected to the
Controller 1 maintenance port, this communication path is across the maintenance LAN.

Page 4-11
The “GUM” GUI – Block-Only Configuration

[Screen callout: management IP address of Controller 1]

If the VSP Gx00 or VSP Fx00 is configured for block only and NAS modules have not been
installed, when the administrator connects to the controller web service with a browser, as
shown here, the login screen for the Maintenance Utility is displayed.

Because no NAS platform or services are running, there is no option available for accessing any
NAS features.

The Gateway for Unified Management (GUM) GUI for VSP Gx00 and VSP Fx00 with NAS
modules is shown on the next slide so you can see the difference when NAS modules are
installed.

Page 4-12
When To Use Which User Interface

A very important and informative table is found in the HM800 Maintenance Manual Maintenance
PC section starting on page MPC01-221.

This table tells you which interface to use to perform different tasks and operations in the VSP
Gx00 and VSP Fx00 systems.

Page 4-13
MPC GUI (Formerly the SVP Application)

There are just a few CS&S tasks that can be performed only from the MPC GUI on the
Maintenance PC. You will find these when the MPC GUI is in Modify Mode or in Mode Mode.

With the MPC GUI in Modify Mode, you can record a block environment configuration backup.

With the MPC GUI in Mode Mode, you can set System Option Modes (SOMs).

Notice that the Install button becomes active when the MPC GUI is set into either Modify Mode
or Mode Mode. You can also access the Maintenance Utility GUI from the MPC GUI, even when
the MPC GUI is in View Mode.

Page 4-14
Why Have an SVP in the Non-Japan Markets?

On the Maintenance PC: ability to register multiple storage systems to one Maintenance PC.
On the SVP: ability to register only one array, the one to which the SVP is connected.

The Hitachi Data Systems management software offerings are not used in the Japanese market.
Hitachi Command Suite and Hitachi Storage Advisor are offered and encouraged only outside of
Japan. Therefore, the system architecture does not need a management software interface
point in the Japanese market.

In order to enable the integration and use of the Hitachi Data Systems management software
offerings, the HM800 architecture was modified to provide an integration point for the
communication between one or more storage systems and the management software
environments which can support many different types of storage arrays, including Hitachi
Content Platform (HCP) and HNAS systems.

This modification potentially causes confusion for access and navigation in the rest of world
(ROW) implementations. The SVP is basically a Maintenance PC with some functionality omitted.
However, both the Maintenance PC and the SVP run instances of the Block Element Manager
(Device Manager Storage Navigator).

Page 4-15
The Service Processor (SVP)

HM800 Maintenance Manual SVP Technical Reference

VSP Gx00 and VSP Fx00 Hardware Installation and Reference Guides
 VSP G200 Installation and Reference Guide (FE-94HM8020-nn)

 VSP G400 and G600 Installation and Reference Guide (FE-94HM8022-nn)

 VSP F400 and F600 Installation and Reference Guide (FE-94HM8045-nn)

 VSP G800 Installation and Reference Guide (FE-94HM8026-nn)

 VSP F800 Installation and Reference Guide (FE-94HM8046-nn)

The VSP Gx00 and VSP Fx00 SVP is a component that is not part of the standard system
architecture in the Japanese market. It was added for the rest of world (ROW) distribution.
Therefore, the SVP is not documented in the traditional sections of the HM800 Maintenance
Manual.

In the VSP Gx00 and VSP Fx00 architecture, the SVP provides certain functions.

It is the interface between Hitachi Command Suite (HCS) or Hitachi Storage Advisor (HSA) and
the storage system. The user management software communicates with the Block Element
Manager (BEM) that runs on the SVP and the BEM, in turn, communicates with the storage
system controllers across the management network.

Notice that the GUM runs on the controller and communicates across the internal LAN. This is
an important feature of the VSP Gx00 and VSP Fx00 With NAS Modules architecture.

The Block Element Manager is labeled Hitachi Device Manager - Storage Navigator. The user
interfaces never carry the label Block Element Manager or BEM.

If you would like more information about the Service Processor (SVP), please refer to the
documents listed here.

Page 4-16
Service Processor Rear View

LAN1/3/4: attached to the management LAN. LAN2: attached to the maintenance LAN or MPC.

Note: Local Area Connection numbers are assigned randomly for each SVP unit, so there is no
relation between physical port assignment (LAN1/2/3/4) and Local Area Connection numbering

A Service Processor (SVP) is an optional 1U server manufactured by Supermicro. One customer
option is to order the SVP to be pre-installed and configured during the Configure to Order
(CTO) process at the Hitachi Data Systems Distribution Center before the VSP Gx00 or VSP
Fx00 system is shipped to the customer.

A Hitachi-provided SVP runs the Windows 7 Embedded operating system. The primary function
of the Service Processor is to be the interface point between the customer management
software, Hitachi Command Suite or Hitachi Storage Advisor, and the storage system.

Because the SVP must be connected to both VSP Gx00 and VSP Fx00 controllers, a network
bridge is configured to join three of the four NICs on the SVP for the management network
connections.

The fourth NIC is configured for the maintenance network.

Check the bridge relation by connecting a LAN cable to each port one-by-one before configuring
the bridge.

Another option is for the customer to provide their own SVP laptop or server.

Page 4-17
Service Processor LAN Configuration

 Create a network bridge using on-board NICs
 Set an IP address for the bridge and for the maintenance port

Here is the SVP Windows OS configuration view of the network interface configuration.

Page 4-18
Storage Device List on the SVP

It can be a challenge to keep track of where you are and where you want to be or need to be
when deciding which interface to use for your VSP Gx00 or VSP Fx00 task.

Here you see an example of the Storage Device List running on the SVP. To get this interface,
you must RDP to the SVP.

This software is installed and configured as part of the VSP Gx00 or VSP Fx00 Configure to
Order process at the Distribution Center when the SVP is ordered with the system.

If you look closely in the upper right hand corner, you can see that it identifies the SVP IP
address. Just like the Storage Device List that runs on the Maintenance PC, the storage system
must be registered to the Storage Device List. An SVP manages only one storage system. The
storage system should be registered to the Storage Device List on the SVP as part of the
Configure to Order process.

In order to access the Block Element Manager running on the SVP, the services must be
running and the system’s status must show “Ready” in its Storage Device List entry. You can
stop, start and monitor the status of the Block Element Manager services on the SVP from the
Storage Device List entry as shown here.

If you ever need to replace or rebuild a VSP Gx00 or VSP Fx00 SVP, you will have to install and
configure the SVP software. Replacing or rebuilding the SVP is outside the scope of this
workshop. Instructions on how to recover and rebuild the SVP are found in the HM800
Maintenance Manual SVP Technical Reference.

Page 4-19
Browser Access to the SVP – User Management Only

Web browser access from Device Manager directly to the SVP IP address offers very limited
functionality:
 Initial Setting
 User Account management

Warning Notice:
You can use the initial setting functions of the storage system such as account management
and program product management after you log on. Use Hitachi Command Suite for applying
the configuration setting of the storage system after the initial setting.

If you use a browser to go directly to the SVP management IP address, you will get what
appears to be the Storage Navigator GUI. However, you will quickly find that the only
operations you can perform are in the User Management area.

You must remember to RDP to the SVP and then use the Device List entry to access the fully
functional Storage Navigator interface.

If you have experience with Hitachi RAID storage systems and are accustomed to browsing to
the SVP IP address to access Storage Navigator, you need to understand the Block Element
Manager (BEM) architecture differences in the VSP Gx00 and VSP Fx00 platforms. On a VSP
Gx00 or VSP Fx00, when you use a browser to access the SVP web server, a Device Manager
Storage Navigator (BEM) login screen is displayed. You may think that you will gain access to
the Storage Navigator GUI. However, in the VSP Gx00 and VSP Fx00 architecture, the Storage
Navigator version has very limited functionality. You can only run initial setting and user
account management operations from this interface.

Page 4-20
VSP Gx00 and VSP Fx00 Networks Diagram

This diagram is found in the HM800 Maintenance Manual SVP Technical Reference section. It
shows more detail about the network connections and software components of the Service
Processor (SVP).

Accessing the Maintenance Utility GUI for a Block-Only System

 Enter either the CTL1 or CTL2 management IP address in your browser

If a VSP Gx00 or VSP Fx00 system is configured as block only, when you access the controller
IP address with a supported browser, the Maintenance Utility GUI login page is displayed.
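As a small sketch of this access pattern: on a block-only system, pointing a browser at either controller's management address lands on the Maintenance Utility login page. The addresses below are hypothetical placeholders, not values from a real configuration.

```python
# Build the browser entry points for a block-only system. On such a system,
# each controller's web service answers with the Maintenance Utility login page.
# The management IP addresses here are hypothetical examples.
controllers = {"CTL1": "192.168.0.16", "CTL2": "192.168.0.17"}

maintenance_utility_urls = {
    name: f"https://{ip}/" for name, ip in controllers.items()
}

for name, url in maintenance_utility_urls.items():
    print(f"{name} Maintenance Utility login: {url}")
```

Once NAS modules are installed, the same controller addresses present the GUM GUI instead, as shown on the following slides.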

Page 4-21
Storage Management GUIs for the Customer

Hitachi Command Suite – Hitachi Device Manager

Hitachi Storage Advisor (HSA)

Hitachi Data Systems strongly encourages customers to move in the direction of our
management software offerings, such as Hitachi Command Suite (HCS) which includes Hitachi
Device Manager (HDvM) or Hitachi Storage Advisor (HSA).

The Block Element Manager - Storage Navigator interface is hidden and is difficult to access
directly.

Hitachi Storage Advisor was previously named “Hitachi Infrastructure Director” or HID. You may
find some references to HID in these training materials because the diagrams have not been
updated to reflect the new name.

As mentioned in the description of the purpose and function of the Service Processor or SVP, all
storage provisioning, management and administration should be performed either through
Command Suite or Storage Advisor.

Part of the integration of a VSP Gx00 or VSP Fx00 system into a customer environment includes
installing and configuring the management software to recognize and register the new storage
array. The SVP management IP address is specified to the management software. The
management software queries the storage array, retrieves its components and configuration
and registers the storage array fully into the management software database.

Because these workshops are focused on the storage system setup tasks that would be
performed before an array was registered to the management software, we only mention that
these management tools are meant to be part of each customer environment where VSP Gx00
or VSP Fx00 systems are installed.

Page 4-22

The added value of these management tools is that they can manage multiple and complex
storage environments and they implement “wizard” or best practice configuration options.

If you want to learn more about storage provisioning and storage administration, please attend
the available training courses.

Register the Hitachi VSP Gx00 or VSP Fx00 Module to HCS

HCS v8.4.1 automatically detects the unified NAS Platform

This slide shows an overview of the steps to register the VSP Gx00 or VSP Fx00 storage
systems (block) to Hitachi Command Suite v8.4.1.

If the VSP Gx00 or VSP Fx00 is configured with NAS modules, you only need to register the
system to Command Suite one time. There is no need to separately register the NAS
environment. Command Suite will detect the unified NAS Platform.

In a lab test, Command Suite automatically detected the NAS Platform as part of the registered
VSP Gx00 and VSP Fx00 systems. However, Command Suite displayed a message that some
configuration is required in the NAS Manager itself. In NAS Manager, you specify the HCS
system to which the NAS modules report their configuration and other information.

Page 4-23
Hitachi Storage Advisor (HSA)

Here is an example of the Hitachi Storage Advisor GUI. You can see its graphic, high-level,
easy-to-use, simplified approach to storage administration. The GUI is job-role driven, wizard-
architected and integrates best practice storage provisioning and SAN management. If you want
to learn more about Hitachi Storage Advisor, look into related training courses available from
the Hitachi Data Systems Academy.

Page 4-24
5. Initial System Assessment
Let’s take a look at how to perform an initial system assessment of a newly installed VSP Gx00
or VSP Fx00 system.

Identify Installed Hardware Components

Use the Maintenance Utility to confirm the installed hardware.

There are several ways you can identify or confirm the installed hardware components of VSP
Gx00 or VSP Fx00 systems.

• First, you can use the Block Element Manager (BEM) running on the SVP.

• Second, you can use the Maintenance Utility.

• Third, you can use the MPC GUI.

Page 5-1

Here is a screen example of the Maintenance Utility. Within the Hardware section in the left
menu tree, you can navigate to the Controller Chassis and any installed Drive Box. On those
screens, you can review which specific hardware components are installed.

You can also review the hardware component status.

The Maintenance Utility is used to identify and replace failed hardware components. You will get
the opportunity to practice maintenance procedures when you attend the workshop.

Checking Installed Software and Firmware

Be sure to confirm the installed firmware components against the Engineering Change Notice
(ECN).

The Engineering Change Notice is available in TISC and is also delivered on the Documents and
Programs CD/DVD/.iso file that accompanies the microcode.

Page 5-2
6. Standard CS&S Tasks
Now we will identify and review the standard VSP Gx00 and VSP Fx00 maintenance tasks
expected of CS&S field support professionals.

Page 6-1
Storage Capacity Upgrade – Adding Disk Boxes

 Adding more disk enclosures and disks:
1. Attach a new DBL, DBS, DBF, or DB60.
2. Insert drives.

 Select the Hardware screen in the Maintenance Utility and choose:
1. Chassis  Install
• A new, empty chassis will be shown

Performing a VSP Gx00 or VSP Fx00 storage capacity upgrade is a standard CS&S task. A
storage capacity upgrade involves adding more disks to an existing system. Adding disks may
also require adding more Disk Boxes (DBs).

This slide shows the Maintenance Utility screen navigation and identifies the steps to add new
Disk Boxes.

Adding disks to a VSP Gx00 or VSP Fx00 system may be required when converting a block-only
system to a unified system with NAS modules.

Page 6-2
Adding Components – Drives

 Install drives:
1. Select the newly added drive box (in this case, Drive Box – 00).
2. Select Drives.
3. Click Install.

 The system will detect the newly installed drives

Performing a VSP Gx00 or VSP Fx00 storage capacity upgrade is a standard CS&S task.

This slide shows the Maintenance Utility screen navigation for adding Drives.

Adding drives to a VSP Gx00 or VSP Fx00 may be required when converting a block-only system
to a unified configuration with NAS modules.

The bulleted text steps on this slide identify how to add additional disk drives to the disk boxes.

After the new drives are inserted, the system detects them and adds the new drives to the
available inventory. They are then available to configure into Parity Groups.

Page 6-3
Create New Parity Groups – Storage Navigator

A member of the Administrators user group can access the SVP and log in directly to the Storage
Navigator (Block Element Manager or BEM) GUI. When there are disk drives available and not
yet assigned to any parity group, use this interface to create new parity groups.

Use the left side of the Create Parity Groups screen to configure one or more new parity groups
and add them to the Selected Parity Groups list on the right side of the screen. When the Auto
Drive Selection option is chosen, you can select the check box and click the Detail button to
view which disk drives have been selected for the new parity groups.

Page 6-4
Create LDEVs

Creating LDEVs is considered part of block storage provisioning. There are times when this may
be a CS&S task.

If you have to perform a block-only to NAS upgrade for VSP Gx00 and VSP Fx00 with NAS
modules, the documented procedure requires you to manually create one LDEV.

You can create LDEVs using Device Manager Storage Navigator GUI, as shown.

There are many parameter configurations for LDEVs. To learn more, consult the VSP Gx00 and
VSP Fx00 Provisioning Guide, MK-94HM8014-nn.

Page 6-5
Create a Cache Logical Partition (CLPR)

A separate Cache Logical Partition (CLPR) is required in the “with NAS modules” configuration.
This CLPR is dedicated to the I/O for the NAS System LUNs.

Creating CLPRs is a standard but infrequent storage administration task.

When an existing VSP Gx00 or VSP Fx00 system is converted to a unified configuration, a CLPR
for the NAS System disks must be manually created.

The NAS System LUs CLPR should be given the CLPR name “NASSystemCLPR.” The precise
CLPR name is important but the CLPR number does not matter.

The information about how to create a CLPR is found in the VSP Gx00 and VSP Fx00
Performance Guide, MK-94HM8012-nn.

Find the section, “Creating A CLPR.”

If you want to know more about Cache Logical Partitions and when and why to use them, you
can find more information in the Performance Guide.

Page 6-6
Create a Resource Group (RSG or RG)

Resource Groups (RSG) are supported in the SVOS operating system. Resource Groups are used
to group and provide security to limit access to sets of storage resources.

In the VSP Gx00 and VSP Fx00 with NAS modules unified configuration, a Resource Group is
created to isolate and protect the NAS system resources. This RSG is created by the NAS OS
installation scripts. There are no manual steps you need to take, even if you are performing a
NAS upgrade to a VSP Gx00 or VSP Fx00 block-only system.

When working with VSP Gx00 or VSP Fx00 with NAS modules systems, it is important to
understand what Resource Groups are and how they can be manually created and managed.
This is important so that you can review and verify the “with NAS modules” configuration.

Instructions on how to create a Resource Group are found in the VSP Gx00 and VSP Fx00
Provisioning Guide, MK-94HM8014-nn.

Page 6-7
Set Host Mode Options

Host Mode Option settings are an attribute of Host Groups within the front end channel ports in
Hitachi storage systems. Host Mode Options are set with the Edit Host Groups process.

Two new and specific Host Mode options are used when configuring a unified system. Host
Mode Options 7 and 58 are set for ports CL1-A and CL2-A in a VSP Gx00 or VSP Fx00 with NAS
modules. The process for setting Host Mode Options can be found in several places in the
documentation.

For the specific setting of the Host Mode Options required for the unified NAS Platform, refer to
the HM800 Maintenance Manual Installation Section on page INST07-01-140. This shows how
to set Host Mode Options using Storage Navigator, the Block Element Manager.

You can also refer to the VSP Gx00 and VSP Fx00 Provisioning Guide, MK-94HM8014-nn.


Set a System Option Mode (SOM)

Setting System Option Modes (SOMs) can be done only through the MPC GUI. This GUI can be
run only from the Maintenance PC, so only Hitachi Data Systems or partner staff can set SOMs.
Traditionally setting SOMs has been a CS&S task.

If your experience is primarily with earlier NAS systems, you may not yet be familiar with
System Option Modes on the Hitachi storage arrays.

The ability to set SOMs is required when upgrading a VSP Gx00 or VSP Fx00 from block-only to
unified configuration. System Option Mode 318 must be set for VSP Gx00 or VSP Fx00 with
NAS modules.

Information on how to access the MPC Utility “gray box” maintenance interface (formerly known
as the SVP application) is in the HM800 Maintenance Manual Maintenance PC section. See
pages MPC05-800 through MPC05-840.


System Option Modes

System Option Modes supported for each VSP Gx00 and VSP Fx00 model are documented in the
respective Hardware Installation and Reference Guides.

VSP Gx00 and VSP Fx00 Hardware Installation and Reference Guides:

• VSP G200 Installation and Reference Guide FE-94HM8020-nn

• VSP G400/G600 Installation and Reference Guide FE-94HM8022-nn

• VSP F400/F600 Installation and Reference Guide FE-94HM8045-nn

• VSP G800 Installation and Reference Guide FE-94HM8026-nn

• VSP F800 Installation and Reference Guide FE-94HM8046-nn


Setting CEMD (Virtual) Jumper

Maintenance Utility GUI

To set or enable the CE mode CEMD (virtual) jumper setting, access the Maintenance Utility
GUI and navigate to Menu > System Management > Edit System Parameters.

On the Edit System Parameters screen, there are 4 check boxes. These are the “virtual”
jumpers. [In other systems, physical jumpers are set on pins on one or more printed circuit
boards (PCBs)]. In the VSP Gx00 and VSP Fx00 architecture, jumper-enabled functions are
controlled by “virtual jumpers” or check boxes accessed through one of the management
interfaces.

The virtual jumpers can only be enabled and disabled through the Maintenance Utility GUI.
These virtual jumpers are not supported in any other GUI interface or in the CLI.


Checking Installed Firmware Versions With the Maintenance Utility GUI

You can view the installed firmware component versions using the Maintenance Utility GUI. To
view the firmware versions, access the Maintenance Utility GUI. Navigate to Administration >
Firmware. A list of firmware components with the installed version is displayed.


Collect Dump With the Dump Tool – .bat File on the SVP

The correct way to collect diagnostic dumps on a VSP Gx00 or VSP Fx00 system is to run the
appropriate Dump Tool .bat file from a command prompt window on the SVP.

Instructions for collecting either a normal or a detailed dump using the Dump Tool .bat script are
found in the VSP Gx00 and VSP Fx00 Storage Administrator Guide, MK-94HM8016-nn.

In the command prompt session, set the current directory path on the SVP to:

C:\mapp\wk\<serial number>\DKC200\mp\pc

The Dump Tool executables are located here.

After you launch the .bat file, the system will do all the rest. Collecting a detailed dump takes
about 5 to 10 minutes.


Do Not Collect Dump Through the GUI

Warning: The Download System Dump function of the GUI collects incomplete data.

The HM800 Maintenance Manual Maintenance PC section on page MPC03-880 provides the
documentation for collecting a VSP Gx00 and VSP Fx00 system dump through the Maintenance
Utility GUI.

At the time this training material was created, this dump collection procedure did not collect all
the diagnostics needed by Hitachi Global Support Center.

When you need to collect a dump, be sure to follow the procedure documented in the System
Administrator Guide, MK-94HM8016-nn as presented on the previous slide. You can search this
PDF file for the phrase, “Dump Tool.”


Record a Block Environment Config Backup

From the System Administrator Guide, MK-94HM8016-nn

There are two ways to record a VSP Gx00 and VSP Fx00 block environment configuration
backup. The System Administrator Guide, MK-94HM8016-nn, documents the procedure
using a .bat file on the SVP. Some information from this document is shown on this slide.

In the VSP Gx00 or VSP Fx00 architecture, the MPC GUI running on the Maintenance PC can be
used to record a block environment configuration backup. Use of the Maintenance PC is limited
to Hitachi Data Systems personnel or partner staff.


Cross-Controller Internal Network

[Slide: internal network IP address pairs (CTL1/CTL2): 10.251.0.15/4.15, 10.1.0.15/4.15,
172.24.0.15/4.15, 10.198.0.15/4.15, 10.17.0.15/4.15, 10.97.0.15/4.15, 172.17.0.15/4.15,
172.31.0.15/4.15, 192.168.0.15/4.15]

You can manually configure the IP address values of the VSP Gx00 or VSP Fx00 internal
network through the Maintenance Utility GUI. The default IP values for the internal network are
different depending on whether the system is configured for block-only or with NAS modules.
The installation of the NAS modules changes the internal network IP addresses of the two
controllers to 10.251.0.15 and 10.251.4.15 respectively. Remember: Do not change these
values.

In the training labs, when we want to reset a system from unified back to block-only
configuration, we must manually change the internal network IP addresses back to the block-
only default values. Remember: This is the only time that you would manually change the
internal network IP addresses.
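The two sets of defaults described in this and the following slide can be captured in a small lookup. This is an illustrative Python sketch; the addresses come from this guide, while the function and data format are ours.

```python
# Default cross-controller internal network addresses (CTL1, CTL2).
INTERNAL_IPS = {
    "block-only": ("126.255.0.15", "126.255.4.15"),
    "with-nas":   ("10.251.0.15", "10.251.4.15"),
}

def expected_internal_ips(has_nas_modules: bool) -> tuple:
    """Return the expected controller internal IPs for the configuration.

    NAS code installation switches the two controllers to the
    10.251.0.15 / 10.251.4.15 pair; block-only systems use 126.255.x.15.
    """
    return INTERNAL_IPS["with-nas" if has_nas_modules else "block-only"]

print(expected_internal_ips(True))   # ('10.251.0.15', '10.251.4.15')
print(expected_internal_ips(False))  # ('126.255.0.15', '126.255.4.15')
```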


Internal Networking – Block-Only

[Diagram: block-only internal networking. Maintenance laptops (172.16.25.50 and 172.16.25.60)
attach to the maintenance ports; GUM1 and GUM2 (eth0) attach to the management LAN,
172.16.0.0/16. Behind NAT (eth1.4001/eth1.4002), the internal LAN 126.255.0.0/16 connects
CTL#1 at 126.255.0.15 and CTL#2 at 126.255.4.15, along with addresses for the block MPs
(126.255.0.48-51 and 126.255.4.48-51), the hypervisors, and the LPARs for NAS in each
controller.]

The internal and maintenance LANs have five selectable subnets, but these cannot be freely
configured. The external management LAN IP addresses can be configured to be compatible
with the connection to the customer’s datacenter management network. IPv6 is also supported,
but it is not shown here.

The default internal LAN IP addresses for a VSP Gx00 or VSP Fx00 block-only configuration are
126.255.0.15 and 126.255.4.15 for controller 1 and controller 2, respectively. Note: These
values are changed during the NAS code installation when installing the NAS modules.


Fan Chassis With Batteries (4U Controller)

[Diagram: fan chassis showing battery positions BAT-O11 and BAT-B11 within backup module
fan BKMF-11.]

These diagrams are taken from the HM800 Maintenance Manual Location section. Hitachi
storage system components are sometimes identified as being “Basic” or “Optional.” You can
think of it this way: “Basic” means “required.”

All the battery components with a “B” before their two digit identification number are required
and are found in every VSP Gx00 and VSP Fx00 system. The batteries with an “O” before the
two digit ID number are optional. They are installed depending on the cache or NAS modules
configuration of the specific VSP Gx00 or VSP Fx00 system.

The “rear side” batteries in BKMF-10 and BKMF-20 are the two batteries required for the
“with NAS modules” configuration.

Here, BKMF stands for Backup Module Fan and BAT stands for battery.


Battery Naming Scheme

BAT-XYY

X (Type):
• B – Basic
• O – Optional
• F – File

YY (Location):
• 1Y – CTL1
• 2Y – CTL2
• Y1 – Fan #

The battery component ID indicates whether a battery is basic (required), optional, or required
for the file (with NAS module) configuration.
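The BAT-XYY convention above can be decoded mechanically. This is an illustrative Python sketch of the scheme; the function and output format are ours, not part of any Hitachi tool.

```python
def decode_battery_id(component_id: str) -> dict:
    """Decode a BAT-XYY battery component ID.

    X:  B = basic (required), O = optional, F = file (with NAS modules).
    YY: first digit = controller (1 = CTL1, 2 = CTL2),
        second digit = fan number within that controller.
    """
    kind = {"B": "basic", "O": "optional", "F": "file"}
    prefix, xyy = component_id.split("-")
    if prefix != "BAT" or len(xyy) != 3:
        raise ValueError(f"not a BAT-XYY identifier: {component_id}")
    return {
        "type": kind[xyy[0]],
        "controller": f"CTL{xyy[1]}",
        "fan": int(xyy[2]),
    }

print(decode_battery_id("BAT-B11"))  # {'type': 'basic', 'controller': 'CTL1', 'fan': 1}
print(decode_battery_id("BAT-O21"))  # {'type': 'optional', 'controller': 'CTL2', 'fan': 1}
```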


Block Configuration With CHBs Only

When they are added, the NAS modules must be installed in the A, B, C and D positions in both
controllers. That means that any CHBs already installed in those slots must be relocated to
make room in the A, B, C and D slots. Migration of the block workload can be done
nondisruptively, but it is a multi-step process and is not discussed during these workshops.


VSP G800 Only – FE Expansion Chassis in Unified Configuration

[Diagram: FE Expansion Chassis (CHBB) layout, showing cluster I/F HFB, PCI cable connection
PK, data LAN I/F PK (future use), SW PKs (upper and lower), CHB slots, 2 x PSU, LANB, DKB,
and the PCI cables.]

In the case of a VSP G800 or VSP F800, the optional FE Expansion Chassis is supported. This is
also called the Channel Board Box, CHB Box or CHBB. The CHBB provides the ability to expand
the number of front end (FE) ports on the large VSP G800 and VSP F800 systems.


VSP G800 Only – CHBB Slot Locations (Rear)

[Diagram: CHBB rear view, showing 2 PCP packages and the CHB slots for CTL2 and CTL1.]

The CHBB front end expansion chassis is supported only for use with HM800H (VSP G800 or
VSP F800). It doubles the number of available FE ports. The CHBB holds up to 8 additional CHB
PCBs, four per controller. CHBs in the CHBB must be installed in pairs. It is connected to the
CBX via two PCIe cables per controller. It requires two additional rack unit slots. Only one CHBB
per CBX is supported. It offers four additional external slots per controller for CHB installation.
At the time this training material was created, the 16/32 gigabit per second CHBs were not
supported in the CHBB. Be sure to check the supported configurations.


Verify Block Licenses Are Installed

If the required licenses are not installed, install them.

The three Program Product (PP) License Keys required in the block environment to support the
NAS Platform are all included in the base SVOS license set. They are:

• Open Volume Management
• Resource Partition Manager (which provides the ability to create and manage Resource Groups)
• Virtual Partition Manager (which provides the ability to create and manage Cache Logical
Partitions, or CLPRs)

The need for these three PP licenses for the “with NAS modules” configuration is documented in
the HM800 Maintenance Manual Installation Section on page INST07-01-10.

The customer should receive these license keys along with their VSP Gx00 or VSP Fx00 system,
even if the system was originally ordered in block-only configuration. Review the installed
licenses and confirm that these three license keys are installed. The instructions for installing
license keys are found in the HM800 Maintenance Manual Maintenance PC section on page
MPC03-380.
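A simple way to review the installed licenses against this requirement is a set difference. This is an illustrative Python sketch; the license names come from the text above, while the data format is an assumption (in practice you would read the installed keys from Storage Navigator or the Maintenance Utility).

```python
# The three Program Product licenses required for the NAS Platform,
# all included in the base SVOS license set.
REQUIRED_NAS_LICENSES = {
    "Open Volume Management",
    "Resource Partition Manager",
    "Virtual Partition Manager",
}

def missing_licenses(installed) -> set:
    """Return the required license names absent from the installed set."""
    return REQUIRED_NAS_LICENSES - set(installed)

print(sorted(missing_licenses({"Open Volume Management", "Dynamic Provisioning"})))
# ['Resource Partition Manager', 'Virtual Partition Manager']
```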


Check Installed Firmware Versions

VSP Gx00 and VSP Fx00 installed firmware versions can be viewed through the MPC GUI,
formerly called the SVP Application. Here is an example of the MPC GUI Versions view. The
process to view installed firmware versions using the MPC GUI is documented in the HM800
Maintenance Manual Maintenance PC Section.

Firmware versions can also be viewed through the Maintenance Utility.

7. Firmware Upgrade
Now we will review information about VSP Gx00 and VSP Fx00 firmware upgrades.

Microcode Exchange Wizard Tool

 https://support.hds.com/en_us/user/tech-tips/e/2016july/T2016071201.html

You must know how to get and use the Microcode Exchange Wizard Tool. The VSP Gx00 and
VSP Fx00 Microcode Exchange Wizard Tool is also known as 83-MCTool.

This version was obtained from the link shown on this slide.


Firmware Upgrade Checklist

Step | Action | MM Reference
1 | Confirm current firmware component versions | <good practice>
2 | Clear browser caches; close all browsers | FIRM03-51 (1)
3 | Run setup.exe – run as administrator | FIRM03-51 (3)
4 | Continue with firmware upgrade process | FIRM03-52 through FIRM03-66
5 | Verify “From” and “To” firmware versions | FIRM03-67
6 | GUM reboots (*) | FIRM03-68 (5)
7 | Wait for the firmware update processing to complete (**) | FIRM03-68 (7)
8 | Confirm component firmware versions | 83-MCTool
9 | Confirm all hardware normal status | 83-MCTool
10 | Confirm HCS and HSA access | <good practice>

This table presents a shortened, step-wise view of the firmware upgrade procedure with links to
pages in the HM800 Maintenance Manual Firmware Section. Some of the steps are common
sense and are not explicitly specified in the documented instructions.

The 83-MCTool refers to the instructions you will get from the VSP Gx00 and VSP Fx00
Microcode Exchange Wizard Tool.

The single-asterisk (*) note indicates that you will not lose connection when the GUM reboots,
because you are not communicating through the GUM.

The double-asterisk (**) note indicates that installation of a firmware update in a system with
NAS modules can run up to 540 minutes (9 hours). The time required for a unified system
firmware upgrade is somewhat shorter with SVOS 7.0, released in October 2016.


Maintenance Interfaces and Network Connections

 Maintenance PC
 MPC software
 SVP
 GUM
 Internal LAN
 Management LAN
 Maintenance Port

Important: Run firmware upgrades from the SVP

You saw this diagram earlier in the training materials. We present it again, because it is actually
found in the HM800 Maintenance Manual Firmware section.

The Firmware section includes the instructions for firmware upgrades. When you read them,
you will notice that they appear to say to run the firmware upgrade from the Maintenance PC,
while also suggesting that it is possible to run it from the SVP. This diagram does not show the
process of running a firmware upgrade from the SVP. However, the supported Hitachi Data
Systems procedure is to run the firmware upgrade from the SVP.

This statement is repeated on the next slide.


Run the Firmware Upgrade From the SVP

All the instructions in the HM800 Maintenance Manual Firmware Section specify running
firmware updates from the MPC. However, Hitachi Data Systems Product Support has informed
us that firmware upgrades can and should be run from the SVP. Be sure to upgrade Storage
Navigator on the Maintenance PC.


Check Installed Firmware Versions Before Firmware Upgrade

The instructions in the Maintenance Manual do not specify checking the current firmware
versions. This is, however, a common sense thing to do.

Take time to review and confirm the installed firmware component versions against the current
ECN. There is a documented procedure to correct mismatching firmware. Refer to TRBL02-370.
If you identify a firmware mismatch in a running system, be sure to report this condition to
Hitachi Global Support Center (GSC) and get their advice and support as you correct the
mismatch. The system must also be clean of any outstanding hardware maintenance issues
before starting a firmware upgrade.


Start Upgrade – Run setup.exe on the SVP

Wait until this screen is closed.

This screen image was taken from an MPC. You should run the firmware upgrade from the SVP.

Here is the outline of a firmware upgrade process on a VSP Gx00 or VSP Fx00 system.

1. Start firmware upgrade on SVP.

2. Take care to use the correct .iso file (file name H8-SVP-xxx-yy) for mounting.

3. Execute setup.exe as administrator.

If there is no DVD drive mounted on SVP, copy all files to a work folder on the SVP. Here are
the software locations:

• Storage Navigator is installed on both the MPC and SVP,

• MPC software is installed only on the Maintenance PC,

• And firmware is installed on the controllers and back-end components.


Firmware Upgrade: Confirm License Agreement

Here are examples of the first screens displayed after the firmware upgrade setup.exe is started.
Refer to the Firmware section of the Maintenance Manual. See page FIRM03-52.

Firmware Upgrade: Start Installation

When this screen is displayed, allow access.

This slide shows the dialog sequence screens that you will encounter during the firmware
upgrade. Refer to FIRM03-51 through FIRM03-67.


Environmental Settings I

 List of registered Storage Systems
 Select the one to upgrade
 Select Update Objects chooses the components to be upgraded (Storage System/Storage
Navigator)

This slide shows the Environmental Settings screen you will see in the firmware upgrade
process.

Select Update Objects

With Apply, return to Environmental Settings.

After the Environmental Settings screen, you will be presented with the Select Update Objects
screen. Select both check boxes. This will apply updated Storage Navigator GUI software and all
the storage system firmware.


Environmental Settings II

 Check whether all versions are correct
 When settings are completed for all storage, click Apply

After you complete the Select Update Objects screen, the system returns to the Environmental
Settings screen. Click Apply to continue.

Select Update Objects

 A window shows the progress of the update process
 If Firmware was selected, the Update button is enabled for the firmware upgrade

Next, the Environmental Settings screen reports that Storage Navigator update has been
successful. Then, click the Update button to apply the controller firmware.


Update Firmware Screen

The screen example shows the Update Firmware Screen for the unified model.

 List of Firmware
components
 Phase I uploads files
 Phase II updates
firmware

After the update is completed, this screen appears.

Be sure to check the firmware status after the update is completed in Maintenance Utility. A
firmware upgrade for a VSP Gx00 or VSP Fx00 with NAS modules can run for many hours. Be
sure to schedule unified system firmware upgrades carefully in collaboration with the customer.
Refer to the Firmware section of the Maintenance Manual section FIRM03-68 which indicates
run time durations for firmware upgrades for block-only and unified systems.

8. Troubleshooting
In this section we will review some basic troubleshooting concepts and practices.

Always Check for Pinned Data

This button will blink.
It is important to check the storage system for the existence of pinned data before performing
any maintenance tasks. Pinned data consists of data updates stranded in the data cache that the
system has not successfully written to the physical disks. The existence of pinned data is
indicated by a blinking “Pin…” button in the Maintenance view of the MPC GUI.

Refer to the HM800 Maintenance Manual Troubleshooting Section, starting on page
MPC05-1140, for instructions on how to protect pinned data before a maintenance task. It is
advisable to report pinned data to the Hitachi Global Support Center and follow their guidance
in recovering it.


Multiple Concurrent Failures Require Careful Planning

If a system has multiple concurrent component failures, you will need guidance and direction
from the Global Support Center.


Replacing a Memory DIMM

 These mainboards do not have any DIMM position indicator LEDs
 Carefully locate the failed DIMM based on the error messages and the DIMM position layout

Insert a new memory DIMM 2 times. This ensures a good connection on a new component.

Hint from a server maintenance expert: Move a “good” DIMM to the slot where the DIMM
failed. Install the new DIMM in the empty slot.

When asked to perform a memory DIMM replacement, be very careful to locate the correct
DIMM position because the mainboards do not have indicator LEDs for failed memory DIMMs.
Use the error message information and the DIMM position layout diagrams in the Technical
Guide and also as marked on the mainboard. Press the memory module locking latches to
release the memory module.

Here is an important hint from a server maintenance expert. To help verify that the DIMM has
failed and not a slot, take one of the good DIMMs and move it to the slot where the DIMM
failure occurred. Then, install the replacement DIMM into the now empty slot. This provides
double confirmation that the DIMM has failed and not the DIMM slot.

To install a DIMM, set the replacement DIMM into the empty slot. Apply even pressure across
the top of the DIMM module until it clicks into the slot. Make sure the locking latches are
engaged.

It is best practice to insert, remove and re-insert a new DIMM module. This ensures that any
coating applied to the contacts is scratched and that good contact is made between the
connectors in the slot and the connectors on the DIMM module.


Block Environment SIM Messages and RC Codes

This diagram shows how System Sense Bytes (SSBs), System Information Messages (SIMs) and
Action Codes (ACCs) are used when the VSP Gx00 or VSP Fx00 block environment detects a
hardware error or failure. Because the NAS modules are now integrated hardware components,
hardware errors of the NAS SFPs, NAS module DIMMs or the NAS modules themselves are
detected through internal SSBs and are reported through SIM messages and the system’s Alert
status.

This internal hardware error detection does not recognize NAS logical errors which are reported
through the NAS Platform.

This troubleshooting flow diagram is found in the Troubleshooting section of the Maintenance
Manual. See page TRBL01-10.


Collect Dump Using the SVP

1. Use the remote desktop (RDP) to navigate to the SVP.

2. Prepare the Maintenance Utility (MU).

3. Open a Windows command prompt with administrator permissions.

4. Move the current directory to the folder where the tool is available.

5. Execute Dump_Detail.bat and specify the output directory for the dump file.

6. A completion message box displays. Press any key to acknowledge the message and
close the message box.

7. Close the Windows command prompt.

Note: NAS logs and diagnostics must be collected separately from the NAS Manager GUI or command line.

Hitachi Global Support Center requires that storage system dumps be collected by running the
executable dump tool from the SVP. This tool creates a single diagnostic bundle that contains
block storage system logs and diagnostics only, including the block dump and the Maintenance
Utility and SVP diagnostics. Here are the steps in the dump collection process.
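The steps above can be sketched as a small helper that assembles the command run on the SVP. This is an illustrative Python sketch; the serial number shown and the exact Dump_Detail.bat argument form are assumptions, so verify them against the Storage Administrator Guide before use.

```python
from pathlib import PureWindowsPath

def dump_tool_command(serial: str, out_dir: str) -> str:
    """Build the Dump_Detail.bat invocation to run on the SVP.

    The Dump Tool executables live under
    C:\\mapp\\wk\\<serial number>\\DKC200\\mp\\pc; the output directory
    for the dump bundle is passed to the batch file.
    """
    tool_dir = PureWindowsPath(r"C:\mapp\wk") / serial / "DKC200" / "mp" / "pc"
    return f'cd /d "{tool_dir}" && Dump_Detail.bat "{out_dir}"'

# Hypothetical serial number and output folder, for illustration only.
print(dump_tool_command("410001", r"C:\temp\dump"))
```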

Troubleshooting for the Maintenance Utility


If you cannot access the Maintenance Utility, then you must troubleshoot the Maintenance
Utility and its access. A table of different Maintenance Utility access problems is found in the
HM800 Maintenance Manual Maintenance PC Section starting on page MPC01-530. The
Maintenance Utility is served by the GUM which runs on the controller.

Management Interface Connectivity Problems – Rebooting the GUM From the BEM

The GUM component alone can be rebooted using the Block Element Manager (Device Manager
Storage Navigator). Select Maintenance Utility > Hardware > GUM reboot, then the entry for
the controller whose GUM you want to reboot. Recall that the GUM runs on the controller and
that the Block Element Manager runs on the SVP. If rebooting the GUM does not resolve
connectivity issues, you can perform the GUM Reset. This is covered on the next slide.


Rebooting the GUM From the Maintenance Utility

It is also possible to reboot the GUM from the Maintenance Utility. However, this may seem
somewhat illogical because the Maintenance Utility is accessed through the GUM. So, if you can
still log onto the Maintenance Utility but it is not behaving correctly, you may want to reboot
the GUM. You will lose the connection to the Maintenance Utility and will have to reconnect
after the GUM has rebooted.

In the Menu section of the left pane on the Maintenance Utility GUI, select System
Management > Reboot GUM. The instructions for rebooting the GUM from the Maintenance
Utility are found in HM800 Maintenance Manual Maintenance PC Section on page MPC03-730.


Forceful Hardware Reset of the GUM

The diagram shown on this slide is taken from the HM800 Maintenance Manual Location Section.
Item 1-5 in the diagram is the LAN Reset button found in the center of the front of the VSP
Gx00 and VSP Fx00 chassis (VSP G400, VSP G600, and VSP G800).

If the GUM cannot be contacted through any of its software interfaces, the GUM can be forcibly
reset by pressing the hardware reset button for 1 second. Instructions for using the LAN-RST
button are found in HM800 Maintenance Manual Maintenance PC Section on page MPC03-870.

9. Hi-Track for Unified Systems
Here is a quick review of Hi-Track Remote Monitoring and what you need to know when
configuring and using Hi-Track with VSP Gx00 or VSP Fx00 with NAS modules systems.


Register the Storage Array With Hi-Track Agent

[Diagram: at the customer site, a Hi-Track Site Manager/Agent runs on an SVP or a standalone
PC and monitors the SVPs of current and older-model arrays as well as other Hitachi products
(modular storage, HCP, HNAS, switches, and so on). It supports inbound remote access and
outbound communication to the Hitachi Data Systems data center through FTP, https or
dial-up.]

To set up a VSP Gx00 or VSP Fx00 with NAS modules for Hi-Track monitoring, register the block
storage array using the Hi-Track Agent. The NAS platform is registered with Hi-Track Monitor.
See more information on the next slide.

For full instructions and the correct version of Hi-Track, consult the documentation
from http://hitrack.hds.com.


Register the NAS SMU to Hi-Track Monitor

Hi-Track support for VSP Gx00 and VSP Fx00 with NAS modules unified systems requires that
the block “side” is registered to Hi-Track and that the NAS Platform is registered with Hi-Track
Monitor. The VSP Gx00 and VSP Fx00 With NAS Modules Differences course, THC2794, contains
information about Hi-Track for unified systems.


Workshop Prerequisite Materials Review Summary

 Upon completion of this course, you should be able to:


• Access the Block Element Manager (BEM)
• Access the Maintenance Utility
• Access the MPC GUI
• Locate and follow documented procedures in the HM800 Maintenance
Manual
• Plan and execute a VSP Gx00 firmware upgrade from the SVP

By reviewing and understanding the information in this prerequisite training, you should now be
able to perform these tasks.

This ends the prerequisite review for the VSP Gx00 and VSP Fx00 With NAS Modules Hands-On
Workshop for CS&S.

Training Course Glossary
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

—A— AIX — IBM UNIX.


AaaS — Archive as a Service. A cloud computing AL — Arbitrated Loop. A network in which nodes
business model. contend to send data and only 1 node at a
AAMux — Active-Active Multiplexer. time is able to send data.

ACC — Action Code. A SIM (System Information AL-PA — Arbitrated Loop Physical Address.
Message). AMS — Adaptable Modular Storage.
ACE — Access Control Entry. Stores access rights APAR — Authorized Program Analysis Reports.
for a single user or group within the APF — Authorized Program Facility. In IBM z/OS
Windows security model. and OS/390 environments, a facility that
ACL — Access Control List. Stores a set of ACEs permits the identification of programs that
so that it describes the complete set of access are authorized to use restricted functions.
rights for a file system object within the API — Application Programming Interface.
Microsoft Windows security model.
APID — Application Identification. An ID to
ACP ― Array Control Processor. Microprocessor identify a command device.
mounted on the disk adapter circuit board
(DKA) that controls the drives in a specific Application Management — The processes that
disk array. Considered part of the back end; manage the capacity and performance of
it controls data transfer between cache and applications.
the hard drives. ARB — Arbitration or request.
ACP Domain ― Also Array Domain. All of the ARM — Automated Restart Manager.
array-groups controlled by the same pair of Array Domain — Also ACP Domain. All
DKA boards, or the HDDs managed by 1 functions, paths and disk drives controlled
ACP PAIR (also called BED). by a single ACP pair. An array domain can
ACP PAIR ― Physical disk access control logic. contain a variety of LVI or LU
Each ACP consists of 2 DKA PCBs to configurations.
provide 8 loop paths to the real HDDs. Array Group — Also called a parity group. A
Actuator (arm) — Read/write heads are attached group of hard disk drives (HDDs) that form
to a single head actuator, or actuator arm, the basic unit of storage in a subsystem. All
that moves the heads around the platters. HDDs in a parity group must have the same
AD — Active Directory. physical capacity.

ADC — Accelerated Data Copy. Array Unit — A group of hard disk drives in 1
RAID structure. Same as parity group.
Address — A location of data, usually in main
memory or on a disk. A name or token that ASIC — Application specific integrated circuit.
identifies a network component. In local area ASSY — Assembly.
networks (LANs), for example, every node Asymmetric virtualization — See Out-of-Band
has a unique address. virtualization.
ADP — Adapter. Asynchronous — An I/O operation whose
ADS — Active Directory Service. initiator does not await its completion before

HDS Confidential: For distribution only to authorized parties. Page G-1


proceeding with other work. Asynchronous I/O operations enable an initiator to have multiple concurrent I/O operations in progress. Also called Out-of-Band virtualization.
ATA — Advanced Technology Attachment. A disk drive implementation that integrates the controller on the disk drive itself. Also known as IDE (Integrated Drive Electronics).
ATR — Autonomic Technology Refresh.
Authentication — The process of identifying an individual, usually based on a username and password.
AUX — Auxiliary Storage Manager.
Availability — Consistent direct access to information over time.
-back to top-
—B—
B4 — A group of 4 HDU boxes that are used to contain 128 HDDs.
BA — Business analyst.
Back end — In client/server applications, the client part of the program is often called the front end and the server part is called the back end.
Backup image — Data saved during an archive operation. It includes all the associated files, directories, and catalog information of the backup operation.
BASM — Basic Sequential Access Method.
BATCTR — Battery Control PCB.
BC — (1) Business Class (in contrast with EC, Enterprise Class). (2) Business Coordinator.
BCP — Base Control Program.
BCPii — Base Control Program internal interface.
BDAM — Basic Direct Access Method.
BDW — Block Descriptor Word.
BED — Back end director. Controls the paths to the HDDs.
Big Data — Refers to data that becomes so large in size or quantity that a dataset becomes awkward to work with using traditional database management systems. Big data entails data capacity or measurement that requires terms such as Terabyte (TB), Petabyte (PB), Exabyte (EB), Zettabyte (ZB) or Yottabyte (YB). Note that variations of this term are subject to proprietary trademark disputes in multiple countries at the present time.
BIOS — Basic Input/Output System. A chip located on all computer motherboards that governs how a system boots and operates.
BLKSIZE — Block size.
BLOB — Binary large object.
BP — Business processing.
BPaaS — Business Process as a Service. A cloud computing business model.
BPAM — Basic Partitioned Access Method.
BPM — Business Process Management.
BPO — Business Process Outsourcing. Dynamic BPO services refer to the management of partly standardized business processes, including human resources, delivered in a pay-per-use billing relationship or a self-service consumption model.
BST — Binary Search Tree.
BSTP — Blade Server Test Program.
BTU — British Thermal Unit.
Business Continuity Plan — Describes how an organization will resume partially or completely interrupted critical functions within a predetermined time after a disruption or a disaster. Sometimes also called a Disaster Recovery Plan.
-back to top-
—C—
CA — (1) Continuous Access software (see HORC), (2) Continuous Availability or (3) Computer Associates.
Cache — Cache Memory. Intermediate buffer between the channels and drives. It is generally available and controlled as 2 areas of cache (cache A and cache B). It may be battery-backed.
Cache hit rate — When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is judged by its hit rate.
Cache partitioning — Storage management software that allows the virtual partitioning of cache and allocation of it to different applications.
CAD — Computer-Aided Design.
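The Cache hit rate entry above judges a cache by the fraction of accesses it serves directly. A minimal sketch of that calculation; the counter names are illustrative and not tied to any Hitachi monitoring tool:

```python
# Cache hit rate = hits / (hits + misses).

def hit_rate(cache_hits, cache_misses):
    """Return the fraction of accesses served from cache (0.0 to 1.0)."""
    total = cache_hits + cache_misses
    # Guard against division by zero when no accesses have been counted.
    return cache_hits / total if total else 0.0

print(hit_rate(900, 100))  # 0.9, i.e. 90% of accesses were cache hits
```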


CAGR — Compound Annual Growth Rate.
Capacity — Capacity is the amount of data that a storage system or drive can store after configuration and/or formatting. Most data storage companies, including HDS, calculate capacity based on the premise that 1KB = 1,024 bytes, 1MB = 1,024 kilobytes, 1GB = 1,024 megabytes, and 1TB = 1,024 gigabytes. See also Terabyte (TB), Petabyte (PB), Exabyte (EB), Zettabyte (ZB) and Yottabyte (YB).
CAPEX — Capital expenditure. The cost of developing or providing non-consumable parts for the product or system. For example, the purchase of a photocopier is the CAPEX, and the annual paper and toner cost is the OPEX. (See OPEX.)
CAS — (1) Column Address Strobe. A signal sent by the processor to a dynamic random access memory (DRAM) circuit that tells it an associated address is a column address, activating that column address. (2) Content-addressable Storage.
CBI — Cloud-based Integration. Provisioning of a standardized middleware platform in the cloud that can be used for various cloud integration scenarios. An example would be the integration of legacy applications into the cloud or integration of different cloud-based applications into one application.
CBU — Capacity Backup.
CBX — Controller chassis (box).
CC — Common Criteria. In the context of Information Technology Security Evaluation, a flexible, cloud-related certification framework that enables users to specify security functional and assurance requirements.
CCHH — Common designation for Cylinder and Head.
CCI — Command Control Interface.
CCIF — Cloud Computing Interoperability Forum. A standards organization active in cloud computing.
CDP — Continuous Data Protection.
CDR — Clinical Data Repository.
CDWP — Cumulative disk write throughput.
CE — Customer Engineer.
CEC — Central Electronics Complex.
CentOS — Community Enterprise Operating System.
Centralized Management — Storage data management, capacity management, access security management, and path management functions accomplished by software.
CF — Coupling Facility.
CFCC — Coupling Facility Control Code.
CFW — Cache Fast Write.
CH — Channel.
CH S — Channel SCSI.
CHA — Channel Adapter. Provides the channel interface control functions and internal cache data transfer functions. It is used to convert the data format between CKD and FBA. The CHA contains an internal processor and 128 bytes of edit buffer memory. Replaced by CHB in some cases.
CHA/DKA — Channel Adapter/Disk Adapter.
CHAP — Challenge-Handshake Authentication Protocol.
CHB — Channel Board. Updated CHA for Hitachi Unified Storage VM and additional enterprise components.
Chargeback — A cloud computing term that refers to the ability to report on capacity and utilization by application or dataset, charging business users or departments based on how much they use.
CHF — Channel Fibre.
CHIP — Client-Host Interface Processor. Microprocessors on the CHA boards that process the channel commands from the hosts and manage host access to cache.
CHK — Check.
CHN — Channel adapter NAS.
CHP — Channel Processor or Channel Path.
CHPID — Channel Path Identifier.
CHSN or C-HSN — Cache Memory Hierarchical Star Network.
CHT — Channel tachyon. A Fibre Channel protocol controller.
CICS — Customer Information Control System.
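The Capacity entry above spells out the 1,024-based unit ladder (1KB = 1,024 bytes, 1MB = 1,024KB, and so on). A short sketch of that arithmetic; the `UNITS` list and `to_bytes` helper are illustrative names:

```python
# Binary (1,024-based) capacity units, as used in the Capacity entry.
UNITS = ["B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]

def to_bytes(value, unit):
    """Convert a value in the given unit to bytes, using 1,024 multiples."""
    return value * 1024 ** UNITS.index(unit)

print(to_bytes(1, "TB"))  # 1099511627776 bytes (1,024^4)
```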


CIFS protocol — Common Internet File System, a platform-independent file sharing system. A network file system access protocol primarily used by Windows clients to communicate file access requests to Windows servers.
CIM — Common Information Model.
CIS — Clinical Information System.
CKD — Count-key Data. A format for encoding data on hard disk drives; typically used in the mainframe environment.
CKPT — Check Point.
CL — See Cluster.
CLA — See Cloud Security Alliance.
CLI — Command Line Interface.
CLPR — Cache Logical Partition. Cache can be divided into multiple virtual cache memories to lessen I/O contention.
Cloud Computing — “Cloud computing refers to applications and services that run on a distributed network using virtualized resources and accessed by common Internet protocols and networking standards. It is distinguished by the notion that resources are virtual and limitless, and that details of the physical systems on which software runs are abstracted from the user.” — Source: Cloud Computing Bible, Barrie Sosinsky (2011).
Cloud computing often entails an “as a service” business model that may entail one or more of the following:
• Archive as a Service (AaaS)
• Business Process as a Service (BPaaS)
• Failure as a Service (FaaS)
• Infrastructure as a Service (IaaS)
• IT as a Service (ITaaS)
• Platform as a Service (PaaS)
• Private File Tiering as a Service (PFTaaS)
• Software as a Service (SaaS)
• SharePoint as a Service (SPaaS)
• SPI refers to the Software, Platform and Infrastructure as a Service business model.
Cloud network types include the following:
• Community cloud (or community network cloud)
• Hybrid cloud (or hybrid network cloud)
• Private cloud (or private network cloud)
• Public cloud (or public network cloud)
• Virtual private cloud (or virtual private network cloud)
Cloud Enabler — A concept, product or solution that enables the deployment of cloud computing. Key cloud enablers include:
• Data discoverability
• Data mobility
• Data protection
• Dynamic provisioning
• Location independence
• Multitenancy to ensure secure privacy
• Virtualization
Cloud Fundamental — A core requirement to the deployment of cloud computing. Cloud fundamentals include:
• Self service
• Pay per use
• Dynamic scale up and scale down
Cloud Security Alliance — A standards organization active in cloud computing.
Cloud Security Alliance GRC Stack — The Cloud Security Alliance GRC Stack provides a toolkit for enterprises, cloud providers, security solution providers, IT auditors and other key stakeholders to instrument and assess both private and public clouds against industry established best practices, standards and critical compliance requirements.
Cluster — A collection of computers that are interconnected (typically at high speeds) for the purpose of improving reliability, availability, serviceability or performance (via load balancing). Often, clustered computers have access to a common pool of storage and run special software to coordinate the component computers' activities.
CM — (1) Cache Memory, Cache Memory Module. Intermediate buffer between the channels and drives. It has a maximum of 64GB (32GB x 2 areas) of capacity. It is available and controlled as 2 areas of cache (cache A and


cache B). It is fully battery-backed (48 hours). (2) Content Management.
CM DIR — Cache Memory Directory.
CME — Communications Media and Entertainment.
CM-HSN — Control Memory Hierarchical Star Network.
CM PATH — Cache Memory Access Path. Access path from the processors of CHA, DKA PCB to Cache Memory.
CM PK — Cache Memory Package.
CM/SM — Cache Memory/Shared Memory.
CMA — Cache Memory Adapter.
CMD — Command.
CMG — Cache Memory Group.
CNAME — Canonical NAME.
CNS — Cluster Name Space or Clustered Name Space.
CNT — Cumulative network throughput.
CoD — Capacity on Demand.
Community Network Cloud — Infrastructure shared between several organizations or groups with common concerns.
Concatenation — A logical joining of 2 series of data, usually represented by the symbol “|”. In data communications, 2 or more data are often concatenated to provide a unique name or reference (such as S_ID | X_ID). Volume managers concatenate disk address spaces to present a single larger address space.
Connectivity technology — A program or device's ability to link with other programs and devices. Connectivity technology allows programs on a given computer to run routines or access objects on another remote computer.
Controller — A device that controls the transfer of data from a computer to a peripheral device (including a storage system) and vice versa.
Controller-based virtualization — Driven by the physical controller at the hardware microcode level rather than at the application software layer; integrates into the infrastructure to allow virtualization across heterogeneous storage and third party products.
Corporate governance — Organizational compliance with government-mandated regulations.
CP — Central Processor (also called Processing Unit or PU).
CPC — Central Processor Complex.
CPM — Cache Partition Manager. Allows for partitioning of the cache and assigns a partition to a LU; this enables tuning of the system's performance.
CPOE — Computerized Physician Order Entry (Provider Ordered Entry).
CPS — Cache Port Slave.
CPU — Central Processing Unit.
CRM — Customer Relationship Management.
CSA — Cloud Security Alliance.
CSS — Channel Subsystem.
CS&S — Customer Service and Support.
CSTOR — Central Storage or Processor Main Memory.
C-Suite — The C-suite is considered the most important and influential group of individuals at a company. Referred to as “the C-Suite” within a healthcare provider.
CSV — Comma Separated Value or Cluster Shared Volume.
CSVP — Customer-specific Value Proposition.
CSW — Cache Switch PCB. The cache switch connects the channel adapter or disk adapter to the cache. Each of them is connected to the cache by the Cache Memory Hierarchical Star Net (C-HSN) method. Each cluster is provided with 2 CSWs, and each CSW can connect 4 caches. The CSW switches any of the cache paths to which the channel adapter or disk adapter is to be connected through arbitration.
CTG — Consistency Group.
CTL — Controller module.
CTN — Coordinated Timing Network.
CU — Control Unit. Refers to a storage subsystem. The hexadecimal number to which 256 LDEVs may be assigned.
CUDG — Control Unit Diagnostics. Internal system tests.
CUoD — Capacity Upgrade on Demand.
CV — Custom Volume.
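The Concatenation entry describes joining 2 or more values with the “|” symbol to form a unique reference such as S_ID | X_ID. A minimal sketch; the helper name and the sample identifier values are made up for illustration:

```python
# Concatenate identifier fields with "|" to build a unique reference.

def concat_ref(*fields):
    """Join identifier strings with the "|" separator."""
    return "|".join(fields)

print(concat_ref("S_ID", "X_ID"))        # S_ID|X_ID
print(concat_ref("0xEF0001", "0x1234"))  # e.g. a source ID | exchange ID pair
```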


CVS — Customizable Volume Size. Software used to create custom volume sizes. Marketed under the names Virtual LVI (VLVI) and Virtual LUN (VLUN).
CWDM — Coarse Wavelength Division Multiplexing.
CXRC — Coupled z/OS Global Mirror.
-back to top-
—D—
DA — Device Adapter.
DACL — Discretionary access control list (ACL). The part of a security descriptor that stores access rights for users and groups.
DAD — Device Address Domain. Indicates a site of the same device number automation support function. If several hosts on the same site have the same device number system, they have the same name.
DAP — Data Access Path. Also known as Zero Copy Failover (ZCF).
DAS — Direct Attached Storage.
DASD — Direct Access Storage Device.
Data block — A fixed-size unit of data that is transferred together. For example, the X-modem protocol transfers blocks of 128 bytes. In general, the larger the block size, the faster the data transfer rate.
Data Duplication — Software duplicates data, as in remote copy or PiT snapshots. Maintains 2 copies of data.
Data Integrity — Assurance that information will be protected from modification and corruption.
Data Lifecycle Management — An approach to information and storage management. The policies, processes, practices, services and tools used to align the business value of data with the most appropriate and cost-effective storage infrastructure from the time data is created through its final disposition. Data is aligned with business requirements through management policies and service levels associated with performance, availability, recoverability, cost, and whatever parameters the organization defines as critical to its operations.
Data Migration — The process of moving data from 1 storage device to another. In this context, data migration is the same as Hierarchical Storage Management (HSM).
Data Pipe or Data Stream — The connection set up between the MediaAgent, source or destination server is called a Data Pipe or, more commonly, a Data Stream.
Data Pool — A volume containing differential data only.
Data Protection Directive — A major compliance and privacy protection initiative within the European Union (EU) that applies to cloud computing. Includes the Safe Harbor Agreement.
Data Stream — CommVault's patented high performance data mover used to move data back and forth between a data source and a MediaAgent or between 2 MediaAgents.
Data Striping — Disk array data mapping technique in which fixed-length sequences of virtual disk data addresses are mapped to sequences of member disk addresses in a regular rotating pattern.
Data Transfer Rate (DTR) — The speed at which data can be transferred. Measured in kilobytes per second for a CD-ROM drive, in bits per second for a modem, and in megabytes per second for a hard drive. Also often called data rate.
DBL — Drive box.
DBMS — Data Base Management System.
DBX — Drive box.
DCA — Data Cache Adapter.
DCTL — Direct coupled transistor logic.
DDL — Database Definition Language.
DDM — Disk Drive Module.
DDNS — Dynamic DNS.
DDR3 — Double data rate 3.
DE — Data Exchange Software.
Device Management — Processes that configure and manage storage systems.
DFS — Microsoft Distributed File System.
DFSMS — Data Facility Storage Management Subsystem.
DFSM SDM — Data Facility Storage Management Subsystem System Data Mover.
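The Data Striping entry above describes mapping fixed-length runs of virtual disk addresses onto member disks in a regular rotating pattern. A minimal round-robin sketch of that mapping; the function name, member count and stripe size are illustrative assumptions, not values from any particular array:

```python
# Map a virtual block number onto (member disk, block within that member)
# using fixed-size stripe units rotated across the member disks.

def stripe_location(block, members, stripe_blocks):
    """Return (disk index, block offset on that disk) for a virtual block."""
    stripe = block // stripe_blocks   # which stripe unit the block falls in
    offset = block % stripe_blocks    # offset inside that stripe unit
    disk = stripe % members           # stripe units rotate across members
    depth = stripe // members         # full rotations completed so far
    return disk, depth * stripe_blocks + offset

# 4 member disks, 8-block stripe units:
print(stripe_location(0, 4, 8))    # (0, 0)  first unit on disk 0
print(stripe_location(9, 4, 8))    # (1, 1)  second unit lands on disk 1
print(stripe_location(33, 4, 8))   # (0, 9)  fifth unit wraps back to disk 0
```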


DFSMSdfp — Data Facility Storage Management Subsystem Data Facility Product.
DFSMSdss — Data Facility Storage Management Subsystem Data Set Services.
DFSMShsm — Data Facility Storage Management Subsystem Hierarchical Storage Manager.
DFSMSrmm — Data Facility Storage Management Subsystem Removable Media Manager.
DFSMStvs — Data Facility Storage Management Subsystem Transactional VSAM Services.
DFW — DASD Fast Write.
DICOM — Digital Imaging and Communications in Medicine.
DIMM — Dual In-line Memory Module.
Direct Access Storage Device (DASD) — A type of storage device in which bits of data are stored at precise locations, enabling the computer to retrieve information directly without having to scan a series of records.
Direct Attached Storage (DAS) — Storage that is directly attached to the application or file server. No other device on the network can access the stored data.
Director class switches — Larger switches often used as the core of large switched fabrics.
Disaster Recovery Plan (DRP) — A plan that describes how an organization will deal with potential disasters. It may include the precautions taken to either maintain or quickly resume mission-critical functions. Sometimes also referred to as a Business Continuity Plan.
Disk Administrator — An administrative tool that displays the actual LU storage configuration.
Disk Array — A linked group of 1 or more physical independent hard disk drives generally used to replace larger, single disk drive systems. The most common disk arrays are in daisy chain configuration or implement RAID (Redundant Array of Independent Disks) technology. A disk array may contain several disk drive trays, and is structured to improve speed and increase protection against loss of data. Disk arrays organize their data storage into Logical Units (LUs), which appear as linear block spaces to their clients. A small disk array, with a few disks, might support up to 8 LUs; a large one, with hundreds of disk drives, can support thousands.
DKA — Disk Adapter. Also called an array control processor (ACP). It provides the control functions for data transfer between drives and cache. The DKA contains DRR (Data Recover and Reconstruct), a parity generator circuit. Replaced by DKB in some cases.
DKB — Disk Board. Updated DKA for Hitachi Unified Storage VM and additional enterprise components.
DKC — Disk Controller Unit. In a multi-frame configuration, the frame that contains the front end (control and memory components).
DKCMN — Disk Controller Monitor. Monitors temperature and power status throughout the machine.
DKF — Fibre disk adapter. Another term for a DKA.
DKU — Disk Array Frame or Disk Unit. In a multi-frame configuration, a frame that contains hard disk units (HDUs).
DKUPS — Disk Unit Power Supply.
DLIBs — Distribution Libraries.
DKUP — Disk Unit Power Supply.
DLM — Data Lifecycle Management.
DMA — Direct Memory Access.
DM-LU — Differential Management Logical Unit. DM-LU is used for saving management information of the copy functions in the cache.
DMP — Disk Master Program.
DMT — Dynamic Mapping Table.
DMTF — Distributed Management Task Force. A standards organization active in cloud computing.
DNS — Domain Name System.
DOC — Deal Operations Center.
Domain — A number of related storage array groups.
DOO — Degraded Operations Objective.
DP — Dynamic Provisioning (pool).
DP-VOL — Dynamic Provisioning Virtual Volume.
DPL — (1) (Dynamic) Data Protection Level or (2) Denied Persons List.


DR — Disaster Recovery.
DRAC — Dell Remote Access Controller.
DRAM — Dynamic random access memory.
DRP — Disaster Recovery Plan.
DRR — Data Recover and Reconstruct. Data Parity Generator chip on DKA.
DRV — Dynamic Reallocation Volume.
DSB — Dynamic Super Block.
DSF — Device Support Facility.
DSF INIT — Device Support Facility Initialization (for DASD).
DSP — Disk Slave Program.
DT — Disaster tolerance.
DTA — Data adapter and path to cache-switches.
DTR — Data Transfer Rate.
DVE — Dynamic Volume Expansion.
DW — Duplex Write.
DWDM — Dense Wavelength Division Multiplexing.
DWL — Duplex Write Line or Dynamic Workspace Linking.
-back to top-
—E—
EAL — Evaluation Assurance Level (EAL1 through EAL7). The EAL of an IT product or system is a numerical security grade assigned following the completion of a Common Criteria security evaluation, an international standard in effect since 1999.
EAV — Extended Address Volume.
EB — Exabyte.
EC — Enterprise Class (in contrast with BC, Business Class).
ECC — Error Checking and Correction.
ECC.DDR SDRAM — Error Correction Code Double Data Rate Synchronous Dynamic RAM Memory.
ECM — Extended Control Memory.
ECN — Engineering Change Notice.
E-COPY — Serverless or LAN free backup.
EFI — Extensible Firmware Interface. EFI is a specification that defines a software interface between an operating system and platform firmware. EFI runs on top of BIOS when an LPAR is activated.
EHR — Electronic Health Record.
EIG — Enterprise Information Governance.
EMIF — ESCON Multiple Image Facility.
EMPI — Electronic Master Patient Identifier. Also known as MPI.
Emulation — In the context of Hitachi Data Systems enterprise storage, emulation is the logical partitioning of an Array Group into logical devices.
EMR — Electronic Medical Record.
ENC — Enclosure or Enclosure Controller. The units that connect the controllers with the Fibre Channel disks. They also allow for online extending a system by adding RKAs.
ENISA — European Network and Information Security Agency.
EOF — End of Field.
EOL — End of Life.
EPO — Emergency Power Off.
EREP — Error Reporting and Printing.
ERP — Enterprise Resource Planning.
ESA — Enterprise Systems Architecture.
ESB — Enterprise Service Bus.
ESC — Error Source Code.
ESD — Enterprise Systems Division (of Hitachi).
ESCD — ESCON Director.
ESCON — Enterprise Systems Connection. An input/output (I/O) interface for mainframe computer connections to storage devices developed by IBM.
ESDS — Entry Sequence Data Set.
ESS — Enterprise Storage Server.
ESW — Express Switch or E Switch. Also referred to as the Grid Switch (GSW).
Ethernet — A local area network (LAN) architecture that supports clients and servers and uses twisted pair cables for connectivity.
ETR — External Time Reference (device).
EVS — Enterprise Virtual Server.
Exabyte (EB) — A measurement of data or data storage. 1EB = 1,024PB.
EXCP — Execute Channel Program.
ExSA — Extended Serial Adapter.
-back to top-


—F—
FaaS — Failure as a Service. A proposed business model for cloud computing in which large-scale, online failure drills are provided as a service in order to test real cloud deployments. Concept developed by the College of Engineering at the University of California, Berkeley in 2011.
Fabric — The hardware that connects workstations and servers to storage devices in a SAN is referred to as a "fabric." The SAN fabric enables any-server-to-any-storage-device connectivity through the use of Fibre Channel switching technology.
Failback — The restoration of a failed system's share of a load to a replacement component. For example, when a failed controller in a redundant configuration is replaced, the devices that were originally controlled by the failed controller are usually failed back to the replacement controller to restore the I/O balance and to restore failure tolerance. Similarly, when a defective fan or power supply is replaced, its load, previously borne by a redundant component, can be failed back to the replacement part.
Failed over — A mode of operation for failure-tolerant systems in which a component has failed and its function has been assumed by a redundant component. A system that protects against single failures operating in failed over mode is not failure tolerant, as failure of the redundant component may render the system unable to function. Some systems (for example, clusters) are able to tolerate more than 1 failure; these remain failure tolerant until no redundant component is available to protect against further failures.
Failover — A backup operation that automatically switches to a standby database server or network if the primary system fails or is temporarily shut down for servicing. Failover is an important fault tolerance function of mission-critical systems that rely on constant accessibility. Also called path failover.
Failure tolerance — The ability of a system to continue to perform its function, or to perform it at a reduced performance level, when 1 or more of its components has failed. Failure tolerance in disk subsystems is often achieved by including redundant instances of components whose failure would make the system inoperable, coupled with facilities that allow the redundant components to assume the function of failed ones.
FAIS — Fabric Application Interface Standard.
FAL — File Access Library.
FAT — File Allocation Table.
Fault Tolerant — Describes a computer system or component designed so that, in the event of a component failure, a backup component or procedure can immediately take its place with no loss of service. Fault tolerance can be provided with software, embedded in hardware or provided by a hybrid combination.
FBA — Fixed-block Architecture. Physical disk sector mapping.
FBA/CKD Conversion — The process of converting open-system data in FBA format to mainframe data in CKD format.
FBUS — Fast I/O Bus.
FC — Fibre Channel or Field-Change (microcode update). A technology for transmitting data between computer devices; a set of standards for a serial I/O bus capable of transferring data between 2 ports.
FC RKAJ — Fibre Channel Rack Additional. This module system acronym refers to an additional rack unit that houses hard drives exceeding the capacity of the core RK unit.
FC-0 — Lowest layer of the Fibre Channel transport. This layer represents the physical media.
FC-1 — This layer contains the 8b/10b encoding scheme.
FC-2 — This layer handles framing and protocol, frame format, sequence/exchange management and ordered set usage.
FC-3 — This layer contains common services used by multiple N_Ports in a node.
FC-4 — This layer handles standards and profiles for mapping upper level protocols, such as SCSI and IP, onto the Fibre Channel Protocol.
FCA — Fibre Channel Adapter. Fibre interface card. Controls transmission of fibre packets.
FC-AL — Fibre Channel Arbitrated Loop. A serial data transfer architecture developed by a consortium of computer and mass storage device manufacturers and now being standardized by ANSI. FC-AL was designed


for new mass storage devices and other peripheral devices that require very high bandwidth. Using optical fiber to connect devices, FC-AL supports full-duplex data transfer rates of 100MB/sec. FC-AL is compatible with SCSI for high-performance storage systems.
FCC — Federal Communications Commission.
FCIP — Fibre Channel over IP. A network storage technology that combines the features of Fibre Channel and the Internet Protocol (IP) to connect distributed SANs over large distances. FCIP is considered a tunneling protocol, as it makes a transparent point-to-point connection between geographically separated SANs over IP networks. FCIP relies on TCP/IP services to establish connectivity between remote SANs over LANs, MANs, or WANs. An advantage of FCIP is that it can use TCP/IP as the transport while keeping Fibre Channel fabric services intact.
FCoE — Fibre Channel over Ethernet. An encapsulation of Fibre Channel frames over Ethernet networks.
FCP — Fibre Channel Protocol.
FC-P2P — Fibre Channel Point-to-Point.
FCSE — Flashcopy Space Efficiency.
FC-SW — Fibre Channel Switched.
FCU — File Conversion Utility.
FD — Floppy Disk or Floppy Drive.
FDDI — Fiber Distributed Data Interface.
FDR — Fast Dump/Restore.
FE — Field Engineer.
FED — (Channel) Front End Director.
FedRAMP — Federal Risk and Authorization Management Program.
Fibre Channel — A serial data transfer architecture developed by a consortium of computer and mass storage device manufacturers and now being standardized by ANSI. The most prominent Fibre Channel standard is Fibre Channel Arbitrated Loop (FC-AL).
FICON — Fiber Connectivity. A high-speed input/output (I/O) interface for mainframe computer connections to storage devices. As part of IBM's S/390 server, FICON channels increase I/O capacity through the combination of a new architecture and faster physical link rates to make them up to 8 times as efficient as ESCON (Enterprise System Connection), IBM's previous fiber optic channel standard.
FIPP — Fair Information Practice Principles. Guidelines for the collection and use of personal information created by the United States Federal Trade Commission (FTC).
FISMA — Federal Information Security Management Act of 2002. A major compliance and privacy protection law that applies to information systems and cloud computing. Enacted in the United States of America in 2002.
FLGFAN — Front Logic Box Fan Assembly.
FLOGIC Box — Front Logic Box.
FM — Flash Memory. Each microprocessor has FM. FM is non-volatile memory that contains microcode.
FOP — Fibre Optic Processor or fibre open.
FQDN — Fully Qualified Domain Name.
FPC — Failure Parts Code or Fibre Channel Protocol Chip.
FPGA — Field Programmable Gate Array.
Frames — An ordered vector of words that is the basic unit of data transmission in a Fibre Channel network.
Front end — In client/server applications, the client part of the program is often called the front end and the server part is called the back end.
FRU — Field Replaceable Unit.
FS — File System.
FSA — File System Module-A.
FSB — File System Module-B.
FSI — Financial Services Industries.
FSM — File System Module.
FSW — Fibre Channel Interface Switch PCB. A board that provides the physical interface (cable connectors) between the ACP ports and the disks housed in a given disk drive.
FTP — File Transfer Protocol. A client-server protocol that allows a user on 1 computer to transfer files to and from another computer over a TCP/IP network.
FWD — Fast Write Differential.
-back to top-


—G—
GA — General availability.
GARD — General Available Restricted Distribution.
Gb — Gigabit.
GB — Gigabyte.
Gb/sec — Gigabit per second.
GB/sec — Gigabyte per second.
GbE — Gigabit Ethernet.
Gbps — Gigabit per second.
GBps — Gigabyte per second.
GBIC — Gigabit Interface Converter.
GCMI — Global Competitive and Marketing Intelligence (Hitachi).
GDG — Generation Data Group.
GDPS — Geographically Dispersed Parallel Sysplex.
GID — Group Identifier within the UNIX security model.
gigE — Gigabit Ethernet.
GLM — Gigabyte Link Module.
Global Cache — Cache memory is used on demand by multiple applications. Use changes dynamically, as required for READ performance between hosts/applications/LUs.
GPFS — General Parallel File System.
GSC — Global Support Center.
GSI — Global Systems Integrator.
GSS — Global Solution Services.
GSSD — Global Solutions Strategy and Development.
GSW — Grid Switch Adapter. Also known as E Switch (Express Switch).
GUI — Graphical User Interface.
GUID — Globally Unique Identifier.
-back to top-
—H—
H1F — Essentially the floor-mounted disk rack (also called desk side) equivalent of the RK. (See also: RK, RKA, and H2F).
H2F — Essentially the floor-mounted disk rack (also called desk side) add-on equivalent similar to the RKA. There is a limitation of only 1 H2F that can be added to the core RK Floor Mounted unit. See also: RK, RKA, and H1F.
HA — High Availability.
Hadoop — Apache Hadoop is an open-source software framework for data storage and large-scale processing of data-sets on clusters of hardware.
HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
HBA — Host Bus Adapter. An I/O adapter that sits between the host computer's bus and the Fibre Channel loop and manages the transfer of information between the 2 channels. In order to minimize the impact on host processor performance, the host bus adapter performs many low-level interface functions automatically or with minimal processor involvement.
HCA — Host Channel Adapter.
HCD — Hardware Configuration Definition.
HD — Hard Disk.
HDA — Head Disk Assembly.
HDD — Hard Disk Drive. A spindle of hard disk platters that make up a hard drive, which is a unit of physical storage within a subsystem.
HDDPWR — Hard Disk Drive Power.
HDU — Hard Disk Unit. A number of hard drives (HDDs) grouped together within a subsystem.
Head — See read/write head.
Heterogeneous — The characteristic of containing dissimilar elements. A common use of this word in information technology is to describe a product as able to contain or be part of a "heterogeneous network," consisting of different manufacturers' products that can interoperate. Heterogeneous networks are made possible by standards-conforming hardware and software interfaces used in common by different products, thus allowing them to communicate with each other. The Internet itself is an example of a heterogeneous network.
HiCAM — Hitachi Computer Products America.

HIPAA — Health Insurance Portability and Accountability Act.
HIS — (1) High Speed Interconnect. (2) Hospital Information System (clinical and financial).
HiStar — Multiple point-to-point data paths to cache.
HL7 — Health Level 7.
HLQ — High-level Qualifier.
HLS — Healthcare and Life Sciences.
HLU — Host Logical Unit.
H-LUN — Host Logical Unit Number. See LUN.
HMC — Hardware Management Console.
Homogeneous — Of the same or similar kind.
Host — Also called a server. Basically a central computer that processes end-user applications or requests.
Host LU — Host Logical Unit. See also HLU.
Host Storage Domains — Allows host pooling at the LUN level, and the priority access feature lets administrators set service levels for applications.
HP — (1) Hewlett-Packard Company or (2) High Performance.
HPC — High Performance Computing.
HSA — Hardware System Area.
HSG — Host Security Group.
HSM — Hierarchical Storage Management (see Data Migrator).
HSN — Hierarchical Star Network.
HSSDC — High Speed Serial Data Connector.
HTTP — Hyper Text Transfer Protocol.
HTTPS — Hyper Text Transfer Protocol Secure.
Hub — A common connection point for devices in a network. Hubs are commonly used to connect segments of a LAN. A hub contains multiple ports. When a packet arrives at 1 port, it is copied to the other ports so that all segments of the LAN can see all packets. A switching hub actually reads the destination address of each packet and then forwards the packet to the correct port. Device to which nodes on a multi-point bus or loop are physically connected.
Hybrid Cloud — "Hybrid cloud computing refers to the combination of external public cloud computing services and internal resources (either a private cloud or traditional infrastructure, operations and applications) in a coordinated fashion to assemble a particular solution." — Source: Gartner Research.
Hybrid Network Cloud — A composition of 2 or more clouds (private, community or public). Each cloud remains a unique entity, but they are bound together. A hybrid network cloud includes an interconnection.
Hypervisor — Also called a virtual machine manager, a hypervisor is a hardware virtualization technique that enables multiple operating systems to run concurrently on the same computer. Hypervisors are often installed on server hardware and then run the guest operating systems that act as servers. Hypervisor can also refer to the interface that is provided by Infrastructure as a Service (IaaS) in cloud computing. Leading hypervisors include VMware vSphere Hypervisor™ (ESXi), Microsoft® Hyper-V and the Xen® hypervisor.
-back to top-
—I—
I/F — Interface.
I/O — Input/Output. Term used to describe any program, operation, or device that transfers data to or from a computer and to or from a peripheral device.
IaaS — Infrastructure as a Service. A cloud computing business model — delivering computer infrastructure, typically a platform virtualization environment, as a service, along with raw (block) storage and networking. Rather than purchasing servers, software, data center space or network equipment, clients buy those resources as a fully outsourced service. Providers typically bill such services on a utility computing basis; the amount of resources consumed (and therefore the cost) will typically reflect the level of activity.
IDE — Integrated Drive Electronics Advanced Technology. A standard designed to connect hard and removable disk drives.
IDN — Integrated Delivery Network.
iFCP — Internet Fibre Channel Protocol.
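The difference between a plain hub (copy every packet to all other ports) and a switching hub (forward only to the destination's port) described under "Hub" above can be sketched as follows. This is a minimal illustration, not real device firmware; the port numbers and MAC labels are invented for the example:

```python
# Sketch: a plain hub repeats a frame out of every port except the one
# it arrived on; a switching hub learns source addresses and forwards a
# frame only to the port where the destination was last seen.

def hub_forward(in_port, frame, ports):
    """Plain hub: copy the frame to all other ports."""
    return [p for p in ports if p != in_port]

class SwitchingHub:
    def __init__(self):
        self.mac_table = {}  # MAC address -> port it was last seen on

    def forward(self, in_port, src_mac, dst_mac, ports):
        self.mac_table[src_mac] = in_port          # learn where src lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]       # deliver to one port
        return [p for p in ports if p != in_port]  # unknown dst: flood

ports = [1, 2, 3, 4]
print(hub_forward(1, "frame", ports))          # [2, 3, 4]

sw = SwitchingHub()
sw.forward(1, "mac-A", "mac-B", ports)         # floods: mac-B unknown
print(sw.forward(2, "mac-B", "mac-A", ports))  # [1]: mac-A was learned
```

The "learn on source, flood on unknown destination" behavior is why a switching hub stops copying traffic to every LAN segment once it has seen each station transmit.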

Index Cache — Provides quick access to indexed data on the media during a browse/restore operation.
IBR — Incremental Block-level Replication or Intelligent Block Replication.
ICB — Integrated Cluster Bus.
ICF — Integrated Coupling Facility.
ID — Identifier.
IDR — Incremental Data Replication.
iFCP — Internet Fibre Channel Protocol. Allows an organization to extend Fibre Channel storage networks over the Internet by using TCP/IP. TCP is responsible for managing congestion control as well as error detection and recovery services. iFCP allows an organization to create an IP SAN fabric that minimizes the Fibre Channel fabric component and maximizes use of the company's TCP/IP infrastructure.
IFL — Integrated Facility for LINUX.
IHE — Integrating the Healthcare Enterprise.
IID — Initiator ID.
IIS — Internet Information Server.
ILM — Information Life Cycle Management.
ILO — (Hewlett-Packard) Integrated Lights-Out.
IML — Initial Microprogram Load.
IMS — Information Management System.
In-Band Virtualization — Refers to the location of the storage network path, between the application host servers and the storage systems. Provides both control and data along the same connection path. Also called symmetric virtualization.
INI — Initiator.
Interface — The physical and logical arrangement supporting the attachment of any device to a connector or to another device.
Internal Bus — Another name for an internal data bus. Also, an expansion bus is often referred to as an internal bus.
Internal Data Bus — A bus that operates only within the internal circuitry of the CPU, communicating among the internal caches of memory that are part of the CPU chip's design. This bus is typically rather quick and is independent of the rest of the computer's operations.
IOC — I/O controller.
IOCDS — I/O Control Data Set.
IODF — I/O Definition file.
IOPH — I/O per hour.
IOPS — I/O per second.
IOS — I/O Supervisor.
IOSQ — Input/Output Subsystem Queue.
IP — Internet Protocol. The communications protocol that routes traffic across the Internet.
IPv6 — Internet Protocol Version 6. The latest revision of the Internet Protocol (IP).
IPL — Initial Program Load.
IPSEC — IP security.
IRR — Internal Rate of Return.
ISC — Initial shipping condition or Inter-System Communication.
iSCSI — Internet SCSI. Pronounced eye skuzzy. An IP-based standard for linking data storage devices over a network and transferring data by carrying SCSI commands over IP networks.
ISE — Integrated Scripting Environment.
iSER — iSCSI Extensions for RDMA.
ISL — Inter-Switch Link.
iSNS — Internet Storage Name Service.
ISOE — iSCSI Offload Engine.
ISP — Internet service provider.
ISPF — Interactive System Productivity Facility.
ISPF/PDF — Interactive System Productivity Facility/Program Development Facility.
ISV — Independent Software Vendor.
ITaaS — IT as a Service. A cloud computing business model. This general model is an umbrella model that entails the SPI business model (SaaS, PaaS and IaaS — Software, Platform and Infrastructure as a Service).
ITSC — Information and Telecommunications Systems Companies.
-back to top-
—J—
Java — A widely accepted, open systems programming language. Hitachi's enterprise software products are all accessed using Java applications. This enables storage administrators to access the Hitachi

enterprise software products from any PC or workstation that runs a supported thin-client internet browser application and that has TCP/IP network access to the computer on which the software product runs.
Java VM — Java Virtual Machine.
JBOD — Just a Bunch of Disks.
JCL — Job Control Language.
JMP — Jumper. Option setting method.
JMS — Java Message Service.
JNL — Journal.
JNLG — Journal Group.
JRE — Java Runtime Environment.
JVM — Java Virtual Machine.
J-VOL — Journal Volume.
-back to top-
—K—
KSDS — Key Sequence Data Set.
kVA — Kilovolt Ampere.
KVM — Kernel-based Virtual Machine or Keyboard-Video Display-Mouse.
kW — Kilowatt.
-back to top-
—L—
LACP — Link Aggregation Control Protocol.
LAG — Link Aggregation Groups.
LAN — Local Area Network. A communications network that serves clients within a geographical area, such as a building.
LBA — Logical block address. A 28-bit value that maps to a specific cylinder-head-sector address on the disk.
LC — Lucent connector. Fibre Channel connector that is smaller than a simplex connector (SC).
LCDG — Link Processor Control Diagnostics.
LCM — Link Control Module.
LCP — Link Control Processor. Controls the optical links. LCP is located in the LCM.
LCSS — Logical Channel Subsystems.
LCU — Logical Control Unit.
LD — Logical Device.
LDAP — Lightweight Directory Access Protocol.
LDEV — Logical Device or Logical Device (number). A set of physical disk partitions (all or portions of 1 or more disks) that are combined so that the subsystem sees and treats them as a single area of data storage. Also called a volume. An LDEV has a specific and unique address within a subsystem. LDEVs become LUNs to an open-systems host.
LDKC — Logical Disk Controller or Logical Disk Controller Manual.
LDM — Logical Disk Manager.
LDS — Linear Data Set.
LED — Light Emitting Diode.
LFF — Large Form Factor.
LIC — Licensed Internal Code.
LIS — Laboratory Information Systems.
LLQ — Lowest Level Qualifier.
LM — Local Memory.
LMODs — Load Modules.
LNKLST — Link List.
Load balancing — The process of distributing processing and communications activity evenly across a computer network so that no single device is overwhelmed. Load balancing is especially important for networks where it is difficult to predict the number of requests that will be issued to a server. If 1 server starts to be swamped, requests are forwarded to another server with more capacity. Load balancing can also refer to the communications channels themselves.
LOC — "Locations" section of the Maintenance Manual.
Logical DKC (LDKC) — Logical Disk Controller Manual. An internal architecture extension to the Control Unit addressing scheme that allows more LDEVs to be identified within 1 Hitachi enterprise storage system.
Longitudinal record — Patient information from birth to death.
LPAR — Logical Partition (mode).
LR — Local Router.
LRECL — Logical Record Length.
LRP — Local Router Processor.
LRU — Least Recently Used.
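The cylinder-head-sector mapping mentioned under "LBA" above follows the standard CHS-to-LBA formula. A small sketch; the geometry constants below are illustrative, since a real drive reports its own heads-per-cylinder and sectors-per-track values:

```python
# Sketch: standard CHS <-> LBA translation.
# LBA = (C * heads_per_cylinder + H) * sectors_per_track + (S - 1)
# Sector numbers traditionally start at 1; cylinders and heads at 0.

HEADS = 16     # heads per cylinder (illustrative geometry)
SECTORS = 63   # sectors per track (illustrative geometry)

def chs_to_lba(c, h, s):
    return (c * HEADS + h) * SECTORS + (s - 1)

def lba_to_chs(lba):
    c, rem = divmod(lba, HEADS * SECTORS)
    h, s = divmod(rem, SECTORS)
    return c, h, s + 1

lba = chs_to_lba(2, 3, 4)
print(lba)               # 2208
print(lba_to_chs(lba))   # (2, 3, 4)
```

With a 28-bit LBA, as the entry notes, the addressable range is 2^28 sectors, which at 512 bytes per sector is the historical 137 GB ATA limit.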

LSS — Logical Storage Subsystem (equivalent to LCU).
LU — Logical Unit. Mapping number of an LDEV.
LUN — Logical Unit Number. 1 or more LDEVs. Used only for open systems.
LUSE — Logical Unit Size Expansion. Feature used to create virtual LUs that are up to 36 times larger than the standard OPEN-x LUs.
LVDS — Low Voltage Differential Signal.
LVI — Logical Volume Image. Identifies a similar concept (as LUN) in the mainframe environment.
LVM — Logical Volume Manager.
-back to top-
—M—
MAC — Media Access Control. A MAC address is a unique identifier attached to most forms of networking equipment.
MAID — Massive Array of Idle Disks.
MAN — Metropolitan Area Network. A communications network that generally covers a city or suburb. MAN is very similar to a LAN except it spans across a geographical region such as a state. Instead of the workstations in a LAN, the workstations in a MAN could depict different cities in a state. For example, the state of Texas could have: Dallas, Austin, San Antonio. Each city could be a separate LAN, with all the cities connected together via a switch. This topology would indicate a MAN.
MAPI — Management Application Programming Interface.
Mapping — Conversion between 2 data addressing spaces. For example, mapping refers to the conversion between physical disk block addresses and the block addresses of the virtual disks presented to operating environments by control software.
Mb — Megabit.
MB — Megabyte.
MBA — Memory Bus Adaptor.
MBUS — Multi-CPU Bus.
MC — Multi Cabinet.
MCU — Main Control Unit, Master Control Unit, Main Disk Control Unit or Master Disk Control Unit. The local CU of a remote copy pair.
MDPL — Metadata Data Protection Level.
MediaAgent — The workhorse for all data movement. MediaAgent facilitates the transfer of data between the data source, the client computer, and the destination storage media.
Metadata — In database management systems, data files are the files that store the database information; whereas other files, such as index files and data dictionaries, store administrative information, known as metadata.
MFC — Main Failure Code.
MG — (1) Module Group. 2 (DIMM) cache memory modules that work together. (2) Migration Group. A group of volumes to be migrated together.
MGC — (3-Site) Metro/Global Mirror.
MIB — Management Information Base. A database of objects that can be monitored by a network management system. Both SNMP and RMON use standardized MIB formats that allow any SNMP and RMON tools to monitor any device defined by a MIB.
Microcode — The lowest-level instructions that directly control a microprocessor. A single machine-language instruction typically translates into several microcode instructions. [Diagram: abstraction layers, from high-level languages (Fortran, Pascal, C) down through assembly language and machine language to hardware.]
Microprogram — See Microcode.
MIF — Multiple Image Facility.
Mirror Cache OFF — Increases cache efficiency over cache data redundancy.
M-JNL — Primary journal volumes.
MM — Maintenance Manual.
MMC — Microsoft Management Console.
Mode — The state or setting of a program or device. The term mode implies a choice, which is that you can change the setting and put the system in a different mode.

MP — Microprocessor.
MPA — Microprocessor adapter.
MPB — Microprocessor board.
MPI — (Electronic) Master Patient Identifier. Also known as EMPI.
MPIO — Multipath I/O.
MP PK — MP Package.
MPU — Microprocessor Unit.
MQE — Metadata Query Engine (Hitachi).
MS/SG — Microsoft Service Guard.
MSCS — Microsoft Cluster Server.
MSS — (1) Multiple Subchannel Set. (2) Managed Security Services.
MTBF — Mean Time Between Failure.
MTS — Multitiered Storage.
Multitenancy — In cloud computing, multitenancy is a secure way to partition the infrastructure (application, storage pool and network) so multiple customers share a single resource pool. Multitenancy is one of the key ways cloud can achieve massive economy of scale.
M-VOL — Main Volume.
MVS — Multiple Virtual Storage.
-back to top-
—N—
NAS — Network Attached Storage. A disk array connected to a controller that gives access to a LAN Transport. It handles data at the file level.
NAT — Network Address Translation.
NDMP — Network Data Management Protocol. A protocol meant to transport data between NAS devices.
NetBIOS — Network Basic Input/Output System.
Network — A computer system that allows sharing of resources, such as files and peripheral hardware devices.
Network Cloud — A communications network. The word "cloud" by itself may refer to any local area network (LAN) or wide area network (WAN). The terms "computing" and "cloud computing" refer to services offered on the public Internet or to a private network that uses the same protocols as a standard network. See also cloud computing.
NFS protocol — Network File System is a protocol that allows a computer to access files over a network as easily as if they were on its local disks.
NIM — Network Interface Module.
NIS — Network Information Service (originally called the Yellow Pages or YP).
NIST — National Institute of Standards and Technology. A standards organization active in cloud computing.
NLS — Native Language Support.
Node — An addressable entity connected to an I/O bus or network, used primarily to refer to computers, storage devices and storage subsystems. The component of a node that connects to the bus or network is a port.
Node name — A Name_Identifier associated with a node.
NPV — Net Present Value.
NRO — Network Recovery Objective.
NTP — Network Time Protocol.
NVS — Non Volatile Storage.
-back to top-
—O—
OASIS — Organization for the Advancement of Structured Information Standards.
OCC — Open Cloud Consortium. A standards organization active in cloud computing.
OEM — Original Equipment Manufacturer.
OFC — Open Fibre Control.
OGF — Open Grid Forum. A standards organization active in cloud computing.
OID — Object identifier.
OLA — Operating Level Agreements.
OLTP — On-Line Transaction Processing.
OLTT — Open-loop throughput throttling.
OMG — Object Management Group. A standards organization active in cloud computing.
On/Off CoD — On/Off Capacity on Demand.
ONODE — Object node.
OpenStack — An open source project to provide orchestration and provisioning for cloud environments based on a variety of different hypervisors.

OPEX — Operational Expenditure. This is an operating expense, operating expenditure, operational expense, or operational expenditure, which is an ongoing cost for running a product, business, or system. Its counterpart is a capital expenditure (CAPEX).
ORM — Online Read Margin.
OS — Operating System.
Out-of-Band Virtualization — Refers to systems where the controller is located outside of the SAN data path. Separates control and data on different connection paths. Also called asymmetric virtualization.
-back to top-
—P—
P-2-P — Point to Point. Also P-P.
PaaS — Platform as a Service. A cloud computing business model — delivering a computing platform and solution stack as a service. PaaS offerings facilitate deployment of applications without the cost and complexity of buying and managing the underlying hardware, software and provisioning hosting capabilities. PaaS provides all of the facilities required to support the complete life cycle of building and delivering web applications and services entirely from the Internet.
PACS — Picture Archiving and Communication System.
PAN — Personal Area Network. A communications network that transmits data wirelessly over a short distance. Bluetooth and Wi-Fi Direct are examples of personal area networks.
PAP — Password Authentication Protocol.
Parity — A technique of checking whether data has been lost or written over when it is moved from one place in storage to another or when it is transmitted between computers.
Parity Group — Also called an array group. This is a group of hard disk drives (HDDs) that form the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Partitioned cache memory — Separate workloads in a "storage consolidated" system by dividing cache into individually managed multiple partitions. Then customize the partition to match the I/O characteristics of assigned LUs.
PAT — Port Address Translation.
PATA — Parallel ATA.
Path — Also referred to as a transmission channel, the path between 2 nodes of a network that a data communication follows. The term can refer to the physical cabling that connects the nodes on a network, the signal that is communicated over the pathway or a subchannel in a carrier frequency.
Path failover — See Failover.
PAV — Parallel Access Volumes.
PAWS — Protect Against Wrapped Sequences.
PB — Petabyte.
PBC — Port Bypass Circuit.
PCB — Printed Circuit Board.
PCHIDS — Physical Channel Path Identifiers.
PCI — Power Control Interface.
PCI CON — Power Control Interface Connector Board.
PCI DSS — Payment Card Industry Data Security Standard.
PCIe — Peripheral Component Interconnect Express.
PD — Product Detail.
PDEV — Physical Device.
PDM — Policy based Data Migration or Primary Data Migrator.
PDS — Partitioned Data Set.
PDSE — Partitioned Data Set Extended.
Performance — Speed of access or the delivery of information.
Petabyte (PB) — A measurement of capacity — the amount of data that a drive or storage system can store after formatting. 1PB = 1,024TB.
PFA — Predictive Failure Analysis.
PFTaaS — Private File Tiering as a Service. A cloud computing business model.
PGP — Pretty Good Privacy. A data encryption and decryption computer program used for increasing the security of email communications.
PGR — Persistent Group Reserve.
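The single-bit check described under "Parity" above can be sketched in a few lines. This is a minimal even-parity illustration; the payload bytes are invented for the example:

```python
# Sketch: even parity. The sender appends a parity bit so the total
# count of 1-bits (payload + parity) is even; the receiver recomputes
# the bit to detect that a single bit was lost or flipped in transit.

def parity_bit(data: bytes) -> int:
    ones = sum(bin(b).count("1") for b in data)
    return ones % 2                  # 1 if the count of 1-bits is odd

def send(data: bytes):
    return data, parity_bit(data)    # transmit payload plus parity

def check(data: bytes, p: int) -> bool:
    return parity_bit(data) == p     # recompute and compare

payload, p = send(b"\x0f\x01")       # five 1-bits -> parity bit is 1
print(check(payload, p))             # True
print(check(b"\x0f\x00", p))         # False: a bit was lost in transit
```

A single parity bit detects any odd number of flipped bits but cannot say which bit changed; schemes that need correction, such as the RAID parity described later in this glossary, combine parity with knowledge of which device failed.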

PI — Product Interval.
PIR — Performance Information Report.
PiT — Point-in-Time.
PK — Package (see PCB).
PL — Platter. The circular disk on which the magnetic data is stored. Also called motherboard or backplane.
PM — Package Memory.
POC — Proof of concept.
Port — In TCP/IP and UDP networks, an endpoint to a logical connection. The port number identifies what type of port it is. For example, port 80 is used for HTTP traffic.
POSIX — Portable Operating System Interface for UNIX. A set of standards that defines an application programming interface (API) for software designed to run under heterogeneous operating systems.
PP — Program product.
P-P — Point-to-point; also P2P.
PPRC — Peer-to-Peer Remote Copy.
Private Cloud — A type of cloud computing defined by shared capabilities within a single company; modest economies of scale and less automation. Infrastructure and data reside inside the company's data center behind a firewall. Comprised of licensed software tools rather than on-going services. Example: An organization implements its own virtual, scalable cloud and business units are charged on a per use basis.
Private Network Cloud — A type of cloud network with 3 characteristics: (1) Operated solely for a single organization, (2) Managed internally or by a third-party, (3) Hosted internally or externally.
PR/SM — Processor Resource/System Manager.
Protocol — A convention or standard that enables the communication between 2 computing endpoints. In its simplest form, a protocol can be defined as the rules governing the syntax, semantics and synchronization of communication. Protocols may be implemented by hardware, software or a combination of the 2. At the lowest level, a protocol defines the behavior of a hardware connection.
Provisioning — The process of allocating storage resources and assigning storage capacity for an application, usually in the form of server disk drive space, in order to optimize the performance of a storage area network (SAN). Traditionally, this has been done by the SAN administrator, and it can be a tedious process. In recent years, automated storage provisioning (also called auto-provisioning) programs have become available. These programs can reduce the time required for the storage provisioning process, and can free the administrator from the often distasteful task of performing this chore manually.
PS — Power Supply.
PSA — Partition Storage Administrator.
PSSC — Perl Silicon Server Control.
PSU — Power Supply Unit.
PTAM — Pickup Truck Access Method.
PTF — Program Temporary Fixes.
PTR — Pointer.
PU — Processing Unit.
Public Cloud — Resources, such as applications and storage, available to the general public over the Internet.
P-VOL — Primary Volume.
-back to top-
—Q—
QD — Quorum Device.
QDepth — The number of I/O operations that can run in parallel on a SAN device; also WWN QDepth.
QoS — Quality of Service. In the field of computer networking, the traffic engineering term quality of service (QoS) refers to resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.
QSAM — Queued Sequential Access Method.
-back to top-
—R—
RACF — Resource Access Control Facility.
RAID — Redundant Array of Independent Disks, or Redundant Array of Inexpensive Disks. A

group of disks that look like a single volume to the server. RAID improves performance by pulling a single stripe of data from multiple disks, and improves fault-tolerance either through mirroring or parity checking, and it is a component of a customer's SLA.
RAID-0 — Striped array with no parity.
RAID-1 — Mirrored array and duplexing.
RAID-3 — Striped array with typically non-rotating parity, optimized for long, single-threaded transfers.
RAID-4 — Striped array with typically non-rotating parity, optimized for short, multi-threaded transfers.
RAID-5 — Striped array with typically rotating parity, optimized for short, multithreaded transfers.
RAID-6 — Similar to RAID-5, but with dual rotating parity physical disks, tolerating 2 physical disk failures.
RAIN — Redundant (or Reliable) Array of Independent Nodes (architecture).
RAM — Random Access Memory.
RAM DISK — A LUN held entirely in the cache area.
RAS — Reliability, Availability, and Serviceability or Row Address Strobe.
RBAC — Role Base Access Control.
RC — (1) Reference Code or (2) Remote Control.
RCHA — RAID Channel Adapter.
RCP — Remote Control Processor.
RCU — Remote Control Unit or Remote Disk Control Unit.
RCUT — RCU Target.
RD/WR — Read/Write.
RDM — Raw Disk Mapped.
RDMA — Remote Direct Memory Access.
RDP — Remote Desktop Protocol.
RDW — Record Descriptor Word.
Read/Write Head — Reads and writes data to the platters; typically there is 1 head per platter side, and each head is attached to a single actuator shaft.
RECFM — Record Format.
Redundant — Describes the computer or network system components, such as fans, hard disk drives, servers, operating systems, switches, and telecommunication links that are installed to back up primary resources in case they fail. A well-known example of a redundant system is the redundant array of independent disks (RAID). Redundancy contributes to the fault tolerance of a system.
Redundancy — Backing up a component to help ensure high availability.
Reliability — (1) Level of assurance that data will not be lost or degraded over time. (2) An attribute of any computer component (software, hardware or a network) that consistently performs according to its specifications.
REST — Representational State Transfer.
REXX — Restructured extended executor.
RID — Relative Identifier that uniquely identifies a user or group within a Microsoft Windows domain.
RIS — Radiology Information System.
RISC — Reduced Instruction Set Computer.
RIU — Radiology Imaging Unit.
R-JNL — Secondary journal volumes.
RK — Rack additional.
RKAJAT — Rack Additional SATA disk tray.
RKAK — Expansion unit.
RLGFAN — Rear Logic Box Fan Assembly.
RLOGIC BOX — Rear Logic Box.
RMF — Resource Measurement Facility.
RMI — Remote Method Invocation. A way that a programmer, using the Java programming language and development environment, can write object-oriented programming in which objects on different computers can interact in a distributed network. RMI is the Java version of what is generally known as an RPC (remote procedure call), but with the ability to pass 1 or more objects along with the request.
RndRD — Random read.
ROA — Return on Asset.
RoHS — Restriction of Hazardous Substances (in Electrical and Electronic Equipment).
ROI — Return on Investment.
ROM — Read Only Memory.
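The parity-based fault tolerance described under RAID and RAID-5 above rests on the XOR operation: the parity block of a stripe is the XOR of its data blocks, so XOR-ing the surviving blocks regenerates a lost one. A minimal sketch; the two-byte block contents are invented for the example:

```python
# Sketch: RAID-5-style reconstruction. Parity P = D1 ^ D2 ^ D3, so if
# one data block is lost, XOR-ing the survivors with P rebuilds it.

from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple)
                 for byte_tuple in zip(*blocks))

stripe = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]  # 3 data disks
parity = xor_blocks(stripe)                        # the parity disk

# Disk holding stripe[1] fails; rebuild it from the survivors + parity.
survivors = [stripe[0], stripe[2], parity]
rebuilt = xor_blocks(survivors)
print(rebuilt == stripe[1])                        # True
```

Because XOR only identifies the missing value once the failed device is known, a single parity block tolerates 1 disk failure (RAID-5); tolerating 2 failures (RAID-6) requires a second, independently computed parity block.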

Round robin mode — A load balancing technique which distributes data packets equally among the available paths. Round robin DNS is usually used for balancing the load of geographically distributed Web servers. It works on a rotating basis in that one server IP address is handed out, then moves to the back of the list; the next server IP address is handed out, and then it moves to the end of the list; and so on, depending on the number of servers being used. This works in a looping fashion.
Router — A computer networking device that forwards data packets toward their destinations, through a process known as routing.
RPC — Remote procedure call.
RPO — Recovery Point Objective. The point in time that recovered data should match.
RPSFAN — Rear Power Supply Fan Assembly.
RRDS — Relative Record Data Set.
RS CON — RS232C/RS422 Interface Connector.
RSD — RAID Storage Division (of Hitachi).
R-SIM — Remote Service Information Message.
RSM — Real Storage Manager.
RTM — Recovery Termination Manager.
RTO — Recovery Time Objective. The length of time that can be tolerated between a disaster and recovery of data.
R-VOL — Remote Volume.
R/W — Read/Write.
-back to top-
—S—
SA — Storage Administrator.
SA z/OS — System Automation for z/OS.
SAA — Share Access Authentication. The process of restricting a user's rights to a file system object by combining the security descriptors from both the file system object itself and the share to which the user is connected.
SaaS — Software as a Service. A cloud computing business model. SaaS is a software delivery model in which software and its associated data are hosted centrally in a cloud and are typically accessed by users using a thin client, such as a web browser via the Internet. SaaS has become a common delivery model for most business applications, including accounting (CRM and ERP), invoicing (HRM), content management (CM) and service desk management, just to name the most common software that runs in the cloud. This is the fastest growing service in the cloud market today. SaaS performs best for relatively simple tasks in IT-constrained organizations.
SACK — Sequential Acknowledge.
SACL — System ACL. The part of a security descriptor that stores system auditing information.
SAIN — SAN-attached Array of Independent Nodes (architecture).
SAN — Storage Area Network. A network linking computing devices to disk or tape arrays and other devices over Fibre Channel. It handles data at the block level.
SAP — (1) System Assist Processor (for I/O processing), or (2) a German software company.
SAP HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
SARD — System Assurance Registration Document.
SAS — Serial Attached SCSI.
SATA — Serial ATA. Serial Advanced Technology Attachment is a new standard for connecting hard drives into computer systems. SATA is based on serial signaling technology, unlike current IDE (Integrated Drive Electronics) hard drives that use parallel signaling.
SBM — Solutions Business Manager.
SBOD — Switched Bunch of Disks.
SBSC — Smart Business Storage Cloud.
SBX — Small Box (Small Form Factor).
SC — (1) Simplex connector. Fibre Channel connector that is larger than a Lucent connector (LC). (2) Single Cabinet.
SCM — Supply Chain Management.
SCP — Secure Copy.
SCSI — Small Computer Systems Interface. A parallel bus architecture and a protocol for transmitting large data blocks up to a distance of 15 to 25 meters.
SD — Software Division (of Hitachi).

SDH — Synchronous Digital Hierarchy.
SDM — System Data Mover.
SDO — Standards Development Organizations (a general category).
SDSF — Spool Display and Search Facility.
Sector — A subdivision of a track of a magnetic disk that stores a fixed amount of data.
SEL — System Event Log.
Selectable Segment Size — Can be set per partition.
Selectable Stripe Size — Increases performance by customizing the disk access size.
SENC — The SATA (Serial ATA) version of the ENC. ENCs and SENCs are complete microprocessor systems in their own right and occasionally require a firmware upgrade.
SeqRD — Sequential read.
Serial Transmission — The transmission of data bits in sequential order over a single line.
Server — A central computer that processes end-user applications or requests; also called a host.
Server Virtualization — The masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. The implementation of multiple isolated virtual environments in one physical server.
Service-level Agreement — SLA. A contract between a network service provider and a customer that specifies, usually in measurable terms, what services the network service provider will furnish. Many Internet service providers (ISPs) provide their customers with an SLA. More recently, IT departments in major enterprises have adopted the idea of writing a service-level agreement so that services for their customers (users in other departments within the enterprise) can be measured, justified, and perhaps compared with those of outsourcing network providers.
Some metrics that SLAs may specify include:
• The percentage of the time services will be available
• The number of users that can be served simultaneously
• Specific performance benchmarks to which actual performance will be periodically compared
• The schedule for notification in advance of network changes that may affect users
• Help desk response time for various classes of problems
• Dial-in access availability
• Usage statistics that will be provided
Service-Level Objective — SLO. Individual performance metrics built into an SLA. Each SLO corresponds to a single performance characteristic relevant to the delivery of an overall service. Some examples of SLOs include system availability, help desk incident resolution time, and application response time.
SES — SCSI Enclosure Services.
SFF — Small Form Factor.
SFI — Storage Facility Image.
SFM — Sysplex Failure Management.
SFP — Small Form-Factor Pluggable module host connector. A specification for a generation of optical modular transceivers. The devices are designed for use with small form factor (SFF) connectors, offer high speed and physical compactness, and are hot-swappable.
SHSN — Shared memory Hierarchical Star Network.
SID — Security Identifier. A user or group identifier within the Microsoft Windows security model.
SIGP — Signal Processor.
SIM — (1) Service Information Message. A message reporting an error that contains fix guidance information. (2) Storage Interface Module. (3) Subscriber Identity Module.
SIM RC — Service (or system) Information Message Reference Code.
SIMM — Single In-line Memory Module.
SLA — Service Level Agreement.
SLO — Service Level Objective.
SLRP — Storage Logical Partition.
SM — Shared Memory or Shared Memory Module. Stores the shared information about the subsystem and the cache control information (director names). This type of information is

used for the exclusive control of the subsystem. Like CACHE, shared memory is controlled as 2 areas of memory and is fully nonvolatile (sustained for approximately 7 days).
SM PATH — Shared Memory Access Path. The access path from the processors of the CHA and DKA PCBs to Shared Memory.
SMB/CIFS — Server Message Block Protocol/Common Internet File System.
SMC — Shared Memory Control.
SME — Small and Medium Enterprise.
SMF — System Management Facility.
SMI-S — Storage Management Initiative Specification.
SMP — Symmetric Multiprocessing.
SMP/E — System Modification Program/Extended. An IBM-licensed program used to install software and software changes on z/OS systems.
SMS — System Managed Storage.
SMTP — Simple Mail Transfer Protocol.
SMU — System Management Unit.
Snapshot Image — A logical duplicated volume (V-VOL) of the primary volume. It is an internal volume intended for restoration.
SNIA — Storage Networking Industry Association. An association of producers and consumers of storage networking products, whose goal is to further storage networking technology and applications. Active in cloud computing.
SNMP — Simple Network Management Protocol. A TCP/IP protocol that was designed for management of networks over TCP/IP, using agents and stations.
SOA — Service Oriented Architecture.
SOAP — Simple Object Access Protocol. A way for a program running in one kind of operating system (such as Windows 2000) to communicate with a program in the same or another kind of operating system (such as Linux) by using the World Wide Web's Hypertext Transfer Protocol (HTTP) and its Extensible Markup Language (XML) as the mechanisms for information exchange.
Socket — In UNIX and some other operating systems, a socket is a software object that connects an application to a network protocol. In UNIX, for example, a program can send and receive TCP/IP messages by opening a socket and reading and writing data to and from the socket. This simplifies program development because the programmer need only worry about manipulating the socket and can rely on the operating system to actually transport messages across the network correctly. Note that a socket in this sense is completely soft; it is a software object, not a physical component.
SOM — System Option Mode.
SONET — Synchronous Optical Network.
SOSS — Service Oriented Storage Solutions.
SPaaS — SharePoint as a Service. A cloud computing business model.
SPAN — A section between 2 intermediate supports. See Storage pooling.
Spare — An object reserved for the purpose of substitution for a like object in case of that object's failure.
SPC — SCSI Protocol Controller.
SpecSFS — Standard Performance Evaluation Corporation Shared File system.
SPECsfs97 — Standard Performance Evaluation Corporation (SPEC) System File Server (sfs) benchmark, developed in 1997 (97).
SPI model — Software, Platform and Infrastructure as a service. A common term to describe the cloud computing "as a service" business model.
SRA — Storage Replicator Adapter.
SRDF/A — (EMC) Symmetrix Remote Data Facility Asynchronous.
SRDF/S — (EMC) Symmetrix Remote Data Facility Synchronous.
SRM — Site Recovery Manager.
SSB — Sense Byte.
SSC — SiliconServer Control.
SSCH — Start Subchannel.
SSD — Solid-State Drive or Solid-State Disk.
SSH — Secure Shell.
SSID — Storage Subsystem ID or Subsystem Identifier.
SSL — Secure Sockets Layer.
SSPC — System Storage Productivity Center.
SSUE — Split Suspended Error.
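The Socket entry above notes that a program needs only open a socket and read and write it, leaving the actual transport to the operating system. A minimal loopback sketch in Python (standard library only; the message is arbitrary and port 0 asks the OS for any free port, so nothing depends on local configuration):

```python
import socket
import threading

# The server thread and the client each manipulate only a socket
# object; the OS transports the bytes between them.

def serve(listener):
    conn, _ = listener.accept()          # wait for one client connection
    conn.sendall(conn.recv(1024).upper())  # read request, write reply
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # port 0: OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]

server = threading.Thread(target=serve, args=(listener,))
server.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
server.join()
listener.close()
print(reply)                             # b'HELLO'
```

Neither side deals with TCP segments, retransmission, or checksums; those are handled by the operating system, as the entry describes.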

SSUS — Split Suspend.
SSVP — Sub Service Processor. Interfaces the SVP to the DKC.
SSW — SAS Switch.
Sticky Bit — Extended UNIX mode bit that prevents objects from being deleted from a directory by anyone other than the object's owner, the directory's owner, or the root user.
Storage pooling — The ability to consolidate and manage storage resources across storage system enclosures so that the consolidation of many appears as a single view.
STP — Server Time Protocol.
STR — Storage and Retrieval Systems.
Striping — A RAID technique for writing a file to multiple disks on a block-by-block basis, with or without parity.
Subsystem — Hardware or software that performs a specific function within a larger system.
SVC — Supervisor Call Interruption.
SVC Interrupts — Supervisor calls.
S-VOL — (1) (ShadowImage) Source Volume for In-System Replication, or (2) (Universal Replicator) Secondary Volume.
SVP — Service Processor. A laptop computer mounted on the control frame (DKC) and used for monitoring, maintenance, and administration of the subsystem.
Switch — A fabric device providing full bandwidth per port and high-speed routing of data via link-level addressing.
SWPX — Switching power supply.
SXP — SAS Expander.
Symmetric Virtualization — See In-Band Virtualization.
Synchronous — Operations that have a fixed time relationship to each other. Most commonly used to denote I/O operations that occur in time sequence; that is, a successor operation does not occur until its predecessor is complete.
-back to top-
—T—
Target — The system component that receives a SCSI I/O command; an open device that operates at the request of the initiator.
TB — Terabyte. 1TB = 1,024GB.
TCDO — Total Cost of Data Ownership.
TCO — Total Cost of Ownership.
TCG — Trusted Computing Group.
TCP/IP — Transmission Control Protocol over Internet Protocol.
TDCONV — Trace Dump Converter. A software program that is used to convert traces taken on the system into readable text. This information is loaded into a special spreadsheet that allows for further investigation of the data and more in-depth failure analysis.
TDMF — Transparent Data Migration Facility.
Telco or TELCO — Telecommunications Company.
TEP — Tivoli Enterprise Portal.
Terabyte (TB) — A measurement of capacity, data or data storage. 1TB = 1,024GB.
TFS — Temporary File System.
TGTLIBs — Target Libraries.
THF — Front Thermostat.
Thin Provisioning — Allows storage space to be easily allocated to servers on a just-enough and just-in-time basis.
THR — Rear Thermostat.
Throughput — The amount of data transferred from 1 place to another or processed in a specified amount of time. Data transfer rates for disk drives and networks are measured in terms of throughput. Typically, throughputs are measured in kb/sec, Mb/sec and Gb/sec.
TID — Target ID.
Tiered Storage — A storage strategy that matches data classification to storage metrics. Tiered storage is the assignment of different categories of data to different types of storage media in order to reduce total storage cost. Categories may be based on levels of protection needed, performance requirements, frequency of use, and other considerations. Since assigning data to particular media may be an ongoing and complex activity, some vendors provide software for automatically managing the process based on a company-defined policy.
Tiered Storage Promotion — Moving data between tiers of storage as their availability requirements change.
TLS — Tape Library System.
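The Striping entry above describes writing data to multiple disks on a block-by-block basis. As an illustration only, the round-robin mapping used by a parity-less stripe (RAID-0 style) can be sketched like this; the 4-disk count is an invented example, not a product configuration:

```python
# Block-level striping without parity: logical block b lands on
# disk (b mod disks) at stripe row (b div disks).

def stripe_location(block, disks=4):
    """Return (disk index, stripe row) holding a logical block."""
    return block % disks, block // disks

layout = [stripe_location(b) for b in range(8)]
print(layout)
# [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)]
```

Reading the data back reverses the same mapping, which is why striping improves throughput: consecutive blocks can be fetched from different disks in parallel.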

TLS — Transport Layer Security.
TMP — Temporary or Test Management Program.
TOD (or ToD) — Time Of Day.
TOE — TCP Offload Engine.
Topology — The shape of a network or how it is laid out. Topologies are either physical or logical.
TPC-R — Tivoli Productivity Center for Replication.
TPF — Transaction Processing Facility.
TPOF — Tolerable Points of Failure.
Track — Circular segment of a hard disk or other storage media.
Transfer Rate — See Data Transfer Rate.
Trap — A program interrupt, usually an interrupt caused by some exceptional situation in the user program. In most cases, the operating system performs some action and then returns control to the program.
TSC — Tested Storage Configuration.
TSO — Time Sharing Option.
TSO/E — Time Sharing Option/Extended.
T-VOL — (ShadowImage) Target Volume for In-System Replication.
-back to top-
—U—
UA — Unified Agent.
UBX — Large Box (Large Form Factor).
UCB — Unit Control Block.
UDP — User Datagram Protocol. One of the core protocols of the Internet protocol suite. Using UDP, programs on networked computers can send short messages known as datagrams to one another.
UFA — UNIX File Attributes.
UID — User Identifier within the UNIX security model.
UPS — Uninterruptible Power Supply. A power supply that includes a battery to maintain power in the event of a power outage.
UR — Universal Replicator.
UUID — Universally Unique Identifier.
-back to top-
—V—
vContinuum — Using the vContinuum wizard, users can push agents to primary and secondary servers, set up protection, and perform failovers and failbacks.
VCS — Veritas Cluster System.
VDEV — Virtual Device.
VDI — Virtual Desktop Infrastructure.
VHD — Virtual Hard Disk.
VHDL — VHSIC (Very-High-Speed Integrated Circuit) Hardware Description Language.
VHSIC — Very-High-Speed Integrated Circuit.
VI — Virtual Interface. A research prototype that is undergoing active development; the details of the implementation may change considerably. It is an application interface that gives user-level processes direct but protected access to network interface cards. This allows applications to bypass IP processing overheads (for example, copying data, computing checksums) and system call overheads while still preventing 1 process from accidentally or maliciously tampering with or reading data being used by another.
Virtualization — Referring to storage virtualization, virtualization is the amalgamation of multiple network storage devices into what appears to be a single storage unit. Storage virtualization is often used in a SAN, and makes tasks such as archiving, backup and recovery easier and faster. Storage virtualization is usually implemented via software applications. There are many additional types of virtualization.
Virtual Private Cloud (VPC) — Private cloud existing within a shared or public cloud (for example, the Intercloud). Also known as a virtual private network cloud.
VLL — Virtual Logical Volume Image/Logical Unit Number.
VLUN — Virtual LUN. A customized volume whose size is chosen by the user.
VLVI — Virtual Logical Volume Image. Marketing name for CVS (custom volume size).
VM — Virtual Machine.
VMDK — Virtual Machine Disk file format.
VNA — Vendor Neutral Archive.
VOJP — (Cache) Volatile Jumper.
VOLID — Volume ID.
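The UDP entry above notes that programs exchange short messages known as datagrams with no connection setup. A loopback sketch in Python (standard library only; binding to port 0 lets the OS pick a free port, so the addresses are not tied to any real configuration):

```python
import socket

# Two UDP sockets on the loopback interface exchange one datagram.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()            # where the datagram should go

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram", addr)         # one short message, no handshake

message, source = receiver.recvfrom(1024)
sender.close()
receiver.close()
print(message)                           # b'datagram'
```

Unlike the TCP case, there is no accept/connect exchange: each sendto stands alone, which is what makes UDP lightweight but also unreliable.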

VOLSER — Volume Serial Numbers.
Volume — A fixed amount of storage on a disk or tape. The term volume is often used as a synonym for the storage medium itself, but it is possible for a single disk to contain more than 1 volume or for a volume to span more than 1 disk.
VPC — Virtual Private Cloud.
VSAM — Virtual Storage Access Method.
VSD — Virtual Storage Director.
VSP — Virtual Storage Platform.
VSS — (Microsoft) Volume Shadow Copy Service.
VTL — Virtual Tape Library.
VTOC — Volume Table of Contents.
VTOCIX — Volume Table of Contents Index.
VVDS — Virtual Volume Data Set.
V-VOL — Virtual Volume.
-back to top-
—W—
WAN — Wide Area Network. A computing internetwork that covers a broad area or region. Contrast with PAN, LAN and MAN.
WDIR — Directory Name Object.
WDIR — Working Directory.
WDS — Working Data Set.
WebDAV — Web-Based Distributed Authoring and Versioning (HTTP extensions).
WFILE — File Object or Working File.
WFS — Working File Set.
WINS — Windows Internet Naming Service.
WL — Wide Link.
WLM — Work Load Manager.
WORM — Write Once, Read Many.
WSDL — Web Services Description Language.
WSRM — Write Seldom, Read Many.
WTREE — Directory Tree Object or Working Tree.
WWN — World Wide Name. A unique identifier for an open-system host. It consists of a 64-bit physical address (the IEEE 48-bit format with a 12-bit extension and a 4-bit prefix).
WWNN — World Wide Node Name. A globally unique 64-bit identifier assigned to each Fibre Channel node process.
WWPN — World Wide Port Name. A globally unique 64-bit identifier assigned to each Fibre Channel port. A Fibre Channel port's WWPN is permitted to use any of several naming authorities. Fibre Channel specifies a Network Address Authority (NAA) to distinguish between the various name registration authorities that may be used to identify the WWPN.
-back to top-
—X—
XAUI — "X"=10, AUI = Attachment Unit Interface.
XCF — Cross System Communications Facility.
XDS — Cross Enterprise Document Sharing.
XDSi — Cross Enterprise Document Sharing for Imaging.
XFI — Standard interface for connecting a 10Gb Ethernet MAC device to an XFP interface.
XFP — "X"=10Gb Small Form Factor Pluggable.
XML — eXtensible Markup Language.
XRC — Extended Remote Copy.
-back to top-
—Y—
YB — Yottabyte.
Yottabyte — The highest-end measurement of data at the present time. 1YB = 1,024ZB, or roughly 1 quadrillion GB. A recent estimate (2011) is that all the computer hard drives in the world do not contain 1YB of data.
-back to top-
—Z—
z/OS — z Operating System (IBM® S/390® or z/OS® Environments).
z/OS NFS — (System) z/OS Network File System.
z/OSMF — (System) z/OS Management Facility.
zAAP — (System) z Application Assist Processor (for Java and XML workloads).
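The WWPN entry above mentions that Fibre Channel uses a Network Address Authority (NAA) field to distinguish name-registration authorities. For the common name formats the NAA is the most significant 4 bits of the 64-bit name, which can be extracted as sketched below; the WWPN values are made-up examples, not identifiers from any real device:

```python
# Extract the NAA (Network Address Authority) nibble — the most
# significant 4 bits — from a 64-bit World Wide Name string.

def naa_of(wwn_hex):
    """Return the NAA nibble (top 4 bits) of a 64-bit WWN."""
    return int(wwn_hex.replace(":", ""), 16) >> 60

print(naa_of("50:06:0e:80:10:12:34:56"))  # 5 — IEEE registered format
print(naa_of("10:00:00:00:c9:20:08:cc"))  # 1 — IEEE 48-bit address format
```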

ZCF — Zero Copy Failover. Also known as Data Access Path (DAP).
Zettabyte (ZB) — A high-end measurement of data. 1ZB = 1,024EB.
zFS — (System) zSeries File System.
zHPF — (System) z High Performance FICON.
zIIP — (System) z Integrated Information Processor (specialty processor for database).
Zone — A collection of Fibre Channel ports that are permitted to communicate with each other via the fabric.
Zoning — A method of subdividing a storage area network into disjoint zones, or subsets of nodes on the network. Storage area network nodes outside a zone are invisible to nodes within the zone. Moreover, with switched SANs, traffic within each zone may be physically isolated from traffic outside the zone.
-back to top-
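The Zone and Zoning entries above describe subsets of ports that may only communicate within a shared zone. A toy model of that visibility rule (zone and port names are invented for illustration):

```python
# Two ports can communicate only if at least one zone contains both.
zones = {
    "zone_a": {"host1_p0", "array_ctl0"},
    "zone_b": {"host2_p0", "array_ctl1"},
}

def can_communicate(port_a, port_b):
    """True if fabric zoning lets the two ports see each other."""
    return any(port_a in members and port_b in members
               for members in zones.values())

print(can_communicate("host1_p0", "array_ctl0"))  # True  (same zone)
print(can_communicate("host1_p0", "array_ctl1"))  # False (no shared zone)
```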

Evaluating This Course

Please use the online evaluation system to help improve our courses.

Learning Center Sign-in location:
https://learningcenter.hds.com/Saba/Web/Main
