April 2009
Welcome to Symmetrix V-Max Series Maintenance. The AUDIO portion of this course is supplemental to the material
and is not a replacement for the student notes accompanying this course. EMC recommends downloading the Student
Resource Guide from the Supporting Materials tab, and reading the notes in their entirety.
Copyright 2009 EMC Corporation. All rights reserved.
These materials may not be copied without EMC's written consent.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change
without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS
PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR
A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
EMC2, EMC, EMC ControlCenter, AlphaStor, ApplicationXtender, Captiva, Catalog Solution, Celerra, CentraStar,
CLARalert, CLARiiON, ClientPak, Connectrix, Co-StandbyServer, Dantz, Direct Matrix Architecture, DiskXtender,
DiskXtender 2000, Documentum, EmailXaminer, EmailXtender, EmailXtract, eRoom, FLARE, HighRoad, InputAccel,
Navisphere, OpenScale, PowerPath, Rainfinity, RepliStor, ResourcePak, Retrospect, Smarts, SnapShotServer,
SnapView/IP, SRDF, Symmetrix, TimeFinder, VisualSAN, VSAM-Assist, WebXtender, where information lives, Xtender,
Xtender Solutions are registered trademarks; and EMC Developers Program, EMC OnCourse, EMC Proven, EMC Snap,
EMC Storage Administrator, Acartus, Access Logix, ArchiveXtender, Authentic Problems, Automated Resource Manager,
AutoStart, AutoSwap, AVALONidm, C-Clip, Celerra Replicator, Centera, CLARevent, Codebook Correlation Technology,
EMC Common Information Model, CopyCross, CopyPoint, DatabaseXtender, Direct Matrix, EDM, E-Lab, Enginuity,
FarPoint, Global File Virtualization, Graphic Visualization, InfoMover, Infoscape, Invista, Max Retriever, MediaStor,
MirrorView, NetWin, NetWorker, nLayers, OnAlert, Powerlink, PowerSnap, RecoverPoint, RepliCare, SafeLine, SAN
Advisor, SAN Copy, SAN Manager, SDMS, SnapImage, SnapSure, SnapView, StorageScope, SupportMate, SymmAPI,
SymmEnabler, Symmetrix DMX, UltraPoint, UltraScale, Viewlets, VisualSRM are trademarks of EMC Corporation.
All other trademarks used herein are the property of their respective owners.
Course Overview
Course Description
This course will present the V-Max array features and enhancements from a
maintenance perspective. The focus will be on Field Replaceable Units.
Intended Audience
This course is intended for those involved in the maintenance of Symmetrix VMax arrays.
EMC believes the information in this publication is accurate as of its publication date and is based on
pre-GA product information. The information is subject to change without notice. For the most current
information, see the EMC Support Matrix and the product release notes in Powerlink.
Course Objectives
Upon completion of this course, you should be able to:
Differentiate between the Symmetrix V-Max and Symmetrix V-Max SE
systems
Troubleshoot Symmetrix V-Max systems
Replace Field Replaceable Units (FRUs) using SymmWin scripts and
Electrostatic Discharge (ESD) equipment
Course Topics
Module 1: Architectural Overview
Module 2: Field Replaceable Units
There are two modules in this course that address the following topics:
Architectural Overview
Field Replaceable Units
List of Demonstrations
Demo 1: Electrostatic Discharge
Demo 2:
Demo 3:
Demo 4:
Demo 5:
Demo 6:
Demo 7:
Demo 8: Replace Drive
Demo 9:
Demo 10:
Demo 11:
Demo 12:
Demo 13:
Demo 14:
Demo 15:
In module 1, we will look at the architectural overview. Upon completion of this module, you
should be able to:
Distinguish the Symmetrix V-Max and Symmetrix V-Max SE from each other based on
System Bay component layout
Operate switches and interpret light indicators of various Symmetrix V-Max components
The next few slides will cover the Symmetrix V-Max introduction. The Symmetrix V-Max Series with
Enginuity stands out with its higher performance (over 2X the performance of the DMX-4) and
usable capacity (more usable capacity and more efficient cache utilization). Total Cost of
Ownership improves as well, by leveraging the latest drive technologies and savings on
energy, footprint, weight, and acquisition cost. The virtual and physical environments of
these systems are easier to manage due to faster and easier configuration options, which
translates into a reduction in labor and potential errors. The business continuance
capabilities are optimal (cost and performance) with EMC's offer of the industry's first zero-RPO
2-site long-distance replication solution.
Online Transaction Processing and other Tier 0 or Tier 1 workloads will be accelerated by the
implementation of high-performance multi-core CPU processors that deliver up to
twice the IOPS and twice the front-end and back-end connectivity (up to 128 front-end
and back-end ports) when compared with the DMX-4.
Over 2 PB of usable disk capacity is available, as well as 944 GB of cache (472 GB mirrored)
with increased metadata efficiency.
The Virtual Matrix Architecture uses V-Max Engines, each containing a portion of Global Memory
and two Directors capable of managing hosts, disks, and remote connections simultaneously. As
shown, this architecture allows for scalability in all aspects: Front-end connectivity, Global
Memory, Back-end connectivity, and disk capacity. Global Memory has little metadata overhead
due to improvements found in Enginuity 5874, allowing 2,400 disk devices to be configured with
RAID-1 or other types of RAID.
                          Symmetrix V-Max    Symmetrix V-Max SE
V-Max Engines             1 - 8              1
Director Boards           2 - 16             2
Disk Drives               96 - 2400          48 - 360
Physical Memory (Note 1)  128 - 1024 GB      128 GB
Fibre Channel Ports       16 - 128           16
FICON Ports               8 - 64             8
GigE/iSCSI Ports          8 - 64             8
SRDF Ports                4 - 32
With the introduction of a new line of Symmetrix systems, EMC announces two variations of
the Symmetrix V-Max systems: the Symmetrix V-Max Series with Enginuity models (from
now on: V-Max), and the Symmetrix V-Max SE model (from now on: V-Max SE).
V-Max arrays contain up to 16 Director boards, 80 to 2,400 disk drives, and either 128 Fibre
Channel front-end ports, or 64 FICON ports, or 64 GigE/iSCSI ports, or a combination
thereof.
The V-Max SE model always consists of a single V-Max Engine with 2 Director boards.
Depending on the use of the expansion bay, the system contains between 48 and 360 disk
drives, 16 Fibre Channel front-end ports, or 8 FICON ports, or 8 GigE/iSCSI ports.
Note 1: This is the amount of memory that can physically be installed in the system. The
customer's usable amount of memory is less due to the system's memory requirements as
well as the mirroring of the memory.
The Product Serial Number (PSN) label is no longer the white or yellow label located on the
physical rack. Instead, the Product Serial Number is a tag using an 11-character serial
number with the following prefixes:
- HK1: Franklin (MA) United States
- CK2: Cork, Ireland
- AP3: Apex (NC) United States
The Symmetrix V-Max has its tag positioned at the front top right of the System Bay. The
Symmetrix V-Max SE has its tag positioned at the front right center of the System Bay
(attached to the bottom and middle rail holes).
Symmetrix V-Max arrays have their prefix followed by 926, i.e. HK1926xxxxx, while
Symmetrix V-Max SE systems continue their prefix with 949, i.e. CK2949xxxxx.
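The prefix scheme above can be expressed as a small lookup table, purely as an illustration (this is not an EMC tool; the function name is made up, and "xxxxx" stands in for the remaining serial digits as in the examples above):

```python
# Illustrative sketch: decoding the 11-character Product Serial Number
# prefixes described in the notes above (plant prefix + model code).
PLANTS = {"HK1": "Franklin (MA) United States",
          "CK2": "Cork, Ireland",
          "AP3": "Apex (NC) United States"}
MODELS = {"926": "Symmetrix V-Max", "949": "Symmetrix V-Max SE"}

def decode_psn(psn):
    """Split an 11-character PSN into its plant and model parts."""
    if len(psn) != 11:
        raise ValueError("PSN must be 11 characters")
    return PLANTS[psn[:3]], MODELS[psn[3:6]]

print(decode_psn("HK1926xxxxx"))  # a Franklin-built Symmetrix V-Max
```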
[Interactive graphic: Symmetrix V-Max System Bay, front and rear views, showing the UPS, KVM, Service Processor, V-Max Engines 4 and 5, and Enclosures 1 through 6.]
The next few slides will cover the Symmetrix V-Max. The important components to focus on
are the Uninterruptible Power Supply (UPS), the Server (Service Processor) with the
Keyboard-Video-Mouse (KVM) assembly, the Matrix Interface Board Enclosure (MIBE), and the V-Max
Engines, which are positioned in Enclosures (up to eight Engines for a Symmetrix V-Max
System Bay).
Bay Numbering
[Diagram: two bay-numbering layouts. In the DMX-style layout, Storage Bays 1A through 5A sit on one side of the System Bay and 1B through 5B on the other. In the Symmetrix V-Max layout, Storage Bays end in A, B, C, and D (1A, 2A, 1B, 2B, 1C, 2C, 3C, 1D, 2D, 3D).]
The numbering of the Storage Bays is different for a Symmetrix V-Max than it is for the DMX
Series. Instead of having all Storage Bays to the left of the System Bay numbered from 1 up
to 5 ending with A (from a front view perspective), and all bays to the right of the System
Bay numbered from 1 up to 5 ending with B, the Symmetrix V-Max uses a different
numbering scheme. This is because, unlike the quadrants used in the DMX Series, the
Symmetrix V-Max uses octants and therefore has Storage Bays ending with A, B, C, and D,
as shown in the graphic.
1x V-Max Engine
[Diagram, front view: System Bay with Engine 4; Storage Bay 1A (direct connect) and Storage Bay 2A (daisy chain).]
This configuration has only one (1) Engine in the System Bay, which is located in enclosure
#4. The graphic shows a front view.
Drive population is for the lower half of the cabinet of Storage Bay 1A (direct connect) and
2A (daisy chain). This allows for a total of 240 drives in the whole system.
2x V-Max Engines
[Diagram, front view: System Bay with Engines 4 and 5; Storage Bay 1A (direct connect) and Storage Bay 2A (daisy chain).]
This configuration has two V-Max Engines in the System Bay, which are located in
enclosures #4 and #5. The graphic shows a front view.
Drive population is for fully populated Storage Bays 1A (direct connect) and 2A (daisy
chain). This allows for a total of 480 drives in the whole system.
3x V-Max Engines
[Diagram, front view: System Bay with Engines 3, 4, and 5; Storage Bays 1A and 1B (direct connect), 2A and 2B (daisy chain).]
This configuration has three V-Max Engines in the System Bay, which are located in
enclosures #3, #4, and #5. The graphic shows a front view. Drive population is for fully
populated Storage Bays 1A and 1B (both direct connect), as well as 2A and 2B (both daisy
chain). This allows for a total of 720 drives in the whole system.
4x V-Max Engines
[Diagram, front view: System Bay with Engines 3 through 6; Storage Bays 1A and 1B (direct connect), 2A and 2B (daisy chain).]
This configuration has four V-Max Engines in the System Bay, which are located in
enclosures #3, #4, #5, and #6. The graphic shows a front view. Drive population is for fully
populated Storage Bays 1A and 1B (both direct connect), as well as 2A and 2B (both daisy
chain). This allows for a total of 960 drives in the whole system.
5x V-Max Engines
[Diagram, front view: System Bay with Engines 2 through 6; Storage Bays 1A and 1B (direct connect), 2A and 2B (daisy chain), half-filled Bay 1C (direct connect), and half-filled Bays 2C and 3C (daisy chain).]
This configuration has five V-Max Engines in the System Bay, which are located in
enclosures #2, #3, #4, #5, and #6. The graphic shows a front view. Drive population is for
fully populated Storage Bays 1A and 1B (both direct connect), a half-filled Storage Bay 1C
(direct connect), half-filled Storage Bays 2C and 3C (daisy chain), as well as fully
populated Storage Bays 2A and 2B (daisy chain). This allows for a total of 1,320 drives in
the whole system.
6x V-Max Engines
[Diagram, front view: System Bay with Engines 2 through 7; Storage Bays 1A, 1B, and 1C (direct connect), 2A, 2B, 2C, and 3C (daisy chain).]
This configuration has six V-Max Engines in the System Bay, which are located in
enclosures #2, #3, #4, #5, #6, and #7. The graphic shows a front view. Drive population is
for fully populated Storage Bays 1A, 1B, and 1C (all direct connect), as well as fully
populated Storage Bays 2A, 2B, 2C, and 3C (daisy chain). This allows for a total of 1,680
drives in the whole system.
7x V-Max Engines
[Diagram, front view: System Bay with Engines 1 through 7; fully populated Storage Bays 1A, 2A, 1B, 2B, 1C, 2C, and 3C, and half-filled Bays 1D, 2D, and 3D.]
This configuration has seven V-Max Engines in the System Bay, which are located in the
enclosures numbered 1 through 7. The graphic shows a front view. Drive population is for
fully populated Storage Bays 1A-2A, 1C-3C, 1B-2B, and half-filled bays 1D-3D. This allows
for a total of 2,040 drives in the whole system.
8x V-Max Engines
[Diagram, front view: System Bay with Engines 1 through 8; fully populated Storage Bays 1A, 2A, 1B, 2B, 1C, 2C, 3C, 1D, 2D, and 3D.]
This configuration has eight V-Max Engines in the System Bay, which are numbered from 1
through 8. The graphic shows a front view.
Drive population is for fully populated Storage Bays 1A-2A, 1C-3C, 1B-2B, and 1D-3D. This
allows for a total of 2,400 drives in the whole system.
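The drive totals quoted for the one- through eight-engine configurations all follow from the same arithmetic: a fully populated Storage Bay holds 240 drives and a half-filled bay 120. A small sketch, with illustrative names, to verify the figures:

```python
# Illustrative check of the drive counts given in this module:
# full Storage Bay = 240 drives, half-filled bay = 120 drives.
FULL, HALF = 240, 120

def total_drives(full_bays, half_bays=0):
    """Total drives for a configuration of full and half-filled Storage Bays."""
    return full_bays * FULL + half_bays * HALF

assert total_drives(0, 2) == 240    # 1 engine: bays 1A and 2A half-filled
assert total_drives(2) == 480       # 2 engines: 1A and 2A fully populated
assert total_drives(4, 3) == 1320   # 5 engines: 1A/1B/2A/2B full, 1C/2C/3C half
assert total_drives(7, 3) == 2040   # 7 engines: seven full bays, 1D/2D/3D half
assert total_drives(10) == 2400     # 8 engines: all ten bays fully populated
```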
[Interactive graphic: Symmetrix V-Max SE System Bay showing the UPS, Server, KVM, SPS trays, MIBE, V-Max Engine 4, and Drive Enclosures 1 through 4.]
The next slides will cover the Symmetrix V-Max SE. The Symmetrix V-Max SE array
consists of the following components: 8 Drive Enclosures, 1 V-Max Engine, 3 Standby
Power Supply (SPS) trays, 1 Uninterruptible Power Supply (UPS), a Server (Service
Processor) with Keyboard-Video-Mouse (KVM) assembly, and 1 Matrix Interface Board
Enclosure (MIBE) tray.
In the Symmetrix V-Max and Symmetrix V-Max SE System Bays, an enclosure identifies
the location where a V-Max Engine can be positioned, in the same way that a Director or
Memory board in a DMX-4 is positioned in a slot.
The combined hardware of drives, management modules, directors, power supplies, and
blowers is therefore called a V-Max Engine, not an enclosure.
V-Max SE Configurations
[Diagram: a single-bay system with Engine 4 and direct-connect Drive Enclosures in the System Bay, and a dual-bay system that adds an Expansion Bay with direct-connect and daisy-chained Drive Enclosures.]
A Symmetrix V-Max SE is a SINGLE bay system with only one (1) enclosure which is
always occupied with V-Max Engine #4. Eight Drive Enclosures are located in the cabinet
using only direct connect and WITHOUT the addition of a storage bay, and therefore without
daisy chained Drive Enclosures. This allows for a total of up to 120 drives in the system.
A Symmetrix V-Max SE DUAL-bay system is also configured with only one (1) V-Max
Engine (#4) in the System Bay. All eight direct connect Drive Enclosures are located in the
System Bay, whereas an expansion bay is added to increase storage capacity. This allows
for a total of up to 360 drives in the system (240 in the expansion bay and 120 in the System
Bay). From a front view perspective, the expansion bay is always placed to the left of the
System Bay.
[Interactive graphic: Storage Bay front and rear views with Drive Enclosures numbered 1 through 16.]
The next few slides will cover the Storage Bay of the Symmetrix V-Max arrays. V-Max array
Storage Bays are similar to the Storage Bays of the DMX Series. What is different, though, is
the cabling with unique labels.
A Storage Bay consists of either eight or sixteen Drive Enclosures, contains 48 to 240 drives,
and has eight (8) SPS modules.
The drive enclosures are numbered 1 to 16 as in this graphic. They are daisy-chained; for
example, drive enclosure #1 is daisy-chained to drive enclosure #5, while drive enclosure
#9 is daisy-chained to drive enclosure #13.
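The two examples above suggest a simple pattern, namely that direct-connect enclosure n is daisy-chained to enclosure n + 4. The sketch below only encodes that assumption (the function name is made up), and the rule should be verified against the actual cabling diagrams:

```python
# Assumed daisy-chain pattern in a 16-enclosure Storage Bay, generalized
# from the two examples in the notes (1 -> 5 and 9 -> 13).
def daisy_chain_partner(enclosure):
    """Return the enclosure that a direct-connect enclosure chains to."""
    return enclosure + 4

assert daisy_chain_partner(1) == 5    # matches the first example
assert daisy_chain_partner(9) == 13   # matches the second example
```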
LCC-A / LCC-B
[Diagram: Drive Enclosure rear showing the Enclosure ID and Loop ID indicators, the 4Gb/s ports, and the RJ-11 connector.]
Link Control Card A, referred to as LCC-A, connects to the odd Director of a V-Max Engine,
while LCC-B connects to the even Director of that same V-Max Engine.
The Drive Enclosure has an RJ-11 type connector for a cable that is connected to the SPS
modules for monitoring purposes. That way, the LCC can communicate with the SPS
through the cable that connects them. Commands and read status are sent from the SPS
modules that supply power for the Drive Enclosure, using the RJ-11 RX/TX differential data
lines, to the director.
There is one SPS tray per four Drive Enclosures; that means a total of 8 LCC boards and 2
SPS units. To monitor the SPS modules, only 2 LCCs are needed. Six grey-colored RJ-11
cables are therefore missing, and those LCCs do not monitor any SPS; they depend on the
two LCC ports with the grey cables, one for SPS/Zone A and one for SPS/Zone B, to do all
the communication. Both the LCC primary (PRI) ports, which connect directly to the
back-end (DA) ports, and the expansion (EXP) ports, which daisy chain Drive Enclosures,
are High Speed Serial Data Connectors (HSSDC).
LCC A and B
[Diagram: Symmetrix V-Max Storage Bay LED numbering, seen from the rear. Blue circles indicate the Enclosure ID, green circles the Loop ID; the uncircled numbers indicate the Drive Enclosure numbers (1 through 16) and port numbers (A0-D0, A1-D1).]
The Symmetrix V-Max Storage Bay's green and blue LED numbering is shown here. The
LED layout shows the system as seen from the rear.
Both green and blue LEDs are positioned on the LCCs, of which there are two (2) on each
Drive Enclosure (LCC-A and LCC-B). The green LED is set by the system. The blue LED is
the Enclosure ID setting and has a recessed push-button switch located between the three
clockwise rotating arrows. In order to read the new switch setting, you must reset the LCC
by reseating the LCC card, or power cycling the whole bay. The numbers that are not within
a colored circle indicate the Drive Enclosure number.
LCC A and B
[Diagram: Symmetrix V-Max SE System Bay and Expansion Bay 1A LED numbering, seen from the rear. The Enclosure ID is 0 for the direct-connect Drive Enclosures in the System Bay, 1 for the first expansion loop, and 2 for the second.]
The numbering of the green and blue LEDs for both the System Bay and the Storage Bay
of a Symmetrix V-Max Model SE is shown. The Enclosure ID for the direct-connect Drive
Enclosures in the System Bay is always 0, while the Enclosure ID in the Storage Bay is 1 for
the first loop expansion and 2 for the second loop expansion. The LED layout shows
the system as seen from the rear. The numbers that are not within a colored circle
indicate the Drive Enclosure number.
Enclosure IDs can be changed manually, using the recessed push-button switch located
between the three clockwise rotating arrows. In order to activate the new switch setting
after you make changes, reset the LCC by reseating the LCC card, or power cycle the
whole V-Max Engine or bay. The green marker LED is fixed (it cannot be altered manually)
and is determined by the position of the Drive Enclosure in the bay.
Drive Enclosure (front and rear views)
- Supports 4Gb FC only
- Dual switched loop configuration provides redundancy, improved isolation of faults, and port bypass capability
- Utilizes the same chassis, power supply, and cooling modules as the DMX
- Drives, LCCs, power supplies, and blower modules are fully redundant within the Drive Enclosure, and hot swappable
Symmetrix V-Max arrays are configured with capacities of up to 120 disk drives for a half-populated
bay or 240 disk drives for a fully populated bay.
Each Drive Enclosure includes the following components:
- Redundant power and cooling modules for disk drives
- Two Link Control Cards (LCCs)
- 5 to 15 disk drives
Drive Support
- 4 Gb/s FC (15K rpm): 146GB, 300GB, and 450GB
- 4 Gb/s FC (10K rpm): 400GB
Each disk has a green and amber LED. The green LED will light intermittently to indicate
disk activity, while the amber LED is used to mark the drive and may be turned on manually
or by a replacement script. Note that SATA drives (7.2 krpm) are in reality 3Gb/s adapted to
4Gb/s Fibre Channel.
Drives that are introduced with the Symmetrix V-Max models have dual colored emblem
labels. This differentiates them from DMX-Series drives.
Two Power Distribution Panels (PDPs), one for each zone, provide a centralized cabinet
interface and distribution control of the AC power input lines to the Storage Bay PDUs. The
Power Distribution Panels contain the manual AC power On/Off control switches, which are
accessible through the rear door. Power Distribution Panels are available for single phase
and 3-phase depending on the type and geographical location of the V-Max array.
SPS tray callouts:
1. SPS B
2. SPS A
3. AC Out
4. AC In
5. On/Off Switch
6. RS-232 monitor
Status LEDs: On-line, On-Battery, Replace Battery, Internal Check
The green LED of the Standby Power Supply (SPS) indicates On-line Enabled if the LED is
steady ON, and indicates On-line and Charging if the LED is flashing. Please keep in mind
that replacing an SPS requires the use of the lift tool, as these components are heavy (29 kg
or 65 lbs).
One SPS tray is required for each four Drive Enclosures and contains 2 SPS units, with a
total of up to eight SPS units to support up to 16 Drive Enclosures in the Storage Bay. If AC
power fails to both Zone A and Zone B, the SPS assemblies can maintain power for two
5-minute periods to allow the system to vault. Only then does the Symmetrix system shut
down.
V-Max Engine
- Base configuration is one Symmetrix V-Max Engine
- Maximum configuration is 8 V-Max Engines per system
- V-Max Engines are connected via the Virtual Matrix to aggregate and share system resources
The next couple of slides will cover the Virtual Matrix Engine. Depending on the type of
V-Max array, a minimum of one single V-Max Engine (the base configuration) and up to eight
V-Max Engines can be configured in the system.
[Photo: V-Max Engine front view showing the two power supplies, the Power On LED, and the Enclosure fault LED.]
V-Max Engines are positioned in specific slots called Enclosures. An example is given
here of the single V-Max Engine in a Symmetrix V-Max SE system, showing details of the
front view with two (2) power supplies on the sides and four (4) blowers, otherwise known as
fans, positioned in the middle.
[Diagram: V-Max Engine rear layout showing the Power Supplies (PS), Management Modules (MM), Back-end I/O Modules (BE), System Interface Boards (SIB), and Front-end I/O Modules (FE), together with the odd and even Director positions.]
Besides a still picture of a V-Max Engine, two important layouts of the V-Max Engine are
shown.
The first layout shows the names of the components and their physical locations. These are
the Power Supplies, Back-end I/O Modules, Management Modules, System Interface
Boards, and Front-end I/O Modules.
The second layout shows the locations of the odd and even directors within a V-Max
Engine, including their respective Front-end I/O Module assignments.
[Diagram: V-Max Engine port assignments. Back-end Modules 0 and 1 (ports A0/B1 and C0/D1) connect to the Storage Bay; Front-end Modules 4 (E0/F1) and 5 (G0/H1) connect to hosts; Modules 2 and 3 hold SIB-A and SIB-B, alongside MM-A, MM-B, and PS-A.]
Shown here are the port assignments for the back-end I/O modules (which use QSFP type
connectors), the front-end I/O modules, and the SIB modules.
This illustration is of a Front-end I/O module with 4 ports per module. Keep in mind that this
configuration and the port assignments are only valid for a Fibre Channel I/O module. These
are different for FICON and GigE front-end I/O modules.
All cables have both From and To labeling, showing all the information needed to cable up a
system or to trace a cable in the event troubleshooting is necessary.
The graphics and text show the labels of the cables that run between the Back-end I/O
Modules and direct-connect Drive Enclosures (here depicted as DAE) on the larger V-Max
systems with one or multiple V-Max Engines. The smaller V-Max SE systems have their
Odd and Even Mod 0 (zero) or 1 connected to either Drive Enclosures 1, 2, 3, 4 or Drive
Enclosures 5, 6, 7, 8.
V-Max Engine           Directors
Enclosure/Engine 8     15 + 16
Enclosure/Engine 7     13 + 14
Enclosure/Engine 6     11 + 12
Enclosure/Engine 5     9 + 10
Enclosure/Engine 4     7 + 8
Enclosure/Engine 3     5 + 6
Enclosure/Engine 2     3 + 4
Enclosure/Engine 1     1 + 2
Enclosures are populated from the inside out, starting with enclosure #4 which holds V-Max
Engine #4 and consists of Directors 7 and 8. Director numbers are derived from the V-Max
Engine number. Dual-Initiator pairs are contained within the same V-Max Engines, while
Memory is mirrored across V-Max Engines.
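The derivation rule above (Engine N holds Directors 2N-1 and 2N) can be written out as a one-line sketch; the function name is illustrative:

```python
# Illustrative encoding of the Director numbering rule: V-Max Engine N
# consists of Directors 2N-1 and 2N.
def directors_for_engine(engine):
    """Return the (odd, even) Director pair for a V-Max Engine number."""
    return 2 * engine - 1, 2 * engine

assert directors_for_engine(4) == (7, 8)    # Engine 4 holds Directors 7 and 8
assert directors_for_engine(8) == (15, 16)  # Engine 8 holds Directors 15 and 16
```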
[Diagram: cable-guide color coding per enclosure (Pink through Red for Enclosures 1 through 7) alongside Directors 1 through 16.]
Specific colors are used to indicate the V-Max Engines. This is very useful for retracing
the cables, which have colored sheaths in the same color scheme as the labels on
the cable guides. The Symmetrix V-Max uses octants (one of eight segments) based on the
number of V-Max Engines that can be placed in the System Bay. (Note: This is different in
the DMX Series, where the system is sub-divided into quadrants, and where each back-end
director pair connects to the lower or upper half of the Storage Bays, positioned either left
(A-side) or right (B-side) of the System Bay.) The colors for the various octants are as
follows:
- Enclosure 1 (Dir 1 and Dir 2): Pink
- Enclosure 2 (Dir 3 and Dir 4): Purple
- Enclosure 3 (Dir 5 and Dir 6): Orange
- Enclosure 4 (Dir 7 and Dir 8): Yellow
- Enclosure 5 (Dir 9 and Dir 10): Green
- Enclosure 6 (Dir 11 and Dir 12): Blue
- Enclosure 7 (Dir 13 and Dir 14): Red
- Enclosure 8 (Dir 15 and Dir 16): White
The hardware guides for the four (4) front-end I/O Modules are part of the V-Max Engine kits,
which are usually found in the empty SPS frame for that particular V-Max Engine. The black
inserts and the blue clip-on parts need to be manually installed. This includes putting the
labels on the blue covers, for which there are specific locations; see the numbered items on
this graphic. The picture insert shows a front-end segment (black plastic) without the blue
cover attached to it.
Director Layout
[Diagram: Director board layout with module numbers, the two Back-end I/O modules with their fault (!) LEDs, and the Rack# / V-Max Engine# labeling.]
The next few slides will cover the Directors. The orange LED indicates either a Fault status
or the director's Boot (POST) sequence, as can be seen on power up and during director
replacements. Each Director consists of two Back-end I/O modules, one System Interface
Board (SIB), and eight memory slots. The front-end connections for a director are found on
separate Front-end I/O modules. These are either Fibre Channel, iSCSI (or used as GigE),
FICON, or a combination thereof.
Cache Memory Module callouts:
1. Shroud
2. Shroud label
3. Cache Memory Module
4. Latch
5. Key alignment slot
6. DIMM alignment notch
The Global Memory for the Symmetrix V-Max systems consists of multiple physically local
memory modules pooled together. Unlike the Symmetrix DMX Series and other legacy
systems, there are no dedicated memory boards or dedicated enclosures for Global Memory
in the Symmetrix V-Max systems. Each director has eight cache memory module slots.
Every cache slot contains a module (DIMM = Dual Inline Memory Module) that has the part
number (e.g. 022-000-119) engraved in the metal shield surrounding the memory module.
The serial number label is attached to the top of the module's shield and is asked for during
cache memory module replacements.
Memory Rules
- Memory is located on each director, consisting of eight 2GB, 4GB, or 8GB Cache Memory Modules, which must all have equal capacities
- Maximum physical V-Max Engine memory capacity is 32GB, 64GB, or 128GB
- Single (1) enclosure systems have memory mirrored within the same enclosure (intra-V-Max Engine)
- Multiple (2-8) enclosure systems have memory mirrored across enclosures (inter-V-Max Engine)
- Memory is mirrored between V-Max Engines from an odd to an even director
There are two important rules which must be observed in order to avoid wasting memory.
Rule number 1 states that the Director boards within the same V-Max Engine must have the
same amount of memory. Rule number 2 enforces that there are at least two V-Max Engines
with the same amount of memory.
In order to accomplish a memory upgrade, both rules must be temporarily broken, as only
one board is physically swapped out at a time. Once all the necessary boards have been
physically changed, the Global Memory can be expanded to include the additional capacity.
At this point, the system will once again adhere to both rules. Upgrades are made per
Director, not per one individual Cache Memory Module. When a new configuration is loaded
on a Symmetrix V-Max array, the actual physical memory of each director is learned.
SymmWin then recalculates the director memory pairing and saves this into the
configuration file (IMPL.bin). When new V-Max Engines are added to an existing
configuration, the SymmWin upgrade script recalculates the memory pairing and establishes
the new mirroring pairs as needed.
The maximum physical raw V-Max Engine memory capacity when using 2GB Cache
Memory Modules is 32 GB, using 4GB Cache Memory Modules the maximum is 64 GB, and
for 8GB Cache Memory Modules the total capacity is 128GB.
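These maxima follow from eight Cache Memory Module slots per director and two directors per V-Max Engine; a small sketch, with illustrative names, confirms the arithmetic:

```python
# Illustrative check of the engine memory maxima quoted above:
# eight DIMM slots per director, two directors per V-Max Engine.
DIMMS_PER_DIRECTOR = 8
DIRECTORS_PER_ENGINE = 2

def engine_physical_memory_gb(dimm_gb):
    """Raw physical memory of a V-Max Engine for a given DIMM size."""
    return dimm_gb * DIMMS_PER_DIRECTOR * DIRECTORS_PER_ENGINE

assert engine_physical_memory_gb(2) == 32    # 2GB modules
assert engine_physical_memory_gb(4) == 64    # 4GB modules
assert engine_physical_memory_gb(8) == 128   # 8GB modules
```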
43
- 44
Whether Cache Memory Modules need to be transferred to a new Director board depends
on the activity, that is, on which part needs to be exchanged. Three scenarios are given
here:
- Scenario #1: Memory Upgrade. Since the Director boards are shipped with Cache
Memory Modules, there is no need to exchange any of the Cache Memory Modules from the
old Director board to the new Director board.
- Scenario #2: Director board failure. To keep the number of spare parts to a minimum, the
replacement Director contains no Cache Memory Modules. Those still properly working
Cache Memory Modules on the old director boards will need to be moved to the new director
board.
- Scenario #3: Cache Memory Module failure. Although this activity requires the removal of
the director, the replacement part is simply one single Cache Memory Module.
44
QSFP Connector
I/O Module
- 45
The next few slides will cover the I/O modules. Each Director within a V-Max Engine
contains two (2) back-end I/O modules. There is only one port on the back-end I/O module,
which holds a single Quad Small Form-Factor Pluggable (QSFP) connector. It is called Quad
because this SFP connection physically aggregates four smaller Fibre Channel cables into
one cable. On the other end of the connection, four (4) separate cables are routed to
different Drive Enclosures, providing Fibre Channel connectivity to Disk Drives.
45
- 46
The I/O Module Carrier holds two (2) front-end I/O modules. It provides connection between
the director and an Open Systems or Mainframe host through these front-end I/O modules.
I/O Module Carriers are available with or without the hardware Compression Offload Engine.
46
E0
F0
Module 4
G0
H0
Module 5
E0 E1
Module 4
G0 G1
Module 5
- 47
There are two (2) Front-end I/O Modules directly attached to a V-Max Engine's Director:
module 4 and module 5.
Fibre Channel Front-end I/O modules support the interface to a front-end host or switch
connection. A Fibre Channel Front-end I/O module supports 4 Fibre ports per module, with
each of the ports operating at 2Gb/s or 4Gb/s. The lower speed is indicated by a green light,
while the higher speed is indicated by a blue light.
iSCSI/GigE Front-end I/O modules support the interface to a front-end iSCSI host or the
SRDF (GigE) connection to another Symmetrix V-Max system. An iSCSI/GigE Front-end I/O
module supports two (2) ports, with each individual port able to operate at 1.25 Gb/s
(only). (Note: SFP port = Small Form-Factor Pluggable port.)
FICON Front-end I/O modules support the interface to a front-end host or switch connection.
A FICON Front-end I/O module supports 2 FICON ports per module, with each of the ports
operating at 2Gb/s or 4Gb/s.
47
- 48
The next few slides will cover the Virtual Matrix interface. This graphic shows director-to-director communication over the system's virtual matrix.
A system can include up to 16 directors in a Symmetrix V-Max array with eight V-Max
Engines. Each of the two MIBEs found in a single system contains 16 ports.
Therefore, the dual redundant MIBEs connect a total of 32 ports, enabling director-to-director communications. Whether directors store data in Global Memory that ends up on
their own physical memory module banks, or store data on physical memory modules
located on other Directors, the data sent to Global Memory always passes through one of
these MIBEs.
48
Storage Bay
Global Memory
SIB ports A and B
MIBE A
MIBE B
Module 1: Architectural Overview
- 49
The MIBE is what binds the directors and their respective memory together. Using the SIB
ports A and B connections, all directors communicate through the MIBE. Note that the slice
layout chosen in this graphic is only valid for Fibre Channel-only configurations, since 4 ports
are shown for the Front-end I/O Module.
49
MIBE
Port 0
Port 2
Port 4
Port 6
Port 8
Dir 9
Dir 11
Dir 13
Dir 15
Dir 1
Dir 3
Dir 5
Dir 7
Dir 10
Dir 12
Dir 14
Dir 16
Dir 2
Dir 4
Dir 6
Dir 8
Port 1
Port 3
Port 5
Port 7
Port 9
Legend: Dir = V-Max Engine Director number; Port = MIBE Port number
- 50
Each director contains one (1) System Interface Board (SIB) that connects to one (1) port of
each MIBE to provide complete failover capabilities should one of the MIBEs require
maintenance. System Interface Board connectivity through the MIBEs allows for the
directors of all V-Max Engines to communicate with each other.
The order in which the MIBE ports are populated runs from the outside to the inside of the
MIBE enclosure, in the same order V-Max Engines are added to the System Bay (that is, first
Directors 7+8, then Directors 9+10, then Directors 5+6, etc.).
The question could arise as to why a MIBE is needed in a Symmetrix V-Max SE array,
considering there is always only one (1) V-Max Engine with two (2) Directors that are both
already connected to the same V-Max Engine midplane. The answer is that,
as with the Symmetrix V-Max arrays, the Symmetrix V-Max SE also ALWAYS sends
data to its cache memory modules through the MIBE, even when the memory
resides on the same Director board that received the I/O from the host.
50
Rack#
To MIBE
Enclosure #
- 51
The System Interface Board (SIB) provides fabric connectivity between a Director and two
(2) MIBEs.
Besides the two (2) ports on the right-hand side of the graphic, two (2) hex displays are
found on the left-hand side. The first display shows the Rack number. This is the Enclosure
ID of the System Bay and is therefore always 0 (zero). The second display shows the
Enclosure number, which is different from the Enclosure ID. This number identifies the
Enclosure slot in which a V-Max Engine is positioned, ranging from 1 to 8 in a Symmetrix
V-Max array; it always reads 4 in a Symmetrix V-Max SE system, since that's the
only V-Max Engine installed.
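The display logic above can be summarized in a small hypothetical helper (illustration only, not SymmWin code; the function name and parameters are assumptions):

```python
# Hypothetical helper (illustration only, not SymmWin code): the values
# expected on the SIB's two hex displays, following the rules above.
def expected_displays(enclosure_slot=4, is_se=False):
    rack = 0  # System Bay Enclosure ID is always 0
    # A V-Max SE has its single engine in slot 4; a V-Max array uses slots 1-8.
    enclosure = 4 if is_se else enclosure_slot
    assert 1 <= enclosure <= 8, "Enclosure slot must be between 1 and 8"
    return rack, enclosure

print(expected_displays(6))           # V-Max array, engine in slot 6: (0, 6)
print(expected_displays(is_se=True))  # V-Max SE: always (0, 4)
```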
51
Even Dir
Even Dir
Odd Dir
Odd Dir
- 52
The graphics and text show the labels of the cables that run between the System Interface
Boards of the Directors within a V-Max Engine to their respective MIBEs.
Labels will be different for any V-Max Engine other than V-Max Engine 4, which is shown here.
For instance, V-Max Engine 5 will show Dir 9 and Dir 10 on its labels, which are connected to
Port 0 and Port 1 respectively on the MIBEs.
52
Management Module
Rear view
1. Fault
2. Power good
3. USB port (Service light)
4. Management LAN port
5. Peer Service LAN port
6. Server UPS
7. SPS
8. NMI button
PS-B
Backend
SIB Port A + B
Backend Backend
SIB Port A + B
Backend
MM-B
PS-A
MM-A
Management
Module locations
within a V-Max
Engine
- 53
The next slides will cover the Management Module. Within each V-Max Engine, two
management modules monitor and control the environment the V-Max Engine operates in.
The management modules monitor Symmetrix V-Max arrays in much the same way the
XCM boards do in DMX Series systems. Three of the management modules' activities
are: (1) monitor the SPS units, (2) reset the UPS if required, and (3) communicate with
other V-Max Engines positioned in the same system. Communication with several hardware
components is provided through Ethernet. The Ethernet port indicated by the number 4
can either be: (1) directly connected to the Service Processor, or (2) connected to another
V-Max Engine. The Management Module is directly connected to the Service Processor
if it is positioned in either the highest or the lowest V-Max Engine number in the
system. Note that when performing a V-Max Engine upgrade (adding a V-Max Engine), the
Ethernet cables need to be moved accordingly. V-Max Engines are always daisy-chained
to their abutting V-Max Engines, positioned above and below. The management module
provides connectivity to the Service Processor, connectivity between V-Max Engines, Server
connectivity for reset purposes, USB connectivity for the System Bay door light, and RS-232
connectivity to the server SPS.
53
- 54
The light panel cables of the V-Max array System Bay door are connected to the
Management Module USB ports of V-Max Engine number 4. For the Storage Bay, the cable
assembly is connected to the 2nd AC slot in the top PDU on both the left and right side of
the cabinet.
When replacing the light panel assembly, do remember that the labeled cable always goes
into the right side connector. The light panel cable assembly attaches to the two connectors
on the light panel and is secured to the front door of the system bay or storage bay by a
series of tie wraps. The cable assembly is fed through a clip at the top of the bay. For the
system bay, one cable is threaded down the right side of the system and connected
to the right-side management module USB port. The left cable runs along the top
management cable trough, down the left side of the system, and is connected to the USB port
of the left-side management module.
front of the System Bay or Storage Bay to the rear of the bay.
54
1. Battery compartment
2. AC MAIN present
3. On battery
4. Power On/Off
5. AC AUX present
6. Replace battery
- 55
The next slides will cover the UPS and Server. The system feeds from an Uninterruptible
Power Supply (UPS) to keep the server up and running in the event of an AC power failure.
The UPS contains four status LEDs. Two green LEDs, AC MAIN input present and AC
AUX input present, are lit during normal UPS operation. An amber LED (On battery) is lit
when the UPS is operating on battery power. A red LED (Replace battery) is lit if the battery is
detected to be low in capacity or in an out-of-specification condition.
55
Server Front
1 2 3 4
1.
2.
3.
4.
Power On
Hard disk activity
LAN 1 activity
LAN 2 activity
- 56
The Server acts as the Service Processor that runs SymmWin and other utilities, e.g.
SMC and call home. The Uninterruptible Power Supply (UPS) keeps the Server, KVM,
and optional modem up and running in the event of an AC power failure. With the preferred
ESRS Gateway implementation, modem setups have become the second choice.
56
Server Rear
1. KVM - Mouse
2. KVM - Keyboard
3. KVM - Monitor
4. USB ports
5. Ethernet: CS-Spare
Highest Engine #
Lowest Engine #
MM-B
MM-A
- 57
The Symmetrix V-Max server comes with a KVM (Keyboard/Video/Mouse) attached to it. In
the event any of the KVM components fail, any regular VGA display, mouse, or keyboard can
be attached. The integrated keyboard does not need to be detached when a USB keyboard is
attached; they can work simultaneously, which can prove beneficial when the trackball of
the KVM keyboard is still in operation.
Clearly indicated are the connections for the green and purple Ethernet cables that are
attached to the Management Modules of the lowest and highest V-Max Engines as discussed
earlier in the Management Module section.
The V-Max arrays have a blue Ethernet cable attached to the port indicated by
number 8. Use this port for the ESRS implementation, not the CS-Spare port.
57
MIBE
Director
Fabric Cable
SIB
I/O Module
SFP
Blower Module
Drive
Management Module
- 58
As seen in Module 1, the disk bay components are the same as on DMX-3 and DMX-4 when
running replacement scripts.
This module will discuss and demonstrate the components and replacement scripts that are
new to Symmetrix V-Max arrays.
Upon completion of this module, students should be able to:
1. Identify all Field Replaceable Units (FRUs), and
2. Resolve issues during replacement.
58
V-Max Engines
Drive Enclosure (DE)
Power Components
MIBE
Uninterruptible Power Supply (UPS)
- 59
Hardware components in the System Bay such as Power Supplies, Blowers, and SPS
modules have LEDs that show their current condition. Be sure to visually inspect all
components for fault LED indicators and run the Health Check script before running
replacement scripts.
59
Enclosure Fault (Amber LED)
Drive Fault (Amber LED)
- 60
Hardware components in the Storage Bay such as drives, Link Control Cards, Power
Supplies and SPS modules must be in working order for proper installation.
Be sure to visually inspect all components for fault LED indicators and run the Health Check
script before running replacement scripts.
60
- 61
Use the ESD kit when handling directors, cache memory modules, and I/O modules. If an
emergency arises and an ESD kit is unavailable, follow the procedures under "Procedures
without an ESD kit" in the maintenance manual.
61
- 62
This demonstration will show how to avoid ESD damage to the sensitive parts of the
Symmetrix V-Max system hardware components.
62
- 63
All V-Max Engines contain two physical director boards, each with two Back-end I/O Modules
identified as Modules 0 and 1, a System Interface Board identified as Modules 2 and
3, and up to 8 logical directors (referred to as slices) identified as A through H.
V-Max Engines contain two I/O Module Carriers, where each I/O Module Carrier contains
two Front-end I/O Modules. Each I/O Module Carrier is an extension to either the odd or
even Director. The even Director is highlighted here with the associated Front-end I/O
Modules 4 and 5. The graphic reveals that both modules 4 are used for Fibre Channel,
while both modules 5 are used for FICON.
63
- 64
This demonstration will show the installation procedure for the upgrade of a second V-Max
Engine.
The same video can be used for the replacement of a V-Max Engine.
64
CPU Board
SIB
BE I/O Module
BE I/O Module
Director
CPU Board
Cache Memory Modules
SIB
BE I/O Module
BE I/O Module
- 65
When replacing a Director board, the script will warn that the associated Front-end or SRDF
Director will go offline, and that the customer must be notified that I/O activity through these
directors will be interrupted during the replacement process. You should explain this to the
customer as: I/O activity through these Directors will be interrupted during the replacement
process unless hosts have EMC PowerPath or suitable failover software configured and
active. If the director slice is an active SRDF link, extra precaution must be taken.
Example: Warning! Directors 16e, 16f, 16g, 16h will soon become unavailable. Please ensure
the customer has alternate I/O paths. I/O should be suspended on the ports to these directors.
65
Director
- 66
66
- 67
In this example, Director 7 is clearly highlighted in purple in the script diagram and text. When
replacing a director, the original I/O Modules and Cache Memory Modules need to be
removed and re-installed via the script. Since the Director board is heavy on the end opposite
the ports, careful removal of the Director board is required; do not allow
the board to tip when sliding it out.
67
- 68
68
CPU Board
SIB
BE I/O Module
BE I/O Module
Director
CPU Board
Cache Memory Modules
SIB
BE I/O Module
BE I/O Module
- 69
The System Interface Board (SIB) provides connectivity between the Director and the MIBEs,
referred to as Virtual Matrix A and Virtual Matrix B. The SIB contains Quad Small Form-Factor
Pluggable (QSFP) cable connections and two hex displays for the Rack and Enclosure
numbers. The LED illuminates green if power is good and yellow if there is a fault and
service is required.
The QSFP LED is normally off. Yellow indicates a loss of signal or the port is marked for
replacement. The script will warn that logical Directors e, f, g, and h will soon become
unavailable. Please make sure that the customer has alternate I/O paths available since I/O
will be suspended on the ports to the director.
69
- 70
To replace a System Interface Board (SIB), select SymmWin Procedure Wizard, then FRU
Replacement Tools, then Enclosure Slot component replacements, followed by Replace
SIB.
Continue the script and it will identify the failing SIB or ask you to select the failing SIB
location. In this example, Director 7 in V-Max Engine 4 was selected.
70
- 71
This demonstration will show the replacement procedure for the SIB.
71
CPU Board
SIB
BE I/O Module
BE I/O Module
Director
CPU Board
Cache Memory Modules
SIB
BE I/O Module
BE I/O Module
- 72
The customer should be notified that the associated Front-end Director will go offline during
the Cache Memory Module replacement process.
I/O activity could be halted unless hosts have EMC PowerPath or suitable failover software
active.
72
- 73
To replace a Cache Memory Module, select SymmWin Procedure Wizard, then FRU
Replacement Tools, then Enclosure Slot component replacements, followed by Replace
memory (DIMMs). The script will automatically detect the failing Cache Memory Module and
you will be able to select it from the Cache Memory Module list.
If the script cannot find a failed cache memory module, the user can select the director and
cache memory module from a list as shown in the graphic on the top right-hand side. In this
example, DIMM 3 on Director 7 of Enclosure Slot 4 was selected. Director 7 is clearly
highlighted in the diagram and text as shown in the graphic on the top left-hand side. This
specific script asks for the customer to be notified that Directors 7E, 7F, and 7G will be taken
offline and that I/O activity to these Directors should be stopped. This means that the customer
should have EMC PowerPath or suitable failover software active. Note that since logical
Director 7G is configured as FICON, logical Director 7H is configured as Link CPU and is not
part of the listed directors for which failover paths were requested.
The Marker LED comes on and asks for the removal of the cables from I/O Module 0, I/O
Module 1, SIB port B, and SIB Port A. Follow the directions as per script in the bottom
graphic, removing the director board as indicated.
73
- 74
The script asks to follow proper ESD guidelines to avoid static discharge damage to the
hardware. In this example, Cache Memory Module 3 is clearly shown in the diagram and
highlighted in text. The script message reads: IMPORTANT, verify the serial numbers of the
Cache Memory Modules you are removing and replacing! That serial number label is found
on the top of the Cache Memory Module shield.
74
Cache Memory
Module shroud
Cache
Memory
Modules
- 75
Eight Cache Memory Modules are positioned in their slots on a Director. You must use the
ESD kit when handling directors, including taking out directors for the purpose of replacing
one or more cache memory modules. As can be seen in this slide, memory is accessible after
removing the shroud. Removal of the shroud is done by pressing on all four (4) corners.
75
- 76
The script requests to slide the Director only halfway into the V-Max Engine after all Director
board modules have been re-installed. Follow the directions on the left side of the screen. At
this point all cables should have been re-connected. Once completed, the board is to be fully
inserted. Click OK and continue following the script until successful completion.
76
- 77
This demonstration will show the replacement procedure for the Cache Memory Module.
77
CPU Board
SIB
BE I/O Module
BE I/O Module
Director
CPU Board
Cache Memory Modules
SIB
BE I/O Module
BE I/O Module
PS Module
B
PS Module
A
- 78
The V-Max Engine has N+1 power redundancy with two (2) 1,200 Watt Power Supply
modules. When looking from the rear, Power Supply module A is on the right, and Power
Supply module B is positioned on the left. Both individual power supplies can supply power
to all V-Max Engine components. Each MIBE also has its own redundant power.
78
- 79
The procedure to replace a failing V-Max Engine power supply from the front of the array is as
follows. Access the script from SymmWin Procedure Wizard, then select FRU Replacement
Tools, followed by Replace Power Component. The script will identify the failing power
supply module, and optionally show the power cabling so that you may verify it is correct.
Depending on the level of Enginuity, the script could ask to Disconnect the power cord by
depressing the top and bottom tabs of the large black connector. This step, however, is NOT
required.
79
- 80
This demonstration will show the replacement procedure for the Power Supply.
80
Blower (Fan)
- 81
Each V-Max Engine contains 2 Power Supplies and 4 Blower modules, positioned in the front
of the array.
Replace a failing blower module from SymmWin Procedure Wizard, then select FRU
Replacement Tools, followed by Replace Power Component.
The script shows the location of the faulty blower module, and a message is displayed stating
vault monitoring is suspended for 10 minutes.
The system, however, still vaults immediately should the incorrect Blower be removed.
81
- 82
This demonstration will show the replacement procedure for the Blower module.
82
Drive
- 83
To replace a Drive, access the disk replacement script from SymmWin Procedure Wizard,
then select FRU Replacement Tools, then Disk drive replacements, followed by Replace
Disk Drive. The location is clearly highlighted showing the Drive bay and disk location within
the Drive Enclosure of the Storage bay.
Correct seating of components is necessary to ensure that the system is able to detect the
components listed in the configuration file.
Therefore, be sure to visually check a drive for proper seating after replacement.
83
- 84
84
MIBE
Director
CPU Board
SIB
Virtual Matrix
BE I/O Module
Enclosure A
BE I/O Module
Director
Virtual Matrix
CPU Board
Cache Memory Modules
SIB
Enclosure B
BE I/O Module
BE I/O Module
RS232
Captive Screw
Module 2: Field Replaceable Units (FRU)
- 85
The MIBE green LED is normally ON, indicating that power is ON and within specification. If
the green LED is OFF, power to the MIBE is missing.
The MIBE amber LED is normally OFF, indicating normal operating conditions. When the
Marker LED is ON, the MIBE requires service, or marks the loss of signal on the port.
85
MIBE (Continued)
- 86
86
MIBE (Continued)
Rear View
Front View
- 87
Remove any access covers from the front of the unit covering the MIBE. At the rear of the
MIBE, perform the following steps:
1. Disconnect any Virtual Matrix cables from the rear of the MIBE
2. Disconnect the RS-232 cable from the rear of the MIBE
3. Disconnect the gray power cable.
4. Disconnect the black power cable.
5. Completely loosen the captive screw at the rear of the defective MIBE.
At the front of the defective MIBE, remove the two MIBE power supplies and use them to
repopulate the new MIBE. Refer to the script for details. Slide the MIBE out from the front of
the System Bay.
87
- 88
88
Fabric Cable
FE I/O Mod
CPU Board
FE I/O Mod
FE I/O Mod
CPU Board
CPU Board
SIB
BE I/O Mod
BE I/O Mod
FE I/O Mod
FE I/O Mod
SIB
BE I/O Mod
BE I/O Mod
FE I/O Mod
FE I/O Mod
SIB
BE I/O Mod
BE I/O Mod
CPU Board
FE I/O Mod
SIB
BE I/O Mod
BE I/O Mod
- 89
To replace a fabric cable, select SymmWin Procedure Wizard, then select FRU
Replacement Tools, followed by Fabric component replacements and choose Replace
Fabric Cable. If the script is unable to identify any failing Fabric Cable, you are asked to select
one from a list.
In this example, the selection was made for the cable that runs from Port A of Director 9 to
Port 0 of MIBE A.
The graphic shows that Director 9 has a redundant path via MIBE B.
89
- 90
The script will highlight both the port on the MIBE as well as the SIB of the Director. As
before, L does not refer to Left; instead, MIBE-L 2A is on the right-hand side when
looking from the rear of the System Bay, as shown in the graphic.
90
- 91
This demonstration will show the replacement procedure for a Fabric Cable.
91
CPU Board
SIB
BE I/O Module
BE I/O Module
Director
CPU Board
Cache Memory Modules
SIB
BE I/O Module
BE I/O Module
- 92
Front-end I/O Modules are highlighted in the graphic, together with I/O Module Carrier A in
the V-Max Engine. I/O Modules 4 and 5 belong to the Odd Director and reside in the right-most I/O Module Carrier as seen from the rear of the array.
When replacing I/O Modules 4 or 5 in the I/O Module Carrier, the ports associated with that
I/O Module will need to be brought offline to avoid having I/O activity through these directors
being interrupted during the replacement process.
Check with the customer to make sure all affected hosts have EMC PowerPath or suitable
failover software active.
92
CPU Board
FE I/O Mod
FE I/O Mod
CPU Board
CPU Board
SIB
BE I/O Mod
BE I/O Mod
FE I/O Mod
FE I/O Mod
SIB
BE I/O Mod
BE I/O Mod
FE I/O Mod
FE I/O Mod
SIB
BE I/O Mod
BE I/O Mod
CPU Board
FE I/O Mod
SIB
BE I/O Mod
BE I/O Mod
- 93
To replace the I/O Module Carrier, start the script from SymmWin Procedure Wizard, then
select FRU Replacement Tools, followed by Enclosure Slot component replacements, and
choose Replace IO Module Carrier. The script will allow you to make a selection from a list of
I/O Module Carriers should it be unable to find any failed I/O Module Carrier. Per V-Max Engine
there are two (2) I/O Module Carriers to choose from: I/O Module Carrier A or I/O Module
Carrier B. The script refers to a V-Max Engine as ES, which stands for Enclosure Slot. I/O
Module Carrier A is positioned on the right when looking from the rear of the array.
93
- 94
In the previous screen, I/O Module Carrier B in ES-5 was chosen. This V-Max Engine
contains Directors 9 and 10, as shown in the top graphic on the screen. Carrier B contains two
(2) I/O Modules that both provide the front-end ports for Director 10. As per the bottom graphic,
the script asks to pull both I/O Modules and NOT the I/O Module Carrier itself yet. This may
seem counter-intuitive; however, it is the correct approach.
Now that both front-end I/O Modules have been removed, and as per the bottom graphic on the
screen, the script asks to pull out the I/O Module Carrier itself.
94
- 95
Continuing the script as per the top graphic, multiple steps are given to install the I/O Module
Carrier. The script asks for the new I/O Module Carrier to be inserted first, without the front-end I/O Modules. The next steps will ask you to insert the I/O Modules and attach the cables.
Do not use these screens out of sequence; follow the scripts exactly as they appear.
As per the bottom graphic, the script ensures the correct I/O Module Serial Number is inserted in
the correct position. The front-end ports for I/O Modules E and F of Director 10 are inserted
here.
95
- 96
Continuing the script as per top graphic, the front-end ports for I/O Module E and F of
Director 10 are now inserted.
As per bottom graphic, the new I/O Module Carrier has been installed with the original I/O
Modules re-installed. This final step requires re-cabling.
96
- 97
This demonstration will show the replacement procedure for an I/O Module Carrier.
97
CPU Board
SIB
BE I/O Module
BE I/O Module
Director
CPU Board
Cache Memory Modules
SIB
BE I/O Module
BE I/O Module
- 98
Within each I/O Module Carrier, the two Front-end I/O Modules and their respective Small
Form-Factor Pluggable (SFP) connectors can be replaced when needed.
I/O Modules 4 and 5 can be Fibre Channel, iSCSI, FICON, or a combination thereof, but
should be similar on both directors for failover purposes.
Multi-mode SFPs are identified by a black sleeve around the handle, while single-mode
Small Form-Factor Pluggable connectors have a blue sleeve.
LEDs show the status of a Front-end I/O Module. Green indicates power is on, without a fault.
Yellow indicates a fault condition. No light indicates the module is powered off.
The individual SFPs also have their status shown through LEDs. Green indicates low speed,
while blue means high speed is detected. An alternating green and blue LED indicates a bad
port, or a port with the SFP unplugged. A blinking blue LED indicates that the port requires
service.
98
- 99
Start the replacement of a Front-end Fibre Channel I/O Module by selecting the script from
the SymmWin Procedure Wizard, followed by FRU Replacement Tools, then select
Enclosure Slot component replacements, and choose Replace IO Module. As per top
graphic, Front-end I/O Module 4 of odd Director 7 in carrier A of Enclosure Slot 4 has been
selected for replacement.
As can be seen in the bottom graphic, a screen display warns that the ports associated with
the director's I/O module will become unavailable. Inform the customer that all I/O activity
through these directors will be interrupted during the replacement process and could impact
the customer unless hosts have EMC PowerPath or suitable failover software active. The
failing I/O Module Carrier is identified on the left-hand side of the screen. Make sure to tighten
the I/O Module Carrier lock screws, since failure to do so might result in accidentally pulling
out the I/O Module Carrier.
99
- 100
As per top graphic, the script asks to pull the I/O Module first, before removing the SFPs from
the old I/O Module and populating the new I/O Module with those same SFPs.
After inserting the new I/O Module and moving the SFPs from the old I/O module to the new
I/O module, reconnect the cables as per instructions on the left of the bottom graphic.
100
- 101
This demonstration will show the replacement procedure for a Front-end I/O Module
101
SFP Port
- 102
Replacing an SFP does not require the I/O Module to be removed. Simply access the script
from SymmWin Procedure Wizard, then select FRU Replacement Tools, followed by
Enclosure Slot component replacements, and choose Replace SFP. The script prompts
that the SFP will become unavailable and recommends stopping I/O activity through this
SFP.
In this example, this is SFP 1 of Director 7 (Odd director) in V-Max Engine 4, indicated as
07E/1, which is positioned in I/O Module 4, here indicated as 07E/F. Make sure the
customer has EMC PowerPath or suitable failover software active, since all I/O activity through
this SFP will be interrupted during the replacement process.
102
- 103
This demonstration will show the replacement procedure for an SFP Port.
103
CPU Board
SIB
BE I/O Module
BE I/O Module
Director
CPU Board
Cache Memory Modules
SIB
BE I/O Module
BE I/O Module
- 104
The Back-end I/O Module is replaced using the same script as the Front-end I/O Module. The
script asks to select the Director and either I/O Module 0 or I/O Module 1. Use the cable
labels to verify the correct port connection attached to the Backend I/O Module. The LED
above the port can light up as green or yellow. Green indicates Power is ON and within
specification. Yellow indicates the module has a fault condition. If there is no light at all, the
module is powered off.
The four (4) LEDs below the port can light up blue or green, or both, with the following
meanings:
1. Steady blue indicates a high-speed, active connection,
2. Alternating (blinking) green and blue indicates the port has a fault condition, and
3. Blinking blue indicates that the port requires service by an EMC Customer Engineer.
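The LED states above can be summarized in a small lookup table. The following is a hypothetical sketch (the state names are paraphrased for illustration, not taken verbatim from EMC documentation):

```python
# Hypothetical lookup tables (state names paraphrased) summarizing the
# back-end I/O module LED meanings described above.
MODULE_LED_MEANING = {
    "green": "power is ON and within specification",
    "yellow": "module fault condition",
    "off": "module is powered off",
}

PORT_LED_MEANING = {
    "steady blue": "high-speed, active connection",
    "alternating green/blue": "port fault condition",
    "blinking blue": "port requires service by an EMC Customer Engineer",
}

def describe_port(state):
    """Return the meaning of a port LED state, or a fallback if unknown."""
    return PORT_LED_MEANING.get(state, "unknown LED state")

print(describe_port("blinking blue"))
```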
104
- 105
This demonstration will show the replacement procedure for a Back-end I/O module.
105
Management Module
MM-A ES-4: Management Module A, V-Max Engine number 4
- 106
There are two (2) Management Modules available per V-Max Engine, as shown in the top
graphic. The associated Front-end Director will not go offline during the replacement of the
power supply or SPS of the Management Module.
To replace a failing management module, access the script from SymmWin Procedure
Wizard, then select FRU Replacement Tools, followed by Replace Power Component. If
a Management Module is failing, it will be automatically detected and listed during the
replacement procedure.
Should all Management Modules be found to be in correct working order, a list with all power
components is presented.
In this example, and as per bottom graphic, the selection is made for Management Module
A in V-Max Engine number 4 (MM A ES 4).
106
- 107
This demonstration will show the replacement procedure for a Management module.
107
Course Summary
Key points covered in this course:
The Symmetrix V-Max systems are available as Symmetrix V-Max arrays
and Symmetrix V-Max SE systems, both using Virtual Matrix Architecture.
V-Max arrays consist of V-Max Engines, each containing a portion of
Global Memory and two Directors capable of managing Front End, Back
End, and remote connections simultaneously
The V-Max array consists of many Field Replaceable Units, which can be
exchanged for new components using the SymmWin Procedure Wizard
- 108
108