


The CLARiiON basic architecture allows for flexible host attachment through optical Fibre Channel or copper iSCSI ports in the array.
* The connection to all back-end disks is through Fibre Channel loops, implemented using copper cables.

FC5500 (1997) -- the first full Fibre Channel storage system
CX200, 400, 600 -- 2002
CX300, 500, 700 -- 2003
CX300i, 500i -- 2005
CX3-20, 40, 80 -- 2006
CX3-10c, 20c, 20f, 40c, 40f -- 2007
CX4-120, 240, 480, 480P, 960 -- 2008 (4th generation of CX)

CLARiiON Secure CLI

naviseccli -h <SPaddr> <command>
Common commands: migrate, spcollect, clone, alpa, bind, clearlog, emconfiguration, getagent, getcontrol, arraycommpath, cachecard, clearstats, failovermode, getall, isns, ntp, nqm, arrayname
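As a hedged sketch, every Secure CLI invocation follows the same `naviseccli -h <SPaddr> <command>` shape, so commands can be composed programmatically (the SP address below is a placeholder, and this wrapper is illustrative, not part of the Navisphere toolset):

```python
def navi_cmd(sp_addr: str, *args: str) -> list[str]:
    """Compose a naviseccli argument list for a given SP address.

    The general form is: naviseccli -h <SPaddr> <command> [options].
    Commands such as getagent and getall are from the list above.
    """
    return ["naviseccli", "-h", sp_addr, *args]

# Example invocations (10.0.0.1 is a made-up SP management address):
print(navi_cmd("10.0.0.1", "getagent"))
print(navi_cmd("10.0.0.1", "getall"))
```

In practice the list would be handed to a process runner on a management host that has Navisphere Secure CLI installed.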
Storage Pools: collections of physical disks that share the same RAID protection scheme/RAID level.
Access control is implemented in the CLARiiON using the Access Logix code that runs on the Storage Processor.
This uses the concept of Storage Groups: LUNs are assigned to a Storage Group, and hosts are associated with the SG to allow access.
Hot Spare -- takes the place of a failed disk within a RAID group. Can be located anywhere except on vault drives.
Logical Unit (LUN) -- a grouping of one or more disks into one span of disk storage space.
Storage Groups are a feature of Access Logix and are used to implement LUN masking.

Data integrity features:
- Mirrored write cache -- ensures both SPs contain up-to-date, identical write data
- De-stage write cache to disk upon power failure
- Sniff verify
- Background verify -- per RAID group
Vault drives:
- Drives 0-4 in the first enclosure on the CX series
- Drives 0-9 in the first enclosure on the FC series
- Hold write cache content in the event of a failure.
Persistent cache:
Write cache data is maintained under these scenarios:
- Non-disruptive upgrades (NDUs)
- Single Storage Processor (SP) restart
- Single Storage Processor (SP) removed
- SP or I/O module replacement or repair
- SPS, single SP hard fault, or transient hardware failure
- Power supply failure
Burst smoothing -- absorbs bursts of writes into memory, avoiding disks becoming a bottleneck.
Locality -- a feature which merges several writes to the same disk area (stripe) into a single operation.

Three levels of flushing:

- Idle flushing
- Watermark processing
- Forced flushing
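The three flush levels can be sketched as a decision on cache fill percentage against low/high watermarks. This is a toy model only; the watermark defaults below are illustrative assumptions, not CLARiiON defaults:

```python
def flush_mode(cache_pct: float, low_wm: float = 60.0, high_wm: float = 80.0,
               io_idle: bool = False) -> str:
    """Pick a write-cache flush level for a given cache fill percentage.

    - "idle": array is idle, so de-stage opportunistically
    - "watermark": fill is between the low and high watermarks
    - "forced": cache is above the high watermark, flushing takes priority
    """
    if cache_pct >= high_wm:
        return "forced"
    if cache_pct >= low_wm:
        return "watermark"
    return "idle" if io_idle else "none"

print(flush_mode(85.0))                 # above the high watermark
print(flush_mode(70.0))                 # between the watermarks
print(flush_mode(10.0, io_idle=True))   # quiet array, idle flushing
```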

Drive spin-down feature -- spins drives down to 0 RPM to save power.

At least 1 storage domain must be created, and each domain must have a domain master, in order to manage multiple CLARiiONs from a single user interface.
The domain master maintains the master DB of configuration information for the domain and shares this with the other storage nodes in the domain.

* The DAE3P-OS, located at Bus 0, Enclosure 0, contains the system disks in the first five slots.
These disks store the Storage Processor OS and configuration information.
The DAE3P-OS can also contain up to 10 additional data disks.
ALUA -- Asymmetric Logical Unit Access is a SCSI communication standard.
It allows a host to see a LUN over paths through both SPs, even though the LUN is still owned by only one of the SPs.
Although this allows a LUN to be seen over both SPs, the data will normally be sent to the LUN over an "optimal" path that goes through the owning SP.
It is a request-forwarding implementation that honors the storage system's LUN ownership feature. However, when necessary or appropriate, it allows I/O to be routed to either SP. When data is transmitted to the LUN over a path through the non-owning SP, the data is redirected to the owning SP over the CMI path to the disk holding the LUN.
BEST PRACTICES for ALUA failover/performance:
* Balance LUN ownership between the 2 SPs.
* Ensure that hosts are still using optimal paths to their LUNs.
* Ensure that all LUNs are returned to the default owner (SP) after Non-Disruptive Upgrades (NDUs).
* In case of a failure or performance issue, ensure I/O is routed through the optimal path.
* ALUA with PowerPath is the optimal solution.

The CX4-120/240/480 SPE is 2U tall, 24 inches deep, and rack mountable.

The 2U SPE contains:
1) 2 SPs (each SP is a combination of replaceable units => CPU module + I/O modules = SP)
2) 4 power supplies with integrated blower modules => each power supply/blower module is a FRU
3) Midplane for 12V DC distribution, CMI, Resume PROM (contains array part no., WWN seed, midplane part no.), and peer-to-peer functions
- Dual AC cords hardwired to the chassis.
- Each cord connects to 2 power supplies.

Each SP boots independently

3 LEDs between the power supplies for each SP may be monitored during the SP boot sequence.
If all steps are completed successfully, the Fault LED will turn off and the Power LED will remain on.
SP Power LED (green)
SP Fault LED (amber/blue)

* An SP can be hot-swapped from the enclosure in its entirety. An I/O module or I/O filler cannot be hot-swapped from a powered SP.
In either case, the entire SP will lose power immediately.

CPU Module LEDs

Green means that power is good
Amber/blue means that there is a fault
White means that it is unsafe to remove the CPU.

There are 2 power supplies per CPU, so a total of 4 power supplies in each system.
Both power supplies for an SP must be removed before the CPU module can be removed.

The base CX4-120 model contains a single SPS unit, mounted as SPS-A.
The connection that would normally attach to SPS-B is connected to the PDU (Power Distribution Unit).
If the optional 2nd SPS is ordered, the power connections are made the same as on other CX4-240/480 models.
In the CX4-240 & 480 models:
SPS-A --- PS-A0 & PS-B0
SPS-B --- PS-A1 & PS-B1
The SPS units should only be used to power the SPE and the first DAE.
The management LAN IP address is provided by the customer and set during array initialization.
The service LAN has a fixed IP address and subnet mask.
GbE Service LAN -- (SPA=
The NMI (Non-Maskable Interrupt) button forces a memory dump of the SP's memory and should only be used under the direction of Engineering or Tech Support Level 2.

2 types of I/O modules are available for CX4:

Fibre Channel & iSCSI modules.
The FC I/O module contains 4 ports.
The iSCSI I/O module contains 2 ports.
All CX4 models require 1 FC I/O module per SP, installed in slot 0, and 1 iSCSI I/O module.
Front-end SFP optical transceivers:
Small Form-factor Pluggable -- an SFP is an optical transceiver.
Available in the CLARiiON CX4 series is the 4-port 8 Gb/s FC I/O module, which supports 2/4/8 Gb/s front-end host transfer rates.
It will connect to the same 2 & 4 Gb/s back-end DAEs as the current 4 Gb/s FC I/O modules.
An 8 Gb SFP is required for 8 Gb FE connectivity.
A bicolor green LED indicates power good; an amber LED indicates a fault.
The link LED shows link speed, activity, and port marking:
- If the LED is green, the link speed is 2 or 4 Gb/s.
- If the LED is blue, the link speed is an 8 Gb/s FC connection.
- Port marking blinks the LED, with or without an FC connection, for port identification.
** Any FC port can be configured as either a front-end host/switch connection or a back-end DAE connection.
Each FC port can operate at 2, 4, or 8 Gb/s.
In the previously released UltraFlex technology, the FC I/O modules were 4-port 4 Gb/s and the iSCSI I/O modules 1 Gb/s.
Available in the CX4 series line are 4-port 8 Gb/s FC I/O modules and 10 Gb/s iSCSI I/O modules.
** The 10 Gb/s iSCSI I/O module can only run at 10 Gb/s and will not auto-negotiate to 1 Gb/s or any other speed.

New SATA flash drives use the SATA II protocol natively within the flash drive and use a paddle card to present the FC interface to the storage system.
They can be used interchangeably with FC flash drives:
-- SATA flash drives may be combined with FC flash drives and conventional FC HDDs in the same enclosure.
-- SATA flash drives may be combined with FC flash drives in the same RAID group.
-- SATA flash drives can spare for FC flash drives, and vice versa.
-- As is the case with any type of flash drive, HDDs cannot spare for flash drives.
Drive spin down:
This is the CLARiiON's 2nd-generation spin-down product, the 1st being available only for the CLARiiON Disk Library.
Using this feature, disk drives transition into a low-power state when idle by spinning down the spindle motor (0 RPM), creating a power saving of 55-60%.
Unisphere/CLI should be used to configure this. The feature can be enabled either at the array or RAID group level.
* This feature is only available on the CX4 platform.
* Vault drives (drives in slots 0-4) are not eligible for spin down, and spin down is not available for any layered features (MetaLUN, thin LUN, WIL, RLP, CLP).
* Only SATA drives are qualified to participate in power savings.
The cables conform to industry-standard Fibre Channel passive SFP connectors.
Copper FC cables utilize industry-standard HSSDC2 connections on the DAE side.
* SFP-to-HSSDC2 cables are only required from the SP to the first DAE.
* HSSDC2-to-HSSDC2 cables are required for DAE-to-DAE connections.
* 2/4 Gb/s FC copper cable support.

DAE3P enclosure:
- Supports up to 15 low-profile 2/4 Gb/s FC disk drives
- Two 4 Gb/s LCCs
- Two power supply/cooling modules
- Uses the same cables as the DAE2P enclosure
* 8 meters max between DAE3Ps
* 5 meters max between SPEs & DAEs
DAE3P rear view:
The SPA loop components are mounted in the bottom of the enclosure, and the SPB loop components are mounted in the top of the enclosure.
* Each power supply in the DAE3P enclosure is capable of powering the entire enclosure in the event of a power supply failure.
* The 2 LCC cards connect to all the disks in the enclosure and operate independently.
** The Loop ID and Enclosure ID on both LCC cards within an enclosure must always match.
DAE3P rules:
* The first 5 drives in the OS enclosure (BE0, Enc 0) must all be of equal size/speed.
* If possible, keep all drives in a chassis the same size and speed. If not, try to install drives of the same type in groups of five.
* Do not mix 2 Gb/s drives with 4 Gb/s drives in a DAE3P enclosure.
* ATA drives are not allowed in an FC enclosure chassis.
* Empty drive slots must be filled with drive fillers.

The objective of the disk enclosure interconnect rules is to preserve the highest possible BE loop speeds.
A BE loop can operate at one of 2 speeds, and will operate at the lowest loop speed present.
Each FC DAE BE loop will support up to 120 FC drives, or a max of 8 DAEs per loop.
Each SATA-enabled DAE BE loop will support up to 120 SATA drives.
Loop 0 supports up to 105 SATA drives, in addition to the first DAE, which includes the mandatory FC drives that support the vault/OS boot drives. The CX4-120 is an exception to this.
The CX4-120 supports only 1 BE loop.
The CX4-240 model supports 2 BE loops per SP.
The CX4-480 supports 4 BE loops per SP.
CLARiiON CX4-120 overview:
-- UltraScale architecture
->> 2 single 1.2 GHz dual-core LV-Woodcrest CPU modules
->> 6 GB system memory
-- Connectivity
->> 128 high-availability hosts per array
->> Max storage groups: 256
->> Up to 6 I/O modules (FC & iSCSI)
- 8 FE 1 Gb/s iSCSI host ports max
- 12 FE 4 Gb/s FC host ports max
- 2 BE 4 Gb/s FC disk ports max
- 4 FE 10 Gb/s iSCSI ports max
-- Scalability
->> Up to 1024 LUNs
->> Up to 120 drives
CX4-240 overview:
It has an UltraScale architecture and 8 GB of system memory. It can connect to 512 HA hosts and can support both FC and iSCSI I/O modules.
It can scale up to 2048 LUNs and 240 drives.
Up to 8 I/O modules:
- 12 FE 1 Gb/s iSCSI host ports max
- 12 FE 4 Gb/s FC host ports max
- 4 BE 4 Gb/s FC disk ports max
- 4 FE 10 Gb/s iSCSI host ports max
CX4-480 overview:
UltraScale architecture
- Two 2.2 GHz dual-core LV-Woodcrest CPU modules
- 16 GB system memory
- 256 HA hosts
- Up to 10 I/O modules
- 16 FE 1 Gb/s iSCSI host ports max
- 16 FE 4 Gb/s FC host ports max
- 8 BE 4 Gb/s FC disk ports max
SnapView -- business uses:
Disk-based backup & recovery, data warehousing, decision support & data testing, data reporting, and data movement.
SnapView allows multiple business processes to have concurrent, parallel access to information.
* SnapView creates logical point-in-time views of production information using snapshots, and point-in-time copies using clones. Snapshots use only a fraction of the original disk space, while clones require the same amount of disk space as the source.

Production host -- server where customer applications are executed.
Backup host -- host where backup processing occurs.
Admsnap utility -- an executable program which runs interactively or with a script to manage clones/snapshots.
Source LUN -- the production LUN.
Activate -- maps a snapshot to an available snapshot session.
Snapshot -- a point-in-time copy of a source LUN.
RLP -- Reserved LUN Pool: a private area used to contain CoFW data.
It holds all the original data from the source LUN when the host writes to a chunk for the first time.
Chunk -- an aggregate of multiple disk blocks that SnapView uses to perform CoFW operations. The chunk size is set to 64 KB.
CoFW -- Copy on First Write: when a chunk is changed on the source LUN for the first time, the original data is copied to a reserved area.
To start a tracking mechanism and create a virtual copy that has the potential to be seen by a host, we need to start a session.
A session is associated with one or more snapshots, each of which is associated with a unique source LUN. Once a session has been started, data is moved to the RLP as required by the CoFW mechanism.
We need to activate the snapshot to make it appear online to the host.
Fracture -- the process of breaking off a clone from its source.
Any server I/O requests made to the source LUN after you fracture the clone are not copied to the clone unless you manually perform synchronization.
Synchronizing a fractured clone unfractures the clone and updates its contents from the source LUN.
Clone group -- contains the source LUN and all of its clones. When you create a clone group, you specify a LUN to be cloned; this LUN is referred to as the source LUN.
Once you create a clone group, SnapView assigns a unique ID to the group.
Clone Private LUN -- records information about data chunks modified in the source LUN and clone LUN after you have fractured the clone.
A modified data chunk is a chunk of data that a server changes by writing to the clone or source LUN.
Reserved LUN sizing:
Average source LUN size = total size of source LUNs / number of source LUNs
Reserved LUN size = 10% of the average source LUN size (CoFW factor)
Create twice as many reserved LUNs as source LUNs (overflow LUN factor)
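The sizing rules above reduce to simple arithmetic. A minimal sketch (the LUN sizes in the example are made up):

```python
def reserved_lun_plan(source_lun_sizes_gb, cofw_factor=0.10, overflow_factor=2):
    """Apply the SnapView Reserved LUN Pool rules of thumb:
    reserved LUN size = 10% of the average source LUN size (CoFW factor),
    and create twice as many reserved LUNs as source LUNs (overflow factor).
    """
    avg = sum(source_lun_sizes_gb) / len(source_lun_sizes_gb)
    return {
        "reserved_lun_size_gb": avg * cofw_factor,
        "reserved_lun_count": overflow_factor * len(source_lun_sizes_gb),
    }

# Three source LUNs of 100, 200, and 300 GB: average is 200 GB,
# so each reserved LUN is 20 GB and six reserved LUNs are created.
plan = reserved_lun_plan([100, 200, 300])
print(plan)
```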

A SnapView session is a point-in-time copy of a source LUN. The session keeps track of how the source LUN looked at a particular point in time.
* After you start a SnapView session, as the production server writes to the source LUN, the software stores a copy of the original data in the Reserved LUN Pool, in chunks.
This process is referred to as CoFW and occurs only once, which is when the server first modifies a data chunk on the source LUN.
SnapView session modes: persistent vs. consistent
Persistent mode:
- All sessions are persistent
- Survives failures & trespasses
- Available on a per-session basis
Consistent mode:
- Preserves the point-in-time restartable copy across a set of source LUNs
- Available on a per-session basis
- Counts as one of the 8 sessions per source LUN limit
- Not available on AX series storage systems

SnapView -- activating/deactivating snapshots

Activating a snapshot:
- Maps a SnapView snapshot to a snapshot session
- Makes it visible to a secondary host
Deactivating a snapshot:
- Makes it inaccessible to the secondary host
- Does not flush host buffers (unless performed through the Admsnap utility)
- Keeps the CoFW process active
SnapView rollback:
- Process to recover data on the source LUN
- Reverses the pointer-based process
- Restores the contents of the source LUN to a point in time

A clone is a complete copy of a source LUN that uses copy-based technology.

Instant restore: reverse sync copies clone data to the source LUN. Data on the source is overwritten with clone data. As soon as the reverse sync begins, the source LUN appears identical to the clone. This feature is known as instant restore.
A consistent fracture is when you fracture more than one clone at the same time, in order to preserve the point-in-time restartable copy across the set of clones.

MirrorView:
Mirror synchronization: mechanism to copy data from the primary LUN to a secondary LUN.
The mechanism may use the fracture log / write intent log to avoid a full data copy.
Mirror fracture:
- Condition when a secondary is unreachable by the primary
- Can be invoked by admin command
Mirror availability states:
- Inactive -- admin control to stop mirror processing
- Active -- I/O allowed (normal state)
- Attention -- admin action required
Mirror promote:
- Changes an image's role
Mirror data states:
- Out of sync -- full sync needed
- In sync -- primary LUN & secondary LUN contain identical data
- Rolling back -- primary LUN is returned to its state at a previously defined point in time
- Consistent -- write intent log, or fracture log, may be needed
- Synchronizing -- mirror sync operation in progress
The fracture log is a bitmap held in the memory of the storage processor that owns the primary image.
The log indicates which physical areas of the primary have been updated since communication was interrupted with the secondary.
*** Write intent log:
The record of recent changes to the primary image is stored in persistent memory, on a private LUN reserved for the mirroring software. If the primary storage system fails, the optional write intent log can be used to quickly synchronize the secondary images when the primary storage system becomes available.
This eliminates the need for a full sync of secondary images.
The write intent log keeps track of writes that have not yet been made to the remote image for the mirror. It allows for fast recovery.

- Up to 4,096 LUNs
- Up to 480 drives
Data cabling:
Bus 0 - I/O module 0, port 0
Bus 1 - I/O module 0, port 1
Bus 2 - I/O module 1, port 0
Bus 3 - I/O module 1, port 1

CX4-960 overview:
UltraScale architecture:
- 4 quad-core 2.33 GHz Clovertown CPU modules
- 32 GB system memory
- 4096 HA hosts
- Up to 12 I/O modules
- 16 FE 1 Gb/s iSCSI host ports
- 24 FE 4 Gb/s FC host ports
- 32 BE 4 Gb/s FC disk ports
- 8 FE 10 Gb/s iSCSI host ports
- Up to 4096 LUNs
- Up to 960 drives
The CX4-960 uses processor throttling, a mechanism that enables dynamic switching of the CPU core voltage & CPU clock frequency.
This slows down the processor/clock speed, reducing heat generation.
This feature is only available on the CX4-960 platform.
Rule for hot spares:
There should be one hot spare drive for up to 30 drives.
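The rule of thumb above works out to a ceiling division over the drive count. A small sketch:

```python
import math

def hot_spares_needed(drive_count: int, drives_per_spare: int = 30) -> int:
    """One hot spare for every (up to) 30 drives."""
    return math.ceil(drive_count / drives_per_spare)

print(hot_spares_needed(120))  # a fully populated CX4-120 -> 4 spares
print(hot_spares_needed(960))  # a fully populated CX4-960 -> 32 spares
```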

Utilizing MirrorView's bi-directional capability, any CLARiiON can be engaged in up to 4 relationships with other systems.
This means you can have each system be both a source & a target. Relationships can be with different models of CLARiiON.
MirrorView can also be used to consolidate, or "fan in", information on 1 remote CLARiiON for purposes of consolidated backups, simplified failover, or consolidated remote processing activities.
Up to 4 source CLARiiONs can be mirrored to a single CLARiiON target system.
SnapView is licensed separately from MirrorView & supports both pointer-based snapshots as well as full binary copies (clones).
When used with MirrorView, snapshots & clones provide local replicas of primary or secondary images. They allow secondary access to the data.
Consistency Group (CG) operations/functions:
Create a CG -- defines the group but does not add group members.
Destroy a CG -- allowed only if it has no members.
Add a member to the group -- adds MV/A or MV/S secondary images to the group; creates the group if it is the first image/member added.
Remove a member from a group -- removes an MV/A secondary image from the group.
Fracture a CG -- stops updates to secondary images; administratively fractures all mirrors in the group.
Synchronize a CG -- syncs all mirrors in the group; the group must be administratively fractured, OR the group must be consistent and set to manual update.
Promote a CG -- promotes all mirrors in the group; the command must be directed to the secondary CLARiiON; the group must be in-sync or consistent.

MirrorView thin LUNs:
With R29+, primary & secondary images can be created on thin LUNs.
All combinations of thin and R29+ traditional LUNs are allowed.
Thin LUNs & pre-R29 LUNs are NOT allowed to co-exist in any relationship.
Once the R29+ software package is committed, all pre-R29 traditional LUNs become R29+ traditional LUNs.
MirrorView thin LUN checks:
-- When adding a secondary thin LUN or synchronizing existing mirrors, ensure that the thin pool has enough capacity for the sync to complete; otherwise a MEDIA FAILURE administrative fracture will occur.
However, in the case of MirrorView/S, it checks the secondary image's size before starting the sync.
Business uses of SAN Copy replication software:
1) Data migration
2) Data mobility
3) Content distribution
4) Disaster recovery
SAN Copy software runs on a SAN Copy storage system. SAN Copy copies data at the block level.
It copies data directly from a logical unit on one storage system to destination logical units on another, without using host resources.
SAN Copy terminology:
LUN -- CLARiiON logical unit
Source LUN -- the LUN to be replicated
Destination LUN -- the LUN where the data will be replicated
SAN Copy session -- persistent definition consisting of a source and destination
SAN Copy storage system -- system on which SAN Copy is licensed
SAN Copy port -- CLARiiON SP port used by SAN Copy
Quiesce -- halt all I/O on a LUN
Block-level copy -- reads & writes at the block level (whole LUN)
PUSH -- the SAN Copy system reads data from one of its LUNs and writes data to destination LUNs
PULL -- the SAN Copy system reads from a source LUN and writes data to one of its LUNs
** Before a SAN Copy session can be created, the enabler must be installed on the SAN Copy storage system.
Physical connections must be established between storage systems.
Each SAN Copy port participating in a session must be zoned to one or more ports of each SP that participates in the session.
Logical connections must be established between storage systems -- SAN Copy initiators must be added to storage groups.
SAN Copy uses SnapView to create a snapshot of the source LUN, and actually reads from the snapshot during the update, so there is a consistent point-in-time view of the data being transferred.
A full SAN Copy session copies the entire contents to the destination LUN. In order to create a consistent copy, writes must be quiesced on the source LUN for the duration of the session.
Incremental SAN Copy (ISC) allows the transfer of changed data only, from source to destination.
SAN Copy supports thin LUNs with FLARE R29 and greater; both source and destination LUNs can be thin LUNs.

***** Exception >>> with pull copies: when the source is on the remote array, the copy cannot be provisioned as thin. It can only be provisioned as a traditional copy.
SAN Copy/E copies data from CX300 & AX series storage systems to CX series storage systems running SAN Copy.
* For SAN arrays, the LUN is the fundamental unit of block storage that can be provisioned.
The host's disk driver treats the array LUN identically to a direct-attached disk spindle, presenting it to the OS as a raw/character device.
Whereas in NAS, the NAS appliance presents storage in the form of a file system that the host can mount & use via network protocols (NFS/CIFS).
** FABRIC -- a logically defined space in which Fibre Channel nodes can communicate with each other.
In the case of DAS: due to static configuration, the bus needs to be quiesced for every device reconfiguration. Every connected host loses access to all storage on the bus during this process.
In parallel SCSI: devices on the bus must be set to a unique ID in the range of 0 to 15. Addition of new devices needs careful planning.

Fan-out ==> One storage port services multiple host ports.

Fan-in ==> One host port accesses storage ports on multiple arrays.

FC-SW switched fabric ==> uses a 24-bit address to route traffic, and can accommodate as many as 15 million devices in a single fabric.
An FC HBA is a standard PCI or serial bus peripheral card on the host computer, just like a SCSI adapter.
Fibre Channel is a serial data transfer interface that operates over copper wire or optical fiber at data rates up to 8 Gb/s, and up to 10 Gb/s when used as an ISL on supported switches.
SCSI commands are mapped to Fibre Channel constructs, then encapsulated and transported within Fibre Channel frames, which helps in high-speed transfer of multiple protocols over the same physical interface.
FC-0: Physical interface -- optical & electrical interfaces, cables, connectors, etc.
FC-1: Encode/decode -- link control protocols, ordered sets
FC-2: Exchange & sequence management, frame structure, flow control
FC-3: Common services
FC-4: SCSI-3 FCP, FC link encapsulation, single-byte command code sets, etc.
Upper-layer protocols: SCSI-3, IP, ESCON/FICON, etc.

FC frames => contain header info, data, CRC, and frame delineation markers.
Max data carried in a frame is 2112 bytes, with a total frame size of 2148 bytes.
The header contains the source & destination addresses.
The type field's interpretation depends on whether the frame is a link control frame or an FC data frame.
FC addresses are 24 bits in length; unlike MAC addresses, these are not burnt in but are assigned when a node enters the loop or is connected to a switch.
3 bytes --- 24 bits (Domain -- Area (port) -- AL_PA)
Domain-Area ==> FC-SW
The AL_PA is generally 00 unless the device is in an arbitrated loop.
The physical address is switch-specific and dynamically generated by the fabric login.
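A 24-bit FC address decomposes byte by byte into Domain, Area (port), and AL_PA, and the frame-size figures above pin down the non-payload overhead. A small sketch (the sample address 0x010200 is made up):

```python
def parse_fcid(fcid: int) -> dict:
    """Split a 24-bit Fibre Channel address into its three one-byte fields."""
    assert 0 <= fcid <= 0xFFFFFF, "FC addresses are 24 bits"
    return {
        "domain": (fcid >> 16) & 0xFF,  # switch domain ID
        "area": (fcid >> 8) & 0xFF,     # area/port on that switch
        "al_pa": fcid & 0xFF,           # 00 unless on an arbitrated loop
    }

# Frame sizing from the notes: 2112 payload bytes out of 2148 total
# leaves 36 bytes of header, CRC, and frame delimiters.
FRAME_OVERHEAD = 2148 - 2112

print(parse_fcid(0x010200))  # domain 1, area 2, AL_PA 0
print(FRAME_OVERHEAD)
```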
A WWN is a 64-bit address used in FC networks to uniquely identify each element in the network.
It is burnt onto the device by the vendor.

There are 2 designations of WWN:

World Wide Port Name (WWPN) & World Wide Node Name (WWNN). Both are 64-bit; the difference lies in where each value is "physically" assigned.
A server may have dual HBAs installed, thus having multiple ports/connections to the fabric.
A WWPN is assigned to each physical port, and a WWNN represents the entire server, which can be referred to as a node.

Common fabric services:

LOGIN SERVICE -- address FFFFFE: all nodes register with the login service when performing FLOGI.
NAME SERVICE -- address FFFFFC: stores info about devices attached to the fabric, registered during PLOGI.
FABRIC CONTROLLER -- address FFFFFD: state change notification to nodes registered in the fabric.
MANAGEMENT SERVER -- single access point for the services above; provides 3 services based on virtual containers called zones (1. Name Server, 2. Fabric Configuration Server, 3. Fabric Zone Server).

Zone: a collection of nodes defined to reside in a closed space. Nodes inside a zone are aware of nodes in the zone they belong to, but not of the zones outside of it.

Fan-out ratio: maximum number of initiators that can access a single storage port through a SAN.
Fan-in ratio: maximum number of storage ports that can be accessed by a single initiator through a SAN.

Topology models:
1) Mesh fabric topology: partial or full; in a full mesh, all switches are connected to each other.
2) Compound core-edge fabric: the core connectivity tier is made up of switches configured in a full mesh.
- Core tiers are only used for ISLs
- Edge tiers are used for host or storage connectivity
3) Simple core-edge fabric:
-- Can be 2 or 3 tiers: a single core tier and 1 or 2 edge tiers
-- In a 2-tier topology, storage is connected to the core, and hosts are connected to the edge.
Switches are connected to each other in a fabric using ISLs. This is accomplished by connecting them to each other through an expansion port (E_Port) on the switch.
ISLs are used to transfer node-to-node data traffic, as well as fabric management traffic, from one switch to another.
traffic from one switch to another.
Oversubscription ratio: a measure of the theoretical utilization of an ISL.
If possible, avoid ISL-based host-to-storage connectivity whenever performance requirements are stringent.
If ISLs are unavoidable, the performance implications should be carefully considered during the design stage.
Best practices when adding ISLs in a fabric:
Always connect each switch to at least 2 other switches in a fabric. This prevents a single link failure from causing total loss of connectivity to nodes on that switch.
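One common way to estimate ISL oversubscription is to compare the aggregate host bandwidth crossing the ISLs against the aggregate ISL bandwidth. The port counts and speeds below are hypothetical, and this is only one way to frame the ratio:

```python
def isl_oversubscription(host_ports: int, host_gbps: float,
                         isl_count: int, isl_gbps: float) -> float:
    """Aggregate host bandwidth over the ISLs divided by aggregate ISL
    bandwidth; a result above 1.0 means the ISLs are theoretically
    oversubscribed."""
    return (host_ports * host_gbps) / (isl_count * isl_gbps)

# e.g. twelve 4 Gb/s host ports funneled over two 8 Gb/s ISLs:
print(isl_oversubscription(12, 4, 2, 8))  # -> 3.0
```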

3 types of ISLs are:

MultiMode ISL--
SingleMode ISL--
iSCSI -- provides a means of transporting SCSI packets over TCP/IP.
The iSCSI node name / SCSI device name of an iSCSI device contains 3 parts:
1. Type designator
2. Naming authority
3. String determined by the naming authority
2 name formats:
iqn: iSCSI Qualified Name
eui: Extended Unique Identifier
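The iqn format encodes those three parts as `iqn.<yyyy-mm>.<reversed naming authority>[:<string>]`. A rough structural check (the regex is a simplification of the full naming rules, and the sample names are illustrative):

```python
import re

# iqn.<year-month>.<reversed-domain naming authority>[:<authority-chosen string>]
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9][a-z0-9.-]*(:.+)?$")

def is_iqn(name: str) -> bool:
    """Rough structural validity check for an iSCSI Qualified Name."""
    return IQN_RE.match(name) is not None

print(is_iqn("iqn.1992-04.com.emc:cx.apm00000000001.a0"))  # well-formed
print(is_iqn("not-an-iqn"))                                # malformed
```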

FCIP -- an IP-based storage networking technology.

FCIP enables transmission of information by tunneling data between SAN facilities over IP networks.
The primary purpose of an FCIP entity is to forward FC frames.
** Primitive signals, primitive sequences, & Class 1 FC frames are not transmitted through FCIP, as they cannot be encoded using FC frame encapsulation.
FCIP does not participate in FC frame routing.

FCoE -- Fibre Channel over Ethernet -- is a new technology protocol in the process of being defined by the T11 standards committee.
As a physical interface it uses CNAs (Converged Network Adapters).

The B-Series has WebTools & Connectrix Manager as web GUI management consoles.
The MDS series has Cisco Fabric Manager and Device Manager.
SAN Manager, an integral part of EMC ECC, provides some management & monitoring capabilities for devices from both vendors.
Connectrix Manager is a Java-based licensed tool.
Fabric Manager & Device Manager must be installed on a server and can serve several clients.
This simplifies management of the MDS series through an integrated approach to fabric administration, device discovery, topology mapping, & configuration functions for the switch, fabric, & port.
Port type security refers to the ability to restrict which kind of functions a switch port can assume.
Example: a switch port can be restricted to only function as an F_Port or an E_Port.
CHAP authentication applies to iSCSI interfaces.
Persistent port disable means that a port remains disabled across reboots.
When a device logs into a fabric, it registers with the name server.
When a port logs into a fabric, it goes through a device discovery process with other devices registered as SCSI FCP in the name server.
* The zoning function controls this process by only letting ports in the same zone establish these link-level services.
A collection of zones is called a zone set / zone config.
EMC recommends single-HBA zoning: each zone consists of a single HBA port & 1 or more storage ports.
- A separate zone for each HBA, which makes zone management easier when replacing HBAs.
A port can reside in multiple zones.
VSANs are user-defined logical SANs. VSANs enhance overall fabric security by creating multiple logical SANs over a single hardware infrastructure.
A VSAN can exist within a single switch chassis or span multiple chassis.
Nodes on one VSAN cannot connect to nodes on another VSAN.
Each VSAN has its own active zone set, name server, and routing protocol.
EMC currently supports 20 VSANs.
Ingress & egress ports:
LUN masking prevents multiple hosts from trying to access the same volume presented on a common storage port.

Storage Groups perform the function of LUN masking.
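The Access Logix behavior described above amounts to a membership check: a host is allowed to see a LUN only if some Storage Group contains both. A toy sketch (the group names, host names, and LUN numbers are invented):

```python
# Each Storage Group maps a set of host initiators to a set of LUNs.
storage_groups = {
    "SG_Oracle":   {"hosts": {"host-a"}, "luns": {0, 1, 2}},
    "SG_Exchange": {"hosts": {"host-b"}, "luns": {3, 4}},
}

def host_can_see(host: str, lun: int) -> bool:
    """LUN masking: the host sees the LUN only if some Storage Group
    associates that host with that LUN."""
    return any(host in sg["hosts"] and lun in sg["luns"]
               for sg in storage_groups.values())

print(host_can_see("host-a", 1))  # host-a is in SG_Oracle with LUN 1
print(host_can_see("host-a", 3))  # LUN 3 is masked from host-a
```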

The PowerPath family of products is an enabler for EMC host-based solutions like multipathing, data migration, and host-based encryption.
1) PowerPath Multipathing
=> Multipathing & load balancing across a SAN.
2) PowerPath Migration Enabler
=> Allows for migration of data across a SAN using a host.
3) PowerPath Encryption with RSA
=> Encrypts data traversing a SAN from the host, using RSA Key Manager software.
4) PowerPath/VE
=> PowerPath multipathing for virtual environments.
PowerPath sits between the application & the HBA driver. Applications direct I/O to PowerPath; PowerPath directs I/O to an optimal path based on current workload & path availability.
** PowerPath takes into account path usage & availability before deciding which path to send the I/O down.
PowerPath keeps a table of paths, references this table, and drives the I/O to either a native or pseudo device. The OS creates native devices to represent & provide access to logical devices.
** A native device is path-specific and represents a single path to a logical device.
The device is "native" in that it is provided by the OS for use with applications. Applications don't need to be re-configured to use native devices.
A Pseudo Device represents a single logical device and the set of all paths leading to it. There is one pseudo device per logical device.
** In most cases, the application must be reconfigured to use pseudo devices; ot
herwise, load balancing & path failover functionality are not available.
For example: logical devices 0 & 1 are referred to by the pseudo device names emcpower1c & emcpower2c. Each pseudo device represents the set of paths connected to its respective logical device: emcpower1c represents the set of paths connected to logical device 0, & emcpower2c represents the set of paths connected to logical device 1.
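The pseudo/native device relationship above can be sketched as a small Python model (a toy illustration — the native path names sdb/sdf etc. are invented; real PowerPath builds this mapping in its kernel driver):

```python
# Toy model: each logical device has ONE pseudo device that fans out
# to the set of path-specific native devices (names are illustrative).
pseudo_devices = {
    "emcpower1c": {"logical_device": 0,
                   "native_paths": ["sdb", "sdf"]},   # two paths to LUN 0
    "emcpower2c": {"logical_device": 1,
                   "native_paths": ["sdc", "sdg"]},   # two paths to LUN 1
}

def paths_for(pseudo):
    """All path-specific native devices behind one pseudo device."""
    return pseudo_devices[pseudo]["native_paths"]

print(paths_for("emcpower1c"))  # ['sdb', 'sdf']
```

An application reconfigured to open the pseudo device gets load balancing & failover across every native path in the set; opening a native device directly ties it to a single path.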

Active-Active Array:: Means all interfaces to a device are active simultaneously

PowerPath has 3 licenses available. A full PowerPath license permits the user to take advantage of the full set of PowerPath load balancing & path failover functionality. PowerPath SE licenses support back-end failover only;
that is, only paths from switch to storage arrays are candidates for failover. Back-end storage failover is for a single HBA that has one path to the switch & is zoned to have 2 paths to the array.
A PowerPath/VE license enables full PowerPath multipathing in a virtual environment (Hyper-V & VMware vSphere).
PowerPath fabric failover does not have multi-pathing or load-balancing capability.

*** For every I/O, the PowerPath filter driver looks at the Volume Path Set & selects the path based on the load balancing policy & failover settings for a device.
** PowerPath does not manage the I/O queue; it manages the placement of I/O requests in the queue.
PowerPath Load Balancing Policies:
        -- Symm_opt / CLAR_opt / Adaptive (default)
                -- I/O requests are balanced across multiple paths based on the composition of reads, writes & user-assigned device/appln priorities.
                -- Default policy on systems with a valid PowerPath license.
        -- Round Robin
                -- I/O requests are distributed to each available path in turn.
        -- Least I/O
                -- Requests are assigned to the path with the fewest requests in the queue.
        -- Least Blocks
                -- Requests are assigned to the path with the fewest total blocks in the queue.
        -- No Redirect
                -- Disables path failover & load balancing.
                -- Default for Symmetrix when there is no license key.
        -- Basic Failover
                -- PowerPath fabric failover functionality; path failover only.
                -- Default for CLARiiON with the base license.
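The policy list above can be sketched in Python (a simplified toy model, not PowerPath's actual kernel implementation — path names & queue numbers are invented):

```python
from dataclasses import dataclass
from itertools import cycle

@dataclass
class Path:
    name: str
    queued_requests: int = 0   # outstanding I/O requests on this path
    queued_blocks: int = 0     # outstanding blocks queued on this path

class PathSelector:
    """Toy model of three load-balancing policies from the notes."""
    def __init__(self, paths):
        self.paths = paths
        self._rr = cycle(paths)          # round-robin iterator

    def round_robin(self):
        # Distribute requests to each available path in turn.
        return next(self._rr)

    def least_io(self):
        # Path with the fewest queued requests.
        return min(self.paths, key=lambda p: p.queued_requests)

    def least_blocks(self):
        # Path with the fewest queued blocks.
        return min(self.paths, key=lambda p: p.queued_blocks)

paths = [Path("hba0:sp_a0", 4, 2048), Path("hba1:sp_b0", 1, 8192)]
sel = PathSelector(paths)
print(sel.round_robin().name)   # hba0:sp_a0
print(sel.least_io().name)      # hba1:sp_b0 (fewest queued requests)
print(sel.least_blocks().name)  # hba0:sp_a0 (fewest queued blocks)
```

Note how Least I/O and Least Blocks can pick different paths for the same workload: many small requests inflate the request count while a few large requests inflate the block count.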
PowerPath Automatic Path Testing & Restore:
-- Auto-Probe : Tests for dead paths
-- Periodically probes inactive paths to identify failed paths before se
nding user I/O to them.
--Auto-restore: tests for restored paths
-- Periodically probes failed/closed paths to determine if they have bee
n repaired.
The auto-probe function uses SCSI Inquiry commands for the probe, so that even a not-ready device returns successfully.
The PowerPath GUI is used to configure, monitor & manage PowerPath devices.
PowerPath Administrator has two panes:
1) On the left side is the Scope Pane, where PowerPath objects are displayed in a hierarchical list.
2) On the right side is the Result Pane, which provides a view of configuration statistics for the PowerPath objects selected in the Scope Pane.
PowerPath CLI commands:
powermt check
powermt check_registration
powermt config
powermt display
powermt display options
powermt display paths
powermt load
powermt restore
powermt remove
powermt save
powermt set mode
powermt set periodic_autorestore
powermt set policy
powermt set priority
powermt set write_throttle
powermt set write_throttle_queue
powermt version
PowerPath Encryption with RSA is a host-based data-at-rest encryption solution utilizing EMC PowerPath & RSA Key Manager for the Datacenter.
-- It safeguards data in the event that a disk is removed from an array, or against unauthorised data access.
-- Centralised key management.
-- Consistent encryption technology.
-- Flexible encryption -- choose LUNs or volumes to encrypt.
-- Replication support.
EMC PowerPath Migration Enabler is a solution that leverages the same underlying technology as PowerPath, & enables other technologies, like array-based replication & virtualisation, to eliminate application downtime during data migrations or virtualization implementations.

PowerPath /VE (Virtual Environment )::

The PowerPath/VE solution consists of 2 major components: the PowerPath Remote Management Server & 1/more ESX hosts.
A functional TCP/IP connection is required b/w these components.
The PowerPath Remote Management Server is where the PowerPath CLI is installed. From this server, you can send PowerPath commands to an ESX host via PowerPath remote tools. This server is also used to install, upgrade, or uninstall PowerPath/VE code on an ESX server after the VMware vSphere CLI is installed.
* The CLARiiON management utilities are made up of EMC Unisphere and Navisphere Secure CLI. They are centralized storage-system management tools used for configuring and managing CLARiiON storage systems.
* Unisphere is a web-based UI that allows you to securely manage CLARiiON storage systems locally on the same LAN, or remotely over the Internet, using a common browser.
** Unisphere can manage CLARiiONs running FLARE release 19 & above, but certain views will not work with older releases.
** Unisphere security uses password-based authentication that is implemented by the Storage Management Server software installed on each storage system in the domain.
Supported roles: Administrator, Manager, Security Administrator, Monitor, Local Replication, Replication, Replication & Recovery.
Security Administrator has full access to security & domain features.
Manager can configure a storage system.
All users can monitor the storage system.
Administrator can maintain user accounts as well as configure a storage system.

A Domain is a group of systems that a user can manage from the same management application session. Each domain has a directory that defines the systems in the domain.
To add a CLARiiON to a domain, a user can scan for CLARiiON systems on the subnet, or optionally supply the IP address of a CLARiiON SP, by clicking on "Add/Remove Systems" in the Domains view.

NTP: Network Time Protocol

UTC: Coordinated Universal Time
Navisphere Secure CLI provides a single application & a single security model for all CLI commands; it combines key features of the existing Classic & Java CLIs.
Creation of security file command:
naviseccli -h <SP IP address> -AddUserSecurity -user <username> -scope <0|1> -password <pw>
Syntax to remove the security file:
naviseccli -RemoveUserSecurity
NTP commands:
naviseccli -h <SP IP address> ntp -list
        -Control : start/stop
        -Servers : NTP servers
        -All : all output
naviseccli -h <SP IP address> ntp -set -start yes -interval 45 -servers 128.250.36.3 -serverkey 5 -keyvalue 12345
Reboot SP:
naviseccli -h <spaddr> -user <> -password <> -scope 0 -rebootsp
To get an SP out of the hold state, the user needs to issue a rebootsp command to the peer of the holding SP.
Display SP port:
naviseccli -h <sp> port -list -sfpstate
Set SP port speed: naviseccli -h <sp> spportspeed -set

View faults list syntax:

naviseccli -h <sp> faults -list


Virtual Capacity:
% of virtual capacity that is allocated.
The threshold can be set between 50% - 80%; the default is 70%.
Once the set threshold is reached, an alert is triggered.
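The alert rule above is a simple comparison; a minimal sketch (the function name is hypothetical, not a CLARiiON API):

```python
def capacity_alert(allocated_pct, threshold=70):
    """Trigger an alert once allocated virtual capacity reaches the threshold.

    Per the notes: threshold is settable between 50 and 80 (%), default 70.
    """
    if not 50 <= threshold <= 80:
        raise ValueError("threshold must be between 50% and 80%")
    return allocated_pct >= threshold

print(capacity_alert(65))                 # False -- below default 70%
print(capacity_alert(72))                 # True  -- alert fires
print(capacity_alert(55, threshold=50))   # True  -- lowered threshold
```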
Tiering tab: used to automatically move data to different storage tiers depending on need.
To delete a Pool, it must be EMPTY.

** If the RAID Group is not fragmented, the Free Capacity & Largest Contiguous Free Space are IDENTICAL.
** Largest Contiguous Free Space --> the size of the largest LUN that can be created in this RAID Group.

Thin -- if enabled, THIN LUNs will be created on the pool, which provides on-demand storage allocation. Otherwise, fully allocated Pool LUNs will be created.
NavisecCLI: the bind command is used to create a LUN within an existing RAID group.
getlun is used to view LUN information. Syntax: getlun <lun-no> <optional flags>

Access Logix:
Is a licensed software package that runs on each Storage Processor. It controls access to data on a host-by-host basis, allowing multiple hosts to have individual, independent views of a storage system.
Storage Group: Is a collection of one/more LUNs or MetaLUNs to which you connect 1/more servers.
Access Logix - Functionality
LUN Masking --
Access Logix masks certain LUNs from the hosts that are not authorised to see them, & presents those LUNs only to the authorised servers.
It internally performs the mapping of Array LUN numbers "ALUs" to Host LUN numbers "HLUs".
It determines which physical addresses "device numbers" each attached host uses for its LUNs.
** When host agents start up, shortly after host boot time, they send initiator info to all connected storage systems. This initiator info is stored in the Access Logix DB.
Access to a LUN is controlled by info stored in the Access Logix DB, which is stored in a reserved area of CLARiiON disk.
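The Storage Group mechanism described above can be modeled as a toy Python structure (group names, hostnames & LUN numbers are invented for illustration; the real DB lives on reserved CLARiiON disk):

```python
# Toy Access Logix DB: each storage group maps Array LUN numbers (ALUs)
# to Host LUN numbers (HLUs) and lists the hosts allowed to see them.
storage_groups = {
    "SG_oracle": {"hosts": {"dbhost1"},  "alu_to_hlu": {23: 0, 24: 1}},
    "SG_mail":   {"hosts": {"mailhost"}, "alu_to_hlu": {30: 0}},
}

def visible_luns(host):
    """HLU -> ALU view a given host is allowed to see (LUN masking)."""
    view = {}
    for sg in storage_groups.values():
        if host in sg["hosts"]:
            for alu, hlu in sg["alu_to_hlu"].items():
                view[hlu] = alu
    return view

print(visible_luns("dbhost1"))   # {0: 23, 1: 24}
print(visible_luns("badhost"))   # {}  -- masked from everything
```

The key point the model shows: the host addresses LUNs by HLU (0, 1, ...) while the array tracks them by ALU, and an unregistered host gets an empty view.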
Initiator Registration Records: FC
Host name
Host IP Address
OS type.
Initiator Registration Records: iSCSI
iSCSI names:
iqn: iSCSI Qualified Name
eui: Extended Unique Identifier

Access to the LUNs is controlled by an ACL, which contains:
the 128-bit globally unique ID of the LUN,
128-bit unique IDs for the HBAs in the host (64-bit WWNN + 64-bit WWPN),
the 128-bit CLARiiON SP port WWN.
* Any host may be connected to only one Storage Group on any storage system.
* No host may be connected to more than 4 Storage Groups, i.e., a host can only use LUNs from four or fewer storage systems.
Hosts are identified by IP address & hostname.
Automatic Registration: the process of making a host known to the storage system.

Access Logix, once installed & activated on a storage system, can only be disabl
ed through the CLI.

** Note the strict rules for removing a LUN.

If it is anything other than a Standard LUN, its special attributes must be remo
ved before it can be destroyed.
Disable Access Logix through NavisecCLI: sc_off
-- The sc_off command disables data access control & turns off shared storage functionality.
You can use this to reset the storage system to its factory unshared storage settings.

LUN MIGRATION: or Virtual LUN Migration, allows data to be moved from one LUN to another, regardless of RAID type, disk type, speed & number of disks in the RAID group.
It is supported only on CX-3 & CX-4.
Departmental switches -- 8-64 ports, speed: 4/8 Gbps
Multi-protocol routers -- 16-18 ports, speed: 4 Gbps
Directors -- up to 384 ports, speed: 4/8/10 Gbps
The 16/32/48-port FC blades run @ 4 Gbps in the ED-48000B director & can run @ 8 Gbps when installed in the ED-DCX director.
** CP blade for DCX chassis -- 2 per DCX chassis.
** CR8 (core) blade -- its purpose is to handle internal routing as well as support for the new ICL connections. Internal frame routing is made available by utilizing 4 Condor2 ASICs per blade.
Also on each blade are 2 ICL ports, 0 & 1. Each provides 128 Gbps of bandwidth for connection to another DCX chassis.
ED-DCX-8B WWN card status LEDs:
The DCX has 2 WWN cards; WWN 0 on the left & WWN 1 on the right. Status LEDs are visible through the bezel & indicate power & fault conditions for each blade in the chassis.
LED Meanings LED Meanings
1-4 => Port Blades 7 => CP8 1
5 => CR8 Core 0 8 => CR8 Core 1
6 => CP8 0 9-12 => Port Blades
DCX -Port Side
12 total slots
- 8 Port Blade Slots (1-4, 9-12)
-2 CP Blades (6,7)
-2 Core blades (5,8)
B-Series Trunking:
The optional ISL Trunking feature allows inter-switch links b/w 2 Connectrix B models to merge into one logical link.
ISL Trunking reduces situations that require static traffic routes & individual ISL management.
Trunking optimizes fabric performance by distributing the traffic across the shared bandwidth of all the ISLs in the trunking group.

B-Series switches support 3 fabric modes:

Native Brocade
McDATA Fabric / McDATA Open Fabric
McDATA Fabric is configured as interopmode 2 on B-Series switches.
McDATA Open Fabric is configured as interopmode 3.
The "interopmode" command allows changing b/w McDATA Fabric, McDATA Open Fabric & back to Brocade native mode.
** You must issue the interopmode command on all switches in the fabric & all switches must have the same mode set.
AP-7600B & PB-48K-AP blades are supported for EMC Invista. These processors run
code that allows for Invista to perform its functions.

Fibre Channel Routing Services (FCRS) enables device connectivity across SAN boundaries through logical SANs called LSANs.
* With FCRS, you can interconnect devices without having to redesign/reconfigure the entire fabric.
Integrated Routing Support:
Integrated Routing is a new licensed capability introduced in Fabric OS v6.1.
It allows any port in an ED-DCX-B with 8 Gbps port blades, or a DS-5300B/DS-5100B, to be configured as an EX_Port.
This eliminates the need to add a PB-48K-18i blade or use the MP-7500B for Fibre Channel routing purposes.

FCIP -- FC over IP is a TCP/IP tunneling protocol that allows a transparent interconnection of geographically distributed SAN islands through an IP-based network.
A B-Series iSCSI-capable switch functions as a gateway device to provide iSCSI-to-FC protocol translation.
The Connectrix PB-48K-16IP blade translates the info to FC protocol.

Configuring Brocade Switches:

To manage a B-Series switch from the serial connection, connect the B-Series serial cable to the COMM port for the switch.
Create & open a HyperTerminal session using your COMM port as the connection method.
Set the baud rate to 9600 & set flow control to 'NONE'. Log in using an admin-level account.

To manage an M-Series switch from the serial connection, connect the M-Series serial cable to the COMM port for the switch.
Create & open a HyperTerminal session using your COMM port as the connection method.
Set the connection properties to the following: 57600 bits per second, 8 data bits, None for parity, 1 stop bit, None for flow control.
Change the IP address, subnet mask & gateway address using the ipconfig <ipAddress> <subnet-mask> <gateway-IP-address> command.
fabricshow -- displays all switches in a fabric.
portcfgshow -- displays the permanent config for each port.
zoneshow -- displays the current zoning config.
        Defined Config: all the zones & zone configs saved on the switch.
        Effective Config: displays the active zone config for the fabric.
2 discovery methods are supported in Connectrix Manager 9.0 & higher:
1) Individual discovery -- uses an IP address to discover switches; the result is displayed in the "Selected Individual Switches" discovery field.
2) Subnet discovery -- utilizes subnet broadcasts. All switches discovered using subnet discovery are displayed in the Selected Subnets area.
For Connectrix Manager to manage a switch, it must first be discovered. From the Discover menu, select Setup to add a range of IP addresses for discovery.
From the Setup menu box, select the Out-of-Band tab, then click the Add button at the bottom of the box. Once the Address Properties box appears, enter the IP address & subnet mask. You can also enter a range of IP addresses from this menu, under the ADD MULTIPLE option, by entering the last IP address.

Using Group Manager:

Group Configuration Management lets you perform selected configuration & monitoring changes on multiple switches at the same time.
Group Manager, combined with FTP server bundling, enables B-Series firmware downloads.
FTP server bundling allows the admin to configure internal/external FTP servers. This feature can be accessed through the 'SAN' menu, then by choosing 'Options'.
The internal FTP server is bundled with the Connectrix Manager application for Windows to support firmwaredownload & supportsave functions for B-Series switches.
This requires v5.0/higher, & switches/directors must be directly discovered.
Use the Group Manager -> Install Firmware wizard to load, delete & modify firmware in the repository. This can be found under the Connectrix Manager 'Configure' menu -> Group Manager. Choose the Install Firmware radio button to enter this process.

4 status messages supplied to report the firmware upgrade progress:

1) In-Progress
2) Not Started
3) completed
4) Failed

Data collection with Group Manager is the enabler, providing storage locations for the files.
To perform data collections, select Configure -> Group Manager from the CM menu bar.
This will bring you to the wizard, which guides you through the process & provides a progress status.
4 supportsave status options:
1) Not Started
2) In-Progress
3) completed
4) Failed
Zoning in Connectrix:
To view/configure zoning, select the fabric WWN under 'Zoning Scope'.
To view the active zone set for this fabric, select the Active Zoneset tab.
If the zone set is not in the zoning library of the Connectrix Manager server, an error message will be displayed.

MDS 9000 Product Series:
The 9500 models are the director class & include the 9513, 9509 & 9506.
The 9200 models are single-supervisor switches; they include the 9222i & the legacy 9216a or 9216i.
The 9100 models are departmental switches & include the 9134 & 9124 ---- 4 Gbps.
MDS-9120 & 9140 -- 2 Gbps.
The Connectrix MDS 9513 is a 13-slot director-class switch.
-- It has 11 slots available for modules & 2 for supervisors.
-- Any module can be placed in any available slot.
-- It supports a max of up to 528 ports.
-- It has 2 fan trays & 3 crossbar fabric modules that plug directly into the back of the chassis.
-- These crossbars perform the switching function handled by the supervisor modules on other MDS 9500 series directors.
The MDS 9509 director chassis has 9 slots.
-- Two slots, 5 & 6, are reserved for supervisors;
-- the other 7 can contain any switching module.
-- Slots are numbered 1 through 9, from top to bottom.
The MDS 9506 chassis has 6 slots:
-- Two slots, 5 & 6, are reserved for supervisors;
-- 2 PEMs (Power Entry Modules);
-- up to 4 switching modules.
The MDS-9222i is a 2nd-generation MDS chassis containing 1 expansion slot.
-- It integrates 18 auto-sensing 4 Gbps FC ports, 4 1-Gbps Ethernet ports & an expansion slot into a single chassis.
-- Both supervisor & line-card functionality are supported in slot 1.
-- Slot 1 is pre-configured with the 18 FC ports & 4 Ethernet ports, leaving slot 2 available for any switching module.
-- Management connectivity is provided by the management Ethernet port.
The MDS-9134 offers up to 32 auto-sensing 4 Gbps FC ports & 2 10-Gbps ports in a single chassis.
-- Each Fibre Channel port has dedicated bandwidth.
-- It has the flexibility to expand from the 24-port base model up to 32 ports in 8-port increments with the use of a license.
-- The 2 10-Gbps ports support a range of optics for ISL connections to other MDS 10G ports.
-- Both 10G ports must be activated with a license that is independent from the 4 Gbps ports.

The MDS-9124 is a 24-port departmental switch.

-- All 24 ports are 4 Gbps capable.
-- It offers flexible, on-demand port activation through the use of s/w licensing.
-- Ports can be activated in 8-port increments.
-- The default base switch has 8 active ports.

MDS 9216i Multiprotocol Fabric Switch is a Legacy Model

-- Supports multi-protocol, with 14 2-Gbps FC ports & 2 1-Gbps Ethernet ports installed in slot 1.
-- Slot 2 may contain any available switching module.
-- The FC ports are fully subscribed @ 2 Gbps, & the GE (Gigabit Ethernet) ports support both FCIP & iSCSI.

Generation -2 FC Modules:
Address key SAN consolidation requirements.
Port groups on 2nd Gen:
-- Each port group has 12.8 Gbps of internal bandwidth.
-- Any port in a port group can be configured to have dedicated bandwidth @ 1, 2, or 4 Gbps.
-- All remaining ports in the port group share the remaining bandwidth.
-- Any port in dedicated-bandwidth mode has access to extended buffers.
-- Any port in shared-bandwidth mode has only 16 buffers.
-- Ports in shared mode cannot be used as ISLs.

MultiProtocol Services module:

-- This line card contains 18 4-Gbps FC ports & 4 1-GigE ports.
-- It supports both iSCSI & FCIP & is optimised for SAN extension with features like SME (Storage Media Encryption); hardware-based IPSec supports gigabit-speed encryption for secure SAN extension.
-- FCIP tape acceleration improves the performance of remote backup applns.
FCIP disk write acceleration.
Extended-distance capability with extended buffer credits on each FC port.
Backward compatible.
MDS IPS Module:
-- Provides 8 1-Gbps Ethernet ports.
-- This is a Generation 1 module.
-- Supports both FCIP & iSCSI.

Storage Services Module( SSM):

-- Offers advanced services functionality.
-- The SSM is based on the 32-port Fibre Channel line card, with additional ASICs added for appln support.
-- The RecoverPoint & Invista applns require the SSM module.
-- It is a Generation 1 module.
VSAN ==>> VSANs help achieve traffic isolation in the fabric by adding control over each incoming & outgoing port.
VSANs maintain separate fabric services, such as the name server & the login server.
Each VSAN also maintains its own zoning database.
EMC supports 20 VSANs per switch.
VSANs are identified by their number.
Every switch has a VSAN 1, the default VSAN, & a VSAN 4094, the isolation VSAN.
Membership of VSANs can be defined by physical interface or WWN.
A device will only log into the VSAN it is a member of.
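The isolation rule above boils down to a per-interface membership check; a toy sketch (interface names & VSAN numbers are illustrative, echoing the HR/DEV VSAN example used later in these notes):

```python
# Toy model of VSAN isolation: membership is defined per physical
# interface, and devices can only reach devices in their own VSAN.
vsan_membership = {
    "fc1/1": 2, "fc1/2": 2,    # VSAN 2 (e.g. HR_VSAN)
    "fc1/3": 3,                # VSAN 3 (e.g. DEV_VSAN)
}

def can_communicate(if_a, if_b):
    """Nodes on one VSAN cannot connect to nodes on another VSAN."""
    return vsan_membership[if_a] == vsan_membership[if_b]

print(can_communicate("fc1/1", "fc1/2"))  # True  (same VSAN)
print(can_communicate("fc1/1", "fc1/3"))  # False (isolated)
```

Crossing this boundary deliberately is exactly what IVR (covered below in these notes) provides, without merging the VSANs.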


The SAN-OS s/w provides CFS (Cisco Fabric Services) to enable efficient database distribution.
CFS simplifies SAN provisioning by automatically synchronizing & distributing configuration databases to all switches in a fabric.
It is an in-band protocol, capable of distributing data over Fibre Channel or IP.
CFS has the ability to discover CFS-capable switches in the fabric & to discover appln capabilities in all CFS-capable switches.
Call Home, Global Device Alias, DPVM, IVR, iSNS, NTP, Port Security, RADIUS/TACA
CS+, Role Based ACL, Syslog,VSAN fctimer,fcdomain allowed list,RSCN event timer,
SCSI flow services, iSCSI load balancing, FSCM.
VSAN Trunking: a feature that allows multiple VSANs to share a common interface to another physical switch.
It requires TE Ports (Trunking E Ports). These are unique ports & are currently not compatible with any other vendor's switch.
To become a TE Port, an interface must be configured with Trunk Mode "ON" & linked to another port on another switch that also has Trunk Mode "ON".
This feature requires Extended ISL (EISL) links.
Port Channels:
Are an aggregation of multiple physical interfaces into one logical interface.
They provide higher aggregated bandwidth, load balancing, & link redundancy.
Port Channels can connect interfaces across switching modules, so a failure of a switching module does not bring down the Port Channel link.

***NPIV: N Port ID Virtualization:====>

It provides a means to assign multiple FCIDs to a single N_Port.
This feature allows multiple applications on the N_Port to use different identifiers & allows access control, zoning, & port security to be implemented @ the application level.
* Allows HBA port sharing b/w server partitions or virtual machines.
All virtual N_Ports must still belong to the same VSAN. The VSAN is determined by the VSAN assigned to that physical port.
You must globally enable NPIV for all VSANs on the MDS switch to allow NPIV-enabled applications to use multiple N_Port identifiers.
IVR (Inter-VSAN Routing) ==>
The MDS SAN-OS supports communication b/w select devices in different VSANs via Inter-VSAN Routing (IVR).
With IVR, resources across VSANs can be accessed without compromising other VSANs.
Data traffic can be transported b/w specific initiators & targets on different VSANs without merging the VSANs into a single logical fabric.

MDS SAN Extension Package provides integrated support for FCIP .

IVR for FCIP -- allows selective transfer of data traffic on different VSANs without merging fabrics.
FCIP Write Acceleration -- improves appln performance when storage traffic is routed over WANs.
FCIP Tape Acceleration -- helps achieve near-full throughput over distance.
SME: Storage Media Encryption==>
It encrypts tape storage media. It is available on MDS-9200 & MDS-9500 series sw
itches & directors.
It requires the 18/4 Multiservices blade & an SME license.
SME is built upon FIPS Level 3 (Federal Information Processing Standards).
SME integrates in the SAN as a transparent fabric service.
MDS Configuration
Terminal Setup & config:
* To connect to the console port of any MDS switch, use a rollover RJ-45 cable.
1) Verify the switch is powered off.
2) Cable the VT100 terminal to the switch console port.
3) Terminal setup use:
8 data bits
No Parity
1 Stop bit
No flow control
4) Power on switch
5) Switch boots automatically
6) Switch prompt appears on the console screen.
The Basic System Configuration Setup utility guides you through the initial config of the system.
This setup configures only enough connectivity for management of the system.
If the switch has no startup configuration, this utility must be used to define the password for the default user, admin.
This must be done through the console port.
After the initial setup is completed and saved, you can connect through the CLI to make additional changes to the configuration.
Configure r/w SNMP community string:
Enter the Switch name:
Continue with out-of-band (Mgmt0) Management configuration? (Y/N): Y
Mgmt0 IPV4 address: < enter IP address of switch>
Mgmt0 IPV4 subnet mask: <Enter subnet mask>
Configure the default gateway? (y/n) [yes]: Enter
IP address of default gateway: <Enter Gateway IP address that is the router's In
terface >
Configure advanced IP Options? : No
Enable the Telnet Service? N
Enable the ssh Service ? Y
Configure the ntp server ? Y
NTP server IP address : < IP addr of NTP server>
Configure default switchport interface state : Shut
Configure default switchport trunk mode: On
Configure default zone policy : Deny
Enable full zoneset distribution: N
After this It will ask for review---------------------
Would you like to edit the config:[ N/Y]: Default is [n]
Use this config and save it? [y/N] : Yes
On power-on, only supervisor modules are powered up, and line card modules stay
powered down.
The loader program verifies the kickstart image and loads it.
The loader program will use the kickstart image from bootflash based on the boot variables.
These variables are defined during s/w installation & are stored in NVRAM; they can also be edited manually from the command line.
When kickstart loading is complete, a system Image from bootflash is loaded.
If no image is found, or the image is corrupted or the wrong image type, kickstart stops @ the switch(boot)# prompt.
Copy files from an FTP/TFTP server:
When upgrading, the s/w image files must be copied to one of 2 targets, i.e., bootflash or slot0, both of which are contained on the supervisor modules.
There are various methods that can be used to copy files from a remote location to these targets.
1) Through FTP/TFTP:
Switch# copy tftp://<IP address of TFTP server>/kick2llb.bin bootflash:kick2llb.bin
Switch# copy tftp://<IP address of TFTP server>/sys2llb.bin bootflash:sys2llb.bin
2) In the case of the new Supervisor-2 modules, there is a USB slot that can be used for transporting these files.
Switch# copy slot0:kick2llb.bin bootflash:kick2llb.bin
Switch# copy slot0:sys2llb.bin bootflash:sys2llb.bin
Using the install all command:
1. Create a backup of your existing configuration file:
Switch# copy run tftp://<IP address>/file
2. From the active supervisor, perform the upgrade:
Switch# install all
3. View the upgraded supervisor module:
Switch# show module
4. Save your running configuration:
Switch# copy running-config startup-config
Steps required to configure FC interfaces:
The MDS platform offers ports that can be configured as either full-line-rate or oversubscribed.
* Each module or switch is configured for one of these modes.
Full-line-rate -- every port gets the full speed & capacity of that port.
Oversubscription mode -- means that the resources, including bandwidth, are shared by all the ports in that port group.
Below are few FC Interface Attributes:
Parameter Default
------------- -----------------
Interface Mode Auto
Interface Speed Auto
Administrative State Shutdown
Trunk Mode On
Trunk-allowed VSANs 1-4093
Interface VSAN Default VSAN (1)
Beacon mode Off
These can be changed through CLI/ through Device Manager.
** The "show interface brief" command displays all interfaces & their status.
This cmd can be altered to show either an individual interface or a range of interfaces.
Switch# show port-resources module 8 --> used to display port resources.
* Each port in the port group, & the resources it is currently using, are displayed.

VSAN Configuration:
A feature that leverages the advantages of isolated fabrics with capabilities that address the limitations of isolated SAN islands.
VSANs are configured by setting the following attributes:
VSAN ID -- identifies the VSAN by number (2-4093)
VSAN name -- identifies the VSAN for mgmt purposes; up to 32 chars long
Load balancing -- S_ID/D_ID or S_ID/D_ID/OX_ID (default)
        Identifies the load-balancing parameters if the VSAN extends across ISLs
VSAN state -- Active (default)
VSAN interface membership

Create a VSAN & specify its name:

(config)# vsan database
(config-vsan-db)# vsan 2
(config-vsan-db)# vsan 2 name HR_VSAN
(config-vsan-db)# vsan 3
(config-vsan-db)# vsan 3 name DEV_VSAN
Assign VSAN interface membership:
(config-vsan-db)# vsan 2 interface fc1/10-15, fc2/3
(config-vsan-db)# vsan 3 interface iscsi2/1
Configuring VSAN domain IDs through the CLI:
mds9509-14(config)# fcdomain domain 3 preferred vsan 5
Configures the switch in VSAN 5 to request a preferred domain ID of 3 & accept any value assigned by the principal switch.
mds9509-14(config)# no fcdomain domain 3 preferred vsan 5
Resets the configured domain ID to 0 (default) in VSAN 5. The configured domain ID becomes 0 preferred.
mds9509-14(config)# fcdomain domain 2 static vsan 237
Configures the switch in VSAN 237 to accept only a specific value, and moves the local interfaces in VSAN 237 to an isolated state if the requested domain ID is not granted.
switch(config)# no fcdomain domain 2 static vsan 237
Resets the configured domain ID to factory defaults in VSAN 237. The configured domain ID becomes 0 preferred.

VSAN Trunking:
Allows multiple VSANs to share a common interface (requires a TE port & EISLs).
The interface may be a single EISL or a port channel.
VSAN merger
Each FC interface has an associated trunk-allowed VSAN list.

Displaying VSAN trunking Information:

switch# show interface fc1/13
Invoked from EXEC mode, this displays VSAN trunking info for a port.
Configuring TRUNK Mode:
Switch# config
switch(config) # interface fc1/1
switch(config-if)# switchport trunk mode on
switch(config-if)# switchport trunk mode off
switch(config-if)# switchport trunk mode auto

All zones & zone sets are contained within a VSAN.
Each VSAN can have multiple zone sets, but only 1 zone set can be active at any given time.
Each VSAN has a full zone DB and an active zone DB.
Only active zone sets are distributed to other physical switches.

Switch# Show zone ? -- list available commands

Zone creation:
switch(config)# zone name test vsan 1
switch(config-zone)# member pwwn 12:12:12:12:12:12:12:12
switch(config-zone)# end
Remove the pWWN from the zone with "no" <arg>:
switch(config)# zone name test vsan 1
switch(config-zone)# no member pwwn 12:12:12:12:12:12:12:12
switch(config-zone)# end

SAN Extension-- FCIP

FCIP is a point-to-point tunneling protocol which connects distributed FC SAN islands.
Implementation of FCIP on IPS modules.
The MDS IPS module can support the following:
2 TCP connections per FCIP tunnel (control & data)
3 FCIP tunnels per GE port (EMC supports 1 FCIP tunnel per GE port)

An FCIP interface is configured on both sides of the link.

The virtual interface consists of a profile, which defines the physical GE port & the TCP parameters, & an interface, which defines which profile to use & the peer IP address.
SAN Extension -iSCSI
Internet Small Computer Systems Interface
Encapsulation of SCSI-level commands and data into IP
SCSI data encapsulation into the TCP/IP byte-stream
Allows for hosts with IP connectivity to access FC based storage.
FC is a serial data-transfer interface intended for connecting high-speed storage devices to computers.
An FC address (24-bit physical addr) is assigned when an FC node is connected to a switch.

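The 24-bit FC address is conventionally split into domain, area & port bytes; a small sketch of that split (the sample FCID value is invented for illustration):

```python
def parse_fcid(fcid):
    """Split a 24-bit FC address into its domain, area, and port bytes.

    The domain byte identifies the switch, the area byte a group of ports,
    and the port byte the individual N_Port.
    """
    if not 0 <= fcid <= 0xFFFFFF:
        raise ValueError("an FCID is a 24-bit value")
    return (fcid >> 16) & 0xFF, (fcid >> 8) & 0xFF, fcid & 0xFF

domain, area, port = parse_fcid(0x650102)
print(hex(domain), hex(area), hex(port))  # 0x65 0x1 0x2
```

This is also why a domain ID conflict (see the fcdomain section above) isolates ports: the domain byte must be unique per switch within a fabric.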
STD FrontEnd Port

MirrorView FrontEnd Port
BackEnd Port
Single-Initiator Zoning ==> always put only 1 HBA in a zone with storage ports.
Each HBA port can only talk to storage ports in the same zone.
HBAs & storage ports may be members of more than one zone.
iSCSI is a method to transfer block data using a TCP/IP n/w.
TOE (TCP Offload Engine) -- full TCP/IP offload; the iSCSI & SCSI layers remain on the host.
iSCSI HBA -- offloads both TCP/IP & iSCSI processing.
iSCSI CHAP Security ( Challenge Handshake Authentication Protocol)
CHAP target sends a challenge to the CHAP initiator.
Initiator responds with a calculated value to the target.
Target checks the calculated value, & if it matches, login continues.
If mutual CHAP is enabled, the initiator authenticates the target using the same process.
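The exchange above can be sketched with the standard CHAP calculation (RFC 1994: MD5 over the identifier byte, the shared secret, and the challenge); function names here are illustrative, not an iSCSI stack:

```python
import hashlib

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # CHAP response = MD5(identifier byte + shared secret + challenge)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def target_verifies(identifier: int, secret: bytes,
                    challenge: bytes, response: bytes) -> bool:
    # The target recomputes the value with its own copy of the secret;
    # login continues only when the two digests match.
    return chap_response(identifier, secret, challenge) == response
```

With mutual CHAP, the same calculation runs a second time in the reverse direction.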
All connections from a host to an array must use the same protocol.
-- Connections must be all FC or all iSCSI.
-- NIC & HBA iSCSI connections cannot be mixed in the same server.
Do not connect a single server to both a CLARiiON FC storage system & a CLARiiON iSCSI storage system.
Servers with iSCSI HBAs & servers with NICs can connect to the same iSCSI storage system.
iSCSI N/W Req:
-- Layer 2 n/ws are recommended over Layer 3 n/ws.
N/w should be dedicated solely to the iSCSI config.
If using MDS switches, EMC recommends creating a dedicated VSAN for all iSCSI traffic.
VLAN tagging protocol is not supported.
iSCSI Basic Connectivity Verification:
ping-- check basic conn
TraceRoute -- provides info on the number of hops required for a packet to reach its destination. Used in Layer 3 n/ws.
Powerpath supports both types of connectivity (FC & iSCSI)
Native MPIO ( Multipathing I/O)
Limited "out of the box" functionality compared to PowerPath.
Device states in PowerPath:
A native device path can be in 1 of 2 states:
Live: path is usable for I/O
Dead: path has been detected as failed
A native device path can be in 1 of 2 modes:
Active Mode: path is available to PP for servicing I/O.
Default mode; can manually be changed by admin.
Standby Mode:
path is available to PP, but is not servicing I/O.
2 types of path failover:
- Array-initiated LUN trespass -- cause: an SP fails / needs to reboot. PowerPath logs a "follow-over".
- Host-initiated LUN trespass -- PowerPath detects a failure, e.g. cable break or port failure; PowerPath initiates the trespass & logs the event.

CLARiiON Active/Active Mode ( ALUA)

Asymmetric Logical Unit Access (ALUA)
-- Asymmetric accessibility to logical units through various ports.
Request-forwarding implementation:
Communication methods pass I/Os between SPs;
s/w on the controller forwards requests to the other controller.
Not an Active/Active array model!!
I/Os are not serviced by both SPs for a given LUN;
I/Os are redirected to the SP owning the LUN.
Array Failover Mode 4 indicates ALUA mode.
powermt display -- shows the devices the host is connected to & the path(s) by which each is connected.
A maximum of 16 allowable paths to any given LUN.
PP Integration with Volume Managers:
Logical Volumes reside above native devices & PP devices in the I/O Stack.
PP has been qualified for compatibility with most major 3rd-party volume managers.
1/more paths to the same LUN are presented as a single pseudo device, created by PowerPath.
Applications are configured to use native devices for each LUN.
naviseccli -h <IP add> getagent
Navisphere Server Utility:
Do not install the Server Utility on VMware virtual machines.
NIC initiators must use the Navisphere Server Utility.
iSCSI HBAs use SANsurfer.

WINDOWS Connectivity-- requirements

- 1/more CLARiiON storage systems.
- 1/more Windows hosts
- supported HBAs, HBA drivers, switches & cables
- A mgmt station in the environment.
- Supported host OS.
- supported browser
- Supported JRE.
- Network connectivity to storage systems.
- Correctly configured switch zoning
- Correctly configured network.

Windows Connectivity-- LUNs

- check & validate all host to CLARiiON connectivity
- Add the LUNs to storage group
- Run Windows disk Manager
- Write signature on disk (initialize)
- Run Diskpart to align the data & partition the disk.
- Add drive letter (s), mount points
- Create volumes
- Copy all info to config worksheet.
Basic Disks:
Is the default disk type for Windows 2003.
Backward compatible with other Windows OS versions.
Disk must be partitioned for filesystem use:
- MBR -- up to 4 partitions per disk: 3 primary + 1 extended, or 4 primary.
- GPT -- up to 128 partitions per disk.
- RAW devices are allowed
Partitions can be expanded ( with diskpart utility)
Volume sets cannot be created on basic disks.
Dynamic disks: are used with dynamic volumes.
They contain volumes rather than partitions.
**Dynamic volumes can be grown from native Windows utilities.
5 dynamic volume types are supported:
- Simple volumes -- equivalent area of a single disk.
- Spanned volumes -- equivalent of concatenated disks.
- Striped volumes -- data striped across available disks with no data protection.
- Mirrored volumes -- identical copy of data spread across multiple disks.
- RAID-5 volumes -- data striped across disks with parity protection.
** Data Alignment:
Alignment of data to the LUN structure -- elements are a fixed 64 KB; associated with striped volumes.
Important when the OS writes data to the beginning of the LUN.
Misaligned data causes disk crossings & may not allow stripe alignment of large, uncached writes.
Misaligned Windows Blocks:
MBR (Master Boot Record) is 63 sectors in size, reserved at the beginning of a disk.
A standard CLARiiON element is 128 blocks in size:
63 blocks for the MBR leave 65 free blocks for the host to write.
2 methods of aligning data to avoid misalignment:
1. Native to CLARiiON is the LUN offset alignment method -- ex: an offset of 63 blocks is the common value used in a Windows environment; when we use 63 blocks the CLARiiON binds an extra stripe when it creates the LUN (i.e. blocks 0 to n-1, here 63-1=62).
2. Host-based alignment -- aligning data with diskpar/diskpart; 128 sectors are reserved by Windows ahead of the data, so the 1st 64 KB element holds the MBR & host I/O begins on the 2nd 64 KB element.
DISKPART> list disk
DISKPART> create partition primary align=64
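The alignment arithmetic behind align=64 can be checked with a short sketch (assumes the usual 512-byte sector and the 64 KB CLARiiON element size quoted above):

```python
SECTOR = 512           # bytes per sector (assumed)
ELEMENT = 64 * 1024    # CLARiiON stripe element size, 64 KB

def is_aligned(start_sector: int) -> bool:
    # A partition is element-aligned when its byte offset is a
    # multiple of 64 KB, i.e. its start sector is a multiple of 128.
    return (start_sector * SECTOR) % ELEMENT == 0
```

The default MBR layout starts data at sector 63, which is misaligned; "create partition primary align=64" starts it at sector 128 (64 KB), which is aligned.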
Windows connectivity- Best Practices:
Use storage system- based RAID
RAID type based on access pattern & appln.
Use basic disks
Leave cache page size @ 8 KB in mixed environments.
8 KB page size for MS Exchange.
Leave element size @ 64 KB for CLARiiON LUNs.
Align LUNs on a 64 KB boundary with diskpart.
EMC CX4 connectivity is supported with both 2.4 & 2.6 Linux kernels.
Some clustering solutions are supported (RAC,VAS)
Host connectivity:
HBA device driver may be loaded/unloaded on the fly.
Driver options may also be changed without a reboot.
EMC does not support the mixing of different HBAs in the same server.
The Emulex FC HBA driver functions below the standard Linux SCSI driver.
CLARiiON Storage Specific Settings:
Multipathing solution dictates the Failover Mode settings:
EMC Current PP versions-v5.1 & up: with FLARE 28: Failover mode =4 (ALUA)
PP versions prior to V5.1: continue to use Failover Mode=1
Veritas/DMP: use Failover Mode=2
Native Multipathing DM-MPIO & FLARE 28:
-use Failover Mode=4
Linux server HBAs must be registered
#dmsetup ls
#chkconfig --list | grep multipathd

vi /etc/multipath.conf
devnode_blacklist {
    ... (devnode patterns to exclude from multipathing)
}
lsmod command -- lists the currently loaded driver modules

#./hbacmd list
# ./hbacmd allnodeinfo <wwpn>
/proc/scsi/lpfc file
#cat /proc/scsi/lpfc/2
shows the mapping between controller, target number & WWPN

#modinfo lpfc |grep parm

# more /etc/modprobe.conf
Determine which disks are available -- dmesg
# dmesg |grep scsi

# cd /proc/scsi
#more scsi
Device bus scan on a Linux host -- as Linux can't pick up newly provisioned devices without disruption to existing devices,
the HBA driver module must be unloaded & reloaded in order to create usable SCSI devices for new LUNs:
#/etc/init.d/naviagent stop # shutting down naviagent
#/etc/init.d/powerpath stop
# modprobe -r lpfc #unload driver
#lsmod | grep lpfc
#modprobe lpfc #reload driver
#lsmod | grep lpfc
# /etc/init.d/powerpath start
#/etc/init.d/naviagent start
Persistent Binding for Linux devices
Native device names "/dev/sdX" are not guaranteed to persist.
Using fdisk to align a Linux partition.
Linux iSCSI commands:
#iscsi-iname
#iscsi-ls

Alert Generation
Triggered by FLARE events
Triggered by configuration issues
Exceptions for GUI generated events
Simplifies Event Mgmt.
Event Monitor Architecture:
Event Monitor relies on a config file -- navimon.cfg -- which is a text file & can be edited manually.
Monitor Agents run on 1/more hosts/SPs & watch over the systems.

2 Models:
Centralized monitoring model
distributed monitoring model
Event Monitor Templates
These define the events that trigger a response, and the response.
Each template has a unique name.
Responses to any of those specified events:
SMTP email
EMC dial home
User specified application.
Severity: Critical Error, Error, Warning, Information
Messages are formatted according to a format string:
Ex: "Event %N% occurred at %T%"
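A minimal sketch of expanding such a format string; the %N%/%T% placeholder names are taken from the example above, and the function itself is hypothetical (not the navimon implementation):

```python
def format_event(template: str, fields: dict) -> str:
    # Replace each %KEY% placeholder with its value, e.g.
    # %N% -> event code, %T% -> timestamp.
    out = template
    for key, value in fields.items():
        out = out.replace(f"%{key}%", str(value))
    return out
```

Unknown placeholders are simply left in place, which mirrors how a naive template expander behaves.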

Navisphere Analyzer:
Helps us to analyze the performance information.
Examine past & present performance.
Performance Survey allows an at-a-glance view of params that are over a preset threshold.
NA views:
8 object types with performance info.
Disk, SP, LUN, metaLUN, SnapView session, MirrorView/A mirror, RAID Group, SP port.
Abbreviated D, S, L, M, SS, S, RG, P.
Analyzer has 7 data views for traditional LUNs,
3 for thin LUNs
-Performance overview
-Performance survey
-Performance summary
-Performance detail
-I/O Distribution summary
-I/O Distribution detail
-LUN I/O disk detail
Analyzer Polling & Logging:
Higher polling rate is configurable -- real-time archive interval from 60 sec to 3600 sec. Default is 60.
**Rate @ which info gathering is performed is configurable.
Archive interval logging is configurable
from 60 sec to 3600 sec. Default is 120.
User control of start/stop logging.
NQM-Navisphere QOS Manager:
Define & achieve service-level goals for multiple applications.
Provide SLAs through user-defined performance policies.
NQM Queuing
NQM Control Engine
NQM Measuring
Control Methods:
Limits -- supports 32 I/O classes per policy.
Cruise Control -- supports only 2 I/O classes per policy.
Fixed Queue Length --

UI, Integrated Scheduler, Archiving Tools
Fall Back Policy -> backup policy if the primary policy cannot be achieved.
NQM POLICY BUILDER STEPS:: Select/open -> select I/O classes -> set goals -> run.
Host Integration of ESX Provisioning:
Initiator registration (HBA) is automatic, without the need for NaviAgent.
LUN masking is performed @ the CLARiiON.
Max LUNs that can be presented to an ESX host is 256.
How iSCSI LUNs are discovered:
Static config
Send Targets
-- The iSCSI device returns its target info as well as any additional target info that it knows about.
VMFS Volume: repository of VMs & VM state data.
- Each VM config file is stored in its own subdirectory.
Repository for other files.
Addressed by a volume ID, a datastore name, & a physical address.
Accessible in the service console underneath /vmfs/volumes
Create a VMFS:
Select device location (iSCSI / FC LUN)
Specify Datastore name
Specify disk Capacity
Specify Max file size & allocation unit size
RDM Raw Device Mapping Volume::
Why use a raw LUN with a VM?
Enables use of SAN mgmt s/w inside a guest OS.
Allows VM clustering across boxes / physical-to-virtual.
An RDM allows a special file in a VMFS volume to act as a proxy for a raw device
An RDM allows a raw LUN to use snapshots.

FLARE Release 29
Virtualization-aware Navisphere -- enables Navisphere to display relationship info between VMware ESX Servers, VMs & CLARiiON LUNs.
Drive spin down -- provides a power savings ratio of 55-60% on a per-drive basis; all 1 TB SATA II drives are qualified for this.
10 Gbps iSCSI support --
LUN Shrink-- reclaims unused storage space.
Virtual Provisioning (Phase 2 replication support)
VLAN Tagging
NQM, Analyzer and replication enhancements.
VLAN Tagging:
Enables admins to better analyze & tune an application's n/w performance.
Segregation of I/O load by business unit for better chargeback options.
Only specified hosts have access to a VLAN data stream.
Protocol: 802.1Q.
Allows communication with multiple VLANs over one physical cable.
VLAN ID is a 12-bit field (usable IDs 1-4094; 0 & 4095 are reserved).
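The 802.1Q tag layout can be illustrated with a short sketch packing the TPID and the 12-bit VLAN ID (illustrative helper, not CLARiiON code):

```python
import struct

def dot1q_tag(vlan_id: int, priority: int = 0) -> bytes:
    # 802.1Q tag: 16-bit TPID 0x8100, then a 16-bit TCI made of the
    # 3-bit priority (PCP), 1-bit DEI (0 here), and 12-bit VLAN ID.
    if not 1 <= vlan_id <= 4094:   # 0 and 4095 are reserved
        raise ValueError("invalid VLAN ID")
    tci = (priority << 13) | vlan_id
    return struct.pack("!HH", 0x8100, tci)
```

The 12-bit field is why the usable ID range tops out at 4094.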
Event Heartbeat:
Allows EMC to know when an array has stopped dialing home.
Send a heartbeat event to EMC periodically from the Distributed Monitor or Centralized Monitor.
EventMonitor -template -destroy -templateName <Heartbeat templatename> -messner

Creation of PIT copies of data
Support for consistent online backup/data replication.
Offload backup & other processing from production hosts.
Used in testing, decision-support scenarios, data replication.
Snapshots: use pointer-based replication along with Copy-on-First-Write technology.
3 managed objects: snapshot, session, Reserved LUN pool.
Clones: make full copies of the source LUN.
3 managed objects: Clone Group, Clone, Clone Private LUN.
Fracture Log -- a bitmap kept in SP memory which tracks the changes on the source LUN while a clone is fractured.
8 copies of multiple PITs are allowed on both types.
**Consistent session start, consistent clone fracture -- for consistency of data across multiple objects.
Access to PIT copy:
Clones need an initial full synchronization, which is time consuming.
Snapshot data is immediately available for use.
Performance impact on source LUN:
Snapshots use CoFW, which increases responses times.
Fractured Clones are independent of their source LUNs
Use of disk space:
A clone occupies the same amount of space as its source LUN.
A snapshot uses around 20% of the space of its source LUN.
Recovery from data loss on the source LUN:
A snapshot depends on the source LUN for operation;
a clone is completely independent of the source LUN.
                            Clones          Snapshots
Objects per source LUN      8 clones        8 snapshots / 8 sessions
Source LUNs per system      up to 1024      up to 512
Objects per system          up to 2048      up to 2048
Clone Groups per system     1024            n/a
Reserved LUNs per system    n/a             512
Snapshot is an instantaneous frozen virtual copy of a LUN on a storage system.
Frozen: an unchanging PIT view. A snapshot will not change unless the user writes to it.
Instantaneous: created instantly -- no data is copied at creation time.
Reserved LUN Pool:: area where we put the original chunks of data before we modify them on the source LUN.
Session: mechanism that performs the actual tracking of data changes.
When we start a session, the COFW mechanism is enabled, & from that point on any time a chunk is modified for the first time the data in that chunk is saved into the Reserved LUN pool.
When we stop a session, no further writes into the Reserved LUN pool take place & the COFW mechanism is disabled.
Any source LUN may have up to 8 SnapView sessions at any time.
COFW operation: allows efficient utilization of copy space.
- Only one copy is made of the changed data, the first time it is changed.
Chunks are a fixed size of 64 KB.
Chunks are saved when they are modified for the first time.
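The COFW mechanism described above, as a toy in-memory model (illustrative only; chunk indices stand in for 64 KB chunks):

```python
class CofwSession:
    # Toy copy-on-first-write model: the first write to a chunk saves
    # the original data to the reserved LUN pool; later writes to the
    # same chunk are not copied again.
    def __init__(self, source):
        self.source = source          # dict: chunk index -> data
        self.reserved_pool = {}       # saved point-in-time chunks

    def write(self, chunk_index, data):
        if chunk_index not in self.reserved_pool:
            # first modification: preserve the original chunk
            self.reserved_pool[chunk_index] = self.source.get(chunk_index)
        self.source[chunk_index] = data

    def snapshot_read(self, chunk_index):
        # snapshot view: saved original if the chunk changed, else current
        if chunk_index in self.reserved_pool:
            return self.reserved_pool[chunk_index]
        return self.source.get(chunk_index)
```

This is why only changed chunks consume reserved-pool space, and why each chunk costs at most one copy no matter how often it is rewritten.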
Reserved LUN Recommendations:
Total no of RLs is limited to 512 in the FLARE 28 release.
They may be of different sizes.
These are assigned as required -- no checking of size, disk type or RAID type -- at least 20% of the size of the source is recommended initially for the first RL.
Can be traditional LUNs only; thin LUNs are not allowed.
Avg source LUN size = total size of source LUNs / no of source LUNs.
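The sizing rules above as plain arithmetic (the 20% figure is the snapshot rule of thumb quoted earlier; helper names are illustrative):

```python
def avg_source_lun_size(total_source_gb: float, num_source_luns: int) -> float:
    # Avg source LUN size = total size of source LUNs / number of source LUNs
    return total_source_gb / num_source_luns

def initial_reserved_lun_gb(total_source_gb: float, num_source_luns: int) -> float:
    # Rule of thumb: size the first reserved LUN at ~20% of the
    # average source LUN size.
    return 0.20 * avg_source_lun_size(total_source_gb, num_source_luns)
```

E.g. 10 source LUNs totalling 1000 GB average 100 GB each, suggesting an initial reserved LUN of roughly 20 GB.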
PSM contains info regarding the Reserved LUNs & source LUNs involved in a Snap session.
It is a predefined private area on the array.