
Advanced Storage Area Network Design
Edward Mazurek
Technical Lead Data Center Storage Networking
emazurek@cisco.com

@TheRealEdMaz

BRKSAN-2883

Agenda

Introduction

Design Principles

Storage Fabric Design Considerations

Data Center SAN Topologies

Intelligent SAN Services

Q&A

Introduction

An Era of Massive Data Growth
Creating New Business Imperatives for IT - by 2020:
• 10X increase in data produced (from 4.4 trillion GB to 44 trillion GB)
• 32B IoT devices will be connected to the Internet
• 40% of data will be touched by cloud
• 85% of data for which enterprises will have liability and responsibility
Source: IDC, April 2014: The Digital Universe of Opportunities: Rich Data and the Increasing Value of the Internet of Things

2016 Cisco and/or its affiliates. All rights reserved. Cisco Public

Evolution of Storage Networking
• Enterprise apps (OLTP, VDI, etc.), Big Data and scale-out NAS, and cloud (object) storage
[Diagram: compute nodes reach block and/or file arrays through the fabric; cloud object storage is accessed via REST APIs]
• Multi-protocol (FC, FICON, FCIP, FCoE, NAS, iSCSI, HTTP)
• Performance (16G FC, 10GE, 40GE, 100GE)
• Scale (tens of thousands of physical/virtual devices, billions of objects)
• Operational simplicity (automation, self-service provisioning)

Enterprise Flash Drives = More IO
Significantly more IO/s per drive at much lower response time
• Drive performance hadn't changed since 2003 (15K rpm drives)
• Supports new application performance requirements
• Price/performance is making SSDs more affordable
• Solid state drives dramatically increase the IOPS a given array can support
• Increased IO directly translates to increased throughput

[Chart: response time (msec) vs. IOPS for a 100% random read miss, 8 KB workload, one drive per DA processor (8 processors). SATA drives (8 drives) and 15K rpm drives (8 drives) hit the response-time wall well below 10,000 IOPS; Enterprise Flash Drives (8 drives) stay at low response times out past 40,000 IOPS]
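The last bullet can be made concrete with a small calculation. A sketch (the IOPS and IO-size numbers below are illustrative assumptions, not figures taken from the chart):

```python
def throughput_mb_s(iops: float, io_size_kb: float) -> float:
    """Throughput implied by an IOPS figure at a fixed IO size."""
    return iops * io_size_kb / 1024  # KB -> MB

# 8 KB IOs, matching the chart's workload; the drive IOPS are assumptions.
print(throughput_mb_s(2_000, 8))    # a handful of 15K HDDs: ~15.6 MB/s
print(throughput_mb_s(40_000, 8))   # flash drives: 312.5 MB/s
```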

Design Principles

Design Principles

VSANs

Zoning and Smart Zoning

16G and Forward Error Correction

N-Port Virtualization

Trunking and Port-channeling

MDS Internal CRC handling

Device-alias

SAN Security

FC and/or FCoE

VSANs
Introduced in 2002
• A Virtual SAN (VSAN) provides a method to allocate ports within a physical fabric and create virtual fabrics
• Analogous to VLANs in Ethernet
• Virtual fabrics are created from a larger, cost-effective, redundant physical fabric
• Reduces the wasted ports of a SAN-island approach
• Fabric events are isolated per VSAN, which gives further isolation for high availability
• FC features can be configured on a per-VSAN basis
• Standardized by the ANSI T11 committee; now part of the Fibre Channel standards as Virtual Fabrics
• Allocation is per port

VSAN
• Assign ports to VSANs
• Logically separate fabrics
• Hardware enforced
• Prevents fabric disruptions - RSCNs are sent within the affected fabric only
• Each fabric service (zone server, name server, login server, etc.) operates independently in each VSAN
• Each VSAN is configured and managed independently

vsan database
  vsan 2 interface fc1/1
  vsan 2 interface fc1/2
  vsan 4 interface fc1/8
  vsan 4 interface fc1/9

phx2-9513# show fspf vsan 43
FSPF routing for VSAN 43
FSPF routing administration status is enabled
FSPF routing operational status is UP
It is an intra-domain router
Autonomous region is 0
MinLsArrival = 1000 msec , MinLsInterval = 2000 msec
Local Domain is 0xe6(230)
Number of LSRs = 3, Total Checksum = 0x00012848

phx2-9513# show zoneset active vsan 43
zoneset name UCS-Fabric-B vsan 43
  zone name UCS-B-VMware-Netapp vsan 43

Zoning & VSANs
1. Assign physical ports to VSANs
2. Configure zones within each VSAN
3. Assign zones to a zoneset - each VSAN has its own zoneset
4. Activate the zoneset in the VSAN
• Members in a zone can access each other; members in different zones cannot access each other
• Devices can belong to more than one zone
• A zone consists of multiple zone members
[Diagram: VSAN 2 (Host1, Host2, Disk1-Disk4 grouped in Zones A, B, C under Zoneset 1) and VSAN 3 (Host3, Host4, Disk5, Disk6 grouped in Zones A, B under its own Zoneset 1)]

Zoning examples
• Non-zoned devices are members of the default zone
• A physical fabric can have a maximum of 16,000 zones (9700-only network)
• Member attributes can include pWWN, FC alias, FCID, fWWN, switch interface fc x/y, symbolic node name, and device alias

zone name AS01_NetApp vsan 42
  member pwwn 20:03:00:25:b5:0a:00:06
  member pwwn 50:0a:09:84:9d:53:43:54

device-alias name AS01 pwwn 20:03:00:25:b5:0a:00:06
device-alias name NTAP pwwn 50:0a:09:84:9d:53:43:54
zone name AS01_NetApp vsan 42
  member device-alias AS01
  member device-alias NTAP

The Trouble with Sizable Zoning
All zone members are created equal:
• Any member can talk to any other member - the standard zoning model just has members
• Each pair consumes an ACL entry in TCAM; result: n*(n-1) entries
• Recommendation: 1-1 zoning
[Chart: number of ACL entries vs. number of zone members - roughly 10,000 entries by 100 members]
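The n*(n-1) growth is easy to check with a few lines (a sketch; the member counts are arbitrary):

```python
def acl_entries(members: int) -> int:
    """TCAM entries consumed by one zone: every ordered pair of members."""
    return members * (members - 1)

for n in (2, 10, 50, 100):
    print(n, acl_entries(n))   # 100 members already need 9,900 entries
```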

Smart Zoning
Example scenario: 8 initiators (I) and 4 targets (T).

Operation          1:1 Zoning        One Many-to-Many Zone  Smart Zoning
                   Zones/Cmds/ACLs   Zones/Cmds/ACLs        Zones/Cmds/ACLs
Create zone(s)     32 / 96 / 64      1 / 13 / 132           1 / 13 / 64
Add an initiator   +4 / +12 / +8     0 / +1 / +24           0 / +1 / +8
Add a target       +8 / +24 / +16    0 / +1 / +24           0 / +1 / +16

• Feature added in NX-OS 5.2(6)
• Allows storage admins to create larger zones while still keeping the premise of single initiator & single target
• Dramatic reduction in SAN administrative time for zoning
• Utility to convert an existing zone or zoneset to Smart Zoning
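The ACL counts above follow two simple formulas. A sketch reproducing them for the 8-initiator, 4-target example:

```python
def single_zone_acls(initiators: int, targets: int) -> int:
    """One big ordinary zone: every member paired with every other member."""
    n = initiators + targets
    return n * (n - 1)

def smart_zone_acls(initiators: int, targets: int) -> int:
    """Smart Zoning: only initiator<->target pairs are programmed."""
    return 2 * initiators * targets

print(single_zone_acls(8, 4))  # 132 entries
print(smart_zone_acls(8, 4))   # 64 entries, same as 32 separate 1:1 zones
```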

How to enable Smart Zoning
[Screenshots: enabling Smart Zoning on a new zone and converting an existing zone]

Zoning Best Practices
• no zone default-zone permit - all devices must be explicitly zoned
• zone mode enhanced - acquires a lock on all switches while zoning changes are underway; enables full zoneset distribution
• zone smart-zoning enable - allows for more efficient zoning
• zone confirm-commit - causes zoning changes to be displayed during zone commit
• zoneset overwrite-control (new in NX-OS 6.2(13)) - prevents a different zoneset than the currently activated zoneset from being inadvertently activated

IVR - Inter-VSAN Routing
• Enables devices in different VSANs to communicate
• Allows selective routing between specific members of two or more VSANs
• Resource sharing, i.e., tape libraries and disks
• Traffic flow between selective devices
• IVR zoneset: a collection of IVR zones that must be activated to be operational
[Diagram: an IVR zoneset spanning VSAN 2 and VSAN 3, each VSAN retaining its own zones (A, B, C) and its own Zoneset 1]

Forward Error Correction - FEC
• Allows for the correction of some errors in frames
• Almost zero latency penalty
• Can prevent SCSI timeouts and aborts
• Applies to MDS 9700 FC and MDS 9396S
• Applies to 16G fixed-speed FC ISLs only (switchport speed 16000)
• Configured via: switchport fec tts
• No reason not to use it!

9710-2# show interface fc1/8
fc1/8 is trunking
    Port mode is TE
    Port vsan is 1
    Speed is 16 Gbps
    Rate mode is dedicated
    Transmit B2B Credit is 500
    Receive B2B Credit is 500
    B2B State Change Number is 14
    Receive data field Size is 2112
    Beacon is turned off
    admin fec state is up
    oper fec state is up
    Trunk vsans (admin allowed and active) (1-2,20,237)

Trunking & Port Channels
Trunking:
• A single-link ISL or PortChannel ISL can be configured to become an EISL (TE_Port)
• Traffic engineering by pruning VSANs on/off the trunk
• Efficient use of ISL bandwidth
[Diagram: VSANs 1-3 carried between TE_Ports over a single trunk and over a port channel; plain E_Ports shown for comparison]
Port Channels:
• Up to 16 links can be combined into a PortChannel, increasing the aggregate bandwidth by distributing traffic granularly among all functional links in the channel
• Load balances across multiple links and maintains optimum bandwidth utilization; load balancing is based on the source ID, destination ID, and exchange ID
• If one link fails, traffic previously carried on this link is switched to the remaining links. To the upper protocol the link is still there, although the bandwidth is diminished; the routing tables are not affected by link failure
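The load-balancing rule above can be sketched in a few lines. This is only an illustration of hashing on (source ID, destination ID, exchange ID); the actual member selection is done in switch hardware and its hash is not the one below:

```python
def select_link(sid: int, did: int, oxid: int, num_links: int) -> int:
    """Pick a port-channel member from (S_ID, D_ID, OX_ID) so that all
    frames of one exchange stay in order on one link."""
    return hash((sid, did, oxid)) % num_links

# Every frame of one exchange maps to the same member of a 4-link channel:
first = select_link(0x020001, 0xAC0600, 0x1234, 4)
assert all(select_link(0x020001, 0xAC0600, 0x1234, 4) == first
           for _ in range(10))
```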

N-Port Virtualization
Scaling Fabrics with Stability
• The N-Port Virtualizer (NPV) utilizes NPIV functionality to allow a switch to act like a server HBA, performing multiple fabric logins through a single physical link
• Physical servers connect to the NPV switch and log in to the upstream NPIV core switch
• No local switching is done on an FC switch in NPV mode
• An FC edge switch in NPV mode does not take up a domain ID - helps to alleviate domain ID exhaustion in large fabrics

phx2-9513(config)# feature npiv

[Diagram: blade servers (N_Port_IDs 1-3 on FC1/1-FC1/3) attach to F-Ports on the NPV switch, whose NP-Port uplinks to an F_Port on the FC NPIV core switch]

Comparison Between NPIV and NPV
NPIV (N-Port ID Virtualization):
• Used by HBAs and FC switches
• Enables multiple logins on a single interface
• Allows the SAN to control and monitor virtual machines (VMs)
• Used for VMware, MS Virtual Server and Linux Xen applications

NPV (N-Port Virtualizer):
• Used by FC switches (MDS 9124, 9148, 9148S, etc.), FCoE switches (Nexus 5K), blade switches and Cisco UCS Fabric Interconnects (UCS 6100, 6200, 6300)
• Aggregates multiple physical/logical logins to the core switch
• Addresses the explosion of the number of FC switches
• Used for server consolidation applications
• Called End Host Mode in UCS

NPV Uplink Selection


NPV supports automatic selection of NP uplinks. When a server interface is brought up, the NP uplink
interface with the minimum load is selected from the available NP uplinks in the same VSAN as the
server interface.
When a new NP uplink interface becomes operational, the existing load is not redistributed automatically
to include the newly available uplink. Server interfaces that become operational after the NP uplink can
select the new NP uplink.

Manual method with NPV Traffic-Maps associates one or more NP uplink interfaces with a server
interface.
Note: with parallel NPV uplinks, each server interface's traffic is pinned to one uplink; with SAN port channels under NPV, actual traffic is load balanced across the members.

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli_rel_4_0_1a/CLIConfigurationGuide/npv.html#wp1534672


NPV Uplink Selection UCS Example
• NPV uplink selection can be automatic or manual
• With UCS autoselection, the vHBAs will be uniformly assigned to the available uplinks depending on the number of logins on each uplink
[Diagram: blade server vHBAs behind a Cisco Nexus 5548 NPV switch (FC1/1-FC1/6); its NP-Ports uplink to F_Ports on the FC NPIV core switch]

Uplink Port Failure
• Failure of an uplink moves pinned hosts from the failed port to the remaining up port(s)
• Path selection is the same as when new hosts join the NPV switch and a pathing decision is made
• The 2 affected devices re-login
[Diagram: one NP-Port uplink from the Nexus 5548 to the core is down; the hosts pinned to it re-login through the surviving uplink]

Uplink Port Recovery
• No automatic redistribution of hosts to the recovered NP port
[Diagram: the recovered NP-Port uplink comes back up but carries no logins; existing hosts stay pinned to the other uplink]

New F-Port Attached Host
• A new host entering the fabric is automatically pinned to the recovered NP_Port
• Previously pinned hosts are still not automatically redistributed
[Diagram: the new host's login is pinned to the recovered uplink; the existing logins remain on the other uplink]

Auto-Load-Balance
npv_switch(config)# npv auto-load-balance disruptive

This is disruptive!
Disruptive load balance works independently of automatic interface selection and of a configured traffic map of external interfaces. It forces reinitialization of the server interfaces to achieve load balance when the feature is enabled and whenever a new external interface comes up. To avoid flapping the server interfaces too often, enable this feature once and then disable it once the needed load balance is achieved.
If disruptive load balance is not enabled, you need to manually flap the server interfaces to move some of the load to a new external interface.
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/sw/6_2/configuration/guides/interfaces/nx-os/cli_interfaces/npv.html#pgfId-1072790

F-Port Port Channel and F-Port Trunking
Enhanced Blade Switch Resiliency

F-Port Port Channel w/ NPV:
• Bundle multiple ports into 1 logical link
• Any port, any module
• High availability (HA): blade servers are unaffected if a cable, port, or line card fails
• Traffic management: higher aggregate bandwidth, hardware-based load balancing
[Diagram: blade system N-Ports into an F-Port port channel up to the core director's F-Ports, with storage behind the core]

F-Port Trunking w/ NPV:
• Partition the F-Port to carry traffic for multiple VSANs
• Extend VSAN benefits to blade servers: separate management domains, separate fault isolation domains
• Differentiated services: QoS, security
[Diagram: blade system N-Port trunked to a core director F-Port carrying VSANs 1-3]

FLOGI Before Port Channel

phx2-5548-3# show flogi database
--------------------------------------------------------------------------------
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
--------------------------------------------------------------------------------
fc2/9      12    0x020000  20:41:00:0d:ec:fd:9e:00  20:0c:00:0d:ec:fd:9e:01
fc2/9      12    0x020001  20:02:00:25:b5:0b:00:02  20:02:00:25:b5:00:00:02
fc2/9      12    0x020002  20:02:00:25:b5:0b:00:04  20:02:00:25:b5:00:00:04
fc2/9      12    0x020003  20:02:00:25:b5:0b:00:01  20:02:00:25:b5:00:00:01
fc2/10     12    0x020020  20:42:00:0d:ec:fd:9e:00  20:0c:00:0d:ec:fd:9e:01
fc2/10     12    0x020021  20:02:00:25:b5:0b:00:03  20:02:00:25:b5:00:00:03
fc2/10     12    0x020022  20:02:00:25:b5:0b:00:00  20:02:00:25:b5:00:00:00

Total number of flogi = 7
phx2-5548-3#

[Diagram: Fabric Interconnect ports fc2/1 and fc2/2 uplinked to 5548 ports fc2/9 and fc2/10, director D2 upstream]

FLOGI After Port Channel

phx2-5548-3# show flogi database
--------------------------------------------------------------------------------
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
--------------------------------------------------------------------------------
San-po3    12    0x020040  24:0c:00:0d:ec:fd:9e:00  20:0c:00:0d:ec:fd:9e:01
San-po3    12    0x020001  20:02:00:25:b5:0b:00:02  20:02:00:25:b5:00:00:02
San-po3    12    0x020002  20:02:00:25:b5:0b:00:04  20:02:00:25:b5:00:00:04
San-po3    12    0x020003  20:02:00:25:b5:0b:00:01  20:02:00:25:b5:00:00:01
San-po3    12    0x020021  20:02:00:25:b5:0b:00:03  20:02:00:25:b5:00:00:03
San-po3    12    0x020022  20:02:00:25:b5:0b:00:00  20:02:00:25:b5:00:00:00

Total number of flogi = 6
phx2-5548-3#

[Diagram: the same links (5548 ports 2/9, 2/10 to Fabric Interconnect ports 2/1, 2/2) now bundled as san-po3]

Port-channel design considerations


All types of switches

Name port-channels the same on both sides

Common port allocation in both fabrics

ISL speeds should be >= edge device speeds

Maximum 16 members per port-channel allowed

Multiple port-channels to same adjacent switch should be equal cost

Member of VSAN 1 + trunk other VSANs

Check TCAM usage on NPIV core switch:

show system internal acl tcam-usage


show system internal acltcam-soc tcam-usage


Port-channel design considerations


Director class

Split port-channel members across multiple line cards

When possible use same port on each LC:

Ex. fc1/5, fc2/5, fc3/5, fc4/5, etc.

If multiple members per linecard distribute across port-groups

show port-resources module x


Port-channel design considerations

Fabric switches
• Ensure enough credits for distance - can borrow buffers from other ports in the port-group that are out-of-service
• Split port-channel members across different forwarding engines to distribute ACLTCAM, using the following tables:
  • For F port-channels to NPV switches (like UCS FIs), each device's zoning ACLTCAM programming is repeated on each port-channel member
  • For E port-channels (or plain ISLs) using IVR, each host/target session that gets translated takes up ACLTCAM on each member
• Ex.: on a 9148S, a 6-member port-channel could be split across the 3 forwarding engines as follows: fc1/1, fc1/2, fc1/17, fc1/18, fc1/33 and fc1/34
• Split large F port-channels into two separate port-channels, each with half the members
• Consider the MDS 9396S for larger-scale deployments

Port-channel design considerations

Fabric switches
• Check TCAM usage after major zoning operations - the zoning region is the most likely to be exceeded

MDS9148s-1# show system internal acltcam-soc tcam-usage

TCAM Entries:
=============
                 Region1    Region2    Region3     Region4    Region5    Region6
Mod Fwd Dir      TOP SYS    SECURITY   ZONING      BOTTOM     FCC DIS    FCC ENA
    Eng          Use/Total  Use/Total  Use/Total   Use/Total  Use/Total  Use/Total
--- --- ------   ---------  ---------  ----------  ---------  ---------  ---------
1   1   INPUT    19/407     1/407      98/2852 *   4/407      0/0        0/0
1   1   OUTPUT   0/25       0/25       0/140       0/25       0/12       1/25
1   2   INPUT    19/407     1/407      0/2852 *    4/407      0/0        0/0
1   2   OUTPUT   0/25       0/25       0/140       0/25       0/12       1/25
1   3   INPUT    19/407     1/407      0/2852 *    4/407      0/0        0/0
1   3   OUTPUT   0/25       0/25       0/140       0/25       0/12       1/25
---------------------------------------------------

F port-channel design considerations

Ports are allocated to forwarding engines according to the following table:

Switch      Fwd      Port Range(s)                     Zoning Region  Bottom Region
            Engines                                    Entries        Entries
MDS 9148    3        fc1/25-36 & fc1/45-48             2852           407
                     fc1/5-12 & fc1/37-44              2852           407
                     fc1/1-4 & fc1/13-24               2852           407
MDS 9250i   4        fc1/5-12 & eth1/1-8               2852           407
                     fc1/1-4 & fc1/13-20 & fc1/37-40   2852           407
                     fc1/21-36                         2852           407
                     ips1/1-2                          2852           407
MDS 9148S   3        fc1/1-16                          2852           407
                     fc1/17-32                         2852           407
                     fc1/33-48                         2852           407

F port-channel design considerations

Switch      Fwd      Port Range(s)  Fwd-Eng  Zoning Region  Bottom Region
            Engines                 Number   Entries        Entries
MDS 9396S   12       fc1/1-8        0        49136          19664
                     fc1/9-16       1        49136          19664
                     fc1/17-24      2        49136          19664
                     fc1/25-32      3        49136          19664
                     fc1/33-40      4        49136          19664
                     fc1/41-48      5        49136          19664
                     fc1/49-56      6        49136          19664
                     fc1/57-64      7        49136          19664
                     fc1/65-72      8        49136          19664
                     fc1/73-80      9        49136          19664
                     fc1/81-88      10       49136          19664
                     fc1/89-96      11       49136          19664

F port-channel design considerations

Switch/Module    Fwd      Port Range(s)  Zoning Region  Bottom Region
                 Engines                 Entries        Entries
DS-X9248-48K9    1        fc1/1-48       27168          2680
DS-X9248-96K9    2        fc1/1-24       27168          2680
                          fc1/25-48      27168          2680
DS-X9224-96K9    2        fc1/1-12       27168          2680
                          fc1/13-24      27168          2680
DS-X9232-256K9   4        fc1/1-8        49136          19664
                          fc1/9-16       49136          19664
                          fc1/17-24      49136          19664
                          fc1/25-32      49136          19664
DS-X9248-256K9   4        fc1/1-12       49136          19664
                          fc1/13-24      49136          19664
                          fc1/25-36      49136          19664
                          fc1/37-48      49136          19664

F port-channel design considerations

Switch/Module    Fwd      Port Range(s)  Zoning Region  Bottom Region
                 Engines                 Entries        Entries
DS-X9448-768K9   6        fc1/1-8        49136          19664
                          fc1/9-16       49136          19664
                          fc1/17-24      49136          19664
                          fc1/25-32      49136          19664
                          fc1/33-40      49136          19664
                          fc1/41-48      49136          19664

MDS Internal CRC handling
• New feature in NX-OS 6.2(13) to handle frames internally corrupted due to bad hardware
• Frames that are received already corrupted are dropped at the ingress port - those frames are not included in this feature
• In rare cases frames can get corrupted internally due to bad hardware; these are then dropped, and the condition is sometimes difficult to detect
• The new feature detects the condition and isolates the failing hardware
• There are 5 possible stages where frames can get corrupted

Internal CRC handling

Stages of Internal CRC Detection and Isolation
The five possible stages at which internal CRC errors may occur in a switch:
1. Ingress buffer of a module
2. Ingress crossbar of a module
3. Crossbar of a fabric module
4. Egress crossbar of a module
5. Egress buffer of a module

Internal CRC handling
• New in NX-OS 6.2(13)
• The modules that support this functionality are:
  • Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module
  • Cisco MDS 9700 48-Port 10-Gbps Fibre Channel over Ethernet Switching Module
  • Cisco MDS 9700 Fabric Module 1
  • Cisco MDS 9700 Supervisors
• Enabled via the following configuration command:
  hardware fabric crc threshold <1-100>
• When the condition is detected, the failing module is powered down

Device-alias
• A device-alias (DA) is a way of naming pWWNs
• DAs are distributed on a fabric basis via CFS
• The device-alias database is independent of VSANs - if a device is moved from one VSAN to another, no DA changes are needed
• device-alias can run in two modes:
  • Basic - device-alias names can be used, but pWWNs are substituted in the config
  • Enhanced - device-alias names exist in the configuration natively; allows rename without zoneset re-activations
• Device-aliases are used in zoning, IVR zoning and port-security
• copy running-config startup-config fabric after making changes!

Device-alias

device-alias confirm-commit

Displays the changes and prompts for confirmation

MDS9710-2(config)# device-alias confirm-commit enable


MDS9710-2(config)# device-alias database
MDS9710-2(config-device-alias-db)# device-alias name edm pwwn 1000000011111111
MDS9710-2(config-device-alias-db)# device-alias commit
The following device-alias changes are about to be committed
+ device-alias name edm pwwn 10:00:00:00:11:11:11:11
Do you want to continue? (y/n) [n]
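The commit preview above shows the CLI normalizing the colon-less pWWN. A small sketch of that normalization (an illustration, not the switch's actual code):

```python
def format_pwwn(raw: str) -> str:
    """Normalize a pWWN to colon-separated hex pairs, as the CLI does."""
    digits = raw.replace(":", "").lower()
    if len(digits) != 16:
        raise ValueError("a pWWN is 8 bytes (16 hex digits)")
    return ":".join(digits[i:i + 2] for i in range(0, 16, 2))

print(format_pwwn("1000000011111111"))  # 10:00:00:00:11:11:11:11
```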


Device-alias

Note: To prevent problems the same device-alias is only allowed once per
commit.

Example:

MDS9148s-1(config)# device-alias database


MDS9148s-1(config-device-alias-db)# device-alias name test pwwn 1122334455667788
MDS9148s-1(config-device-alias-db)# device-alias rename test test1
Command rejected. Device-alias reused in current session :test
Please use 'show device-alias session rejected' to display the rejected set of commands and for the
device-alias best-practices recommendation.


SAN Design Security Challenges
• SAN design security is often overlooked as an area of concern - application integrity and security is addressed, but not the back-end storage network carrying the actual data
• SAN extension solutions now push SANs outside datacenter boundaries
• Not all compromises are intentional: accidental breaches can still have the same consequences
• SAN design security is only one part of a complete data center solution:
  • Host access security - one-time passwords, auditing, VPNs
  • Storage security - data-at-rest encryption, LUN security
[Diagram: threats surrounding the LAN/SAN: external DoS or other intrusion, theft, privilege escalation/unintended privilege, unauthorized internal connections, application tampering (Trojans, etc.), data tampering]

SAN Security
• Secure management access
  • Role-based access control - CLI, SNMP, Web
  • Secure management protocols - SSH, SFTP, and SNMPv3
  • FC CT management security (new in NX-OS 6.2(9)) - ensures only approved devices send FC CT commands
• Secure switch control protocols
  • TrustSec
  • FC-SP (DH-CHAP)
• AAA - RADIUS, TACACS+ and LDAP
  • User, switch and iSCSI host authentication
• Fabric binding - prevents unauthorized switches from joining the fabric
• Port-security - ensures only approved devices log in to the fabric
• VSANs provide secure isolation; hardware-based zoning via port and WWN
[Diagram: RADIUS/TACACS+/LDAP server for authentication, iSCSI-attached servers, SAN protocol security (FC-SP), shared physical storage]

Slow Drain
Slow Drain Device Detection and Congestion Avoidance
• Devices can impart slowness in a fabric
• MDS has advanced features that will identify and mitigate slow-draining devices
• See BRKSAN-3446, "SAN Congestion! Understanding, Troubleshooting, Mitigating in a Cisco Fabric"
• White paper (2013): http://www.cisco.com/en/US/prod/collateral/ps4159/ps6409/ps12970/white_paper_c11-729444.pdf

FC and/or FCoE ?

FC and FCoE are widely available on all Nexus and MDS switches

FC and FCoE freely interoperate

FCoE is Operationally Identical

Need to understand data throughput for various speeds

FCoE is newer and has less diagnostics

Distance must be considered for both FC and FCoE


FC and FCoE are widely available on both Nexus and MDS switches
• MDS supports 2/4/8/16G FC and 10/40G FCoE - does not support consolidated IO
• Nexus 2000/5000/6000 supports 2/4/8/16G FC and 10/40G FCoE - supports consolidated IO
• Nexus 7000 supports 10/40G FCoE
• Nexus 9000 ToR (top of rack) supports FCoE in NPV mode

FC and FCoE freely interoperate
[Diagram: an FC disk array, an FCoE disk array and an FC tape system reached across MDS and Nexus 7000 switches over mixed FC and FCoE links, with Nexus 2/5/6K switches at the edge]

FCoE is Operationally Identical
• Supports both FC and FCoE
• FCoE is treated exactly the same as FC
• After zoning, a device performs registration and then performs discovery
• Which of these are FCoE hosts? The name server output does not distinguish them:

phx2-9513# show fcns database vsan 42
VSAN 42:
--------------------------------------------------------------------------
FCID      TYPE  PWWN                     (VENDOR)   FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0xac0600  N     50:0a:09:83:8d:53:43:54  (NetApp)   scsi-fcp:target
0xac0700  N     50:0a:09:84:9d:53:43:54  (NetApp)   scsi-fcp:target
0xac0c00  N     20:41:54:7f:ee:07:9c:00  (Cisco)    npv
0xac1800  N     10:00:00:00:c9:6e:b7:f0             scsi-fcp:init fc-gs
0xef0000  N     20:01:a0:36:9f:0d:eb:25             scsi-fcp:init fc-gs

The Story of Interface Speeds
• Comparing speeds is more complex than just the apparent speed
• Data throughput is based on both the interface clocking (how fast the interface transmits) and how efficiently the interface transmits (how much encoding overhead)

Protocol   Clocking Gbps  Encoding  Data Rate Gbps  Data Sent MB/s
8G FC      8.500          8b/10b    6.8             850
10G FC     10.51875       64b/66b   10.2            1,275
10G FCoE   10.3125        64b/66b   10.0            1,250
16G FC     14.025         64b/66b   13.6            1,700
32G FC     28.050         64b/66b   27.2            3,400
40G FCoE   41.250         64b/66b   40.0            5,000
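The table's MB/s column follows directly from the clocking and encoding columns. A sketch of the arithmetic:

```python
def data_rate_mb_s(clock_gbps: float, enc_data: int, enc_total: int) -> float:
    """Payload throughput from line clocking and encoding overhead."""
    return clock_gbps * enc_data / enc_total / 8 * 1000  # Gbit/s -> MB/s

print(round(data_rate_mb_s(8.5, 8, 10)))       # 8G FC (8b/10b):     850 MB/s
print(round(data_rate_mb_s(14.025, 64, 66)))   # 16G FC (64b/66b):   1700 MB/s
print(round(data_rate_mb_s(10.3125, 64, 66)))  # 10G FCoE (64b/66b): 1250 MB/s
```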

FCoE is newer and has less diagnostics
• Nexus switches, especially Nexus 5000/6000, are best for smaller deployments
• MDS FC has the most robust:
  • Capabilities - director class, dual supervisors, FC-SP/TrustSec, FEC, Smart Zoning
  • Diagnostics - Port-monitor, ISL diagnostics, SFP detailed diagnostics
  • Troubleshooting - Slowport-monitor, TxWait, SNMP OIDs for slow drain

SAN Extension FC over long distance

BB_Credits and Distance
• 2 Gbps FC: ~1 km per frame
• 4 Gbps FC: ~0.5 km per frame
• 8 Gbps FC: ~0.25 km per frame
• 16 Gbps FC: ~0.125 km per frame
• BB_Credits are used to ensure enough FC frames are in flight
• A full (2112 byte) FC frame is approximately 1 km long at 2 Gbps, 0.5 km long at 4 Gbps, and 0.25 km long at 8 Gbps
• As distance increases, the number of available BB_Credits needs to increase as well
• Insufficient BB_Credits will throttle performance - no data will be transmitted until R_RDY is returned

[Diagram: a 16 km link]
phx2-9513(config)# feature fcrxbbcredit extended
phx2-9513(config)# interface fc1/1
phx2-9513(config-if)# switchport fcrxbbcredit extended 1000
phx2-9513# show interface fc1/1
fc1/1 is up
..
Transmit B2B Credit is 128
Receive B2B Credit is 1000
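The per-speed frame lengths above give a quick rule for sizing credits. A sketch (one-way frames on the wire, following the slide's per-km figures):

```python
import math

def min_bb_credits(distance_km: float, speed_gbps: float) -> int:
    """Credits needed to keep a link of this length streaming: one credit
    per full-size frame on the wire. A 2112-byte frame spans ~1 km at
    2 Gbps, and half that each time the speed doubles."""
    km_per_frame = 2.0 / speed_gbps     # ~1 km at 2G, ~0.125 km at 16G
    return math.ceil(distance_km / km_per_frame)

print(min_bb_credits(16, 16))   # a 16 km link at 16 Gbps: 128 credits
print(min_bb_credits(100, 16))  # 100 km at 16 Gbps: 800 credits
```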

Distance must be considered for both FC and FCoE
• Both FC and FCoE are distance sensitive
• For FC, if there are insufficient credits the link will go idle
• 8 credits per km @ 16 Gbps
• MDS fabric switches and Nexus have lower amounts of B2B credits
[Diagram: a transmitter with 5 Tx credits sends frames 1-5; at time t0 Tx credits reach 0 and the link goes idle until an R_RDY returns a credit at time t1]

SAN Extension FCoE over long distance

FCoE Flow Control
• For long-distance FCoE, the receiving switch's ingress buffer must be large enough to absorb all packets in flight from the time the Pause frame is sent to the time the Pause frame is received
• A 10GE, 50 km link can hold ~300 frames - that means 600+ frames could be either in flight or transmitted between the time the receiver detects buffer congestion and sends a Pause frame and the time the Pause frame is received and the sender stops transmitting
• Latency buffer tuning is platform specific
[Diagram: frames flow from the sender's egress buffer across the link into the receiver's ingress buffer; when the latency buffer fills past the Pause threshold, a Pause frame is sent back]
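The ~300-frame figure can be reproduced from propagation delay. A sketch (the 1000-byte average frame size is an assumption for round numbers):

```python
def frames_in_flight(distance_km: float, gbps: float,
                     frame_bytes: int = 1000) -> int:
    """Frames on the wire one way: light in fibre travels ~5 us per km."""
    one_way_s = distance_km * 5e-6          # one-way propagation delay
    bits = gbps * 1e9 * one_way_s           # bits in flight at line rate
    return int(bits / 8 / frame_bytes)

one_way = frames_in_flight(50, 10)
print(one_way, 2 * one_way)  # ~300 one way; 600+ over the round trip
```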



Distance must be considered for both FC and FCoE

- FCoE buffers must be large enough to contain the frames that can be transmitted during a round-trip time (RTT)
- FCoE drops can occur otherwise
- Verify the supported distance for each FCoE device

[Timeline: at t0 the receiver's xoff threshold is reached and a PFC Pause frame is sent while frames are still in flight; at t1 the Pause frame is received, but further frames are already on the wire; at t2 traffic is paused and frames 9 & 10 are dropped.]

Storage Fabric Topology Considerations

The Importance of Architecture

- SAN designs are traditionally robust: dual fabrics, data loss is not tolerated
- Must manage ratios:
  - Fan in/out
  - ISL oversubscription
  - Virtualized storage IO streams (NPIV-attached devices, server RDM, LPARs, etc.)
  - Queue depth
- Latency:
  - Initiator to target
  - Slow drain
- Performance under load: does my fabric perform the same?
- Application independence: consistent fabric performance regardless of changes to the SCSI profile
  - Number of frames
  - Frame size
  - Speed or throughput


SAN Major Design Factors

- Port density: how many ports now, how many later? Choose a topology that accommodates the port requirements (large port count directors)
- Network performance: what is acceptable? What is unavoidable? (high-performance crossbar)
- Traffic management: preferential routing or resource allocation (QoS, congestion control, reduced FSPF routes)
- Fault isolation: consolidation while maintaining isolation (failure of one device has no impact on others)
- Management: secure, simplified management

Scalability - Port Density: Topology Requirements Considerations

- Number of ports for end devices:
  - How many ports are needed now?
  - How many will be needed in the future?
- What is the expected life of the SAN?
- Hierarchical SAN design (large port count directors)

Best Practice: design to cater for future requirements. This doesn't imply building it all now, but catering for growth avoids costly retrofits tomorrow.


Scalability - Port Density: MDS Switch Selection

- MDS 9148S: 48 ports 16G FC
- MDS 9250i: 40 ports 16G FC + 8 ports 10G FCoE + 2 FCIP ports
- MDS 9396S: 96 ports 16G FC
- MDS 9706: up to 192 ports 16G FC and/or 10G FCoE and/or 40G FCoE
- MDS 9710: up to 384 ports 16G FC and/or 10G FCoE and/or 40G FCoE
- MDS 9718: up to 768 ports 16G FC and/or 10G FCoE and/or 40G FCoE

All MDS 97xx chassis are 32G ready. All 16G MDS platforms are full line rate.


Scalability - Port Density: Nexus Switch Selection

- Nexus 55xx: up to 96 ports 10G FCoE and/or 8G FC
- Nexus 5672UP: up to 48 ports 10G FCoE and/or 16 ports 8G FC
- Nexus 5672UP-16G: up to 48 ports 10G FCoE and/or 24 ports 16G FC
- Nexus 5624Q: 12 ports 40G or 48 ports 10G FCoE
- Nexus 5648Q: 24 ports 40G or 96 ports 10G FCoE
- Nexus 5696Q: up to 32 ports 100G / 96 ports 40G / 384 ports 10G FCoE, or 60 ports 8G FC
- Nexus 56128P: up to 96 ports 10G FCoE and/or 48 ports 8G FC

All Nexus platforms are full line rate.


Traffic Management

- Do different apps/servers have different performance requirements?
- Should bandwidth be reserved for specific applications? Is preferential treatment/QoS necessary?
- Given two alternate paths for traffic between data centers, should traffic use one path in preference to the other (preferential routes)?

(QoS, congestion control, reduced FSPF routes)


Network Performance: Oversubscription Design Considerations

All SAN designs have some degree of oversubscription:

- Without oversubscription, SANs would be too costly
- Oversubscription is introduced at multiple points
- Switches are rarely the bottleneck in SAN implementations
- Device capabilities (peak and sustained) must be considered along with network oversubscription
- Disks do not sustain wire-rate I/O with realistic I/O mixtures; vendors may recommend anywhere from a 6:1 to as high as a 20:1 host-to-disk fan-out ratio (highly application dependent)
- Oversubscription must also be considered during a network failure event

Host oversubscription: the largest variance is observed at this level; DB servers run close to line rate, others are highly oversubscribed. 16Gb line cards are non-oversubscribed.

ISL oversubscription: in a two-tier design, keep the ISL ratio less than the fan-out ratio. Port channels help reduce oversubscription while maintaining HA requirements.

Tape oversubscription: tape needs to sustain close to its maximum data rate (LTO-6 native transfer rate ~160 MBps).
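The fan-out and ISL ratios discussed above reduce to simple bandwidth division. A minimal sketch; `oversubscription` is a name made up here, and the example values are illustrative rather than taken from any specific design:

```python
def oversubscription(edge_ports: int, edge_gbps: float,
                     uplink_ports: int, uplink_gbps: float) -> float:
    """Offered edge bandwidth divided by available uplink/target bandwidth."""
    return (edge_ports * edge_gbps) / (uplink_ports * uplink_gbps)

# 48 host ports at 16G behind 4 x 16G ISLs -> 12:1 ISL oversubscription
print(oversubscription(48, 16, 4, 16))   # 12.0
# 140 hosts at 8G fanned out to 8 storage ports at 16G -> 8.75:1 fan-out
print(oversubscription(140, 8, 8, 16))   # 8.75
```

Run the same division for the failure case (e.g. half the ISLs down) to check the design still meets the target ratio.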


Fault Isolation: Consolidation of Storage

- Single fabric = increased storage utilization + reduced administration overhead
- Major drawback: faults are no longer isolated
- Technologies such as VSANs enable consolidation and scalability while maintaining security and stability
- VSANs constrain fault impacts: faults in one virtual fabric (VSAN) are contained and do not impact other virtual fabrics
- Physical SAN islands (Fabric #1, #2, #3) are virtualized onto a common SAN infrastructure


Data Center SAN Topologies


Structured Cabling: Supporting New EoR & ToR Designs

- Pricing advantage for manufactured cabling systems
- Removes the guessing game of how many strands to pull per cabinet
- Growth at 6 or 12 LC ports per cassette
- Fiber-only cable plant designs possible


Core-Edge: Highly Scalable Network Design

- Traditional SAN design for growing SANs
- High density directors in the core; fabric switches, directors, or blade switches on the edge (End of Row, Top of Rack, blade server)
- Predictable performance
- Scalable growth up to core and ISL capacity
- Evolves to support EoR & ToR
- MDS 9718 as core


Large Edge-Core-Edge / End-of-Row Design

Large Edge/Core/Edge (1920/2160 end device ports per fabric). A fabric shown; repeat for B fabric.

- Traditional Edge-Core-Edge design is ideal for very large centralized services and consistent host-disk performance regardless of location
- Full line rate ports, no fabric oversubscription
- 8Gb or 16Gb hosts and targets
- Services consolidated in the core
- Easy expansion
- MDS 9710 directors throughout
- 240 storage ports at 16Gb (optionally 480 @ 8Gb without changing bandwidth ratios)
- 240 ISLs from storage edge to core @ 16Gb; 240 ISLs from host edge to core @ 16Gb
- 1,680 hosts @ 8Gb or 16Gb

                             Per Fabric                   Total
Ports Deployed               3,456                        6,912
Used Ports                   2,880 @ 16Gb / 3,120 @ 8Gb   5,760 @ 16Gb / 6,240 @ 8Gb
Storage Ports                240 @ 16Gb, or 480 @ 8Gb     480 @ 16Gb, or 960 @ 8Gb
Host Ports                   1,680                        3,360
ISL Ports                    960                          1,920
Host ISL Oversubscription    7:1 @ 16Gb
End-to-End Oversubscription  7:1 @ 16Gb storage; 7:1 @ 8Gb storage
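The quoted ratios can be reproduced directly from the per-fabric figures on this slide (all input values are taken from the slide; note each ISL consumes one port at each end):

```python
hosts, host_isls = 1680, 240        # per fabric, all links at 16 Gbps
storage_ports, storage_isls = 240, 240

host_isl_oversub = hosts / host_isls          # 7.0 -> the quoted 7:1
end_to_end = hosts / storage_ports            # 7.0 at matching port speeds
isl_ports = 2 * (host_isls + storage_isls)    # 960 ISL ports per fabric

print(host_isl_oversub, end_to_end, isl_ports)  # 7.0 7.0 960
```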


Very Large Edge-Core / End-of-Row Design

Very Large Edge/Core/Edge (4,608 end device ports per fabric). A fabric shown; repeat for B fabric.

- Traditional Core-Edge design is ideal for very large centralized services and consistent host-disk performance regardless of location
- Full line rate ports, no fabric oversubscription
- 16Gb hosts and targets
- Services consolidated in the core
- Easy expansion
- MDS 9718 core with 576 (288 per switch) storage ports at 16Gb
- 768 (48 per switch) ISLs from host edge to core @ 16Gb
- MDS 9710 host edge with 4,032 (252 per switch) hosts @ 8Gb or 16Gb

                             Per Fabric     Total
Ports Available              6,912          13,824
Used Ports                   6,144 @ 16Gb   12,288 @ 16Gb
Storage Ports                576 @ 16Gb     1,152 @ 16Gb
Host Ports                   4,032          8,064
ISL Ports                    1,536          3,072
Host ISL Oversubscription    7:1 @ 16Gb
End-to-End Oversubscription  7:1 @ 16Gb storage


SAN Top of Rack: MDS 9148S

SAN Top of Rack (2,288 end device ports per fabric). Both A and B fabrics shown.

- Ideal for centralized services while reducing cabling requirements
- Consistent host/target performance regardless of location in the rack
- 8Gb hosts & 16Gb targets
- Easy edge expansion until maxed out
- Massive cabling infrastructure avoided as compared to EoR designs
- MDS 9710 core; 352 storage ports at 16Gb
- MDS 9148S ToR edge: 2 x 9148S per rack, 48 racks, 44 dual-attached servers per rack (2,112 hosts @ 16Gb)
- 4 ISLs from each edge switch to the core @ 16Gb

                             Per Fabric   Total
Ports Deployed               2,688        5,376
Used Ports                   2,672        5,344
Storage Ports                176 @ 16Gb   352 @ 16Gb
Host Ports                   2,112        4,224
ISL Ports                    192          384
Host ISL Oversubscription    12:1 @ 16Gb
End-to-End Oversubscription  12:1 @ 16G hosts

Top-of-Rack Design - Blade Centers

SAN Top of Rack Blade Centers (1,920 usable ports per fabric). Both A and B fabrics shown.

- Ideal for centralized services
- Consistent host/target performance regardless of location in the blade enclosure or rack
- 8Gb hosts & 16Gb targets
- Need to manage more SAN edge switches/blade switches; NPV attachment reduces fabric complexity
- Assumes little east-west SAN traffic; add blade server ISLs to reduce fabric oversubscription
- MDS 9710 core; 96 storage ports at 16Gb
- 8 ISLs from each edge to the core @ 8Gb
- 12 racks, 72 chassis, 96 dual-attached blade servers per rack (960 hosts @ 8Gb)

Ports Deployed               1,920
Used Ports                   192 @ 16Gb / 1,056 @ 8Gb
Storage Ports                192 @ 16Gb, or 192 @ 8Gb
Host Ports                   2,304
Host ISL Oversubscription    4:1 @ 8G
End-to-End Oversubscription  6:1 @ 16Gb storage; 12:1 @ 8Gb storage

Medium Scale Dual Fabric: Collapsed Core Design

Medium Scale Dual Fabric (768 end device ports per fabric). A fabric shown; repeat for B fabric.

- Ideal for centralized services
- Consistent host/target performance regardless of location
- 8Gb or 16Gb hosts & targets (if they exist)
- Relatively easy edge expansion to a Core/Edge EoR design
- Supports blade center connectivity
- MDS 9710; 96 storage ports at 16Gb; 672 hosts @ 16Gb

Ports Deployed               768
Used Ports                   768 @ 16Gb
Storage Ports                96 @ 16Gb
Host Ports                   672 @ 16Gb
Host ISL Oversubscription    N/A
End-to-End Oversubscription  7:1 @ 16Gb


POD SAN Design

- Ideal for centralized services
- Consistent host/target performance regardless of location in the blade enclosure or rack
- 10/16Gb hosts & 16Gb targets
- Need to manage more SAN edge switches/blade switches; NPV attachment reduces fabric complexity
- Add blade server ISLs to reduce fabric oversubscription
- MDS 9396S core in each fabric; 36-48 storage ports at 16Gb per fabric
- A side: 6 ISLs from each edge to core @ 16Gb; B side: 8 ISLs from each edge to core @ 8Gb
- MDS 9148S and UCS FI 6248UP at the edge
- 6 racks, 288 blades (48 dual-attached blade servers per rack), or 6 racks, 252 servers (42 dual-attached servers per rack)
- 252 hosts @ 16Gb or 288 hosts @ 10Gb

                             Per Fabric     Total
Ports Deployed               384            768
Used Ports                   336            672
Storage Ports                36-48 @ 16Gb   72-96 @ 16Gb
Host Ports                   252            504
ISL Ports                    36             72
Host ISL Oversubscription    7:1 @ 16Gb
End-to-End Oversubscription  7:1 @ 16G hosts



FI 6332-16UP, FI 6332 UCS SAN Design

[Diagram: two use cases. FI 6332-16UP: UCS B-Series (B200/B260/B460 via IOM 2304) and UCS C-Series (C220/C240/C460) attach at 40G; 40G uplinks to Nexus 7K/9K; 16G FC and 40G FCoE uplinks to MDS 9700 and on to the storage array. FI 6332: the same 40G B-Series/C-Series attachment and Nexus 7K/9K uplinks; 16G FC and 40G FCoE uplinks to MDS 9700 and on to the storage array.]


Intelligent SAN Services

Enhancing SAN Design with Services

- Extend fabrics: FCIP, extended buffer-to-buffer credits, encrypt the pipe
- SAN services extend the effective distance for remote applications (SAN extension with FCIP)
- SAN IO acceleration: write acceleration, tape acceleration
- Enhance array replication requirements: reduces WAN-induced latency, improves application performance over distance
- Data migration with DMM
- IO acceleration: the fabric is aware of all data frames from initiator to target



SAN Extension with FCIP

Fibre Channel over IP: encapsulation of Fibre Channel frames into IP packets, tunneled through an existing TCP/IP network infrastructure in order to connect geographically distant islands.

- Write Acceleration to improve throughput and latency
- Hardware-based compression
- Hardware-based IPsec encryption

[Diagram: array-to-array replication over an FCIP tunnel between TE ports.]

FC Redirect - How IOA Works

- Replication starts
- The initiator-to-target flow is redirected to the IOA engine
- The flow is accelerated and sent towards the normal routing path

[Diagram: initiator and target on either side of a MAN/WAN, with IOA engines (IOA = I/O Accelerator) presenting a virtual target to the initiator and a virtual initiator to the target.]

Data Acceleration

A fabric service to accelerate I/O between SAN devices:

- Accelerates SCSI I/O over both Fibre Channel (FC) and Fibre Channel over IP (FCIP) links
- Provides both Write Acceleration (WA) and Tape Acceleration (TA)
- I/O Acceleration node platforms: MSM-18/4, SSN-16, MDS-9222i, MDS-9250i
- Uses FC Redirect

(IOA = I/O Accelerator)



IOA FCIP Tape Backup: Large Health Insurance Firm

MDS IOA results:

- 92% FCIP throughput increase
- Highly resilient: clustering of IOA engines allows for load balancing and failover
- Improved scalability: scale without increasing management overhead
- Significant reuse of existing infrastructure: all chassis and common equipment re-utilized
- Flat VSAN topology: simple capacity and availability planning



Data Mobility: Data Mobility Manager

[Diagram: application servers continue issuing application I/O while DMM migrates data from the old array to the new array.]

Migrates data between storage arrays for:

- Technology refreshes
- Workload balancing
- Storage consolidation

DMM offers:

- Online migration of heterogeneous arrays
- Simultaneous migration of multiple LUNs
- Unequal-size LUN migration
- Rate-adjusted migration
- Verification of migrated data
- Dual fabric support
- CLI and wizard-based management with Cisco Fabric Manager
- No metering on the number of terabytes migrated or the number of arrays
- Requires no SAN reconfiguration or rewiring
- Uses FC Redirect

SAN Extension - CWDM

Coarse Wavelength Division Multiplexing

- 8-channel WDM using 20nm spacing
- Colored CWDM SFPs used in the FC switch
- Optical multiplexing done in the OADM, a passive device

[Diagram: optical transmitters (TX) and receivers (RX) at each site, with OADMs multiplexing the wavelengths onto an optical fiber pair.]

SAN Extension - DWDM

Dense Wavelength Division Multiplexing

- DWDM systems use optical devices to combine the output of several optical transmitters
- Higher-density technology compared with CWDM: <1nm spacing
- Optical splitter protection available

[Diagram: optical transmitters and receivers connected through DWDM devices over an optical fiber pair.]



Dense vs Coarse (DWDM vs CWDM)

                   DWDM                 CWDM
Application        Long Haul            Metro
Amplifiers         Typically EDFAs      Almost Never
# Channels         Up to 80             Up to 8
Channel Spacing    0.4 nm               20 nm
Distance           Up to 3000 km        Up to 80 km
Spectrum           1530 nm to 1560 nm   1270 nm to 1610 nm
Filter Technology  Intelligent          Passive

[Diagram: Site 1 to Site 2 links - MDS and array connected over DWDM via ONS, and over CWDM directly.]

Summary

- Drivers in the DC are forcing change: 10G convergence & server virtualization; it's not just about FCP anymore - FCoE, NFS, and iSCSI are being adopted
- Many design options: some optimized for performance, some for management, others for cable plant optimization
- Proper SAN design is holistic in its approach: performance, scale, and management attributes all play critical roles
- Not all security issues are external
- Fault isolation goes beyond SAN A/B separation
- Consider performance under load
- Design for SAN services


Additional Relevant Sessions

Storage Networking at Cisco Live Las Vegas 2016:

- BRKARC-1222 - Cisco MDS/Nexus SAN Portfolio: Next Phase of Storage Networking
- BRKSAN-3446 - SAN Congestion! Understanding, Troubleshooting, Mitigating in a Cisco Fabric (Wednesday 4PM)
- BRKSAN-3101 - Troubleshooting Cisco MDS 9000 Fibre Channel Fabrics (Wednesday 8AM, Thursday 8AM)
- BRKCCIE-3351 - Storage Networking for CCIE Data Center Candidates (Thursday 8AM)


Call to Action

Visit the World of Solutions for:

- Multiprotocol Storage Networking booth
- Data Center Switching Whisper Suite: see the MDS 9718, Nexus 5672UP, 2348UPQ, and MDS 40G FCoE blade; strategy & roadmap (product portfolio includes Cisco Nexus 2K, 5K, 6K, 7K, and MDS products)
- Technical Solution Clinics
- Meet the Engineer (available Tuesday and Thursday)


Complete Your Online Session Evaluation

- Give us your feedback to be entered into a Daily Survey Drawing. A daily winner will receive a $750 Amazon gift card.
- Complete your session surveys through the Cisco Live mobile app or from the Session Catalog on CiscoLive.com/us.

Don't forget: Cisco Live sessions will be available for viewing on-demand after the event at CiscoLive.com/Online.


Continue Your Education

Demos in the Cisco campus

Walk-in Self-Paced Labs

Lunch & Learn

Meet the Engineer 1:1 meetings

Related sessions


Please join us for the Service Provider Innovation Talk featuring:

- Yvette Kanouff | Senior Vice President and General Manager, SP Business
- Joe Cozzolino | Senior Vice President, Cisco Services

Thursday, July 14th, 2016, 11:30 am - 12:30 pm, in the Oceanside A room

What to expect from this Innovation Talk:

- Insights on market trends and forecasts
- Preview of key technologies and capabilities
- Innovative demonstrations of the latest and greatest products
- Better understanding of how Cisco can help you succeed

Register to attend the session live now or watch the broadcast on cisco.com

Thank you