
FabricPath Technology and Design

BRKDCT-2081

Agenda
- FabricPath Introduction
- FabricPath Technical Overview
- FabricPath and TRILL
- FabricPath Use Cases and Designs
- FabricPath Monitoring and Troubleshooting
- Summary

© 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public

FabricPath Introduction


Eternal Debates on Network Design

Layer 2 or Layer 3? Both Layer 2 and Layer 3 are required for any network design.

(Figure: three-tier topology with an L3 core and L2 access layer carrying multiple VLANs)

Layer 3 strengths:
- Subnets provide fault isolation
- Scalable control planes, with inherent support for multipathing and multiple topologies
- High availability with fast convergence
- Additional loop-mitigation mechanisms in the data plane (e.g., TTL, RPF check)

Layer 2 strengths:
- Simplicity: no planning or configuration required for addressing or the control plane
- A single control-plane protocol for unicast, broadcast, and multicast
- Easy application development

Cisco has solutions for both Layer 2 and Layer 3 to satisfy customer requirements.

L2 Network Requirements inside the DC

- Maximize bisectional bandwidth
- Scalable Layer 2 domain
- High availability:
  - Resilient control plane
  - Fast convergence upon failure
  - Fault-domain isolation
- Facilitate application deployment: workload mobility, clustering, etc.
- Multipathing / multiple topologies

L2 Provides Flexibility in the Data Center

- Layer 2 is required by data center applications
- Layer 2 is plug and play
- Layer 2 is Layer 3 agnostic
- With Layer 2:
  - Server mobility does not require interaction between the network and server teams
  - Theoretically, there is no physical constraint on server location

L2 Requires a Tree

(Figure: a physical topology of switches S1, S2, S3 with 11 physical links reduced to 5 logical links)

- Branches of a tree never interconnect (no loops)
- Spanning Tree Protocol (STP) is typically used to build this tree
- A tree topology implies:
  - Wasted bandwidth and increased oversubscription
  - Sub-optimal paths
  - Conservative, timer-based convergence; STP failure is catastrophic (fails open)

Virtual Port Channel (vPC)

- Simple building block
- Introduces some changes to the data plane
- Provides active/active redundancy
- Does not rely on STP (STP is kept as a safeguard)
- Limited to a pair of switches (enough for most cases)

(Figure: without vPC, redundancy is handled by STP and one port is blocked; within a vPC domain, redundancy is handled by vPC with data-plane-based loop prevention)

MAC Address Scaling and L2 Bridging

- MAC addresses encode no location or network hierarchy
- The default forwarding behavior in a bridged network is flooding
- The MAC filtering database limits the scope of flooding
- Ultimately, this does not scale: every switch in the Layer 2 domain learns every MAC

(Figure: every switch in the Layer 2 domain ends up holding the same MAC table)

Network Addressing Scheme: MAC vs. IP

(Figure: a flat MAC address such as 0011.1111.1111 carries no hierarchy, while an IP host such as 10.0.0.10/24 sits inside a summarizable network such as 10.0.0.0/24)

L2 forwarding (bridging):
- Data-plane learning
- Flat address space and forwarding table (MACs everywhere!)
- Flooding required for unknown unicast destinations
- Destination MACs must be known by every switch in the network to avoid flooding

L3 forwarding (routing):
- Control-plane learning
- Hierarchical address space and forwarding
- Forwarding only to destinations with matching routes in the table
- Flooding isolated within subnets
- No dependence on the data plane for maintaining the forwarding table

The Next Era of Layer 2 Networks

What can be improved?
- Network addressing scheme: flat -> hierarchical
  - An additional header allows L2 routing instead of bridging
  - Provides an additional loop-prevention mechanism, such as TTL
- Address learning: data plane -> control plane
  - Eliminates the need to program all MACs on every switch to avoid flooding
- Control plane: distance-vector -> link-state
  - Improves scalability, minimizes convergence time, and inherently allows multipathing

The ultimate solution must take both the control plane and the data plane into consideration this time!

Introducing Cisco FabricPath

An NX-OS innovation for Layer 2 networks, FabricPath combines:
- Layer 2 strengths: simple configuration, flexible provisioning, low cost
- Layer 3 strengths: leveraged bandwidth, fast convergence, high scalability

The result: simplicity, flexibility, bandwidth, availability, resilience, and low cost.

"The FabricPath capability within Cisco's NX-OS offers dramatic increases in network scalability and resiliency for our service delivery data center. FabricPath extends the benefits of the Nexus 7000 in our network, allowing us to leverage a common platform, simplify operations, and reduce operational costs."
Mr. Klaus Schmid, Head of DC Network & Operating, T-Systems International GmbH

FabricPath: an Ethernet Fabric

Enabling network fabrics:
- Connect a group of switches using an arbitrary topology
- With a simple CLI, aggregate them into a fabric:

    N7K(config)# interface ethernet 1/1
    N7K(config-if)# switchport mode fabricpath

- An open protocol based on L3 technology provides fabric-wide intelligence and ties the elements together

What is a Fabric?

- Externally, a fabric looks like a single switch
- Internally, a protocol adds fabric-wide intelligence and ties the elements together. In a plug-and-play fashion, this protocol provides:
  - Optimal, low-latency any-to-any connectivity
  - High bandwidth and high resiliency
  - Open management and troubleshooting
- Cisco FabricPath provides additional capabilities in terms of scalability and L3 integration

FabricPath Simplicity from the Outside

From multi-domain silos to one fabric: any app, anywhere!

- Benefits the server team by providing a network fabric that looks like a single switch
- Breaks down silos, permits workload mobility, and provides maximum flexibility
- Lowers OPEX by simplifying server-team operations
- Reduces dependency on, and interaction with, the network team

FabricPath Simplicity from the Inside

Benefits the network team by:
- Reducing the number of switches:
  - Higher port density
  - Lower oversubscription
- Isolating the network from the users:
  - No impact due to topology changes
  - The fabric can be upgraded or reconfigured live
- Utilizing an open protocol:
  - Unicast, multicast, broadcast, and VLAN pruning all controlled by a single control protocol
  - Maintenance and troubleshooting equivalent to an L3 network
  - Easy to extend, providing standards compliance with Cisco value-add


Cisco FabricPath Overview

Data plane innovation:
- FabricPath encapsulation
- Conversational learning
- Routing, not bridging
- Built-in loop mitigation: Time-to-Live (TTL) and RPF check

Control plane innovation:
- Plug-and-play Layer 2 IS-IS
- Supports unicast and multicast
- Fast, efficient, and scalable
- Equal-Cost Multipathing (ECMP)
- VLAN and multicast pruning

Both are delivered on Cisco NX-OS and the Cisco Nexus platform.

FabricPath versus Classic Ethernet Interfaces

Classic Ethernet (CE) interfaces:
- Connect to existing NICs and traditional network devices
- Send and receive traffic in 802.3 Ethernet frame format
- Participate in the STP domain
- Forward based on the MAC table

FabricPath interfaces:
- Connect to other FabricPath devices
- Send and receive traffic with the FabricPath header
- No spanning tree! No MAC learning
- Exchange topology information through L2 IS-IS adjacencies
- Forward based on the Switch ID table

FabricPath IS-IS

- FabricPath IS-IS replaces STP as the control-plane protocol in a FabricPath network
- Introduces a link-state protocol with ECMP support for Layer 2 forwarding
- Exchanges reachability of Switch IDs and builds forwarding trees
- Improves failure detection, network reconvergence, and high availability
- Minimal IS-IS knowledge required; no user configuration by default
- Maintains the plug-and-play nature of Layer 2

Why IS-IS?

A few key reasons:
- No IP dependency: no IP reachability is needed to form adjacencies between devices
- Easily extensible: using custom TLVs, IS-IS devices can exchange information about virtually anything
- Provides SPF routing, with excellent topology-building and reconvergence characteristics

Basic FabricPath Data Plane Operation

Encapsulation creates a hierarchical address scheme.

(Figure: Host A behind ingress switch S10 sends a frame to Host B behind egress switch S20; the original frame DMAC B / SMAC A / payload crosses the fabric carrying an added outer header DSID 20 / SSID 10)

- The ingress FabricPath switch determines the destination Switch ID and imposes the FabricPath header
- The destination Switch ID is used to make routing decisions through the FabricPath core
- No MAC learning or lookups are required inside the core
- The egress FabricPath switch removes the FabricPath header and forwards the frame to CE

FabricPath Encapsulation

A 16-byte MAC-in-MAC header.

Classical Ethernet frame:
    DMAC | SMAC | 802.1Q | Etype | Payload | CRC

Cisco FabricPath frame (original CE frame preserved inside):
    Outer DA (48) | Outer SA (48) | FP Tag (32) | DMAC | SMAC | 802.1Q | Etype | Payload | CRC (new)

Outer DA/SA layout:
    Endnode ID (5:0), 6 bits | U/L, 1 bit | I/G, 1 bit | Endnode ID (7:6), 2 bits | RSVD, 1 bit | OOO/DL, 1 bit | Switch ID, 12 bits | Sub-Switch ID, 8 bits | Port ID, 16 bits

FP Tag layout:
    Etype, 16 bits | Ftag, 10 bits | TTL, 6 bits

Field definitions:
- Switch ID: unique number identifying each FabricPath switch
- Sub-Switch ID: identifies devices/hosts connected via vPC+
- Port ID: identifies the destination or source interface
- Ftag (forwarding tag): unique number identifying the topology and/or multidestination distribution tree
- TTL: decremented at each switch hop to prevent frames from looping indefinitely
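The FP Tag field widths lend themselves to a quick bit-packing illustration. This is a hypothetical sketch, not NX-OS code: the field order mirrors the slide's layout, the exact on-wire bit ordering is an assumption, and 0x8903 (the commonly cited FabricPath Etype) is used as an example value.

```python
# Sketch: packing/unpacking the 32-bit FP Tag described above
# (16-bit Etype | 10-bit Ftag | 6-bit TTL). Bit ordering is illustrative.

def pack_fp_tag(etype: int, ftag: int, ttl: int) -> int:
    assert 0 <= etype < 2**16 and 0 <= ftag < 2**10 and 0 <= ttl < 2**6
    return (etype << 16) | (ftag << 6) | ttl

def unpack_fp_tag(tag: int):
    # Reverse the packing: top 16 bits Etype, middle 10 bits Ftag, low 6 bits TTL.
    return (tag >> 16) & 0xFFFF, (tag >> 6) & 0x3FF, tag & 0x3F

tag = pack_fp_tag(0x8903, 1, 32)      # tree/topology 1, TTL 32
assert unpack_fp_tag(tag) == (0x8903, 1, 32)
```

Note that the 6-bit TTL caps any FabricPath path at 63 hops, which is what bounds transient loops later in this deck.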

FabricPath MAC Table

- Edge switches maintain both a MAC address table and a Switch ID table
- The ingress switch uses the MAC table to determine the destination Switch ID
- The egress switch (optionally) uses the MAC table to determine the output switchport
- Local MACs point to switchports; remote MACs point to Switch IDs

FabricPath MAC table on S100 (hosts A and B local; C behind S101, D behind S200):
    MAC A -> e1/1 (local)
    MAC B -> e1/2 (local)
    MAC C -> S101 (remote)
    MAC D -> S200 (remote)
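The two-table lookup at an edge switch can be sketched as follows. The table contents mirror the S100 example above; the function name and structure are illustrative, not NX-OS internals.

```python
# Edge-switch forwarding sketch: the MAC table resolves either a local port or
# a destination Switch ID; remote destinations then consult the Switch ID table.
MAC_TABLE = {"A": ("local", "e1/1"), "B": ("local", "e1/2"),
             "C": ("remote", "S101"), "D": ("remote", "S200")}
SWITCH_ID_TABLE = {"S101": ["L1", "L2", "L3", "L4"],
                   "S200": ["L1", "L2", "L3", "L4"]}

def forward(dmac):
    kind, target = MAC_TABLE.get(dmac, ("flood", None))
    if kind == "local":
        return ("deliver", target)                  # directly attached port
    if kind == "remote":
        return ("route", SWITCH_ID_TABLE[target])   # candidate fabric next hops
    return ("flood", None)          # unknown unicast -> multidestination tree

assert forward("A") == ("deliver", "e1/1")
assert forward("D") == ("route", ["L1", "L2", "L3", "L4"])
```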

show mac address-table dynamic

    S100# sh mac address-table dynamic
    Legend:
            * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
            age - seconds since last seen,+ - primary entry using vPC Peer-Link
       VLAN     MAC Address      Type      age     Secure NTFY    Ports/SWID.SSID.LID
    ---------+-----------------+--------+---------+------+----+------------------
    * 10       0000.0000.0001   dynamic   0          F     F     Eth1/15
    * 10       0000.0000.0002   dynamic   0          F     F     Eth1/15
    * 10       0000.0000.0003   dynamic   0          F     F     Eth1/15
    * 10       0000.0000.0004   dynamic   0          F     F     Eth1/15
    * 10       0000.0000.0005   dynamic   0          F     F     Eth1/15
    * 10       0000.0000.0006   dynamic   0          F     F     Eth1/15
    * 10       0000.0000.0007   dynamic   0          F     F     Eth1/15
    * 10       0000.0000.0008   dynamic   0          F     F     Eth1/15
    * 10       0000.0000.0009   dynamic   0          F     F     Eth1/15
    * 10       0000.0000.000a   dynamic   0          F     F     Eth1/15
      10       0000.0000.000b   dynamic   0          F     F     200.0.30
      10       0000.0000.000c   dynamic   0          F     F     200.0.30
      10       0000.0000.000d   dynamic   0          F     F     200.0.30
      10       0000.0000.000e   dynamic   0          F     F     200.0.30
      10       0000.0000.000f   dynamic   0          F     F     200.0.30
      10       0000.0000.0010   dynamic   0          F     F     200.0.30
      10       0000.0000.0011   dynamic   0          F     F     200.0.30
      10       0000.0000.0012   dynamic   0          F     F     200.0.30
      10       0000.0000.0013   dynamic   0          F     F     200.0.30
      10       0000.0000.0014   dynamic   0          F     F     200.0.30
    S100#

Locally learned MACs point to a physical port (Eth1/15); remote MACs point to a SWID.SSID.LID triple (200.0.30, i.e., switch 200).

(Figure: S100 and S200 connected through spines S10-S40 over port channels po1-po4)

FabricPath Control Plane Operation

Plug-and-play L2 IS-IS manages the forwarding topology:
- FabricPath IS-IS manages the Switch ID (routing) table
- All FabricPath-enabled switches are automatically assigned a Switch ID (no user configuration required)
- The algorithm computes the shortest (best) path to each Switch ID based on link metrics
- Equal-cost paths are supported between FabricPath switches

FabricPath routing table on S100 (spines S10-S40 reached over links L1-L4):
    S10  -> L1              (one best path, via L1)
    S20  -> L2
    S30  -> L3
    S40  -> L4
    S101 -> L1, L2, L3, L4  (four equal-cost paths)
    S200 -> L1, L2, L3, L4  (four equal-cost paths)

Building the FabricPath Routing Table

Every switch computes its own Switch ID table over the same topology (spines S10-S40; edge switches S100, S101, S200; links L1-L4 from S100, L5-L8 from S101, and L9-L12 from S200 to the four spines):

Routing table on S10:
    S20, S30, S40 -> L1, L5, L9
    S100 -> L1    S101 -> L5    S200 -> L9

Routing table on S40:
    S10, S20, S30 -> L4, L8, L12
    S100 -> L4    S101 -> L8    S200 -> L12

Routing table on S100:
    S10 -> L1    S20 -> L2    S30 -> L3    S40 -> L4
    S101, S200 -> L1, L2, L3, L4

Routing table on S200:
    S10 -> L9    S20 -> L10    S30 -> L11    S40 -> L12
    S100, S101 -> L9, L10, L11, L12
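The per-switch tables above are the output of a shortest-path computation over the link-state database. A minimal sketch, assuming the slide's two-tier topology and a uniform link metric of 10 (real IS-IS metric handling is more involved):

```python
import heapq
from collections import defaultdict

# Hypothetical topology from the slide: L1-L4 connect S100, L5-L8 connect S101,
# and L9-L12 connect S200 to spines S10-S40. Uniform metric of 10 per link.
LINKS = [("S100","S10","L1"), ("S100","S20","L2"), ("S100","S30","L3"), ("S100","S40","L4"),
         ("S101","S10","L5"), ("S101","S20","L6"), ("S101","S30","L7"), ("S101","S40","L8"),
         ("S200","S10","L9"), ("S200","S20","L10"),("S200","S30","L11"),("S200","S40","L12")]
METRIC = 10

def switch_id_table(src):
    adj = defaultdict(list)
    for a, b, name in LINKS:
        adj[a].append((b, name)); adj[b].append((a, name))
    # Pass 1: shortest-path distances (Dijkstra).
    dist, pq = {src: 0}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, _ in adj[u]:
            if d + METRIC < dist.get(v, float("inf")):
                dist[v] = d + METRIC
                heapq.heappush(pq, (dist[v], v))
    # Pass 2: collect ALL equal-cost first-hop links, in distance order.
    fhop = defaultdict(set)
    for v in sorted(dist, key=dist.get):
        for u, name in adj[v]:
            if dist.get(u, float("inf")) + METRIC == dist[v]:
                fhop[v] |= {name} if u == src else fhop[u]
    return {sw: sorted(h) for sw, h in fhop.items() if sw != src}

table = switch_id_table("S100")
assert table["S10"] == ["L1"]                      # one best path
assert table["S200"] == ["L1", "L2", "L3", "L4"]   # four equal-cost paths
```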

show fabricpath route

    S100# sh fabricpath route
    FabricPath Unicast Route Table
    'a/b/c' denotes ftag/switch-id/subswitch-id
    '[x/y]' denotes [admin distance/metric]
    ftag 0 is local ftag
    subswitch-id 0 is default subswitch-id

    FabricPath Unicast Route Table for Topology-Default

    0/100/0, number of next-hops: 0
        via ---- , [60/0], 5 day/s 18:38:46, local
    1/10/0, number of next-hops: 1
        via Po1, [115/10], 0 day/s 04:15:58, isis_l2mp-default
    1/20/0, number of next-hops: 1
        via Po2, [115/10], 0 day/s 04:16:05, isis_l2mp-default
    1/30/0, number of next-hops: 1
        via Po3, [115/10], 2 day/s 08:49:51, isis_l2mp-default
    1/40/0, number of next-hops: 1
        via Po4, [115/10], 2 day/s 08:47:56, isis_l2mp-default
    1/200/0, number of next-hops: 4
        via Po1, [115/20], 0 day/s 04:15:58, isis_l2mp-default
        via Po2, [115/20], 0 day/s 04:15:58, isis_l2mp-default
        via Po3, [115/20], 2 day/s 08:49:51, isis_l2mp-default
        via Po4, [115/20], 2 day/s 08:47:56, isis_l2mp-default
    S100#

S100's own Switch ID (0/100/0) is local; each spine is one hop away (metric 10), and S200 is reachable over four equal-cost next hops (metric 20).

FabricPath ECMP

- When multiple forwarding paths are available, path selection is based on an ECMP hash function
- Up to 16 next-hop interfaces per destination Switch ID
- The number of next hops installed is controlled by the maximum-paths command under the FabricPath IS-IS process (default is 16)

(Figure: S100 with 16 equal-cost uplinks to spines S1-S16)
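The idea of hash-based path selection can be illustrated with a toy flow hash. The actual NX-OS hash inputs and algorithm are platform-specific (typically mixing L2/L3/L4 fields); a CRC32 over a flow tuple merely demonstrates deterministic spreading.

```python
import zlib

# Illustrative flow-hash next-hop selection: the same flow always maps to the
# same path, while different flows spread across the available next hops.
def pick_next_hop(next_hops, src_mac, dst_mac, vlan):
    key = f"{src_mac}|{dst_mac}|{vlan}".encode()
    return next_hops[zlib.crc32(key) % len(next_hops)]

paths = ["po1", "po2", "po3", "po4"]
nh = pick_next_hop(paths, "0000.0000.0001", "0000.0000.000b", 10)
assert nh in paths
# Deterministic per flow: no packet reordering within a conversation.
assert nh == pick_next_hop(paths, "0000.0000.0001", "0000.0000.000b", 10)
```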

Multiple Topologies

A topology is a group of links in the fabric:
- By default, all links are part of topology 0
- Other topologies can be created by assigning a subset of the links to them
- A link can belong to several topologies
- A VLAN is mapped to a single topology
- Topologies can be used for traffic engineering, security, etc.

(Figure: topology 0 comprising all links L1-L12, with topologies 1 and 2 each owning a subset of those links)

Conversational MAC Learning

- A MAC learning method designed to conserve MAC table entries on FabricPath edge switches; FabricPath core switches do not learn MACs at all
- Each forwarding engine distinguishes between two types of MAC entry:
  - Local MAC: the MAC of a host directly connected to the forwarding engine
  - Remote MAC: the MAC of a host connected to another forwarding engine or switch
- A forwarding engine learns a remote MAC only if a bidirectional conversation is occurring between the local and remote MAC
- MAC learning is not triggered by flood frames
- Conversational learning is enabled in all FabricPath VLANs

Conversational MAC Learning: Example

Hosts A (on S100), B (on S200), and C (on S300); A talks to B, and B talks to C:

FabricPath MAC table on S100:
    A -> e1/1 (local)
    B -> S200 (remote)       (no entry for C)

FabricPath MAC table on S200:
    A -> S100 (remote)
    B -> e12/1 (local)
    C -> S300 (remote)

FabricPath MAC table on S300:
    B -> S200 (remote)
    C -> e7/10 (local)       (no entry for A)

Conversational MAC Learning: Scaling

- In an STP domain, all MACs must be learned on every switch; large L2 domains and virtualization challenge MAC table scalability
- Conversational learning optimizes resource utilization by learning only the MAC addresses actually required:
  - Local MAC: source-MAC learning happens only for traffic received on CE ports
  - Remote MAC: source MACs of traffic received on FabricPath ports are learned only if the destination MAC is already known as local

(Figure: in the STP domain every switch carries every MAC; in the fabric, each edge switch carries only the MACs of its own conversations)
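The two learning rules above can be sketched as a tiny state machine for an edge switch. The class and method names are illustrative, not NX-OS internals.

```python
# Sketch of conversational MAC learning at a FabricPath edge switch:
# CE-port source MACs are always learned; fabric-side source MACs are learned
# only when the destination is already a known local MAC (and never from floods).
class EdgeSwitch:
    def __init__(self):
        self.mac_table = {}   # mac -> ("local", port) or ("remote", switch_id)

    def rx_ce(self, src_mac, port):
        # Source MACs on CE ports are learned unconditionally as local.
        self.mac_table[src_mac] = ("local", port)

    def rx_fabric(self, src_mac, src_sid, dst_mac, flood=False):
        # Remote source MAC learned only for a bidirectional conversation.
        entry = self.mac_table.get(dst_mac)
        if not flood and entry and entry[0] == "local":
            self.mac_table[src_mac] = ("remote", src_sid)

sw = EdgeSwitch()
sw.rx_fabric("B", 200, "A", flood=True)   # broadcast ARP: B is NOT learned
assert "B" not in sw.mac_table
sw.rx_ce("A", "e1/1")                     # A is now local
sw.rx_fabric("B", 200, "A")               # unicast reply to known local A
assert sw.mac_table["B"] == ("remote", 200)
```

This mirrors the ARP walkthrough later in the deck: the broadcast request teaches nothing remotely, while the unicast reply completes the conversation and installs the remote entry.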

FabricPath Trees

- Used to forward L2 multidestination traffic (unknown unicast, broadcast, and multicast) inside the L2 fabric
- A tree topology is required to forward multidestination traffic properly: one ingress switch, many egress switches
- The same method is used at L3 (e.g., PIM source trees / shared trees)
- One or more root devices are first elected for the L2 fabric
- A tree spanning from each root is then formed and assigned a network-wide unique ID
- Support for multiple trees allows Cisco FabricPath to multipath even multidestination traffic
- The ingress switch determines the tree for each traffic flow

(Figure: Tree 1 rooted at S1, spanning the fabric toward S2-S16 and edge switches S100-S200; S100's interfaces on Tree 1 are L1 and L101)

FabricPath Multidestination Trees

- Multidestination traffic is constrained to loop-free trees touching all FabricPath switches
- A root switch is assigned for each multidestination tree in the FabricPath domain
- A loop-free tree is built from each root and assigned a network-wide identifier (the Ftag)
- Support for multiple multidestination trees provides multipathing for multidestination traffic
- Two trees are supported in NX-OS release 5.1

(Figure: in a fabric of spines S10-S40 and edges S100, S101, S200, logical Tree 1 is rooted at S10 and logical Tree 2 is rooted at S40)

Multidestination Trees and the Role of the Ingress FabricPath Switch

- The ingress FabricPath switch determines which tree to use for each flow; the other FabricPath switches forward based on the tree selected by the ingress switch
- Broadcast and unknown unicast typically use the first tree
- Hash-based tree selection is used for multicast, with several configurable hash options

Multidestination trees on switch S100:
    Tree 1 -> L1, L2, L3, L4
    Tree 2 -> L4
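The ingress-switch selection logic can be sketched as follows. The hash is a stand-in (the configurable NX-OS hash options are platform-specific); the two Ftags match the two trees supported in NX-OS 5.1.

```python
import zlib

TREES = [1, 2]   # Ftags of the two multidestination trees

# Sketch: broadcast/unknown unicast pinned to the first tree; multicast
# spread across trees by a per-flow hash chosen at the ingress switch.
def select_tree(kind, flow=b""):
    if kind in ("broadcast", "unknown-unicast"):
        return TREES[0]
    return TREES[zlib.crc32(flow) % len(TREES)]   # multicast

assert select_tree("broadcast") == 1
assert select_tree("multicast", b"group-239.1.1.1") in TREES
```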

Putting It All Together, Host A to Host B: (1) Broadcast ARP Request

- Host A (on S100) broadcasts an ARP request for Host B (on S200)
- S100 learns MAC A on e1/1 (local): MACs of directly connected devices are learned unconditionally
- S100 encapsulates the frame (DMAC FF, SMAC A) with a flood destination, Ftag 1, and source Switch ID 100, and sends it along multidestination Tree 1 (rooted at S10)
- Each switch forwards on its Tree 1 interfaces (S100: L1-L4; S10: L1, L5, L9; S200: L9)
- S200 decapsulates and floods the ARP request to Host B, but does not learn MAC A: MACs are not learned from flood frames

FabricPath MAC table on S100: A -> e1/1 (local). FabricPath MAC table on S200: empty.

Putting It All Together, Host A to Host B: (2) Unicast ARP Reply

- Host B replies with a unicast ARP reply (DMAC A, SMAC B)
- S200 learns MAC B on e12/2 (local)
- MAC A is unknown on S200, so the reply is flooded as unknown unicast along Tree 1, with Ftag 1 and source Switch ID 200
- When the frame reaches S100, the DMAC (A) is a known local MAC, so S100 learns the remote source MAC: if the DMAC is known, learn the remote MAC

FabricPath MAC table on S100 afterwards:
    A -> e1/1 (local)
    B -> S200 (remote)

Putting It All Together, Host A to Host B: (3) Unicast Data

- Host A sends unicast data to MAC B
- S100 looks up MAC B -> S200 in its MAC table and imposes the FabricPath header: DSID 200, Ftag 1, SSID 100
- The ECMP hash selects one of the four equal-cost paths to S200 (here, via S30)
- S30 routes purely on the destination Switch ID (S200 -> L11); no MAC learning or lookup in the core
- S200 strips the FabricPath header and delivers the frame to Host B on e12/2

FabricPath MAC table on S200:
    A -> S100 (remote)
    B -> e12/2 (local)

Loop Mitigation with FabricPath

Minimize the impact of transient loops with TTL and RPF check.

With STP:
- Redundant paths are blocked to ensure a loop-free topology
- If STP fails, frames loop indefinitely
- Flooding can then result in a complete network meltdown

With FabricPath:
- TTL is part of the FabricPath header and is decremented by 1 at each hop
- Frames are discarded when TTL reaches 0
- An RPF check, based on tree information, protects multicast
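The TTL behavior is easy to demonstrate: a frame caught in a transient loop is forwarded a bounded number of times instead of circulating forever. A minimal sketch:

```python
# Sketch: TTL-based loop mitigation. The frame walks a (possibly looping)
# sequence of switches; TTL is decremented at each hop and the frame is
# discarded the moment it reaches 0.
def deliver(frame_ttl, path):
    hops = 0
    for switch in path:
        if frame_ttl == 0:
            break                 # TTL exhausted: frame discarded here
        frame_ttl -= 1            # decremented at each switch hop
        hops += 1
    return hops

loop = ["S1", "S2", "S10"] * 100  # a transient forwarding loop
assert deliver(32, loop) == 32    # discarded after 32 hops, not 300
assert deliver(63, loop) == 63    # the 6-bit TTL caps any path at 63 hops
```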

VLAN Pruning in the L2 Fabric

- Switches advertise their locally interested VLANs to the rest of the L2 fabric
- Broadcast traffic for a VLAN is sent only to the switches that have requested it

(Figure: on the shared broadcast tree, a switch carrying VLANs 10, 20, and 30 receives all three, while switches interested only in VLAN 10, 20, or 30 each receive just their own)
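The pruning decision reduces to a per-VLAN membership check on the shared tree. A minimal sketch, with a hypothetical interest table:

```python
# Sketch: VLAN pruning on the shared broadcast tree. Each switch advertises
# its locally interested VLANs; a broadcast in VLAN v is delivered only to
# the switches that requested v (the ingress switch excluded).
interest = {"S100": {10, 20}, "S101": {20}, "S200": {10, 30}}

def broadcast_receivers(vlan, ingress):
    return {sw for sw, vlans in interest.items()
            if vlan in vlans and sw != ingress}

assert broadcast_receivers(10, "S100") == {"S200"}
assert broadcast_receivers(20, "S100") == {"S101"}
```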

STP Interaction

- The L2 fabric is presented as a single bridge to all connected CE devices
- The L2 fabric should be the root for all connected STP domains; CE ports are put into blocking state when a superior BPDU is received (rootguard)
- No BPDUs are forwarded across the fabric; they are terminated on CE ports

(Figure: FabricPath (L2 IS-IS) in the middle, with separate classical Ethernet STP domains 1 and 2 hanging off CE ports)

vPC Enhancement for FabricPath: vPC+

- At the L2 fabric edge, vPC is still required to provide active/active L2 paths for dual-homed CE devices or clouds
- However, the MAC table allows only a one-to-one mapping between a MAC and a Switch ID: with plain vPC, remote switch S3 sees Host A's frames sourced alternately from S1 and S2 and cannot pin MAC A to a single Switch ID
- With vPC+, each vPC domain is represented to the rest of the L2 fabric by a unique virtual switch (S4 in the figure)
- The virtual switch's Switch ID is then used as the source in the FabricPath encapsulation, so S3's MAC table stably maps A -> S4

Connecting L3 or Services to the L2 Fabric

- FabricPath enables multipathing for bridged traffic
- However, an FHRP allows only one active gateway per host, which prevents traffic that needs to be routed from taking advantage of multipathing
- vPC+ provides an active/active data plane for FabricPath with no change to the existing FHRP
- This allows multipathing even for routed traffic; the same feature can be leveraged by service nodes as well

(Figure: left, FHRP with a single active gateway at the L2/L3 boundary; right, vPC+ with both gateways actively sharing the virtual MAC)

vPC+ Operation and Requirements

- vPC+ allows dual-homed connections from edge ports into the FabricPath domain with active/active forwarding: a CE switch, a Layer 3 router, a dual-homed server, etc.
- vPC+ requires F1 modules with FabricPath enabled in the VDC; the peer link and all vPC+ connections must be on F1 ports
- vPC+ creates a virtual FabricPath switch for each vPC+-attached device, allowing load balancing within the FabricPath domain
- Logically, virtual switch S4 becomes the next hop for Host A in the FabricPath domain: Host A -> S4 -> L1, L2

vPC+ Physical Topology

- A peer link and a peer keepalive (PKA) link are required; the peer link runs as a FabricPath core port
- vPCs are configured as normal; vPC VLANs must be FabricPath VLANs
- No requirements on attached devices other than port-channel support

(Figure: S100 and S200 form a vPC+ pair below spines S10-S40, with hosts A, B, and C attached beneath them)

vPC+ Logical Topology

A virtual switch (S1000) is introduced to represent the vPC+ pair S100/S200 to the rest of the fabric; vPC+-attached MACs are associated with S1000 rather than with either physical peer.

Remote MAC Entries for vPC+

    S200# sh mac address-table dynamic
    Legend:
            * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
            age - seconds since last seen,+ - primary entry using vPC Peer-Link
       VLAN     MAC Address      Type      age     Secure NTFY    Ports/SWID.SSID.LID
    ---------+-----------------+--------+---------+------+----+------------------
    * 10       0000.0000.000c   dynamic   1500       F     F     Eth1/30
      10       0000.0000.000a   dynamic   1500       F     F     1000.11.4513
    S200#

MAC C (0000.0000.000c), single-attached on Eth1/30, is a local entry; MAC A (0000.0000.000a), attached via vPC+, is learned against virtual switch 1000 (SWID.SSID.LID 1000.11.4513) rather than against a physical peer.

FabricPath Routing for vPC+

    S200# sh fabricpath route topology 0 switchid 1000
    FabricPath Unicast Route Table
    'a/b/c' denotes ftag/switch-id/subswitch-id
    '[x/y]' denotes [admin distance/metric]
    ftag 0 is local ftag
    subswitch-id 0 is default subswitch-id

    FabricPath Unicast Route Table for Topology-Default

    1/1000/0, number of next-hops: 2
        via Po1, [115/10], 0 day/s 01:09:56, isis_l2mp-default
        via Po2, [115/10], 0 day/s 01:09:56, isis_l2mp-default
    S200#

Virtual switch 1000 is advertised by both vPC+ peers, so S200 installs two equal-cost next hops (Po1 and Po2) toward it.

vPC+ and Active/Active HSRP

- With vPC+ and SVIs in a mixed chassis, HSRP hellos are sent with the vPC+ virtual Switch ID
- FabricPath edge switches therefore learn the HSRP MAC as reachable through the virtual switch
- Traffic destined to the HSRP MAC can leverage ECMP where available
- Either vPC+ peer can route traffic destined to the HSRP MAC

(Figure: HSRP active and standby SVIs on the vPC+ pair behind virtual switch S1000; hellos carry SSID 1000 and the HSRP virtual MAC)

HSRP MAC on Edge Switches

    S200# sh mac address-table dynamic address 0000.0c07.ac0a
    Legend:
            * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
            age - seconds since last seen,+ - primary entry using vPC Peer-Link
       VLAN     MAC Address      Type      age     Secure NTFY    Ports/SWID.SSID.LID
    ---------+-----------------+--------+---------+------+----+------------------
      10       0000.0c07.ac0a   dynamic   0          F     F     1000.0.1054
    S200#

The HSRP virtual MAC (0000.0c07.ac0a) is learned against virtual switch 1000, so routed traffic can be hashed across both vPC+ peers.

Edge Device Integration

- Hosts can leverage multiple L3 default gateways
- Hosts see a single default gateway; the fabric transparently provides them with multiple simultaneously active gateways
- This extends multipathing from inside the fabric to the L3 domain outside the fabric

Layer 3 Integration

SVIs anywhere:
- The fabric provides seamless L3 integration
- An arbitrary number of routed interfaces can be created at the edge or within the fabric
- Attached L3 devices can peer with those interfaces
- The hardware is capable of handling millions of routes

Integrating L3 with FabricPath

Alternatives for N-way Layer 3 egress:
- Various alternatives exist, depending on FHRP preference and the location of the L2/L3 boundary
- FHRP options: HSRP/VRRP, GLBP
- L2/L3 boundary: internal or external routers

Alternatives for N-Way Layer 3 Egress: VLAN Splitting with Active/Active HSRP in vPC+

- Leverages the benefit of vPC+ active/active HSRP
- Two vPC+ pairs: S1/S2 run active/active HSRP for VLANs X (gateway MAC X), and S3/S4 for VLANs Y (gateway MAC Y)
- Each router still has an interface in all VLANs, but does not run HSRP for the other pair's VLANs
- Does require peer link / peer keepalive and a mixed chassis
- Result: VLANs x resolve gateway MAC X via L1, L2; VLANs y resolve gateway MAC Y via L3, L4

Alternatives for N-Way Layer 3 Egress: GLBP with FabricPath (Internal Routers)

- A single virtual IP with multiple virtual MACs (up to four)
- Load sharing toward the exit points is based on which virtual MAC each server learns through ARP
- SVIs on S1-S4 share gateway IP X with gateway MACs A-D; the fabric maps MAC A -> L1, B -> L2, C -> L3, D -> L4

Alternatives for N-Way Layer 3 Egress: GLBP with FabricPath (External Routers)

- Same GLBP approach, with the gateways on external routers instead of SVIs
- Provides more FabricPath port density
- Gateway IP X is answered by gateway MACs A-D, reached via L1-L4 respectively

Alternatives for N-Way Layer 3 Egress: MHSRP with FabricPath

- More complex configuration and requires DHCP changes, but can scale beyond four active forwarders
- Four HSRP groups, each active (a) on one router and standby (s) on a neighbor: S1 active for gateway IP W, S2 for X, S3 for Y, S4 for Z
- For VLAN n: gateway MAC W -> L1, X -> L2, Y -> L3, Z -> L4

Alternatives for N-Way Layer 3 Egress


VLAN Splitting with HSRP
Splitting by VLAN (avoids the DHCP challenge of MHSRP)
Each router still has an interface in all VLANs, but no HSRP (or HSRP in listen mode)
Active VLANs W
Standby VLANs Z

FabricPath CE

L3
Active VLANs Y
Standby VLANs X

Active VLANs X
Standby VLANs W

Active VLANs Z
Standby VLANs Y

GWY MAC W

GWY MAC X

GWY MAC Y

GWY MAC Z

HSRP

S1

S2

S3

S4

L1

L2

L3 L4

VLANs w: GWY MAC W -> L1; VLANs x: GWY MAC X -> L2; VLANs y: GWY MAC Y -> L3; VLANs z: GWY MAC Z -> L4

FabricPath Configuration
No L2 IS-IS configuration required
New feature-set keyword allows multiple conditional services required by FabricPath (e.g. L2 IS-IS, LLDP, etc.) to be enabled in one shot
Simplified operational model: only 3 CLIs to get FabricPath up and running

N7K(config)# feature-set fabricpath
N7K(config)# vlan 10-19
N7K(config-vlan)# mode fabricpath
N7K(config)# interface port-channel 1
N7K(config-if)# switchport mode fabricpath
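Once those steps are in place, the fabric state can be verified with the FabricPath show commands (output varies with the topology):

```
N7K# show fabricpath isis adjacency   ! L2 IS-IS neighbors on fabric ports
N7K# show fabricpath switch-id        ! local and learned switch-ids
N7K# show fabricpath route            ! unicast routes toward remote switch-ids
```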

FabricPath Port   CE Port

FabricPath comparison
Criterion                                 | Transparent Bridging | vPC                      | FabricPath | IP Routing
Control protocol                          | Spanning Tree        | Spanning Tree            | IS-IS      | IS-IS/EIGRP/OSPF etc.
Default forwarding behavior               | Flood                | Flood                    | Drop       | Drop
Data plane loop protection                | None                 | None                     | RPFC, TTL  | RPFC, TTL
Frames/packets forwarded along the shortest path | No            | Yes (limited topologies) | Yes        | Yes
Multiple paths between nodes              | No                   | Yes (limited topologies) | Yes, ECMP  | Yes, ECMP
Transparent to IP and other L3 protocols  | Yes                  | Yes                      | Yes        | No
Configuration-less addressing             | Yes                  | Yes                      | Yes        | No

Cisco FabricPath Feature Set


Value-Add Enhancements
16-way Equal-Cost Multipathing (ECMP) at Layer 2, up to 16 ways
FabricPath header: hierarchical addressing with built-in loop mitigation (RPF check, TTL)
Conversational MAC learning: efficient use of hardware resources by learning only MACs for interested hosts
Interoperability with existing classic Ethernet networks: vPC+ allows vPC attachment into an L2 fabric; STP boundary termination
Multi-topology, providing traffic engineering capabilities



TRILL Standardizing Multi-pathing


IETF RFC 5556 defines Transparent Interconnection of Lots of Links (TRILL)
TRILL is a standards-based implementation of Layer 2 multipathing
Many similarities exist between Cisco's current implementation and TRILL
TRILL HW frame format finalized; final control plane (SW implementation) to be standardized by end of the year

IETF standard for Layer 2 multipathing, driven by multiple vendors, including Cisco
Base protocol RFC ready for standardization, but waiting on dependent standards
Control-plane protocol RFCs still in process
Target for standard completion is early CY2011
http://datatracker.ietf.org/wg/trill/

What Is the Relationship between FabricPath and TRILL?


FabricPath and TRILL are a set of Layer 2 multipathing technologies
FabricPath's initial release runs in a Native mode that is Cisco-specific, using proprietary encapsulation and control-plane elements
Nexus 7000 F1 I/O modules and Nexus 5500 hardware are capable of running both FabricPath and TRILL modes


FabricPath & TRILL Feature Summary


FabricPath is a superset of TRILL:

Feature                                   | TRILL                    | FabricPath
L2MP frame routing (ECMP, TTL, RPFC etc.) | Yes                      | Yes
vPC+                                      | No                       | Yes
FHRP active/active                        | No                       | Yes
Multiple topologies                       | No                       | Yes
Conversational learning                   | No                       | Yes
Inter-switch links                        | Point-to-point or shared | Point-to-point only

Base protocol specification is now a proposed IETF standard (March 2010)
Control plane specification will become a proposed standard within months



FabricPath Design Guidance


Industry has converged on a handful of well-understood designs/network topologies
Largely driven by constraints of STP and density limits of switches

Designs will necessarily evolve


Not only what can/cannot be built today versus in future, but how people think about L2 designs in general


Scaling Bandwidth with FabricPath


Example: 2,048 x 10GE Server Design
16X improvement in bandwidth performance
From 74 managed devices to 12 devices: simplified IT operations
2X+ increase in network availability

Traditional Spanning Tree Based Network: blocked links, 16:1 / 8:1 oversubscription, 64 access switches, 2,048 servers
FabricPath Based Network: fully non-blocking network fabric, 2:1 oversubscription, 4 pods, 8 access switches, 2,048 servers

Use Case: High Performance Compute


Building Large Scalable Compute Clusters
Spine Switch
16 Chassis

8,192 10GE ports 512 10GE FabricPath ports per system


16-port Etherchannel 16-way ECMP

Edge Switch

32 Chassis

256 10GE FabricPath Ports Open I/O Slots for connectivity

160 Tbps System Bandwidth

HPC Requirements:
HPC clusters require high density of compute nodes
Minimal over-subscription
Low server-to-server latency

FabricPath Benefits for HPC:
FabricPath enables building a high-density fat-tree network
Fully non-blocking with FabricPath ECMP & port-channels
Minimize switch hops to reduce server-to-server latencies


Workload Flexibility with FabricPath


Example: Removing Data Center Silos
Single domain: increased agility
Responsive: pooled compute resources; virtualized applications move within minutes vs. days; seamless data-center-wide workload mobility
Capex and Opex savings: maximize resource utilization, simplify IT operations
From multi-domain, silo'd to a single domain: any app, anywhere!


Use Case: L2 Internet Exchange Point


IXP Requirements
Provider A Provider B

Layer 2 peering enables multiple providers to peer their Internet routers with one another
10GE non-blocking fabric
Scale to thousands of ports

FabricPath Benefits for IXP:
Transparent Layer 2 fabric, no STP at core, simple to manage
Scalable to thousands of ports
Bandwidth not limited by chassis / port-channel limitations
N+1 redundancy in distribution
Large bisectional bandwidth at distribution

Provider C

Provider D


Classical POD with FabricPath


FabricPath vs. vPC/STP
Simple configuration (no peer link, no pair of switches, no port channels)

L3
L3 Core

Total flexibility in design and cabling
Seamless L3 integration
No STP, no traditional bridging (no topology changes, no sync to worry about, no risk of loops)
Scale MAC address tables with conversational learning
Unlimited bandwidth, even if hosts are single-attached

FabricPath POD

vPC POD

Can extend easily and without operational impact


FabricPath Core
Efficient POD Interconnect

L3
L2+L3 FabricPath Core

FabricPath in the Core: VLANs can terminate at the distribution or extend between PODs. Because STP is not extended between PODs, remote PODs or even remote data centers can be aggregated.

vPC+ POD

vPC+ POD

Bandwidth or scale can be introduced in a non-disruptive way


Combining FabricPath PODs and Core


Allows Tier Consolidation
L3 L3 1
L2+L3 FabricPath

1
FabricPath

2 2 3 3

L3

FabricPath


FabricPath at the Edge

A: 1/10G connectivity to Nexus 7000
B: 1/10G connectivity to Fabric Extender attached to Nexus 7000
C: 1/10G connectivity to Nexus 5500
D: 1/10G connectivity to Fabric Extender attached to Nexus 5500
E


Migration of Existing Designs


Emphasis on preserving existing topologies without major disruption
Evolution rather than revolution in the existing DC network
Assumes the DC isn't pure Nexus
Phases:
1. Integrate Nexus 7000 with F1 modules into existing aggregation
2. Migrate to vPC+
3. Migrate access devices to FabricPath
4. Interconnect FabricPath pods
5. Pod scale-out


Migration Phases
Simple Integration of Classical Ethernet

FabricPath

vPC+

7K access
Cairo

CE access 7K or 5K access + FEX


Cairo (maint) End CY2010 Radar

Only the core of the network needs to be running L2MP



Fabric Module Integration


Adding F1 modules to agg (either as part of Catalyst 6500 to Nexus 7000 migration, or adding F1 cards into an agg that already has M1 modules)
Uplinks are on M1 modules (L3 links to core); downlinks on F1 modules (L2 agg to access)
Uses standard vPC with peer link in CE mode, providing active/active HSRP forwarding at the agg layer
Access could be anything: 7k, 6k, 5k, 5k+FEX, or any other box
Motivations: minimize STP; use high-density, low-cost F1 modules at the aggregation layer
Understand East-West capacity requirements (160G proxy L3 per agg switch in 5.1); North-South bandwidth is already limited by uplink capacity

L3

Uplinks on M1 modules Active/Active HSRP for VLANs 100-199

Active/Active HSRP for VLANs 200-299

L3 Active/Active HSRP for VLANs 300-399 VPC Peer link runs in CE mode CE VPC VPC

Downlinks on F1 modules

160G proxy L3 per switch

Pod 1 VLANs 100-199


Pod 2 VLANs 200-299


Pod 3 VLANs 300-399
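The standard vPC arrangement in this phase can be sketched as follows (the domain number, keepalive peer, and port-channel numbers are illustrative):

```
feature vpc
vpc domain 10
  peer-keepalive destination 10.0.0.2
interface port-channel 1
  switchport mode trunk
  vpc peer-link              ! peer link stays in CE mode in this phase
interface port-channel 20
  switchport mode trunk
  vpc 20                     ! double-sided vPC toward an access pair
```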


VPC+ in Localized Pods


Only change here is the migration from vPC to vPC+, in preparation for adding FabricPath devices in the access, combined with vPC+-attached legacy CE devices
Motivations: prepare for scale-out and VLAN-anywhere while preserving investment in STP devices
Note that the change from vPC to vPC+ is disruptive

L3

Active/Active HSRP for VLANs 200-299

L3 Active/Active HSRP for VLANs 100-199 VPC+ Peer link runs in FabricPath mode CE CE VPC+ VPC Active/Active HSRP for VLANs 300-399

Pod 1 VLANs 100-199

Pod 2 VLANs 200-299

Pod 3 VLANs 300-399
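The (disruptive) delta from vPC to vPC+ is small; a sketch with an illustrative emulated switch-id:

```
feature-set fabricpath
vlan 100-199
  mode fabricpath               ! carried VLANs become FabricPath VLANs
vpc domain 10
  fabricpath switch-id 1000     ! emulated switch-id shared by both peers = vPC+
interface port-channel 1
  switchport mode fabricpath    ! peer link now runs FabricPath encapsulation
```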


Migrating to FabricPath Pods


Migrate all or part of each pod to FabricPath
Keep vPC+ to provide active/active HSRP
Motivations: prepare for scale-out and VLAN-anywhere

L3
Active/Active HSRP for VLANs 200-299

L3 Active/Active HSRP for VLANs 100-199 VPC+ Keep VPC+ for active/active forwarding FabricPath VPC+ VPC Active/Active HSRP for VLANs 300-399

Pod 1 VLANs 100-199 (FabricPath here assumes Nexus 5500)


Pod 2 VLANs 200-299 (leverage vPC+ for existing Nexus 5000)


Pod 3 VLANs 300-399


Meshed Aggregation Layer


Backbone/mesh agg-layer connections provide VLAN-anywhere capability among connected FabricPath pods
Still have Layer 3 VLAN affinity at the pod level: HSRP for a particular VLAN only lives in one pod
Motivations: consolidation; VLAN anywhere within the FabricPath network
Number of pods you can combine is limited by the ability to fully mesh aggregation switches
Reduced cabling burden vs. direct access connect, but has gateway and scale limits

L3
Active/Active HSRP for VLANs 200-299

Active/Active HSRP for VLANs 100-199 VPC+ FabricPath

L3 Active/Active HSRP for VLANs 300-399 VPC+ VPC

Pod 1 VLANs 100-299


Pod 2 VLANs 100-299

Pod 3 VLANs 300-399


Affinity for 100-199
Affinity for 200-299

Parallel FabricPath Core


Meshed agg model becomes overly complex after a certain point
Add a FabricPath core parallel to the L3 core to interconnect FabricPath pods
Active/Active HSRP for VLANs 200-299

Motivations: consolidation and whole-network scale
Removes access-connection and aggregation-mesh limitations

L3

FabricPath Core

L3 Active/Active HSRP for VLANs 100-199 VPC+ FabricPath VPC+ VPC Active/Active HSRP for VLANs 300-399

Pod 1 VLANs 100-299

Pod 2 VLANs 100-299

Pod 3 VLANs 300-399

Affinity for 100-199


Affinity for 200-299



Parallel FabricPath Core with VDCs


Layer 3 Core VDC FabricPath Core VDC FabricPath Core VDC

L3

Layer 3 Core VDC

L3

Active/Active HSRP for VLANs 200-299

Exact same model as prior slide but with VDCs instead of separate physical switches

L3 Active/Active HSRP for VLANs 100-199 VPC+ FabricPath VPC+ VPC

Active/Active HSRP for VLANs 300-399

Pod 1 VLANs 100-299

Pod 2 VLANs 100-299

Pod 3 VLANs 300-399

Affinity for 100-199


Affinity for 200-299



Pod Build-Out with Parallel FabricPath Core


Add additional capacity in each pod using more agg switches
Not all aggs necessarily have to connect to the FabricPath or L3 core
Motivations: consolidation and per-pod scale
Requires n-way FHRP

L3

FabricPath Core

N-Way Active FHRP for VLANs 100-299 L3 Active/Active HSRP for VLANs 300-399 VPC

FabricPath Pod 1 VLANs 100-299

Pod 2 VLANs 100-299

Pod 3 VLANs 300-399


FabricPath Core with L3 Access


Scales L3 at the edge
Can extend VLANs through the FabricPath backbone (no hard requirement to terminate L3 at edge vPC+ peers)
VLANs still have affinity to an L3 access pair
FabricPath   CE

L3
L3 Egress 1 L3 Egress 2 L3 Egress 3
OSPF OSPF etc. OSPF etc.

L3 Egress 4

S1

S2

S3

S4

Can extend some or all VLANs into FabricPath core


SVI
Active

SVI

VPC+
HSRP

SVI
Standby

VPC+
HSRP

SVI SVI
Active

VPC+
HSRP

SVI
Standby

Requires FabricPath and L3 support on 5500

FabricPath Core with L3 Access


Scales L3 at the edge
Can extend VLANs through the FabricPath backbone (no hard requirement to terminate L3 at edge vPC+ peers)
VLANs still have affinity to an L3 access pair
FP extended to core
OSPF OSPF etc. OSPF etc.

FabricPath

L3
SVI L3 Egress 3 SVI

CE

L3 Egress 1

S1

S2

S3

S4

Can extend some or all VLANs into FabricPath core

SVI
Active

SVI

VPC+
HSRP

SVI
Standby

VPC+
HSRP

SVI SVI
Active

VPC+
HSRP

SVI
Standby

Requires FabricPath and L3 support on 5500


Troubleshooting FabricPath
Improved Visibility for Layer 2 Evolution

Leverage the same tooling for L3 technologies


Routing table Link-state database Distribution trees ECMP path selection

Pong L2 Ping + Traceroute


Provide info on all devices on a given path in L2 Fabric Check on link health

Performance Profiling across FabricPath


Through IEEE 1588 timestamp and pong to help estimate average end-to-end latency

show mac address-table dynamic


S100# sh mac address-table dynamic
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen, + - primary entry using vPC Peer-Link
   VLAN     MAC Address      Type      age    Secure NTFY  Ports/SWID.SSID.LID
---------+-----------------+---------+-------+------+----+--------------------
*  10      0000.0000.0001    dynamic   0       F      F    Eth1/15
*  10      0000.0000.0002    dynamic   0       F      F    Eth1/15
*  10      0000.0000.0003    dynamic   0       F      F    Eth1/15
*  10      0000.0000.0004    dynamic   0       F      F    Eth1/15
*  10      0000.0000.0005    dynamic   0       F      F    Eth1/15
*  10      0000.0000.0006    dynamic   0       F      F    Eth1/15
*  10      0000.0000.0007    dynamic   0       F      F    Eth1/15
*  10      0000.0000.0008    dynamic   0       F      F    Eth1/15
*  10      0000.0000.0009    dynamic   0       F      F    Eth1/15
*  10      0000.0000.000a    dynamic   0       F      F    Eth1/15
   10      0000.0000.000b    dynamic   0       F      F    200.0.30
   10      0000.0000.000c    dynamic   0       F      F    200.0.30
   10      0000.0000.000d    dynamic   0       F      F    200.0.30
   10      0000.0000.000e    dynamic   0       F      F    200.0.30
   10      0000.0000.000f    dynamic   0       F      F    200.0.30
   10      0000.0000.0010    dynamic   0       F      F    200.0.30
   10      0000.0000.0011    dynamic   0       F      F    200.0.30
   10      0000.0000.0012    dynamic   0       F      F    200.0.30
   10      0000.0000.0013    dynamic   0       F      F    200.0.30
   10      0000.0000.0014    dynamic   0       F      F    200.0.30
S100#
S100 S200

Local mac

S10

S20

S30

S40

po1 po2 po3 po4


show fabricpath route


S10 S20 S30 S40

po1 po2 po3 po4

S100

S200

S100# sh fabricpath route
FabricPath Unicast Route Table
'a/b/c' denotes ftag/switch-id/subswitch-id
'[x/y]' denotes [admin distance/metric]
ftag 0 is local ftag
subswitch-id 0 is default subswitch-id

FabricPath Unicast Route Table for Topology-Default

0/100/0, number of next-hops: 0
        via ---- , [60/0], 5 day/s 18:38:46, local
1/10/0, number of next-hops: 1
        via Po1, [115/10], 0 day/s 04:15:58, isis_l2mp-default
1/20/0, number of next-hops: 1
        via Po2, [115/10], 0 day/s 04:16:05, isis_l2mp-default
1/30/0, number of next-hops: 1
        via Po3, [115/10], 2 day/s 08:49:51, isis_l2mp-default
1/40/0, number of next-hops: 1
        via Po4, [115/10], 2 day/s 08:47:56, isis_l2mp-default
1/200/0, number of next-hops: 4
        via Po1, [115/20], 0 day/s 04:15:58, isis_l2mp-default
        via Po2, [115/20], 0 day/s 04:15:58, isis_l2mp-default
        via Po3, [115/20], 2 day/s 08:49:51, isis_l2mp-default
        via Po4, [115/20], 2 day/s 08:47:56, isis_l2mp-default
S100#

Topology ID: 0; Switch ID: 100; Subswitch ID: 0 (used for vPC+)


FabricPath: In Control with DCNM


Abstracted Fabric View
Identify fabric hot-spots FabricPath state awareness

Traffic Monitoring
Frames distribution visibility Threshold crossing alerts for bandwidth management
Up to 16-Way L2 ECMP

Troubleshooting
Visualize unicast, multicast and broadcast paths Check reachability between source and destination nodes

Configuration Expert
Manage FabricPath topologies with Wizard tools Simplify fine-tuning FabricPath



FabricPath is Simple
FabricPath is Simple
No L2 IS-IS configuration required
Single control protocol for unicast, multicast, and VLAN pruning

N7K(config)# feature-set fabricpath
N7K(config)# fabricpath switch-id <#>
N7K(config)# interface ethernet 1/1
N7K(config-if)# switchport mode fabricpath
1/1

FabricPath Port CE Port


FabricPath is Efficient & Resilient


Shortest path, Multi-Pathing, High-availability
Shortest path for low latency
Up to 256 links active between any 2 nodes
Multipathing over all links increases availability
High availability with N+1 path redundancy
Enhanced redundancy models
No STP
Fast convergence
S1   S2

S3

S4

FabricPath Routing Table


Switch S42 IF L1, L2, L3, L4
S11

L1 L2 L3 L4 S12 S42

A
B
FabricPath is Scalable
Safe Data Plane, Conversational learning
TTL and RPF check in the data plane protect against loops
L2 can be extended in the data center (while STP is segmented)

Conversational learning allows scaling mac address tables at the edge


S11 S22 S42

AB

AB

Classical Ethernet MAC Address Table on S11:
MAC A -> IF 1/1
MAC B -> S42

Classical Ethernet MAC Address Table on S22:
(empty)

Classical Ethernet MAC Address Table on S42:
MAC A -> S11
MAC B -> IF 1/1


Key Takeaways
FabricPath enables network fabric scalability, flexibility, availability and resiliency
Innovations in FabricPath will change long-standing Layer 2 networking design paradigms
FabricPath will evolve going forward: hardware, software, and design options will only increase flexibility and scale
Available Nexus hardware has both FabricPath and TRILL capability

