
Deployment Considerations with Interconnecting Data Centers

BRKDCT-3060
Reference Sessions

 This session is a companion to


BRKDCT-2840 - Data Center Networking: Taking Risk Away
from Layer 2 Interconnects

 Other relevant sessions:


LTRDCT-2008 - Deploying Overlay Transport Virtualization
BRKDCT-2011 - Design and Deployment of Data Center
Interconnects using Advanced-VPLS (A-VPLS)
BRKDCT-2049 - Overlay Transport Virtualization

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 2
Agenda

 DCI Business Drivers and Solutions Overview


 LAN Extension Deployment Scenarios
Ethernet Based Solutions
MPLS Based Solutions
IP Based Solutions

 Path Optimization
 Q&A

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 3
Data Center Interconnect
Business Drivers
LAN Extension
 Data Centers are extending beyond traditional boundaries
 Virtualization applications are driving DCI across PODs
(aggregation blocks) and Data Centers

Use Case | Business Driver | Constraints | IT Solutions
Business Continuity | Disaster Recovery / Avoidance | Stateless; Policy synch | VM
Business Resource Optimization | Data Center Maintenance / Migration / Consolidation | Flexibility | VM
Cloud services | Workload Mobility | Stateful; Bandwidth; Latency | VMotion
Operational cost containment | Application High Availability | Stateful; Latency | Geo clusters

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 4
Network HA & Applications HA
Implications of the Network Technology Used

(Timeline figure, 1998-2015: L3 switching, flat L2 extension for clusters, STP isolation, MAC routing / DCI overlay, and LISP for the cloud.)
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 5
Data Center Interconnect
Solution Requirements

DCI Attribute | Purpose
LAN Extension | Extend the same VLAN across Data Centers, to virtualize servers and applications
Storage Extension | Provide applications access to storage locally, as well as remotely, with the desirable storage attributes
Routing Optimization | Route users to the data center where the application resides, while keeping symmetrical routing in consideration for IP services (e.g. Firewall)
Application Mobility | Enablers to extend applications across data centers (e.g. VMware VMotion)

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 6
VLAN Extension
Key Technical Challenges
Technology challenge: L2 is weak; IP is not mobile

 L2 control-plane
  STP domain scalability
  STP fault domain isolation
  L2 gateway redundancy
 Inter-site transport
  Long-distance link protection with fast convergence
  Point-to-point & multipoint bridging
  Path diversity
  L2-based load repartition
  Optimized egress & ingress routing
  Extension over an IP cloud
  Multicast optimization
 L2 data-plane
  Bridging data-plane flooding & broadcast storm control
  Outbound MAC learning

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 7
Data Center Interconnect
VLAN Extension Model

(Diagram: a single STP domain stretched across DC1, DC2 and DC3.)

 Extend the VLAN across multiple sites
 Using native Ethernet or L2 over L3
 Point-to-point / multipoint
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 8
Data Center Interconnect
VLAN Extension Model
(Diagram: DC1, DC2 and DC3, each an independent STP domain, interconnected through DCI devices over an L3 core; BPDU filtering at the DCI boundary.)

With multiple sites, an L2 backdoor connection is unlikely.
In the DCI model, BPDU filtering is therefore recommended, allowing:
 STP scalability
 STP failure domain containment
 BPDU filtering is native with most multipoint solutions
(A configuration sketch follows below; the complete vPC example appears later in this session.)
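A minimal sketch of the DCI-facing interface settings implied above, assuming a Nexus 7000 style DCI device and hypothetical interface and VLAN numbers:

interface port-channel10
  description DCI-facing port-channel (hypothetical)
  switchport mode trunk
  switchport trunk allowed vlan 100-124
  ! isolate the local STP domain: no BPDUs sent toward or processed from remote sites
  spanning-tree port type edge trunk
  spanning-tree bpdufilter enable
  ! rate-limit flooded traffic crossing the DCI (example values)
  storm-control broadcast level 1.00
  storm-control multicast level 1.00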
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 9
Data Center Interconnect
VLAN Extension Model
(Diagram: same three-site topology; BPDU filtering plus storm-control policing at the DCI boundary.)

As any one site may fail and be subject to a broadcast storm:
 The other sites continue to work at the control plane
 But garbage traffic could cross over
 Storm-control policing at the data plane is therefore mandated
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 10
Data Center Interconnect
VLAN Extension Model
(Diagram: same three-site topology; BPDU filtering, storm-control policing and FHRP isolation at the DCI boundary; each site keeps a local active gateway (GW) with alternates (ALT).)

If having a local default gateway in each site is required, filter the FHRP protocol:
 for an optimum traffic exit
 to minimize traffic tromboning between sites
 easy to implement with a dedicated DCI device
(A filtering sketch follows below.)
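A minimal, hypothetical sketch of FHRP isolation on a Catalyst 6500 style DCI device, assuming HSRPv1 on the extended VLANs: a VACL drops HSRP hellos (UDP 1985 to 224.0.0.2) at the DCI boundary so each site keeps its own active gateway (the Nexus 7000 / OTV equivalent is shown later in this session):

ip access-list extended HSRP_HELLO
 permit udp any host 224.0.0.2 eq 1985
!
vlan access-map FHRP_ISOLATION 10
 match ip address HSRP_HELLO
 action drop
vlan access-map FHRP_ISOLATION 20
 action forward
!
! apply to the VLANs extended across the DCI (hypothetical VLAN list)
vlan filter FHRP_ISOLATION vlan-list 100-124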
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 11
VLAN Extension
Technology Selection Criteria
 VSS & vPC
• Applies easily for dual site interconnection
Ethernet • Over dark fiber or protected D-WDM
• Easy crypto using end-to-end 802.1AE

 EoMPLS & VPLS & A-VPLS & H-VPLS


• L2oL3 for link protection (Fast detection & convergence / Dampening)
• PE style
MPLS • Large scale
• Multi-tenants
• Works over GRE
• Most deployed today

 OTV
• L2oL3 for link protection (Fast detection & convergence / Dampening)
• CE style
IP • Enterprise / DC focus
• Easy integration over Core
• Works over MPLS transport
• Innovative MAC routing
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 12
VLAN Extension
Solution Alternatives
Transport Options | P2P extension | MAC Bridging | MAC routing
Ethernet | Cat 6500 VSS w DWDM optics; N7K vPC w Optical Device | Cat 6500 VSS HUB; N7K vPC HUB | N7K ("OTV"); TRILL (L2MP)
MPLS | ASR + Cat 6500 (EoMPLS) | Cat 6500 + C7600; CRS-1 + ASR9K (VPLS) | N7K ("OTV")
IP | ASR + Cat 6500 (EoMPLS over GRE) | Cat 6500 (VPLSoGRE); Cat 6500 ("A-VPLS") | N7K ("OTV")

VSS - Virtual Switching System, vPC - Virtual Port Channel, DWDM - Dense Wavelength Division Multiplexing
EoMPLS - Ethernet over MPLS, VPLS - Virtual Private LAN Service, OTV - Overlay Transport Virtualization
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 13
Agenda

 DCI Business Drivers and Solutions Overview


 LAN Extension Deployment Scenarios
Ethernet Based Solutions
MPLS Based Solutions
IP Based Solutions

 Path Optimization
 Q&A

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 14
Multi-Chassis EtherChannel (MEC)
Using Multi-Chassis Link Aggregation Control Protocol (mLACP)
Catalyst 6500 Nexus 7000

Si Si
L2
L2

Non-VSS VSS Non-VPC vPC

Virtual Switching System (VSS) Virtual Port Channel (vPC)

 Both VSS-MEC and vPC are a port-channeling concept, extending link aggregation to two separate physical switches
 Allows the creation of resilient L2 topologies based on link aggregation
 Eliminates the dependence on STP in the access-distribution layer
 Simplifies network design
 Scales the available Layer 2 bandwidth

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 15
Dual Sites Interconnection
Leveraging MECs between Sites

 2 Server PODs
 High link utilization with MEC
 New links for the POD interconnect
  – DCI port-channel: 2 links with VSS, 4 links with vPC (always dual-attach a device to a vPC domain)
  – 2 links for IP traffic (use separate L3 links)
 DC Core not necessary
 At the DCI point:
  • STP isolation (BPDU filtering)
  • Broadcast storm control
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 16
Configure a DCI port
Example using vPC on Nexus 7000
feature lacp
feature vpc
!
vrf context vpc-keepalive
!
vpc domain 5
  role priority 4000
  peer-keepalive destination 192.168.10.2 source 192.168.10.1 vrf vpc-keepalive
  delay restore 40
!
interface port-channel1
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 1,100-210
  spanning-tree port type network
  vpc peer-link
!
interface port-channel10
  description DCI point to point connection
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 100-124
  spanning-tree port type edge trunk
  spanning-tree bpdufilter enable
  storm-control broadcast level 1
  storm-control multicast level x
  vpc 10
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 17
L3 Routing Challenges with vPC
The IGP next-hop MAC address is learned over the vPC, but:
 No frame received on the vPC peer-link is forwarded out a vPC member port
 The frame is therefore not switched toward the L3 peer

Conclusions
 Currently no L3 peering should be established over a vPC
 vPC is used for the L2-to-L3 boundary or for L2 switching
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 18
Multi-Sites Interconnection
Leveraging an ‘Octopus’ Core Layer
It’s Really a Question of Scale and
Manageability

4 Server PODs with Core Tier


 Easy to add more PODs
 Fewer links in the core
 Easy bandwidth upgrade
 Switch peering complexity
reduced
 Predictable performance
throughput, latency,
convergence, etc..

At the DCI point:
 • STP isolation (BPDU filtering)
 • Broadcast storm control
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 19
Multi-Sites Interconnection Physical View
VSS and vPC over Dark Fiber
(Figure: DC1-DC4 interconnected over a DWDM core. The VSL / vPC peer-link and the inter-site MEC links each use separate lambdas; Catalyst 6500 VSS pairs and Nexus 7000 vPC pairs with SR optics sit at the aggregation layer above the access layer in each site.)
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 20
VSS / vPC Data-Center Interconnect
Scalability Validation Testing
 Public design guide
http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns949/ns304/ns975/data_center_i
nterconnect_design_guide.pdf
VSL or vPC peer link extended over 100 km of fiber
• Layer 2:
  200 Layer 2 VLANs + 100 VLAN SVIs
  10,000 client-to-server flows at 20 Gbps
• Layer 3:
  1000 BGP routes (also redistributed into OSPF) + 5000 OSPF routes
Results: L2/L3 unicast & multicast traffic protected on any failure in
  VSS = 2.2 s worst case
  vPC = 2.8 s worst case (some specific cases at 5 s)
  Storm control contained on the failing site

 More recent validation testing (not yet published) with NX7K V4.2.6 & Cat6K SXI:
  1200 VLANs + 1200 SVIs (static routing)
  6500 customer flows at 20 Gbps
  Unicast convergence around 4 to 5 s worst case
  Storm control contained on the failing site
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 21
Agenda

 DCI Business Drivers and Solutions Overview


 LAN Extension Deployment Scenarios
Ethernet Based Solutions
MPLS Based Solutions
EoMPLS
IP Based Solutions

 Path Optimization
 Q&A

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 22
Point to Point Topologies
What is EoMPLS Port Mode?

interface Ethernet1/1
description Link to Aggregation Layer
mtu 9216
no ip address
xconnect 15.0.5.1 2504 encapsulation mpls

 EoMPLS Pseudowires (PWs) logically “extend” physical connections


across a Layer 3 cloud
 Control plane and data plane traffic is tunneled across the PW
 “xconnect” configuration applied on the PE internal interfaces
(attachment circuits)

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 23
EoMPLS Port Mode
End-to-End Loop Avoidance using Edge to Edge LACP

interface port-channel70
 description L2 PortChannel to DC 2
 switchport mode trunk
 switchport trunk allowed vlan <VLAN_LIST>
 vpc 70
 mtu 9216

interface Ethernet1/1
 description PortChannel Member in aggregation layer
 channel-group 70 mode active

interface Ethernet2/2
 description PortChannel Member
 channel-group 70 mode active

 LACP (802.3ad) replaces STP as the control protocol
  Creation of end-to-end EtherChannels between remote devices
 Requires Multi-Chassis EtherChannel capable devices
  Nexus 7000 (vPC) or Catalyst 6500 (VSS)
 Per-flow load-balancing across EtherChannel links
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 24
EoMPLS Port Mode
STP Domains Isolation

interface port-channel70
description L2 PortChannel to DC 2
spanning-tree port type edge trunk
spanning-tree bpdufilter enable
storm-control broadcast level 1
storm-control multicast level x

 BPDU Filtering to maintain STP domains isolation


 Storm-control for data-plane protection
 Configuration applied at aggregation layer on the logical port-channel interface
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 25
Dealing with PseudoWire (PW) Failures
Remote Ethernet Port Shutdown

The PE receives the PW down notification and shuts down its transmit signal toward the aggregation layer

Active PW

X X X
MPLS Core

DCI DCI
Aggregation
Active PW Aggregation
Layer DC1 Layer DC2

ASR1000: native support (enabled by default)


Catalyst 6500: leverage a simple EEM script

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 26
Remote Ethernet Port Shutdown
ASR1000 feature configuration:
interface GigabitEthernet1/0/0
xconnect 1.1.1.1 1 pw-class eompls
remote link failure notification

EEM-based Approach with Catalyst 6500


xconnect logging pseudowire status
event manager applet EOMPLS_T1_1_PW_DOWN
event syslog pattern "%XCONNECT-5-PW_STATUS: MPLS peer 15.0.5.1 vcid 2504, VC DOWN, VC state DOWN"
action 1.0 cli command "enable"
action 2.0 cli command "conf t"
action 3.0 cli command "int t1/1"
action 4.0 cli command "shut"
action 5.0 cli command "no shut"
action 6.0 syslog msg "Pseudowire Down"

 Requires a separate EEM applet for each PW configured


 Automatic recovery of the traffic after PW re-establishment
(based on LACP negotiation)
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 27
EoMPLS Port Mode
Encryption Services with 802.1AE
Active PWs

MPLS Core

e1/1

DCI DCI
Aggregation
Active PWs Aggregation
Layer DC1 Layer DC2
= 802.1AE Configuration

interface Ethernet1/1
 description PortChannel Member
 cts manual
  no propagate-sgt
  sap pmk 1234000000000000…

 "Manual" 802.1AE configuration at the physical interface level
 Traffic encryption end-to-end (intra- and inter-data center)
 Requires the deployment of Nexus 7000 in the aggregation layer
 Note the link full-mesh to ensure vPC fast convergence
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 28
EoMPLS Port Mode
Inter-DC L3 Routing

 With VSS: use a dedicated VLAN for IP routing over the xconnected link
 With vPC: to work around the lack of support for L3 peering over a vPC:
  – Create one dedicated PW to establish end-to-end IGP adjacencies, transparent to the DCI and MPLS core devices (see the sketch below)
  – 802.1AE encryption for L3 traffic is also possible
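A minimal sketch of the vPC workaround, with hypothetical VLAN and addressing: the routed SVI rides a dedicated trunk link (carried over its own end-to-end pseudowire) that is not part of any vPC, so the IGP adjacency never forms over the vPC itself:

feature interface-vlan
vlan 99

interface Vlan99
  description inter-DC routing VLAN, carried over the dedicated PW
  ip address 10.99.0.1/30
  ip router ospf 1 area 0.0.0.0
  no shutdown

interface Ethernet1/10
  description trunk toward the DCI PE, outside the vPC
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 99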
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 29
EoMPLS Port Mode
Deployment over an IP core Transport

 EoMPLS PWs are established across the logical GRE tunnel


PE devices appear as connected back-to-back
 Maintain all the design principles and characteristics of native MPLS based
core deployments
Creation of end-to-end EtherChannels (LACP based)
Per-flow load-balancing
STP domain isolation (BPDU filtering configuration)
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 30
EoMPLSoGRE
Configuration

 Establish the GRE tunnel leveraging loopback interfaces as source/destination

interface Loopback100
 description GRE tunnel source
 ip address 12.11.11.11 255.255.255.255

interface Tunnel100
 ip address 100.11.11.11 255.255.255.0
 ip mtu 9192
 mpls ip
 tunnel source Loopback100
 tunnel destination 12.11.11.21

 Configure EoMPLS port mode on the PE internal interfaces

interface TenGigabitEthernet1/0/0
 mtu 9216
 no ip address
 xconnect 11.0.2.31 100 encapsulation mpls

 Configure static routing to ensure the PW is established across the GRE tunnel

ip route 11.0.2.31 255.255.255.255 Tunnel100

Recommended to increase the MTU to account for the increased IP packet size
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 31
EoMPLSoGRE
IPSec Based Encryption Services

 Native with ASR1000
 Requires SIP-400 with Catalyst 6500 (with a loopback cable for crypto)
 Tunnel protection is the recommended approach: applied directly to the GRE interface

crypto isakmp policy 10
 authentication pre-share
crypto isakmp key CISCO address 0.0.0.0 0.0.0.0
crypto ipsec transform-set MyTransSet esp-3des esp-sha-hmac
crypto ipsec fragmentation after-encryption
crypto ipsec profile MyProfile
 set transform-set MyTransSet

interface Tunnel100
 ip address 100.11.11.11 255.255.255.0
 ip mtu 9216
 mpls ip
 tunnel source Loopback100
 tunnel destination 12.11.11.21
 tunnel protection ipsec profile MyProfile
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 32
EoMPLS and EoMPLSoGRE
Guidelines
As EoMPLS is a point-to-point technology that uses LACP to ensure redundancy, the DCI architecture rules are identical to the D-WDM recommendations
 EoMPLS is only used to ensure HA transport
 Specific recommendations (a tuning sketch follows below):
 • Connect each aggregation layer device to both PEs deployed in the DCI layer, in a fully meshed fashion
 • Leverage a local MPLS-enabled L3 link to interconnect the PEs deployed in the same data center location
 • Modify the default carrier-delay settings on the ASR1000 interfaces facing the aggregation layer; the recommended value is 10 msec
 • Leverage loopback interfaces as source and destination points for establishing the logical GRE connections between remote PE devices
 • Tune the GRE keepalive timers aggressively (1 sec, 3 retries)
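A short sketch of the tuning called out in the list above, with hypothetical ASR1000 interface numbers:

interface TenGigabitEthernet0/0/0
 description facing the aggregation layer
 carrier-delay msec 10
!
interface Tunnel100
 ! GRE keepalive: 1-second period, 3 retries
 keepalive 1 3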

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 33
Agenda

 DCI Business Drivers and Solutions Overview


 LAN Extension Deployment Scenarios
Ethernet Based Solutions
MPLS Based Solutions
VPLS Options
IP Based Solutions

 Path Optimization
 Q&A

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 34
Multi-Points Topologies
What is VPLS?
PW
VFI
VLAN VLAN
MPLS
Core
SVI VFI SVI
PW
PW

 MAC address table population is pure learning-bridge
 One extended bridge domain is built using:
 • VFI = Virtual Forwarding Instance (VSI = Virtual Switch Instance)
 • PW = Pseudo-Wire
 • SVI = Switch Virtual Interface
 • xconnect VLAN
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 35
Multi-Points Topologies
DCI Functions with VPLS
PW
VFI
VLAN VLAN
MPLS
Core
SVI VFI SVI
PW
PW

 BPDUs are not transmitted by default
 Storm-control is applied on the ingress link
 FHRP isolation allows an active/active default gateway + localization
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 36
Agenda

 DCI Business Drivers and Solutions Overview


 LAN Extension Deployment Scenarios
Ethernet Based Solutions
MPLS Based Solutions
A-VPLS: Technology Overview
IP Based Solutions

 Path Optimization
 Q&A

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 37
A-VPLS (Advanced-VPLS)
Catalyst 6500 VSS technology

 Redundancy / Dual-Homing is done using VSS

 CLI simplification using “Virtual-Ethernet”

 Efficient load balancing using “Multi-Link PW” & “Fat-PW”

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 38
A-VPLS - Redundancy / Dual-Homing using VSS
Create a VSS System
Switch 1:
switch virtual domain 10
 switch mode virtual
interface Port-channel1
 no switchport
 no ip address
 switch virtual link 1

Switch 2:
switch virtual domain 10
 switch mode virtual
interface Port-channel2
 no switchport
 no ip address
 switch virtual link 2

interface Port-channel15
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 610-619
 switchport mode trunk

 With VSS, only one MSFC at a time owns the dual system
 Switching/routing paths are NSF/SSO protected
 Etherchannels are Multi-Chassis
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 39
A-VPLS - Redundancy / Dual-Homing using VSS
Enable MPLS on Core Links
interface Giga 1/3/0/1
ip address …
mpls ip
mtu 9216
interface Giga 2/3/0/0
ip address …
mpls ip
mtu 9216
interface Giga 2/3/0/1
ip address …
mpls ip
mtu 9216

If the link is shared between MPLS & IP:

no mpls ldp advertise-labels
mpls ldp advertise-labels for 1

access-list 1 permit 10.100.0.0 0.0.255.255

Recommended to increase the MTU to account for the increased IP packet size
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 40
A-VPLS - Redundancy / Dual-Homing using VSS
Enable A-VPLS
#sh mpls l2 vc

Local intf    Local circuit   Dest address   VC ID   Status
------------  --------------  -------------  ------  ------
VFI VFI_610_  VFI             10.100.2.2     610     UP
VFI VFI_610_  VFI             10.100.3.3     610     UP
VFI VFI_611_  VFI             10.100.2.2     611     UP
VFI VFI_611_  VFI             10.100.3.3     611     UP

Rem: one PW per VLAN per destination

interface Virtual-Ethernet1
 switchport
 switchport mode trunk
 switchport trunk allowed vlan 610-619
 neighbor 10.100.2.2 pw-class Core
 neighbor 10.100.3.3 pw-class Core

pseudowire-class Core
 encapsulation mpls

Any card type facing the edge (SUP-720)
Q2CY10: requires SIP-400 facing the core (6 Gbps)
Q1CY11: new ES-40 (40 Gbps)
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 41
A-VPLS – Label Paths
Load Balancing: Three Mechanisms

A single PW over multiple ECMP links; three mechanisms:

 FAT-PW:
 • Flow-based label

 ML-PW:
 • Multi-Link Pseudo-Wire
 • Balances ECMP links within the SIP-400

 Etherchannel:
 • RBH (Result Bundle Hash)
 • Polarization
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 42
Multi Link Pseudo-Wires
Logically Bundled Links

(Figure: SIP-400 packet forwarding logic, shown for two line cards. The LTL memory selects the channel-group; the RBH (Result Bundle Hash) computed from <SA, DA> and the VLAN from the DBUS selects one of the logical paths (Path 1 / Path 2) before the GRE/IP and VLAN headers are imposed on the Ethernet payload.)

Useful commands:
 attach switch 1 module 3
 show platform atom ether-vc
 show platform np vpls
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 43
Flow Aware Transport of Pseudowires
draft-ietf-pwe3-fat-pw-03
 FAT PW Load Balancing – flows split across the member link and core
(Figure: flows within the same pseudowire — e.g. VID 100 MAC A->B and VID 100 MAC C->D, VID 200 MAC A->B and VID 200 MAC C->D — are split by the P node across member link 1 and member link 2.)

Label stack: L2 Header | MPLS Label | MPLS Label | MPLS VC Label | FAT Label | SA DA DATA

The bottom label is the FAT label, which allows per-flow load balancing across the network; a single flow follows a single path.

Global command:
port-channel load-balance {src-mac | dst-mac | src-dst-mac | src-ip | dst-ip | src-dst-ip | src-port | dst-port | src-dst-port | … }

Remark: FAT-Label is not used as flow balancing in N-PE, but only on subsequent P
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 44
A-VPLS – Label Paths
Load balancing Configuration

interface Virtual-Ethernet1
switchport
switchport mode trunk
switchport trunk allowed vlan 610-619
neighbor 10.100.2.2 pw-class A-VPLS_remote_PE
neighbor 10.100.3.3 pw-class Legacy_VPLS_remote_PE

pseudowire-class A-VPLS_remote_PE
encapsulation mpls
load-balance flow ! enable ML-PW load-balancing based on ECMP
flow-label enable ! enable FAT PW by allowing imp/disp of flow labels

If the remote node does not support FAT-PW, then just disable flow-label
to ensure compatibility
pseudowire-class Legacy_VPLS_remote_PE
encapsulation mpls
load-balance flow ! enable ML-PW load-balancing based on ECMP

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 45
A-VPLS – Redundancy
Failure 1: Full Mesh Links

mpls ldp session protection

mpls ldp router-id Loopback100 force

LDP session protection & Loopback usage allows


PW state to be unaffected

LDP + IGP convergence in sub-second


Fast failure detection on Carrier-delay / BFD

 Immediate local fast protection
 Traffic exits directly from the egress VSS node
(A fast-detection configuration sketch follows below.)
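A hedged sketch of the fast-detection pieces listed above, keeping the LDP commands from this slide; the interface number follows the earlier core-link example and the BFD timers are illustrative only:

mpls ldp session protection
mpls ldp router-id Loopback100 force
!
interface GigabitEthernet1/3/0/1
 description MPLS core-facing link
 bfd interval 50 min_rx 50 multiplier 3
!
router ospf 1
 bfd all-interfaces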
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 46
A-VPLS – Redundancy
Failure 1: Stand-alone Link

LDP session protection & Loopback usage allows


PW state to be unaffected

LDP + IGP convergence in sub-second


Fast failure detection on Carrier-delay / BFD

Traffic flows thru VSL link


Traffic exit directly from egress VSS node
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 47
A-VPLS – Redundancy
Failure 2: ML-PW Link

X
PW state is unaffected

ML-PW convergence in sub-second


Fast failure detection on Carrier-delay

Local traffic protection

TE-FRR is not supported with ML-PW in first release

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 48
A-VPLS – Redundancy
Failure 3: SIP-400 Card Failure or Dual Links Down

X
PW state is unaffected

LDP + IGP convergence in sub-second


Fast failure detection on Carrier-delay / BFD

Traffic flows through the VSL link


Traffic exits directly from egress VSS node

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 49
A-VPLS – Redundancy
Failure 4: VSS Node Failure (or Ingress Link)

mpls ldp graceful-restart

X
If failing slave node: PW state is unaffected
If failing master node:
– PW forwarding is ensured via SSO
– PW state is maintained on the other side using Graceful restart

Edge Ether-channel convergence in sub-second

Traffic is directly going to working VSS node


Traffic exits directly from egress VSS node
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 50
A-VPLS – Over IP Cloud
Leveraging MPLSoGRE

IP
Core
GRE

 Create one or multiple GRE tunnels toward each remote site (see the sketch below)
 Specify an 'ip route' for each remote peer via the GRE interface
 • Allows fast tunnel backup (no SSO support for GRE)
   Use IGP detection, tune the GRE keepalive, or use a local EEM script to detect GRE failure
 • Allows load repartition per tunnel (i.e. per Virtual-Ethernet interface)
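A minimal sketch of one such tunnel, reusing the addressing style of the EoMPLSoGRE example earlier in this session; the tunnel number, addresses and remote-PE loopback are hypothetical:

interface Tunnel200
 description GRE tunnel toward a remote site
 ip address 100.22.22.11 255.255.255.0
 ip mtu 9192
 mpls ip
 ! detect GRE failure (alternatively rely on the IGP or an EEM script)
 keepalive 1 3
 tunnel source Loopback100
 tunnel destination 12.11.11.22
!
! steer the remote PE loopback (the PW endpoint) through this tunnel
ip route 10.100.2.2 255.255.255.255 Tunnel200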
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 51
Agenda

 DCI Business Drivers and Solutions Overview


 LAN Extension Deployment Scenarios
Ethernet Based Solutions
MPLS Based Solutions
A-VPLS: Deployment Considerations

 Path Optimization
 Q&A

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 52
A-VPLS
Design Constraints

1. Routed-PW is not supported in first release


A VLAN can either be xconnected or routed
Will be supported in Q3CY10
2. The current DCI solution (ML-PW) has no capability to detect split-brain and depends on the aggregation side to detect it.
   It is recommended that split-brain detection always be enabled in a DCI solution deployment.
3. Max VPLS neighbors = 32
4. Max number of VLANs = 2000
   (initially tested for 500, without FHRP isolation)

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 53
A-VPLS – Dual Site Interconnection
Dedicated VSS for DCI
DC1 DC2
MCEC

GW GW

VSS / vPC Legacy STP

1. Instead of Etherchannel, to protect the link using L3 technology
2. Instead of Etherchannel, to allow transit over an IP cloud
3. Instead of EoMPLS, to allow easy integration of an additional site
4. Better than EoMPLS when working with a legacy (STP-based) DC
5. Local LACP with an Etherchannel-based modern DC
6. Multi-Gbps in Q2CY10 / 10 Gbps in Q1CY11

DCI functions at the VSS: STP isolation, storm control, FHRP isolation
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 54
A-VPLS – Multi Site Interconnection
Dedicated VSS for DCI
DC1 DC2

GW GW
Dedicated links
or
Shared MPLS links
or
IP using oGRE

vPC Legacy STP

 Allows an easy multi-site gateway
 Any-to-any traffic
 Multi-Gbps in Q2CY10 / 10 Gbps in Q4CY10

DCI functions at the VSS: STP isolation, storm control, FHRP isolation
DC3
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 55
A-VPLS – Multi Site Interconnection
Fusion DCI Layer into DC Core with L3 Aggregation
STP Isolation
Storm control
FHRP Isolation
GW GW

L2
GW GW

L3
Dedicated link
GW GW
or
IP + MPLS
or
MPLS oGRE

 Extend VLANs from aggregation to core using either physical links (if vPC) or dot1Q
 Use A-VPLS to extend them
 SVI routing remains in the aggregation layer
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 56
A-VPLS – Multi Site Interconnection
Fusion DCI Layer into DC Core with L2 Aggregation

GW GW

L2
L3

STP Isolation
 If extension requirement is limited:
Storm control
Install SIP-400 in area reserved for L2 extension (Clusters, …)
Connect SIP-400 to Core using dedicated link + A-VPLSoGRE
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 57
A-VPLS – Positioning Evolution in Q3CY10
Fusion of DCI into L3 Aggregation

STP Isolation
GW Storm control
Gateway Isolation via filtering

With future support of Routed-PW

Benefit: Reuse existing VSS to connect DCI VLAN


From aggregation
From service layer (Firewall, LB, …)
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 58
Agenda

 DCI Business Drivers and Solutions Overview


 LAN Extension Deployment Scenarios
Ethernet Based Solutions
MPLS Based Solutions
IP Based Solutions
Overlay Transport Virtualization (OTV): Technology Overview

 Path Optimization
 Q&A

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 59
Overlay Transport Virtualization
Technology Pillars

OTV is a "MAC in IP" technique for supporting Layer 2 VPNs OVER ANY TRANSPORT.

Supported on Nexus 7000 platforms starting from the 5.0(3) release

Dynamic Encapsulation:
 No Pseudo-Wire state maintenance
 Optimal multicast replication
 Multi-point connectivity
 Point-to-cloud model

Protocol Learning:
 Built-in loop prevention
 Preserve failure boundary
 Seamless site addition/removal
 Automated multi-homing
BRKDCT-2049 © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 60
OTV Control Plane
Adjacencies in a Multicast Enabled Core
OTV adjacencies are established over the multicast group in the core, between the West (IP A), East (IP B) and South (IP C) Edge Devices.

The mechanism:
 Edge Devices join an ASM/Bidir multicast group in the core
  – They join as hosts (no PIM)
  – They are both multicast sources and listeners
 OTV hellos & updates are encapsulated in IP and sent to the multicast group
 Future support over a non-multicast core (using the Adjacency Server concept)

interface Overlay101
 otv control-group 239.1.1.1
BRKDCT-2049 © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 61
OTV Control Plane
Unicast Traffic
 OTV multicast control plane advertises new MAC address
information, together with its associated VLAN IDs and IP next hop
Overlay IS-IS adjacencies established only between OTV Edge Devices

 The IP next hops are the addresses of the Edge Devices through
which these MACs are reachable in the core

1. New MACs (MAC A, MAC B, MAC C) are learned on VLAN 100 in the West site
2. The West Edge Device (IP A) advertises them in an OTV update sent to the control group
3. The OTV update is replicated by the core
4. The East and South-East Edge Devices populate their MAC tables:

   VLAN | MAC   | IF
   100  | MAC A | IP A
   100  | MAC B | IP A
   100  | MAC C | IP A
BRKDCT-2049 © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 62
OTV Data Plane
Unicast Traffic

 Ethernet traffic between sites is encapsulated in IP: “MAC in IP”


 Dynamic encapsulation based on MAC routing table
 No Pseudo-Wire or Tunnel state maintained

Ethernet Frame -> [Encap] -> IP packet -> [Decap] -> Ethernet Frame

West Edge Device (IP A):        East Edge Device (IP B):
VLAN | MAC  | IF                VLAN | MAC  | IF
100  | MAC1 | Eth1              100  | MAC1 | IP A
100  | MAC2 | IP B              100  | MAC2 | Eth 1
100  | MAC3 | IP B              100  | MAC3 | Eth 2

Communication between MAC1 (West site) and MAC2 (East site)
BRKDCT-2049 © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 63
OTV Data Plane Encapsulation
MTU Size Considerations

 OTV adds an 8 Byte shim to the header


 The VLAN field of the 802.1Q header is copied over into the OTV shim
 CoS (802.1p) bits are mapped into the IP header’s IP precedence field
 Recommendation is to increase the MTU size of all the interfaces along
the path

Original frame: DMAC | SMAC | 802.1Q (CoS) | Eth Payload

Encapsulated frame: DMAC (6B) | SMAC (6B) | EtherType (2B) | IP Header, ToS (20B) | OTV Shim, VLAN (8B) | Original Frame | CRC (4B)

42-byte encapsulation (same as VPLSoGRE)
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 64
OTV Terminology
 Edge Device (ED): connects the site to the (WAN/MAN) core; responsible
for performing all the OTV functions
 Authoritative Edge Device (AED): Elected ED that performs traffic
forwarding for a set of VLAN
 Internal Interfaces: interfaces of the ED that face the site.
 Join interface: interface of the ED that faces the core.
 Overlay Interface: logical multi-access multicast-capable interface. It
encapsulates Layer 2 frames in IP unicast or multicast headers.
Overlay
OTV Interface

L2 L3
Core
Join
Internal Interface
Interfaces
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 65
Multi-Homing
Per VLAN Authoritative Edge Device (AED)
 AED role negotiated between the OTV nodes
  Internal IS-IS peering
  Control plane communication on a specific site VLAN
  The site VLAN should not be extended across the overlay
  Use of the same site VLAN between sites is recommended

 AED role defined on a per-VLAN basis
  In the first release, the left OTV device is AED for odd VLANs, the right OTV device for even VLANs

 The AED forwards traffic from/to the OTV overlay
  Unicast, multicast, broadcast
  50% of the flows carried across the vPC peer-link (on a per-VLAN basis) at FCS
  Future optimization to provide per-flow unicast load-balancing

(Diagrams: current per-VLAN AED behavior vs. future per-flow behavior.)
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 66
Internal Interfaces
Guidelines
 Regular layer 2 interfaces.
 D1 & M1 line cards are supported
Ports on D1 cards can only be deployed as OTV internal interfaces
 The OTV Internal Interfaces should carry the Layer 2 VLANs that need to
be extended using OTV plus the OTV site-vlan. This essentially makes
these interfaces Layer 2 trunk ports.
 For higher resiliency the use of port-channels is encouraged, but it is NOT mandated.
 There are no requirements in terms of 1GE vs. 10GE or Dedicated vs. Shared mode.
OTV

L2 L3

OTV Internal Interfaces:


Layer 2 Trunks
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 67
Join Interface
Guidelines
 Typically point-to-point routed interface used by OTV to join the
core multicast groups. Its IP address is used as the IP source
address in the OTV encapsulation.
 Supported only on M1 line cards
 Only one join interface can be specified per Overlay at FCS
Multiple physical interfaces can be deployed as L3 uplinks
 Recommendation is to pick a different join interface per Overlay
  Increases OTV reliability and traffic load-balancing
 For higher resiliency the use of a port-channel is encouraged, but it is not mandated (see the sketch at the end of this slide)
 There are no requirements in terms of 1GE vs. 10GE or Dedicated vs. Shared mode.
 Supported Interface types:

Interface Type Supported


Layer 3 Routed Physical Interface and Sub-interface ✓
Layer 3 Port-Channel Interface and Sub-interface ✓
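A hypothetical sketch combining the two recommendations above (a port-channel join interface, and a distinct join interface per overlay); the syntax mirrors the OTV configuration shown later in this section, and all numbers are examples only:

interface port-channel21
  description L3 port-channel used as the join interface of a second overlay
  no switchport
  ip address 3.3.4.2/24
  ip router ospf 1 area 0.0.0.0
  ip igmp version 3

interface Overlay102
  otv join-interface port-channel21
  otv control-group 239.1.1.3
  otv data-group 229.1.2.1/24
  otv extend-vlan 1001-1200
  otv site-vlan 100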
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 68
Join Interface
Ingress and Egress Traffic
Outbound Traffic
 All the OTV traffic is sourced from the IP address of the Join interface
 Depending on the destination (remote OTV edge device), different physical paths
could be utilized for unicast traffic

Inbound Traffic
 The Edge Device will advertise to the Overlay the IP address of the Join interface
for its local MAC addresses
 The other Edge Devices will use this IP address when sending their traffic to that
site. This will cause the traffic to be received only on the join interface

OTV OTV
= Unicast Traffic

= Mcast and Control Plane Traffic


= Join Interface
= Other core-facing Interfaces
OTV Outbound OTV Inbound
Traffic Traffic
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 69
Agenda

 DCI Business Drivers and Solutions Overview


 LAN Extension Deployment Scenarios
Ethernet Based Solutions
MPLS Based Solutions
IP Based Solutions
Overlay Transport Virtualization (OTV): Multicast Considerations

 Path Optimization
 Q&A

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 70
OTV and Multicast
Establishment of PIM Adjacencies

(Figure: PIM hellos are exchanged on VLAN 10 and VLAN 11 between R1 (querier) and R2 in Data Center 1 and R3 (querier) and R4 (DR) in Data Center 2, across OTV Device 1 (IP_1) and OTV Device 2 (IP_2); PIM is enabled on all four routers.)
 R1-R4 are PIM enabled (both on VLANs 10 and 11)


 PIM Hellos forwarded across the overlay
• Leveraging the MC group used for IS-IS control plane (OTV control-group)
• R1-R4 become PIM neighbors
A Designated Router is elected on VLAN 10 and VLAN 11 (R4 in this example)
 IGMP Queries are not forwarded across the overlay
A separate querier router is elected in each site

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 71
OTV and Multicast
MC Source
interface Overlay101
otv data-group 232.1.1.1/24

MC
5 MC stream
Source S MC stream VLAN 10
1 3
2 Mapping Info
(via IS-IS) OTV
R1 (Q) R2 OTV R3 (Q) R4 (DR)
4 Device 2
Device 1
(IP_2)
(IP_1) IGMPv3

VLAN 11 Data Center 2


Data Center 1

1. MC source starts sending traffic to the group Gs on VLAN 10


2. OTV Device 1 maps (S,Gs) to a delivery group Gd
(defined in the “otv data-group” range)
3. OTV Device 1 communicates the mapping information to OTV Device 2
   Through the IS-IS OTV control-group, including the source VLAN
4. OTV device 2 sends an IGMPv3 Join to (IP1,Gd)
An SSM tree rooted at OTV Device 1 is built across the core
5. Multicast traffic starts flowing across the core toward DC2
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 72
OTV and Multicast
MC Receivers in DC2

MC
Rcv1
Source S MC stream VLAN 10 MC stream

OTV
R1 (Q) R2 OTV R3 (Q) R4 (DR)
Device 2
Device 1
(IP_2)
(IP_1)
1 IGMP
2

VLAN 11 Rcv2
Data Center 2
Data Center 1

 MC Rcv1 sends an IGMP report for group Gs on VLAN 10
  It can immediately receive the stream, since it is already delivered across the overlay
1. MC Rcv2 sends an IGMP report for group Gs on VLAN 11
  The IGMP report is received by R3, R4 and OTV Device 2
  OTV Device 2 snoops the IGMP report and ignores it since no mapping info is available on VLAN 11
2. R4 is the DR and starts delivering the stream to VLAN 11

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 73
OTV and Multicast
MC Receiver in DC1

MC
Source S MC stream VLAN 10 MC stream

OTV
R1 (Q) R2 OTV R3 (Q) R4 (DR)
Device 2
Device 1
(IP_2)
(IP_1)
GM-Update
1 3 Query 4
IGMP 2 Reply

Rcv3 VLAN 11 MC stream


Data Center 2
Data Center 1

1. MC Rcv3 sends an IGMP report for group Gs on VLAN 11
  The IGMP report is received by R1, R2 and OTV Device 1
  R1 and R2 ignore the packet since neither of them is the DR
2. OTV Device 1 snoops the IGMP report and sends an IS-IS GM-Update message to Device 2 (including Gs and VLAN 11)
3. OTV Device 2 saves the information in the IGMP snooping table
4. OTV Device 2 receives a query from the IGMP querier (R3) and replies
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 74
OTV and Multicast
MC Receiver in DC1 (cont.)

MC
Source S MC stream VLAN 10 MC stream

OTV
R1 (Q) R2 OTV 8 R3 (Q) R4 (DR)
Device 2
Device 1
(IP_2)
(IP_1) IGMPv3
7 6
5

Rcv3 MC stream 9 VLAN 11 MC stream


Data Center 2
Data Center 1

5. R4 receives the reply and starts delivering the stream to VLAN 11


6. OTV Device 2 maps (S,Gs) to a delivery group Gd (defined in the "otv data-group" range)
  It is as if the source S had started transmitting to Gs on VLAN 11 in DC2
7. OTV Device 2 communicates the mapping information to OTV Device 1 (including the 'source VLAN' 11)
8. OTV Device 1 sends an IGMPv3 Join to (IP2,Gd)
  An SSM tree rooted at OTV Device 2 is built across the core
9. Multicast traffic starts flowing across the core toward DC1
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 75
Agenda

 DCI Business Drivers and Solutions Overview


 LAN Extension Deployment Scenarios
Ethernet Based Solutions
MPLS Based Solutions
IP Based Solutions
Overlay Transport Virtualization (OTV): Deployment Considerations

 Path Optimization
 Q&A

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 76
Placement of the OTV Edge Device
Option 1 – OTV in the DC Core with L3 Boundary at Aggregation

 L2 'Octopus' design
 L2-L3 boundary at aggregation
 DC Core devices perform L3 and OTV functionalities
  Default Core VDC
  May leverage a dedicated VDC
  May use a pair of dedicated Nexus 7000
 VLANs extended from the aggregation layer
  Recommended to use separate physical links for L2 & L3 traffic
  STP and L2 broadcast domains not isolated via OTV between PODs
 VLAN extension likely not required between PODs in the same site
  Bridging through the core can be used if needed
 Easy deployment for Brownfield

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 77
OTV and SVI Routing
Current Deployment Consideration

 Currently, on the Nexus 7000 a given VLAN can either be associated with an SVI or extended using OTV
  This would theoretically require a dual-system solution
  The VDC feature allows deploying a dual-VDC solution instead
 OTV VDC as an appliance
  Single L2 internal interface and single Layer 3 join interface

(Physical view: on each of N7K-1 and N7K-2, an OTV VDC attaches to the default VDC with one L2 link and one L3 link.)
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 78
Placement of the OTV Edge Device
Option 2 – OTV in the DC Core with L3 Boundary at Core

 Easy deployment for Brownfield


 L2-L3 boundary in the DC core
 DC Core devices perform L2, L3 and OTV functionalities
Requires a dedicated OTV VDC
into core Nexus
 OTV deployed in the DC core to provide
LAN extension services to remote sites
 Intra-DC LAN extension provided by
bridging through the Core
VSS/vPC recommended to create an STP
loopless topology
Storm-control between PODs

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 79
Placement of the OTV Edge Device
Option 3 – OTV in the DC Aggregation

 L2-L3 boundary at aggregation

 DC Core performs only L3 role


 STP and L2 broadcast Domains
isolated between PODs
 Intra-DC LAN extension provided
by OTV
 Ideal for single aggregation block
topology
 Recommended for Green Field
deployment
Nexus 7000 required in aggregation

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 80
OTV in the DC Aggregation
Spanning-Tree Deployment
 OTV VDC deployment replicated in each POD
 Intra-DC & inter-DC LAN extension with a pure L3 core
 Isolated STP domain in each POD
STP filtered across the OTV overlay by default
Independent STP root bridge per POD Layer 2 Link
Layer 3 Link
 vPC facing the access layer devices OTV Virtual Link
vPC
Loop free topology inside each POD
Data Center

L3
STP L2 STP
Root Root

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 81
OTV in the DC Aggregation
Configuration

Default VDC:
vdc otv-vdc-1 id 2
  allocate interface Ethernet1/2,Ethernet2/2
!
interface Ethernet1/1
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 600-1000
!
interface Ethernet2/1
  ip address 3.3.3.1/24
  ip router ospf 1 area 0.0.0.0
  ip pim sparse-mode

OTV VDC:
hostname otv-vdc-1
feature otv
!
interface Ethernet1/2
  description Internal Interface
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 600-1000
!
interface Ethernet2/2
  description Join Interface
  ip address 3.3.3.2/24
  ip router ospf 1 area 0.0.0.0   ** could use a static default route or OSPF stub
  ip igmp version 3
!
interface Overlay101
  otv join-interface Ethernet2/2
  otv control-group 239.1.1.2
  otv data-group 229.1.1.1/24
  otv extend-vlan 600-1000
  otv site-vlan 100

(Diagram: N7K-Agg1 and N7K-Agg2, each with an OTV VDC next to the default VDC; internal interface e1/2, join interface e2/2, L2 and L3 links, vPC toward the access layer.)

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 82
OTV in the DC Aggregation
OTV Traffic Flows
 AED role negotiated between the two OTV VDCs
  Internal IS-IS peering on the site VLAN
  Site VLAN carried on vPC links and the vPC peer-link

 Traffic is carried to the AED device
  50% of the flows carried across the vPC peer-link (on a per-VLAN basis)

 The AED encapsulates the original L2 frame into an IP packet and sends it back to the aggregation layer device

 The aggregation layer device routes the IP packet toward the DC Core/WAN edge
  L3 routed traffic bypasses the OTV VDC

(Diagram: N7K-Agg1 and N7K-Agg2 default VDCs with OTV VDCs, the right one being the AED; L3 links, L2 links, vPC and IS-IS peering.)
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 83
OTV in the DC Aggregation
HSRP Isolation Configuration
1. Create and apply the policies to filter out HSRPv1 messages

ip access-list ALL_IPs
  10 permit ip any any
mac access-list ALL_MACs
  10 permit any any
!
ip access-list HSRPv1_IP
  10 permit udp any 224.0.0.2/32 eq 1985
mac access-list HSRPv1_VMAC
  10 permit 0000.0c07.ac00 0000.0000.00ff any
!
vlan access-map HSRPv1_Localization 10
  match mac address HSRPv1_VMAC
  match ip address HSRPv1_IP
  action drop
vlan access-map HSRPv1_Localization 20
  match mac address ALL_MACs
  match ip address ALL_IPs
  action forward
!
vlan filter HSRPv1_Localization vlan-list 600-1000

2. Enable VACL processing on IP packets (OTV VDC internal interface)

interface Ethernet1/2
  mac packet-classify

3. Apply a route-map on the OTV control plane to avoid communicating the vMAC info to remote OTV edge devices

mac-list HSRPv1-vmac-deny seq 5 deny 0000.0c07.ac00 ffff.ffff.ff00
mac-list HSRPv1-vmac-deny seq 10 permit 0000.0000.0000 0000.0000.0000
!
route-map stop-HSRPv1 permit 10
  match mac-list HSRPv1-vmac-deny
!
otv-isis default
  vpn Overlay101
    redistribute filter route-map stop-HSRPv1

(The VACL drops HSRPv1 traffic at the OTV VDC internal interface.)

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 84
OTV in the DC Aggregation
Storm Control

 Unknown unicast frames are stopped and ARP is reduced by default with OTV!
 The recommendation is still to install a strict rate-limiter for generic broadcast, to a few tens of Mbit/s
 Multicast should also be constrained

interface Ethernet1/1
  storm-control broadcast level 1
  storm-control multicast level x

 CoPP is enabled by default on the default VDC
  Default policies apply to all the defined VDCs

(Diagram: storm control applied on the OTV VDC internal interface e1/1 on N7K-Agg1 / N7K-Agg2.)
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 85
OTV in the DC Aggregation
Failure Scenarios
Failure of the AED Join Interface
 The backup AED must become active; it restarts two-way communication with the remote sites
 Improving resiliency: bundle multiple interfaces into a routed port-channel toward the same default VDC

Failure of the AED Internal Interface
 Same behavior as a Join link failure
 Improving resiliency: bundle multiple interfaces in an internal port-channel toward the same default VDC, or use vPC toward both default VDCs

Failure of the AED OTV VDC
 An OTV VDC failure, or even a full Nexus 7000 failure, acts as a Join/Internal link failure
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 86
OTV Resiliency Improvement
vPC Considerations
 Two possible approaches:

1. Single vPC layer at the Aggregation
  Applies to the OTV-at-aggregation design
  Improves convergence (smooth vPC peer-link failure protection)
  Improves the traffic pattern (no use of the vPC peer-link for OTV traffic)

2. OTV Internal Interface as a vPC
  Applies to the L2 'Octopus' design (Core or dedicated DCI layer)
  Improves convergence (similar to the other approach)
  50% of the OTV traffic goes across the vPC peer-link
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 87
Agenda

 DCI Business Drivers and Solutions Overview


 LAN Extension Deployment Scenarios
Ethernet Based Solutions
MPLS Based Solutions
IP Based Solutions

 Path Optimization
 Q&A

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 88
Path Optimization
What is the Problem?
10.1.1.0/25 & 10.1.1.128/25 advertised into L3: DC A is the primary entry point
10.1.1.0/24 advertised into L3: backup should the main site go down

Layer 3 Core

Agg
Agg

Access
Access

Node A
Virtual Machine Virtual Machine
ESX ESX

Data Center 1 VMware Data Center 2


vCenter

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 89
Path Optimization
The Goal

Agg
Agg

Access
Access

Node A
Virtual Machine
ESX ESX

Data Center 1 VMware Data Center 2


vCenter

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 90
FW Deployment
Active/Standby Units Stretched between Sites

Layer 3 Core
 Policies and state automatically
sync’d between sites
DCI is used to extend the failover VLAN

 May cause sub-optimal traffic


flows across the DCI connection in
several use cases
 Active FW failure
 Cluster node failure
 Workload mobility

 Ideally positioned for synchronous sites a few tens of miles apart (low latency)
Data Center 1 Data Center 2

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 91
FW Deployment
Active/Standby Units Deployed in Each Site

Layer 3 Core

 FWs in separated sites work


independently
 True active / active scenario
 Limited sub-optimal traffic thru
DCI core
 FW are not sync’d
Policies have to be replicated
between sites
No state information maintained
between sites
 May drop previously established
sessions after workload VMotion
ESX ESX FW Asymmetric Routing Support may
be positioned to alleviate this limit
Data Center 1 Data Center 2

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 92
Path Optimization Techniques

 Egress traffic
FHRP isolation

 Ingress traffic
Anycast
Active/Standby subnet advertisement
Reverse Health Injection (RHI)
Host based /32 announcement
ACE/GSS
DNS based Global Site Selection
Locator/ID Separation Protocol – LISP
Host routing

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 93
Path Optimization
Solution with ACE and GSS

GSS DNS Lookup: sql-server.jsmp.cisco.com => ACE 1 IP Address (VIP_1)
GSS DNS Lookup: sql-server.jsmp.cisco.com => ACE 2 IP Address (VIP_2)

Cisco GSS Cluster
Layer 3 Core
Primary entry point to reach the VIP_1 prefix | Primary entry point to reach the VIP_2 prefix

ACE1 ACE2
(VIP_1) (VIP_2)

S-NAT S-NAT
IP_1 IP_2
Agg
Agg

Access
Access

Virtual Machine Virtual Machine


ESX sql-server.jsmp.cisco.com ESX
sql-server.jsmp.cisco.com

Data Center 1 VMware Data Center 2


vCenter

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 94
Solution with ACE and GSS
Design Considerations

 Applies to VMotion scenarios


script running in vCenter to modify the GSS entry

 S-NAT functionality on ACE to ensure return traffic flows are always


brought back to the ACE they used for inbound direction
Allows “stickiness” to the deployed chain of services (FW, ACE, etc)

 FHRP Isolation to ensure optimal outbound traffic flows


 ACEs leverage different external VIPs in the two Data Centers
This ensures optimal routing from the core

 Storage access optimization is also required


NetApp FlexCache, EMC VPLEX, VMWare Storage VMotion

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 95
Summary

 Cisco Data Center Interconnect Solutions allow redundant, scalable, secure Layer 2 VLAN extension

 Catalyst 6500 VSS and Nexus vPC allow powerful and simple DCI over dark fiber or protected D-WDM
  (coming next: TRILL / L2MultiPath)

 MPLS based solutions are mature for both point-to-point (EoMPLS) and multipoint (A-VPLS) deployments; MPLSoGRE opens the capability over IP transport
  (coming next: Routed-VPLS)

 OTV is an innovative IP based technology that can provide LAN extension directly from the aggregation layer in a very efficient and simple way
  (coming next: LISP)

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 96
Data Center Interconnect
Where to Go for More Information

http://www.cisco.com/en/US/netsol/ns975/index.html
Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 97
Complete Your Online
Session Evaluation

 Give us your feedback and you


could win fabulous prizes.
Winners announced daily.
 Receive 20 Cisco Preferred
Access points for each session
evaluation you complete.
 Complete your session
evaluation online now (open a
browser through our wireless
network to access our portal)
or visit one of the Internet Don’t forget to activate your
stations throughout the Cisco Live and Networkers Virtual
Convention Center. account for access to all session
materials, communities, and on-demand
and live activities throughout the year.
Activate your account at any internet
station or visit www.ciscolivevirtual.com.

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 98
Enter to Win a 12-Book Library
of Your Choice from Cisco Press

Visit the Cisco Store in the


World of Solutions, where
you will be asked to enter
this Session ID code

Check the Recommended Reading brochure for


suggested products available at the Cisco Store

Presentation_ID © 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 99
