
Extending ACI to Multiple Sites:

Dual Site Deployment Deep Dive

Patrice Bellagamba (pbellaga@cisco.com), Distinguished Systems Engineer


BRKACI-3503
Agenda
• Multi-Data Center Design Options
• Stretched Fabric Deep Dive
• ACI Multi-POD Overview
• ACI Multi-Fabric Deep Dive
• Conclusion

Thanks to Santiago Freitas (safreita@cisco.com), Distinguished Systems Engineer
and Max Ardica (ardica@cisco.com), Principal Engineer
Objectives

• This presentation and the associated white papers provide a guide to designing and deploying Cisco® Application Centric Infrastructure (Cisco ACI™) in two data centers with an active-active architecture that delivers:
• Increased uptime
• Disaster avoidance
• Easier maintenance
• Flexible workload placement
• Extremely low recovery time objective (RTO)
ACI Multi-DC Design Options
Single APIC Cluster / Single Domain:
• Stretched Fabric: one ACI fabric spanning Site 1 and Site 2 over L2/L3 links (DB, App, and Web EPGs anywhere)
• Multi-POD (Q3 2016): PODs 'A' and 'B' joined by MP-BGP EVPN, managed by one APIC cluster

Multiple APIC Clusters / Multiple Domains:
• Dual-Fabric Connected (L2 and L3 extension): ACI Fabric 1 and ACI Fabric 2
• Multi-Site (CY17): Site 'A' and Site 'B' over an IP network with MP-BGP EVPN
BRKACI-3503 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 5
Stretched Fabric
Supported Distances and Interconnection Technologies
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/kb/b_kb-aci-stretched-fabric.html
Stretched ACI Fabric
[Diagram: one ACI fabric spanning DC Site 1 and DC Site 2; two APICs in Site 1, one APIC in Site 2, a shared vCenter Server, and transit leafs at each site carrying the inter-site links]
• Single fabric stretched to two sites; works the same way as a single fabric deployed within one DC.
• One APIC cluster; one management and configuration point.
• Anycast GW on all leaf switches. Support for up to 3 sites.
• Works with one or more transit leafs per site; any leaf can be a transit leaf.
• The number of transit leafs and links is a redundancy and bandwidth-capacity decision.

Supported Distances and Interconnection Technologies
Dark Fiber
[Diagram: the stretched fabric of the previous slide, interconnected over dark fiber]

Transceiver    | Cable Distance
QSFP-40G-LR4   | 10 km
QSFP-40GE-LR4  | 10 km
QSFP-40G-LR4L  | 2 km
QSFP-40G-ER4   | 30 km in 1.0(4h) or earlier; 40 km in 1.1 and later

For all these transceivers, the wavelength is 1310 nm, the cable type is SMF, and the power consumption is 3.5 W.

Supported Distances and Interconnection Technologies
Private DWDM

[Diagram: APIC nodes 1 and 3 in DC Site 1, node 2 in DC Site 2; spines at each site connected over private DWDM using QSFP-40G-SR4 transceivers, with 40G lambdas or 4x 10G lambdas via a QSFP-to-SFP+ breakout cable]

• An ACI leaf or spine connects to the DWDM system using 40G short-reach or long-reach transceivers.
• If using 40G-CSR4 or 40G-SR4, a QSFP-to-SFP+ breakout cable can be used between the ACI node and the DWDM system, and then 4x 10G lambdas can be used between the DWDM systems (4x 10G lambdas = 1x 40G link).
• DWDM failure scenarios:
• If a DWDM lambda goes down, the DWDM system must shut down the ports facing the ACI fabric; otherwise there is a ~30-second outage due to the fabric IS-IS hold time. DWDM links are expected to behave similarly to dark fiber (same latency limits, etc.).
• If one attachment circuit goes down, the remote port must be shut down; otherwise there is a ~30-second outage.

Supported Distances and Interconnection Technologies
Ethernet over MPLS (EoMPLS)
[Diagram: the two sites connected by an EoMPLS pseudowire over a WAN (10 ms RTT, 800 km / 500 miles); 40G links from the fabric (QSFP-40G-SR4), 10G/40G/100G links across the WAN]

• Port-mode EoMPLS is used to stretch the ACI fabric over long distances.
• DC interconnect links can be 10G (minimum) or higher, with 40G facing the leafs/spines.
• DWDM or dark fiber provides connectivity between the two sites.
• Validated platform is ASR 9K with IOS XR 5.3.2 or later.
• Maximum 10 ms RTT between sites; under normal conditions 10 ms allows two DCs up to 800 km apart.
• Other ports on the router are used for connecting to the WAN via L3Out.
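The 800 km figure follows from light's propagation speed in fiber; a minimal back-of-the-envelope sketch (the 2 ms equipment-overhead value is an assumption chosen here to reproduce the slide's number, not a figure from the deck):

```python
# Rough fiber-distance budget for a given RTT limit.
SPEED_IN_FIBER_KM_PER_MS = 200.0  # ~c / 1.5 (refractive index of glass)

def max_one_way_distance_km(rtt_budget_ms: float,
                            equipment_overhead_ms: float = 2.0) -> float:
    """Max site separation if propagation gets the RTT budget minus overhead."""
    usable_rtt = rtt_budget_ms - equipment_overhead_ms
    return (usable_rtt / 2.0) * SPEED_IN_FIBER_KM_PER_MS

print(max_one_way_distance_km(10.0))  # 800.0 km with the assumed 2 ms overhead
```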
Fabric Topology from APIC

An EoMPLS pseudowire, DWDM, or dark fiber interconnect is transparent to ACI.

VMM Integration
Focus on VMware VDS

• With DVS 5.x and 6.0, one DVS can be stretched across the two sites.
  - Live migration is supported; the same vCenter manages the vSphere servers at both sites.
• With DVS 6.0, one DVS can be used per site, and the same EPG can span two (or more) VMM domains.
  - Live migration is supported with Cross-vCenter (Cross-DVS) vMotion, APIC release 1.2(1i) and later.

Transit Leaf and WAN Traffic

• Same IS-IS metric for inter-site links and local links.
• When a WAN router is connected to a transit leaf at both sites, non-border leaf switches will see 2-way ECMP for external subnets.
• Recommended design: the WAN router is NOT connected to a transit leaf, so the local WAN router is 2 hops away and the WAN router at the other site is 4 hops away.

Reference Topology
[Diagram: DC Site 1 and DC Site 2 interconnected by an EoMPLS pseudowire (10 ms RTT); 40G fabric-facing links, 10G DCI links; CE routers DC1-CE1/DC1-CE2 and DC2-CE1/DC2-CE2 attached to PE routers DC1-PE and DC2-PE]
S-N Traffic Flow
• N-S traffic is symmetric.
• Odd tenants = DC 1 primary; even tenants = DC 2 primary.
[Diagram: RealWeb EPG 10.1.4.1/24 reached through the Layer 2 WAN EPG]
Logical Topology Deep Dive
WAN-CE to ASA, BGP Peering through the Fabric

WAN EPG with an L2 BD, with static bindings towards the ASA and the WAN CEs. Even-numbered tenants use the primary path into/out of the fabric via DC2; odd-numbered tenants use the primary path via the "left side", DC1.

ASA/T4/act(config)# interface TenGigabitEthernet0/7.1041
ASA/T4/act(config-if)# nameif outside
ASA/T4/act(config-if)# ip address 10.1.1.254 255.255.255.0 standby 10.1.1.253

! BGP towards the CEs
ASA/T4/act(config)# router bgp 65001
ASA/T4/act(config-router)# address-family ipv4 unicast
ASA/T4/act(config-router-af)# neighbor 10.1.1.21 remote-as 65001
ASA/T4/act(config-router-af)# neighbor 10.1.1.31 remote-as 65001
ASA/T4/act(config-router-af)# neighbor 10.1.1.41 remote-as 65001
ASA/T4/act(config-router-af)# neighbor 10.1.1.51 remote-as 65001
ASA/T4/act(config-router-af)# redistribute static
ASA/T4/act(config-router-af)# neighbor 10.1.1.31 route-map set-localpref-200-inprefixes in
ASA/T4/act(config-router-af)# neighbor 10.1.1.51 route-map set-localpref-200-inprefixes in

ASA/T4/act(config)# route-map set-localpref-200-inprefixes permit 10
ASA/T4/act(config-route-map)# set local-preference 200

! Static route towards the WEB subnet, next hop in the fabric
ASA/T4/act(config)# route inside 10.1.3.0 255.255.255.0 10.1.2.3
Logical Topology Deep Dive
External L3 out towards ASA

External L3Out configuration steps on ACI:
• Create a Logical Node Profile with border leafs Leaf-3 and Leaf-5, where the ASA is connected.
• Configure a static default route from each border leaf node, with the next hop pointing to the ASA inside interface IP.
• ASA failover: the ASA failover link and state link run through the fabric via a BD set up in Layer 2.

Logical Topology Deep Dive
External L3 out towards ASA

External L3Out configuration steps on ACI:
• On the Logical Interface Profile, create a secondary IP address (a floating IP) under each logical transit interface created between the border leafs and the external physical ASA.
• This secondary address is a "floating IP" owned by the border leafs; it enables seamless convergence during border-leaf failures.

Remark:
DC1-ASA/T4/act(config)# route inside 10.1.3.0 255.255.255.0 10.1.2.3

Logical Topology Deep Dive
MP-BGP Route Reflector Placement

(Route reflector placement: Spine 1 in DC 1, Spine 3 in DC 2)

• The fabric uses MP-BGP to distribute external routes within the ACI fabric.
• The tested SW release supports a maximum of two MP-BGP route reflectors.
• In a stretched fabric implementation, place one route reflector at each site to provide redundancy.

Data Center Failure
Site failure on the site with two APICs

[Diagram: stretched ACI fabric with two APICs in DC Site 1 and one APIC plus the vCenter Server in DC Site 2]

• The remaining APIC controller becomes a minority when Site 1 goes down; configuration changes are not allowed while in minority.
• When site 1 goes down, the user can access and monitor the ACI fabric via the controller in site 2 but cannot make configuration changes.

Data Center Failure
Restoring ability to make configuration changes

[Diagram: Node ID 1 and Node ID 2 in DC Site 1; Node ID 3 and a standby appliance (new Node ID 2) in DC Site 2]

• Connect a standby APIC appliance (a 4th APIC) in Site 2 after the APIC cluster is formed and operational. The standby appliance remains shut down until needed.
• When site 1 is down, the user de-commissions APIC nodes 1 and 2 and commissions the new APIC node 2.
• The "standby" APIC appliance joins the APIC cluster. Site 2 now has a majority of APICs (2 out of 3), and the user can start to make changes.

Stretched Fabric APIC Cluster Recovery Procedures:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/kb/b_kb-aci-stretched-fabric.html

Summary: Stretched ACI Fabric (For Your Reference)

- One APIC cluster; one management and configuration point. Anycast GW on all leaf switches. Works the same way as a single fabric deployed within a single DC.
- Cisco Validated Design: extensively tested and passed validation criteria. (Previous BRKACI-3503 recordings from Cisco Live USA 2015 have a deep dive and test results for Stretched Fabric.)
- 10 ms RTT between the sites; under normal conditions 10 ms allows two DCs up to 800 km / 500 miles apart.
- Interconnection can be dark fiber, DWDM, or an EoMPLS pseudowire. With EoMPLS, DC interconnect links can be 10G (minimum) or higher, with 40G facing the leaf/spine. QoS is required: you need to protect critical control-plane traffic.
- APIC Release 1.0(3f) or later.

Demos available:
- Stretched Fabric link failures: https://www.youtube.com/watch?v=xgxPQNR_42c
- vMotion over Stretched Fabric with EoMPLS: https://www.youtube.com/watch?v=RLkryVvzFM0
ACI Multi-POD Overview

Go to ACI MultiPod/MultiSite Deployment Options [BRKACI-2003] for a deep dive.
ACI Multi-POD Solution Overview
Availability: Q3 2016
[Diagram: PODs 'A' through 'n' connected by an Inter-POD Network; MP-BGP EVPN between PODs; IS-IS, COOP, and MP-BGP run independently inside each POD; single APIC cluster]

 Multiple ACI PODs connected by an IP inter-POD L3 network; each POD consists of leaf and spine nodes
 Managed by a single APIC cluster
 Single management and policy domain
 End-to-end policy enforcement
 Forwarding control plane (IS-IS, COOP) fault isolation

ACI Multi-POD Solution: Topologies
(For Your Reference: ACI MultiPod/MultiSite Deployment Options [BRKACI-2003])

• Intra-DC: POD 1 through POD n, 40G/100G links to the inter-POD network, single APIC cluster.
• Two DC sites connected back-to-back: dark fiber/DWDM (up to 10 msec RTT), 10G/40G/100G links.
• 3 DC sites: dark fiber/DWDM (up to 10 msec RTT) between the PODs.
• Multiple sites interconnected by a generic L3 network.
ACI Multi-Pod Solution
Scalability Considerations

 Maximum number of supported ACI leaf nodes (across all Pods):
 300 with a 5-node APIC cluster
 Maximum 200 leaf nodes per Pod
 Up to 80 leaf nodes supported with a 3-node APIC cluster
 Up to 6 spines per Pod

 Maximum number of supported Pods:
 4 in the Congo/Congo-MR releases (Q3CY16)
 6 in the Crystal release (Q4CY16)

 No current plans to increase those values before the end of CY16

ACI Multi-POD Solution
Inter-POD Network (IPN) Requirements
 Not managed by APIC; must be initially pre-configured
 Main requirements:
• 40G/100G interfaces to connect to the spine nodes
• Multicast (specifically PIM BiDir), needed to handle BUM traffic
• DHCP relay for spine/leaf discovery across PODs
• OSPF (the only option at FCS) for advertising VTEP reachability
• Increased MTU support to handle VXLAN-encapsulated traffic
• QoS (to prioritize intra-APIC-cluster communication)
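As a rough sketch, an NX-OS-style IPN node configuration covering these requirements might look like the following; all addresses, names, and the OSPF process are illustrative placeholders, not values from the deck, and the exact commands should be verified against your platform:

```
feature ospf
feature pim
feature dhcp
service dhcp
ip dhcp relay

! Spine-facing interface: jumbo MTU for VXLAN, OSPF, PIM, DHCP relay
interface Ethernet1/1
  mtu 9150
  ip address 192.168.1.1/31
  ip router ospf IPN area 0.0.0.0
  ip pim sparse-mode
  ip dhcp relay address 10.0.0.1    ! illustrative APIC address

! BiDir PIM RP for BUM traffic (group range illustrative)
ip pim rp-address 192.168.100.1 group-list 225.0.0.0/8 bidir
```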

Inter-Pod Connectivity
Frequently Asked Questions
What platforms can or should I deploy in the IPN?
 Nexus 9200s and 9300-EX, but also any other switch or router supporting all the IPN requirements.
 NorthStar- and Donner/Donner+-based platforms are not initially supported as IPN nodes; a SW fix is being scoped for the 2HCY16 timeframe.

Can I use a 10G connection between the spines and the IPN network?
 Yes, once QSA adapters are supported on the ACI spine devices.
 Planned for the Crystal release (Q4CY16) on EX-based HW; scoped for Q1CY17 for Alpine-based spines.
Inter-Pod Connectivity
Frequently Asked Questions

I have two sites connected with dark fiber/DWDM circuits. Can I connect the spines back-to-back?
 No, because of the multicast requirement for L2 multi-destination inter-Pod traffic.

Do I need a dedicated pair of IPN devices in each Pod?
 Yes, but this initially mandates the use of 40G/100G inter-Pod links.
ACI Multi-POD Solution
Overlay Data Plane: Policy Information Carried across Pods
[VXLAN header: VTEP IP | VNID | Group Policy | Tenant Packet]

1. VM1 sends traffic destined to remote VM2.
2. VM2 is unknown, so the traffic is encapsulated to the local Proxy A spine VTEP (adding S_Class information).
3. The spine encapsulates the traffic to the remote Proxy B spine VTEP.
4. The remote spine encapsulates the traffic to the local leaf (Leaf 4, which holds 172.16.2.40).
5. The leaf learns remote VM1's location and enforces policy.
6. If the policy allows it, VM2 receives the packet.
ACI Golf
ACI Integration with WAN at Scale
Project GOLF Overview

 Connect an ACI fabric to the external L3 domain via GOLF (WAN edge) routers:
• GOLF devices functionally behave as ACI 'border leafs', with control-plane and data-plane scale
• Complementary with ACI Multi-Fabric solutions
 MP-BGP EVPN control plane between the ACI spines and the GOLF routers
 VXLAN data plane between the ACI spines and the GOLF routers
 OpFlex for exchanging config parameters (VRF names, BGP route-targets, etc.)
 Consistent policy for north-south traffic applied at the ACI leaf (both ingress and egress directions)
[Diagram: L3Out with GOLF (MP-BGP EVPN) alongside a classic L3Out with VRF-lite; VRF-1/VRF-2, Web/App and DB EPGs]

ACI Integration with WAN at Scale
Project GOLF : Supported Platforms

 WAN router initial choices:
• Nexus 7000/7700: F3 line card in 7.3.0.DX(1)ES (end of May 2016); M3 support in Q4CY16 (Atherton release)
• ASR 9000: IOS-XR 6.1.1 (June/July 2016) for platforms with minimum RSP3 and Typhoon/Tomahawk line card support
• ASR 1000: Polaris release 16.4 (Q4CY16), also including CSR1Kv support

 High-level whitepaper available on CCO:
http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-736899.html

Multi-Pod and GOLF Combined Models
Centralized Scenario (Intra-DC)
GOLF Devices Connected to the IPN vs. GOLF Devices Connected to Pod Spines
[Diagram: both options, with MP-BGP EVPN from the spines to the GOLF devices toward the WAN]

 GOLF devices perform a dual function:
• Pure L3 routing for inter-Pod VXLAN traffic
• VXLAN encap/decap for WAN-to-DC traffic flows
ACI Multi-Fabric Deep Dive
Multiple APIC Clusters / Multiple Domains

http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-
infrastructure/white-paper-c11-737077.pdf
Dual-Fabric Design Scenarios
• Two independent ACI fabrics.
• Two management and configuration
domains.

• Design Goals:
• Active/Active workload.
• Extend L2 and subnet across sites.
• Anycast GW on both fabrics

• Interconnect Technologies:
• Dark Fiber or DWDM (back to back vPC)
• VXLAN/OTV/VPLS/PBB for L2 extension over IP

Latency considerations
Cisco ACI fabrics are totally independent of each other, and the only control plane between them uses BGP and learning bridges (both supported over long distances).
 Latency considerations are relative to the other components of the solution:
• VMware supports an RTT of up to 100 ms starting with vSphere Release 6.0.
• ASA clusters are supported across two sites deployed with 20 ms of RTT latency.
• Storage replication:
• With asynchronous replication, there is no real limit.
• With synchronous replication, there is a strict limitation that depends on the deployed technology.
• EMC VPLEX and NetApp MetroCluster solutions support a maximum RTT latency of 10 ms.

Recommendation: deploy all application tiers at the same site, with local storage.
When planning DCI deployments, you also need to consider path optimization.

ACI Multi-Fabric Layer 2

http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-
infrastructure/white-paper-c11-737077.pdf
L2 Reachability across Sites
Static Binding between EPGs and VLANs
[Diagram: ACI Fabric 1 and ACI Fabric 2 joined over the DCI; static 1:1 mapping of VLANs to EPGs on each side (VLAN = BD = EPG); App1 EP1 and App1 EP2 in EPG1/BD1]

 Internal EPGs are 'extended' to the remote site by leveraging a static 1:1 mapping with VLANs carried on the double-sided vPC.
o Simpler than, and recommended over, the use of L2Out.
o With vPC, the VLAN-to-EPG mapping must be consistent between the APIC clusters of the two fabrics.
o With VXLAN/OTV, the DCI can perform VLAN translation.
 ACI fabric scalability: 1,750 BDs per border leaf node.
 DCI scalability: OTV 1,500 VLANs, PBB-EVPN 4,000 VLANs, VXLAN/EVPN 1,000 VLANs.
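As a rough sketch, such a static binding is expressed on the APIC as an fvRsPathAtt child of the EPG; the tenant/EPG names, the vPC path, and the VLAN below are illustrative placeholders, not values from the deck:

```xml
<!-- Hedged sketch of an APIC static binding: EPG1 bound to VLAN 300
     on the vPC toward the DCI. Verify object names against your
     APIC release's object model. -->
<fvTenant name="Tenant-1">
  <fvAp name="App1">
    <fvAEPg name="EPG1">
      <fvRsPathAtt tDn="topology/pod-1/protpaths-101-102/pathep-[DCI_vPC]"
                   encap="vlan-300" mode="regular"/>
    </fvAEPg>
  </fvAp>
</fvTenant>
```

The same encap must be configured on both fabrics so that the remote endpoints land in the matching EPG.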

Dual-Fabric with Active/Active GW
One EPG per BD with Static binding

[Diagram: ACI Fabric 1 and ACI Fabric 2, VRF-1 in each, joined over the DCI; BD-1/VLAN 300 with WEB1/WEB2 (GW 100.1.1.1/24), BD-2/VLAN 301 with APP1/APP2 (GW 100.1.2.1/24), BD-3/VLAN 302 with DB1/DB2 (GW 100.1.3.1/24)]

• Use static bindings to extend the EPGs between the sites.
• The VLAN ID-to-EPG mapping matches between the fabrics.
• Each fabric treats the remote endpoints as if they were locally attached; they are learned on the border leaf.
• Flood-and-learn operates between the two fabrics.
• Recommended: turn on Unknown Unicast and ARP flooding in the BD for extended L2 segments.
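The recommended BD flooding settings correspond, approximately, to the following attributes on the APIC bridge-domain object; the tenant/BD names are illustrative, and the attribute names should be verified against your APIC release:

```xml
<!-- Hedged sketch: BD settings for an L2-extended segment.
     arpFlood="yes" floods ARP; unkMacUcastAct="flood" floods
     unknown unicast instead of using the spine proxy. -->
<fvTenant name="Tenant-1">
  <fvBD name="BD-1" arpFlood="yes" unkMacUcastAct="flood">
    <fvSubnet ip="100.1.1.1/24"/>
  </fvBD>
</fvTenant>
```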
Dual-Fabric with Active/Active GW
Multiple EPGs Under one BD with Static binding

[Diagram: multiple EPGs (WEB, APP, and DB over VLANs 300-302) under one BD-1 (100.1.1.1/24) extended between the fabrics]

• This creates a loop between the two fabrics.
• WEB, APP, and DB reside in the same flooding domain.
• Multiple EPGs under the same BD cannot be supported with L2 extension between ACI fabrics.

Dual-Fabric with L2 Extension
Pervasive Default Gateway (ACI version 11.2 or later)

 Two IP/MAC addresses should be defined for each stretched IP subnet


1. Unique IP and MAC per site for ARP resolution (Glean)
2. Common virtual IP and virtual MAC for server default GW

ACI Fabric 1, BD1: Glean IP 10.1.4.252, MAC MAC-A. ACI Fabric 2, BD1: Glean IP 10.1.4.253, MAC MAC-B.
Both fabrics: Virtual IP 10.1.4.1, vMAC MAC-common.
[Diagram: one L2 domain / one IP subnet extended over the DCI; hypervisor hosts 10.1.4.10 and 10.1.4.20]

Active/Active Default Gateway
Common GW MAC/IP Configuration
[APIC screenshots from DC 1 and DC 2: the common virtual MAC used by the VIP, and the common IP address marked as VIP]

Routing to an Endpoint Connected to an Internal IP Subnet

[Diagram: inter-subnet routing locally performed by ACI Fabric 1]
Inter-subnet Routing to a Silent Host using Gleaning

[Diagram: EP1 (10.1.4.10, EPG WEB1) behind Leaf1 e1/3 in Fabric 1; EP2 (10.1.5.10, EPG WEB2) behind Leaf6 in Fabric 2; border leafs BL1/BL2 at each site joined over the DCI. The endpoint tables show EP1 learned on Leaf1 and on Proxy A, and in Fabric 2 via vPC1/BL1-BL2; EP2 is still unknown.]

Flood-and-learn between the two fabrics:
• Unknown destinations are gleaned by the spine using ARP with the physical MAC/IP.
• Recommended: turn on Unknown Unicast and ARP flooding in the BD for extended L2 segments.
Inter-subnet Routing to a Silent Host using Gleaning (continued)

[Same topology after the ARP reply: EP2 is now learned on Leaf 6 and the border leafs in Fabric 2, and via BL1/BL2 in Fabric 1]

On the ARP reply back to Fabric 1, the silent host is discovered.
Policy Consistency across Sites (L2 Extension)
Contract Relationship with Static Binding

[Diagram: logical "EPG extension" of WEB across the fabrics over VLAN 300 (BD-1) and of APP over VLAN 301 (BD-2); contract C1 in Fabric 1, contract C2 in Fabric 2]

• WEB1 EP is subject to the C1 policy when communicating with APP1 and APP2; WEB2 EP is subject to the C2 policy when communicating with APP1 and APP2; APP2 EP is subject to the C2 policy when communicating back to WEB1.
• Each fabric treats the remote endpoints as if they were locally connected (they are added to the local COOP database).
• Remote endpoint classification is performed with the static EPG-VLAN binding.
• The contracts on the two fabrics must be independently created and kept consistent: WEB1 EP should be subject to the same policy when accessing an EP in APP1 or APP2.

Inter-site Storm Control
Can be applied on ACI, on the DCI, or on both

• Storm control of ingress traffic on ACI is applied on the EPG static binding:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/kb/b_KB_Configuring_Traffic_Storm_Control_in_APIC.html

• Storm control of ingress traffic on the DCI:
interface port-channel2
  storm-control broadcast level 1.00
  storm-control multicast level 1.00
  storm-control unicast level 1.00
ACI Fabric Loop Protection
Applied per site independently

• Multiple protection mechanisms against external loops:
• LLDP detects direct loopback cables between any two switches in the same fabric.
• Mis-Cabling Protocol (MCP) is a new link-level loopback packet that detects an external L2 forwarding loop (supported from the 11.1 release).
  • MCP frames are sent on all VLANs on all ports.
  • If any switch detects an MCP packet arriving on a port that originated from the same fabric, the port is err-disabled.
• External devices can leverage STP/BPDU loop detection.
• MAC/IP move detection and learning throttling, with err-disable.

Interconnecting multiple ACI Fabrics using OTV
[Diagram: an ACI fabric + APIC at each site with EPG static bindings; ESX-DC1/DVS-DC1 and ESX-DC2/DVS-DC2 under vSphere/vCenter 6.0; Nexus 7700 OTV edge devices map the VLANs to the OTV overlay; Server 1 10.1.5.81, Server 2 10.1.5.92]
Interconnecting multiple ACI Fabrics using OTV
[Same topology as the previous slide]

OTV advantages:
● Spanning-tree isolation
● Unknown unicast traffic suppression
● ARP optimization
● Layer 2 broadcast policy control

OTV also offers a simple command-line interface (CLI), or it can easily be set up using a programming language such as Python.

ACI Dual Fabric with vSphere 6.0 for Cross-vCenter vMotion

WEST_OTVA:
feature otv
otv site-vlan 210
otv site-identifier 0001.0001.0001

interface Overlay1
  otv join-interface port-channel100
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/24
  otv extend-vlan 200-209
  no shutdown

interface port-channel100
  mtu 9216
  ip address 172.16.1.34/30
  ip igmp version 3

EAST_OTVA:
feature otv
otv site-vlan 210
otv site-identifier 0002.0002.0002

interface Overlay1
  otv join-interface port-channel100
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/24
  otv extend-vlan 200-209
  no shutdown

interface port-channel100
  mtu 9216
  ip address 172.16.1.26/30
  ip igmp version 3
Interconnecting multiple ACI Fabrics using VXLAN/EVPN
[Diagram: the same dual-fabric topology, but with Nexus 9300 switches in NX-OS mode as the DCI, mapping the EPG static-binding VLANs to VXLAN and running a VXLAN overlay with BGP-EVPN between the sites; Server 1 10.1.5.81, Server 2 10.1.5.92]
VXLAN as a DCI for Dual ACI Fabric
DCI Nexus 9300 running in NX-OS mode
• Transport the EPG VLANs over an L2 VNI to the remote site(s)
   vPC attachment circuit
   Anycast VTEP for VXLAN encapsulation
• L3 peering between the fabrics over the vPC attachment circuit
• DCI core
   Can be fiber or DWDM with IGP peering
   Can be a metro or WAN network with BGP peering
VXLAN as a DCI: Set-up

Overlay control plane (BGP peering with the remote fabric; EVPN populates MAC and MAC+IP per VNI):
router bgp 100
  neighbor 21.21.21.21
    remote-as 200
    update-source loopback0
    ebgp-multihop 10
    address-family l2vpn evpn
      send-community both
      route-map NEXT-HOP-UNCHANGED out

route-map NEXT-HOP-UNCHANGED permit 10
  set ip next-hop unchanged

evpn
  vni 31001 l2
    rd auto
    route-target import 100:31001
    route-target export 100:31001

Map the VLAN ID to an L2 VNI:
vlan 1001
  vn-segment 31001

Anycast VTEP:
interface loopback1
  ip address 11.11.11.12/32
  ip address 11.11.12.12/32 secondary

Overlay data plane (interface NVE; BUM via multicast or ingress replication):
interface nve1
  no shutdown
  source-interface loopback1
  host-reachability protocol bgp
  member vni 31001
    ingress-replication protocol bgp

Storm control of ingress traffic:
interface port-channel2
  storm-control broadcast level 1.00
  storm-control multicast level 1.00
  storm-control unicast level 1.00

Underlay: IGP if back-to-back links, eBGP if a metro/WAN core; multicast or unicast-only; BFD enabled.
ACI Multi-Fabric
Layer 3 connectivity between fabrics

http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-
infrastructure/white-paper-c11-737077.pdf
Cross Fabric L3 Extension
[Diagram: BD-5 (5.5.5.1/24) with L3Out-DCI in ACI Fabric 1 and BD-6 (6.6.6.1/24) with L3Out-DCI in ACI Fabric 2; fabric-to-fabric, per-tenant eBGP peering over the Layer 2 DCI (L2 VNI, OTV, or back-to-back vPC)]

• Not all EPGs have to be Layer 2 extended; some subnets are local to a single DC/fabric.
• L3 peering between the fabrics is required for route exchange.
• ACI supports multiple protocols, including iBGP, eBGP, and OSPF.
• The reference design uses eBGP, as it provides demarcation of the administrative domain and the option to manipulate routing policy.
• ACI supports Layer 3 dynamic routing protocol peering over vPC.

Policy Consistency across Sites (L3 Connectivity)
One EPG per BD with Unique IP Subnet

 Distributed application with L3 connectivity between the fabrics (EPG = BD = Subnet)
 Classification: the group policy ID is derived from the subnet (IP prefix to external EPG mapping)
 Policy: WEB1 EP should be subject to the C2 policy when accessing an APP2 EP

[Diagram: Fabric 1 with local BD1 (172.10.1.1/24, WEB1) and local BD2 (172.10.2.1/24, APP1); Fabric 2 with local BD3 (172.20.1.1/24, WEB2) and local BD4 (172.20.2.1/24, APP2); L3Outs toward the WAN; contracts C1 and C2 applied via the external EPGs Ext-WEB1, Ext-WEB2, Ext-APP1, and Ext-APP2]

External EPG mapping table, Fabric 1: 172.20.1.0/24 -> Ext-WEB2; 172.20.2.0/24 -> Ext-APP2
External EPG mapping table, Fabric 2: 172.10.1.0/24 -> Ext-WEB1; 172.10.2.0/24 -> Ext-APP1
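The prefix-to-external-EPG classification amounts to a longest-prefix match of the source IP against the mapping table; a toy sketch (table contents mirror Fabric 1's mapping, but this is illustrative code, not an ACI API):

```python
import ipaddress
from typing import Optional

# Fabric 1's external EPG mapping table from the slide.
EXT_EPG_MAP = {
    "172.20.1.0/24": "Ext-WEB2",
    "172.20.2.0/24": "Ext-APP2",
}

def classify(src_ip: str) -> Optional[str]:
    """Return the external EPG whose longest matching prefix contains src_ip."""
    ip = ipaddress.ip_address(src_ip)
    best, best_len = None, -1
    for prefix, epg in EXT_EPG_MAP.items():
        net = ipaddress.ip_network(prefix)
        if ip in net and net.prefixlen > best_len:
            best, best_len = epg, net.prefixlen
    return best

print(classify("172.20.1.55"))  # Ext-WEB2
```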

Multitenancy support
 WAN routers to WAN edge: multitenancy via MPLS L3VPN or VRF-lite
 ACI provides an L2 BD between the WAN edge and the firewall: OSPF per VRF on the router, one FW context per tenant
 ASA and ACI: OSPF within the ASA context, one L3Out per tenant in ACI
 Fabric 1 to Fabric 2: per-tenant L3Out eBGP peering over the Layer 2 DCI
ACI Multi-Fabric
Layer 3 connectivity outside the fabrics

http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-
infrastructure/white-paper-c11-737077.pdf
Perimeter Firewall Design using Active/Active ASA Cluster
Video of a demo available at https://youtu.be/Qn5Ki5SviEA

[Diagram: a dual-DC Cisco ASA cluster spanning both fabrics, with the cluster control link (CCL) and data VLANs extended over the DCI]

 ASA cluster inserted using IP routing, without a Service Graph.
 North-south communication goes through the local ASA units for IP subnets that are not stretched; ingress traffic from the WAN is routed to the DC where the non-stretched subnet resides, based on IP routing.
 Intra-cluster forwarding keeps symmetry for stretched IP subnets.
 The ACI fabric provides a Layer 2 BD on a dedicated vPC for the CCL VLAN, which is then extended via DCI to the other site.
 ASA cluster in routed mode with multiple contexts using individual interfaces.
 OSPF used as the routing protocol between the ASA units and the ACI fabric.

Routing Peering
(Diagram: ACI fabric 1 sits in AS100 and ACI fabric 2 in AS200; each site runs OSPF between the fabric and the local ASA units, and eBGP toward the WAN in AS300.)
Local Subnet to ACI 1
(Diagram: for a subnet local to ACI fabric 1, both ingress and egress traffic follow the site-1 path through the local ASA units and WAN edge.)
Local Subnet to ACI 2
(Diagram: for a subnet local to ACI fabric 2, both ingress and egress traffic follow the site-2 path.)
Stretched Subnet to both ACI
(Diagram: for a stretched subnet, egress traffic leaves through the local ASA units at each site.)
Stretched Subnet to both ACI
(Diagram: ingress traffic for a stretched subnet may arrive at the site that does not hold the firewall state for the flow; intra-cluster forwarding then redirects it to the owning unit.)
Stretched Subnet to both ACI
(Diagram: ingress traffic for a stretched subnet can land at either site, since both sites announce the subnet to the WAN.)
Perimeter Firewall Design using Active/Active ASA Cluster
Video of a demo available at https://youtu.be/Qn5Ki5SviEA
• Subnet A: available only in DC1 (10.100.14.0/24)
• Subnet B: available only in DC2 (10.200.14.0/24)
• Subnet C: stretched across and available in both data centers (10.1.4.0/24)
• Subnet D: represents an external Layer 3 destination in the WAN

• All 4 ASAs are grouped into a single logical unit. Every member of the cluster has the same configuration, is capable of forwarding every traffic flow, and can be active for all flows.
• Each firewall peers on its inside interface with the local ACI fabric using OSPF. On the outside interface, each firewall peers with the local WAN edge routers through the ACI fabric (the fabric performs only Layer 2 transport to enable the peering).
Perimeter Firewall Design using Active/Active ASA Cluster
Video of a demo available at https://youtu.be/Qn5Ki5SviEA
• Traffic from subnet A (DC1 only) leaving the fabric toward subnet D in the WAN uses one of the local ASA devices in DC1. This traffic uses the optimal forwarding path.
• Traffic from subnet B (DC2 only) leaving the fabric toward the WAN uses one of the local ASA devices in DC2. This traffic uses the optimal forwarding path.
• Traffic from stretched subnet C:
  • Traffic originating in DC1 uses one of the local ASA devices in DC1. This traffic uses the optimal forwarding path.
  • Traffic from devices currently located in DC2 uses one of the local ASA devices in DC2. This traffic uses the optimal forwarding path.
• Ingress traffic (WAN to DCs):
  • Ingress is optimized for the non-stretched subnets (A and B), because only the local units in DC1 (subnet A) or DC2 (subnet B) announce them to the WAN.
  • Ingress for the stretched subnet is not optimized.
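The ingress behavior above follows directly from which sites announce each prefix to the WAN. A small Python model (subnet values from the slide; the advertisement table is a simplification of the actual eBGP announcements):

```python
# Model of the eBGP announcements described above: which DCs advertise each
# subnet to the WAN determines where ingress traffic can land.
ADVERTISED_BY = {
    "10.100.14.0/24": {"DC1"},          # subnet A: DC1 only
    "10.200.14.0/24": {"DC2"},          # subnet B: DC2 only
    "10.1.4.0/24":    {"DC1", "DC2"},   # subnet C: stretched, both sites
}

def ingress_sites(prefix):
    """DCs where WAN traffic toward this prefix may enter."""
    return ADVERTISED_BY[prefix]

def ingress_optimized(prefix):
    """Ingress is deterministic (hence optimal) only with a single advertiser."""
    return len(ADVERTISED_BY[prefix]) == 1

print(ingress_optimized("10.100.14.0/24"))  # True: subnet A always enters DC1
print(ingress_optimized("10.1.4.0/24"))     # False: stretched subnet C
```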
ACI Multi-Fabric
Application Integration
http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/white-paper-c11-737077.pdf
vCenter Integration Models
VMM Integration with Live Migration between sites with vSphere 6
(Diagram: ACI Fabric 1 with vCenter Server 1 and DVS1 (VMM domain DC1), and ACI Fabric 2 with vCenter Server 2 and DVS2 (VMM domain DC2). EPG WEB (100.1.1.0/24) exists in both fabrics; VLANs 100 and 200 attach the ESX hosts to each fabric and are extended over the DCI, enabling live migration with vSphere 6.)

• One vCenter/DVS for each fabric, with VMM integration
• vSphere 6 Cross vCenter Server vMotion supported from APIC release 1.2(1i) and later
• Allows live migration between the two fabrics with an optimized default gateway
Cisco UCS Director
Policy (Configuration) synchronization between APIC Clusters
• UCS Director integrates with ACI by communicating with the APIC cluster.
• The user provides the IP of one controller; UCS-D discovers the other APICs.
• A single UCS-D instance communicates with two or more APIC clusters.
• UCS Director becomes the platform for provisioning Application Network Profiles, EPGs, Bridge Domains, etc.
• Changes performed directly on the APIC are discovered by UCS Director and reflected in the UCS-D object model; however, those configurations are not synchronized to the other APICs.
• To deploy applications, UCS-D creates the ACI objects in the multiple APIC clusters simultaneously.
• Approval (optional) is requested before executing the change on the APIC cluster(s).
• Support for Multi-Fabric is based on custom workflows.
• UCS-Director also automates the DCI devices.
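Creating the ACI objects in multiple APIC clusters simultaneously comes down to posting the same object tree to each cluster's REST API. A hedged sketch (fvTenant/fvAp/fvAEPg follow the APIC REST object format; the controller URLs and helper names are placeholders, and the actual HTTP POST and aaaLogin authentication are deliberately omitted so the sketch stays self-contained):

```python
import json

# Placeholder controller addresses, one per cluster; in practice UCS-D
# discovers the remaining APICs of each cluster from a single seed IP.
APIC_CLUSTERS = ["https://apic-dc1", "https://apic-dc2"]

def tenant_payload(tenant, app_profile, epgs):
    """Build an APIC-style JSON object tree: tenant -> app profile -> EPGs."""
    return {
        "fvTenant": {
            "attributes": {"name": tenant},
            "children": [{
                "fvAp": {
                    "attributes": {"name": app_profile},
                    "children": [
                        {"fvAEPg": {"attributes": {"name": epg}}}
                        for epg in epgs
                    ],
                }
            }],
        }
    }

def push_everywhere(payload):
    """Return the (URL, body) pairs that would be POSTed to each cluster.

    The HTTP call itself (preceded by aaaLogin authentication) is omitted.
    """
    body = json.dumps(payload)
    return [(f"{apic}/api/mo/uni.json", body) for apic in APIC_CLUSTERS]

posts = push_everywhere(tenant_payload("T1", "App1", ["web", "app", "db"]))
for url, _ in posts:
    print(url)
```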
Implementation of UCS-Director for Policy
Synchronization with Approval (Optional)

DC 1 DC 2
Implementation of UCS-Director for Policy
Synchronization with Approval
3 Tier Application Profile Across Both Sites
Validated Topology and Components with Software Versions
(Diagram: two-site test topology. Each DC has Nexus 9336 spines (Spine-201/202), Nexus 9396 leaf switches (Leaf-101/102/103), and a three-node APIC cluster; Nexus 9372 DCI switches, ASA 5585 firewalls, and ISR 2911 WAN routers interconnect the sites, with a branch VM client reaching both DCs over the WAN.)

Component                     Software Version
APIC                          1.2(1i)
Nexus 9000 ACI Leaf/Spines    n9000-11.2(1i)
ASA 5585                      9.5(1)
DCI Nexus 9300                NX-OS 7.0(3)I2(2a)

Versions later than the ones above also support the design presented.
Test Results Summary
ACI Dual-Fabric design has been validated

Test Case                                                            | On Failure | On Recovery
Link from ACI leaf 1 in DC1 to the local Nexus 9300 VXLAN DCI device | 320 ms     | 122 ms
Nexus 9300 VXLAN DCI device node failure                             | 390 ms     | 1529 ms
Peer link failure between the Nexus 9300 DCI devices                 | 735 ms     | 1593 ms
ASA cluster member failure (slave node in DC1)                       | 3255 ms    | 214 ms
ASA cluster member failure (master node)                             | 3947 ms    | 0 ms
ASA cluster member failure (slave node in DC2)                       | 3038 ms    | 0 ms
Customer edge router: link with ACI fabric failure                   | 3094 ms    | 20 ms
Customer edge router WAN link failure                                | 2745 ms    | 0 ms
Cisco ACI border leaf node failure                                   | 2494 ms    | 135 ms
Cisco ACI spine node failure                                         | 280 ms     | 0 ms

Numbers shown are the worst-case scenario; refer to the white paper for detailed test results.
Multi-Fabric Design Summary
                       | Stretched Fabric                     | Dual-Fabric with L2Out and L3Out        | Multi-POD
Management Domain      | Single                               | Multiple                                | Single
Distance               | Up to 10 msec RTT                    | No limit                                | Up to 10 msec RTT
HA and Fault Isolation | One HA domain                        | Total independence                      | Control-protocol isolation
L2 Extension           | End to end, built in (single fabric) | Yes, flood-and-learn overlay data plane | End to end
End-to-End Policy      | End to end, built in (single fabric) | Yes, with one EPG per BD                | Single APIC cluster
Scalability            | Same as one fabric                   | Border leaf scale                       | 300 nodes across 6 PODs
Summary
Solutions for ACI Dual-Site Deployment
• Provides active/active data centers
  • Business continuity
  • Workload mobility and better asset utilization
• Single APIC cluster / single management domain
  • Stretched fabric with dark fiber and private DWDM
  • Stretched fabric with EoMPLS for long distance or SP-managed circuits
  • Multi-POD (now in Q3CY16)
• Multiple APIC clusters / independent fabrics
  • Multi-Fabric with DCI (vPC, VXLAN, and OTV) with L2 and L3 extension
  • White paper published on CCO
Complete Your Online Session Evaluation
• Give us your feedback to be entered into a Daily Survey Drawing. A daily winner will receive a $750 Amazon gift card.
• Complete your session surveys through the Cisco Live mobile app or from the Session Catalog on CiscoLive.com/us.

Don't forget: Cisco Live sessions will be available for viewing on-demand after the event at CiscoLive.com/Online
Continue Your Education
• Demos in the Cisco campus
• Walk-in Self-Paced Labs
• Lunch & Learn
• Meet the Engineer 1:1 meetings
• Related sessions
Please join us for the Service Provider Innovation Talk featuring:
Yvette Kanouff | Senior Vice President and General Manager, SP Business
Joe Cozzolino | Senior Vice President, Cisco Services

Thursday, July 14th, 2016
11:30 am - 12:30 pm, in the Oceanside A room

What to expect from this innovation talk:
• Insights on market trends and forecasts
• Preview of key technologies and capabilities
• Innovative demonstrations of the latest and greatest products
• Better understanding of how Cisco can help you succeed

Register to attend the session live now or watch the broadcast on cisco.com
Thank you