
Deploying Nexus 7000 in Data Centre Networks
BRKDCT-2951

www.ciscolivevirtual.com
Session Abstract
Deploying Nexus 7000 in Data Centre Networks
This session is targeted to network administrators and operators who
have deployed or are considering the deployment of the Nexus 7000.
The session starts with an overview of the Nexus 7000 hardware
components. This is followed by the design options, implementation
and best practices for deployments of Nexus 7000 in the data
centres. The presentation will cover installation, layer-2 & layer-3 and
security features. The session will cover the NX-OS CLI, but
troubleshooting is not part of this presentation's scope.

BRKDCT-2951 2012 Cisco and/or its affiliates. All rights reserved. Cisco Public 2
Agenda
Hardware Overview
Design Motivation
Design Options
Implementation and Leading Practices

Hardware Overview
Nexus 7000 Series Flexibility and Scale
Highest 10GbE density in modular switching (Aug 11)

                          Nexus 7009             Nexus 7010     Nexus 7018
Height                    14 RU                  21 RU          25 RU
BW per slot with Fabric2  550 Gbps/slot          550 Gbps/slot  550 Gbps/slot
BW per slot with Fabric1  Ships only w/ Fabric2  230 Gbps/slot  230 Gbps/slot
Wire-rate 10GE density    336 ports              384 ports      768 ports
Power supplies            2                      Up to 3        Up to 4
Airflow                   Side to side           Front to back  Side to side
Deployment targets        Data centre,           Data centre    Large data centre
                          campus core
Supervisor Engine
Performs control plane and management functions
Dual-core 1.66GHz x86 processor with 8GB DRAM
2MB NVRAM, 2GB internal bootdisk, compact flash slots, USB
Console, aux, and out-of-band management interfaces
Interfaces with I/O modules via 1G switched EOBC
Houses dedicated central arbiter ASIC that controls VOQ
admission/fabric access via dedicated arbitration path to I/O modules

(N7K-SUP1 faceplate: status LEDs, ID LED, console and AUX ports, USB ports,
CMP Ethernet, management Ethernet, reset button, compact flash slots)
Nexus 7000 2nd Generation Fabric

Increases per-slot connectivity to 110 Gbps per fabric module
550 Gbps per-slot capacity when the chassis is fully loaded with five Fabric2 modules
Backward compatible with Gen 1 modules
Only one fabric version (1 or 2) is recommended in a chassis
Future-proofing for 40/100 G modules
Nexus 7000 I/O Module Evolution (just started shipping)
Deployment Options - M1/M2 Series I/O Modules

M1 Series Modules
- 48-port 10/100/1000 RJ45 - XL
- 48-port 1GE SFP - XL
- 32-port 10GE SFP+ - XL
- 8-port 10GE X2 - XL

M1 capabilities:
- Up to 80 Gbps local/fabric connectivity
- vPC, VDC, ISSU
- L2/L3 forwarding
- Large FIB/MAC/ACL tables, large buffers
- 512K sampled NetFlow
- MPLS, OTV
- FEX support

M2 Series Modules
- 6-port 40 GE QSFP+
- 2-port 100 GE (CFP pluggable optic)

M2 capabilities:
- Up to 550 Gbps fabric connectivity
- Requires NX-OS 6.1 and above
- No support for FEX on these line cards
Deployment Options - M1 XL I/O Modules (For Your Reference)

                   N7K-M108X2-12L    N7K-M148GS-11L    N7K-M132XP-12L    N7K-M148GT-11L
Description        8-port 10GE (X2)  48-port GE (SFP)  32-port 10GE      48-port 10/100/1000
                                                       (SFP+)            (RJ45)
Fabric connection  80G               46G               80G               46G
Software           5.0(2a) and later 5.0(2a) and later 5.1(1) and later  5.1(1) and later
Performance (IPv4) 2 FE (120M pps)   1 FE (60M pps)    1 FE (60M pps)    1 FE (60M pps)
Port-level
oversubscription   1:1               1:1               4:1               1:1
vPC peer-link
FEX support
Deployment Options - Comparison M1/M2 vs M1/M2-XL (For Your Reference)

Capability          M1/M2 Module  M1/M2-XL Module        M1/M2-XL Module
                                  (no Scalable License)  (with Scalable License)
MAC entries         128K          128K                   128K
FIB TCAM            128K          128K                   900K
Security / QoS ACL  64K           64K                    128K
IPv4 routes         128K          128K                   1M
IPv6 routes         64K           64K                    Up to 350K
NetFlow entries     512K          512K                   512K
Deployment Options - F Series I/O Modules

F1 Series Module
- 32-port 1/10GE SFP+
- System-on-Chip (SoC) forwarding engines
- 320 Gbps local, 230 Gbps fabric
- L2 forwarding
- vPC, FabricPath, FEX
- FCoE

F2 Series Modules (incremental capabilities)
- 48-port 1/10GE SFP+ and 40-port 1/10GBase-T
- 48 ports of 1/10 Gbps: up to 480 Gbps local, 480 Gbps fabric
- L3 forwarding with ACL and QoS
- FabricPath
- IEEE 1588
- Fabric Extender (FEX) support
- 16 SPAN sessions
- DCB/FCoE support
- Low latency, low power
- Can be used with Fab 1 or Fab 2
- F2 does not interoperate with other shipping modules; must deploy in an
  F2-only VDC
Deployment Options - M and F Comparison (For Your Reference)

Capability               M1 Series       F2 Series          M2 Series
10G line-rate ports      Max of 8 ports  48 1/10GE ports    6x 40GE or 2x 100GE ports
Fabric connection        80 Gbps         480 Gbps per slot  550 Gbps
                                         for L2 and L3
vPC and vPC+             Yes*            Yes                No
VLANs                    16K             4K                 16K
FIB, ACL and QoS tables  Large           Small              Large
MPLS, LISP and OTV       Supported       Not supported      Supported
FEX support              Yes             Yes                No
FabricPath support       No              Yes                No
FCoE                     No              Yes                No
F2 cards - FAQ
Q: Will the F2 module work in the same VDC with the M1/F1 module?

A: The F2 module cannot work in the same VDC as M1/F1.

Q: Will F2 ever support OTV, LISP and MPLS?

A: The F2 module does not have the hardware capabilities to support these
functionalities.

Fabric Extenders (FEX)

- Nexus 2000 (FEX) can be considered as a remote I/O module for the Nexus 7000
- Provides high-density GE connectivity
- Supports hybrid ToR and EoR network architectures: the FEX physically resides
  on top of each server rack but logically acts as an end-of-row access device
- Reduced power consumption / CapEx / OpEx
- Single point of management: no configuration and software on the FEX

(Physical view: parent switch fabric ports connect to the FEX uplinks over a
fabric port-channel; servers attach to the FEX host interfaces. Logical view:
FEX ports appear as ports of the parent switch.)

FEX model                 SW                Per system
2248TP-1GE, 2248TF-1GE    5.1(1) and later  32
2224TP-1GE, 2224TF-1GE,
2232PP-10GE, 2232PF-10GE  5.2 and later     32
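A minimal NX-OS sketch of attaching a FEX to a Nexus 7000 parent follows the
model above (FEX number, interfaces and description are example values; 10GE
fabric uplinks are assumed):

```
install feature-set fex
feature-set fex
!
fex 101
  description rack1-top
!
! Fabric uplinks to the FEX run as a port-channel
interface ethernet 1/1-2
  channel-group 101
  no shutdown
!
interface port-channel 101
  switchport
  switchport mode fex-fabric
  fex associate 101
```

Once the FEX comes online, its host interfaces appear on the parent switch as
ethernet 101/1/x and are configured there, with no software or configuration
on the FEX itself.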
Design Motivations
Virtual Device Contexts (VDCs) - Design
Motivations
Provides logical separation of control-plane, data-plane, management,
resources, and system processes within a physical switch
Consolidate and support multiple business units, departments, and networks
- BU1/App1, BU2/App2; Web, App, Database
- Production, OOB mgmt, Development, Test
- Customer A, Customer B, Customer C
Provide network segmentation to meet security compliance requirements
- Internet, Extranet, DMZ, Intranet
- Non-Secured, Secured, PCI
Implement logical tier design
- Core, Aggregation and Access as separate VDCs
Overlay Transport Virtualisation (OTV) -
Design Motivations
Simplifying Data Centre Interconnect
OTV is a MAC-in-IP technique supporting Layer 2 VPNs over any transport
Extend Layer 2 between several pods/sites over IP
Simple configuration, does not require full mesh of pseudo-
wires
Site independence, STP isolation
No unknown unicast flooding
(OTV edge devices at DC1 and DC2 extend the L2 domain across the L3 core)
Virtual Port-Channel (vPC) - Design Motivations
- Provides multi-chassis EtherChannel capability (L2 port-channel only)
- Eliminates STP blocked ports and reduces STP complexity
- Uses all available uplink bandwidth
- Enables dual-homed servers to operate in active-active mode
- Provides fast convergence upon link/device failure
- Double-sided vPC can be run between aggregation and access pairs

Software version    Number of vPCs
Pre-4.2 release     196
4.2(1) and later    256
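A hedged sketch of the basic vPC building blocks on a Nexus 7000 pair (domain
ID, keepalive addresses and port-channel numbers are example values):

```
feature vpc
!
vpc domain 10
  peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf management
!
! vPC peer-link between the two peer switches
interface port-channel 1
  switchport mode trunk
  vpc peer-link
!
! Downstream vPC towards an access switch or dual-homed server
interface port-channel 20
  switchport mode trunk
  vpc 20
```

A mirrored configuration (same domain and vPC numbers) is applied on the peer
switch.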
FabricPath - Design Motivations
- Connect anywhere using an arbitrary topology; the fabric uses the best path
- Connect devices to any switch in the fabric
- A single MAC table lookup at the fabric edge identifies the fabric exit point
- Minimised switch hops and end-to-end latency
- High performance with parallel bandwidth: up to 256 active paths between any
  two devices
- Scalable and resilient: no STP, SPF-based routing implementation, and traffic
  is evenly redistributed in failure cases

(Edge MAC tables: the ingress switch learns host A on local port e1/1 and host
B behind remote switch S8, e1/2; the egress switch holds the mirror image,
with A behind S1, e1/1 and B on local port e1/2.)
FabricPath
- Switch ID space: routing decisions are made based on the FabricPath routing
  table (e.g., S300's FabricPath routing table lists paths to S100 and S200
  through the spine switches S10-S40)
- MAC address space: switching is based on Classical Ethernet (CE) MAC address
  tables
- The association MAC address/Switch ID is maintained at the edge
- Traffic is encapsulated across the fabric
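The two address spaces above are enabled with only a few commands; a sketch
(switch-id, VLAN range and interface are example values):

```
install feature-set fabricpath
feature-set fabricpath
!
fabricpath switch-id 300        ! routable Switch ID used in the FP routing table
!
vlan 100-150
  mode fabricpath               ! CE VLANs carried across the fabric
!
interface ethernet 1/1
  switchport mode fabricpath    ! core port: forwards on Switch IDs, no MAC learning
```

Edge ports towards hosts stay as Classical Ethernet switchports, where the
MAC-address-to-Switch-ID association is maintained.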
POD in a BOX - Design Motivations
N7000 with N2000
Large scale but reduced management end points
Extend to L3 boundary without resorting to any L2 switch interconnect
Single device to manage (single NX-OS image)
High-end data centre feature set including redundancy, VDCs, ISSU, large
tables, SVIs, 4K VLANs, etc.


Design Options
Data Centre Design Example 1
Large Data Centre utilising 3-Tier DC design
Nexus 7000 in core and aggregation
10GE/GE ToR and GE MoR access layer switches
Implement vPC / double-sided vPC to eliminate L2 loops and to support
active/active server connections

(Topology: Core1/Core2 form the L3 core over L3 links and port-channels;
aggregation pairs agg1a/agg1b ... aggNa/aggNb run vPC at the L3/L2 boundary;
access switches dual-attach via vPC, with servers connected active/active or
active/standby.)
Data Centre Design Example 2
Large Data centre utilising 3-Tier DC design
Nexus 7000 in core and aggregation, Nexus 5000 and Nexus 2000 in
access layer
Implement vPC / double-sided vPC to eliminate L2 loops
Two different vPC redundancy models can be utilised to support
active/active or active/standby server connections
(Topology: Core1/Core2 form the L3 core; aggregation pairs agg1a/agg1b ...
aggNa/aggNb run vPC at the L3/L2 boundary; Nexus 5000/2000 access attaches via
double-sided vPC, with servers connected active/standby or active/active.)
Data Centre Design Example 3
Large Data Centre utilising 3-Tier DC design
Nexus 7000s in Core and Aggregation
Utilise VDCs in aggregation layer to create a non-secured zone and a
secured zone
10GE/GE ToR and GE MoR access layer switches
Implement vPC / double-sided vPC to eliminate L2 loops and to support
active/active server connections

(Topology: Core1/Core2 form the L3 core; each aggregation pair SW-1a/SW-1b and
SW-2a/SW-2b hosts a non-secured VDC2 and a secured VDC3, each running vPC
towards its access switches; servers attach active/standby or active/active.)
Data Centre Design Example 4
Small Data centre with a virtualised 3-Tier DC design
Utilise VDCs on a single device to create a core and aggregation layer
GE and 10GE ToR access layer switches
Implement vPC / double-sided vPC
(Topology: a single Nexus 7000 pair SW-1a/SW-1b hosts a core VDC (VDC2) at L3
and an aggregation VDC (VDC3) at the L2/L3 boundary; access switches attach via
vPC, with servers active/standby or active/active.)
Data Centre Design Example 5
OTV at the aggregation with a dedicated VDC
STP and unknown unicast domains isolated between PODs
Intra-DC and inter-DC LAN extension provided by OTV
Ideal for single aggregation block topology

Recommended for Greenfield


(Topology: each aggregation switch hosts a dedicated OTV VDC alongside its
SVIs; the OTV VDC's internal interface faces the aggregation VDC, its join
interface faces the core, and the virtual overlay interface extends the VLANs
between the vPC-connected PODs and remote sites.)
Data Centre Design Example 6
OTV at the Aggregation with L3 boundary on the Firewalls
The Firewalls host the Default Gateway
No SVIs at the Aggregation Layer
No Need for the OTV VDC

(Topology: OTV runs directly at the aggregation layer facing the core; the
firewalls below the aggregation host the default gateway, placing the L3/L2
boundary on the firewalls.)
Data Centre Design Example 7
FabricPath high-level design option with centralised routing at aggregation
The aggregation serves as the FabricPath spine as well as the L2/L3 boundary.
Benefits include removal of STP, traffic distribution across all links,
active/active gateway, and any VLAN anywhere at the access layer.

(Topology: L3 core above; aggregation pair as L2/L3 boundary; FabricPath from
aggregation down to access.)
Data Centre Design Example 8
FabricPath design with centralised aggregation services
- The FabricPath spine provides a transit fabric (no routing, no MAC learning);
  FabricPath core ports are provided by F1 or F2 modules
- Run vPC+ for active/active HSRP
- All VLANs are available at all access switches
- SVIs for all VLANs reside on an L3 services switch pair (provided by M1 or
  F2 modules); HSRP runs between the L3 services switches for FHRP, one active
  and one standby
Data Centre Design Example 9
FabricPath - distributed routing with selective VLAN extension
- vPC+ for active/active HSRP at each access switch pair
- A dedicated VLAN is used for transit routing
- Most VLANs are terminated directly at the access switch pair; some VLANs are
  extended into FabricPath
- The L2/L3 boundary for extended VLANs can follow the Routing at Aggregation
  or Centralised Routing design options

(Topology: racks 1-6 terminate VLANs 10-50 on SVIs at their access pairs,
active/standby HSRP per pair; an extended VLAN such as VLAN 30 and the transit
SVI 100 span the FabricPath fabric, with Layer 2 CE below the access switches
and Layer 2 FabricPath above.)
Implementation and
Leading Practices

Hardware and Software Installation


Software Licensing
- Features are enabled by installing individual licenses or by enabling the
  license grace period (120 days); the grace period is not recommended
- License installation is non-disruptive to features already running under the
  grace period
- Back up the license file after the license is installed
- The system generates periodic syslog, SNMP or Call Home messages while
  running under the grace period

Feature             License
OSPF, EIGRP, BGP    Enterprise LAN
CTS, VDC            Advanced LAN
M1-XL TCAM          Scalable Feature
OTV                 Transport Services
FabricPath          Enhanced L2 Package

Nexus7K# show license usage
Feature                      Ins  Lic Count  Status  Expiry Date     Comments
-----------------------------------------------------------------------------
LAN_ADVANCED_SERVICES_PKG    Yes  -          In use  Never           -
LAN_ENTERPRISE_SERVICES_PKG  No   -          In use  Grace 119D 22H
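A sketch of installing, verifying and backing up a license (the server address
and filenames are example values):

```
Nexus7K# copy scp://admin@10.0.0.10/n7k_lan_enterprise.lic bootflash:
Nexus7K# install license bootflash:n7k_lan_enterprise.lic
Nexus7K# show license usage
! back up all installed licenses to a single archive
Nexus7K# copy licenses bootflash:license_backup.tar
```

Keeping the backup archive off-box avoids re-requesting license files if a
supervisor has to be replaced.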
Software Upgrade
Synchronise the kickstart image with the system image
Utilise cold start upgrade procedure for non-production devices
Nexus7K(config)# boot system bootflash:<system-image>
Nexus7K(config)# boot kickstart bootflash:<kickstart-image>
Nexus7K# copy run startup-config
Nexus7K# reload

Utilise install all to perform ISSU with zero service interruption
Issue show install all impact to determine upgrade impact
Nexus7K# install all kickstart bootflash:<kickstart-image> system bootflash:<system-image>

Refer to release notes and installation guide


Avoid disruption to the system during ISSU upgrade (STP topology
change, module removal, power interruption, etc)

EPLD Upgrade
- EPLD upgrade is used to enhance HW functionality or to resolve known issues
- EPLD upgrade is an independent process from software upgrade and is not
  dependent on NX-OS
- EPLD upgrade is typically not required
- Performed on all field-replaceable modules
- In a redundant configuration, requires reload of I/O modules

Nexus7K# show version module 3 epld

EPLD Device          Version
--------------------------------
Power Manager        4.008
IO                   1.016
Forwarding Engine    1.006
FE Bridge(1)         186.006
FE Bridge(2)         186.006
Linksec Engine(1)    2.006
---deleted---
Linksec Engine(8)    2.006
EPLD Upgrade Best Practices
Upgrade to the latest EPLD image prior to bringing hardware into
production environment (staging HW, replacement HW, etc)
Only use install all epld on non-production systems
Nexus7K# install all epld bootflash:<EPLD_image_name>

When performing supervisor EPLD upgrade for a system with dual-sup,


first upgrade the standby supervisor, then switchover and upgrade
previous active supervisor
Make sure the EPLD image is on both supervisors' bootflash
Nexus7K# install module <module> epld bootflash:<EPLD_image_name>

In a redundant system, only EPLD upgrade for I/O modules can disrupt
traffic since the module needs to be power-cycled

Hardware Installation Considerations
Two supervisors for high availability and ISSU
Two M1 modules in mixed mode chassis (M1/F1)
A minimum of three fabric modules to provide N+1 redundancy for all
M1/M1-XL I/O modules
Use five 2nd generation fabric modules for full performance from F2 I/O
modules
Perform chassis / system grounding
Perform additional diagnostics on staged devices before production
Configure complete boot-up diagnostic level (default)
Administratively shut down all ports to run the PortLoopback test overnight
Power-cycle after burn-in period to perform boot-up diagnostic
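The staging steps above map onto the GOLD diagnostics commands; a sketch
(module number is an example value):

```
Nexus7K(config)# diagnostic bootup level complete   ! default: full boot-up diagnostics
Nexus7K# show diagnostic bootup level
!
! with the ports administratively shut, run the tests on demand during burn-in
Nexus7K# diagnostic start module 3 test all
Nexus7K# show diagnostic result module 3
```

After the burn-in period, the power-cycle re-runs the complete boot-up
diagnostic level before the device enters production.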
Supervisor Memory - 8G Memory Upgrade Guidelines
Prior to release 5.1, supervisor shipped with 4G Memory
From release 5.1 onwards, supervisors shipped with 8G memory
Determine if 8G memory upgrade is necessary based on the software
version and the software features enabled on the system
Nexus7K# show system resources
----deleted----
Memory usage: 4115776K total, 2793428K used, 1322348K free

The 8G upgrade is:
- Recommended if more than 3 GB of system memory is used
- Required if more than 3 VDCs
- Required if more than 1 VDC in XL mode
- Required if more than 2 VDCs with FabricPath or FEX features enabled

Part number for the 8 GB memory upgrade kit: N7K-SUP1-8GBUPG=
Network Access
- Leverage a dedicated OOB management network (management VRF)
- Allow only SSH remote access (default)
- Secure network access (mgmt0, CMP and VTY) with access-lists (ACLs)
- VTY ACL is supported in NX-OS 5.1
- CoPP (Control Plane Policing) does not protect access from the mgmt0
  interface
- Restrict SNMP access with an ACL
- Configure exec-timeout for VTY and console access
- Configure maximum sessions for VTY access

(Console access is via terminal servers; mgmt0 and CMP on each supervisor sit
in the management VRF behind ACLs; inband VTY access in the default VRF is
protected by the VTY ACL and CoPP.)
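A sketch of restricting VTY access (ACL name, management subnet, timeout and
session limit are example values; as noted above, the VTY ACL requires NX-OS
5.1 or later):

```
ip access-list VTY-ACCESS
  permit tcp 10.10.10.0/24 any eq 22   ! SSH only, from the management subnet
  deny ip any any
!
line vty
  access-class VTY-ACCESS in
  exec-timeout 15
  session-limit 3
```

A similar ACL applied to mgmt0 covers the path that CoPP does not protect.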
Useful Commands
Utilise terminal dont-ask to disable prompting when copy & pasting configuration
Nexus7k(config-if)# switchport trunk allowed vlan 300
This will cause VLANS to be overwritten. Continue anyway? [yes] int e1/3
input string too long
Nexus7k(config-if)# terminal dont-ask
Nexus7k(config-if)# switchport trunk allowed vlan 300
Nexus7k(config-if)# int e1/3
Make sure to remove terminal dont-ask when you are done with copy & paste:
Nexus7k(config-if)# no terminal dont-ask
Utilise show CLI syntax / CLI list to identify available commands
Nexus7K# show cli syntax | i spanning-tree
(788)[ no ] debug spanning-tree all
---deleted---
Leverage CLI alias to replace frequently used commands and actions
Nexus7K(config)# cli alias name wri copy run start

Utilise show hardware capacity to determine the system capacity for capacity
planning

Implementation and
Leading Practices

VDC Implementation
Virtual Device Contexts (VDCs)
- VDCs provide logical separation of control-plane, data-plane, management,
  resources, and system processes (e.g., VDC1 default, VDC2 Secure-Net, VDC3
  Non-Secure, ...)
- Support up to 4 separate VDCs with common supervisor module(s)
- All ports in the same port-group on the 32-port 10GE modules (M1/M1-XL and
  F1) must be allocated to the same VDC
- Ports on F2 line cards cannot be part of the same VDC as M1/F1 ports

32-port M1/M1-XL   Port-Group   Port Number
                   1            1, 3, 5, 7
                   2            2, 4, 6, 8
                   ...          ...
                   8            26, 28, 30, 32

32-port F1         Port-Group   Port Number
                   1 (SoC1)     1, 2
                   2 (SoC2)     3, 4
                   ...          ...
                   16 (SoC16)   31, 32

48-port F2         Port-Group   Port Number
                   1 (SoC1)     1, 2, 3, 4
                   2 (SoC2)     5, 6, 7, 8
                   ...          ...
                   12 (SoC12)   45, 46, 47, 48

Nexus7K(config)# vdc secure-net id 2
Nexus7K(config-vdc)# allocate interface e2/1,e2/3,e2/5,e2/7
Nexus7K(config-vdc)# allocate interface ..
Nexus7K(config-vdc)# exit
Nexus7K(config)# switchto vdc secure-net
---- System Admin Account Setup ----
Virtual Device Contexts (VDCs)
Implementation considerations (1)
- For highly available environments, reserve the default VDC as an admin VDC
  and strictly use it for the administration of the other VDCs (e.g., a
  provider admin maps to Network-Admin, while each customer admin maps to
  VDC-Admin for their own VDC)
- This is not necessary if all VDCs are under the same administrative control
- When the default VDC is utilised as a traffic-forwarding VDC, restrict
  access to the appropriate privilege level

User Role          Default Privileges
Network-Admin      Read/write access for all VDCs
VDC-Admin          Read/write access for a given VDC
Network-Operator   Read access for all VDCs
VDC-Operator       Read access for a given VDC

Nexus7K(config)# switchto vdc pod5
% Permission denied
(logged in as VDC-Admin and denied access to other VDCs)
Virtual Device Contexts (VDCs)
Implementation considerations (2)
- The VDC HA policy determines the action the physical device takes when the
  VDC encounters an unrecoverable event
- The VDC HA policy for a test/lab VDC on a dual-supervisor system can be
  configured to bringdown to avoid a supervisor switchover

VDC              Supervisors   Default HA Policy
Default VDC      Single sup    Reload sup
                 Dual sup      Switchover
Non-default VDC  Single sup    Restart VDC
                 Dual sup      Switchover

Nexus7K(config)# vdc test
Nexus7K(config-vdc)# ha-policy dual-sup bringdown single-sup restart
Virtual Device Contexts (VDCs)
Implementation considerations (3)
Global       Resources that can only be allocated, set, or configured globally
Resources    for all VDCs from the master VDC, e.g., boot image configuration,
             Ethanalyzer session, SPAN, CoPP, etc.

Dedicated    Resources that are allocated to a particular VDC, for example
Resources    L2 and L3 ports, VLANs, IP address space, etc.

Shared       Some resources are shared between VDCs, for example the OOB
Resources    Ethernet management port.

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/ps9512/White_Paper_Tech_Overview_Virtual_Device_Contexts.html

Certain resources can be allocated and limited to a given VDC:
  m4route-mem                 Set ipv4 route memory limits
  m6route-mem                 Set ipv6 route memory limits
  module-type                 Controls which type of modules are allowed in this vdc
  monitor-session             Monitor local/erspan-source session
  monitor-session-erspan-dst  Monitor erspan destination session
  port-channel                Set port-channel limits
  u4route-mem                 Set ipv4 route memory limits
  u6route-mem                 Set ipv6 route memory limits
  vlan                        Set VLAN limits
Virtual Device Contexts (VDCs)
Implementation considerations (4)
- By default, a VDC allows a mix of F1, M1 and M1-XL I/O modules
- If the same VDC has both M1 and M1-XL modules, the system operates in the
  least-common-denominator mode
- Customise the VDC resource-limit module-type as needed (for example, allow
  only M1-XL modules in an internet-facing VDC)
- F2 modules cannot be mixed with F1, M1 and M1-XL I/O modules (F2-only VDC)

VDC Resource-Limit   Default
u4route-mem          96/96 MB
Module type          All (F1, M1, M1-XL)

Nexus7K(config)# vdc inet
Nexus7K(config-vdc)# limit-resource module-type m1xl
Virtual Device Contexts (VDCs)
Implementation considerations (5)
- Interfaces in the port-groups of 10G M1 and F1 I/O modules cannot be shared
  across VDCs
- For the 48-port I/O modules, if ports in the same port-group are shared
  between different VDCs, then during a reload of one VDC, ports in other VDCs
  sharing the port-group might experience brief traffic disruptions (1 to 2
  seconds)
- It is recommended to allocate all ports in the same port-group on the
  48-port I/O modules to the same VDC

M1 48-port LC   Port-Group   Port Number
                1            1 - 12
                2            13 - 24
                3            25 - 36
                4            37 - 48

Nexus7K(config)# vdc <VDC-name1> id 2
Nexus7K(config-vdc)# allocate interface e2/1-12
Nexus7K(config)# vdc <VDC-name2> id 3
Nexus7K(config-vdc)# allocate interface e2/13-24
Implementation and
Leading Practices

Layer 2/Layer 3 Features


VLAN Trunking Protocol (VTP)
- VTP OFF mode is recommended (default):
  Nexus7K(config)# no feature vtp
- When utilising VTP transparent mode to extend a VTP domain through the
  aggregation switches, VLAN 1 must be allowed on the trunks so that VTP
  packets are forwarded
- VTP server/client mode (v1/v2) is supported in NX-OS 5.1; the default VTP
  mode in NX-OS 5.1 is server/client if the VTP feature is enabled:
  Nexus7K(config)# feature vtp
  Nexus7K(config)# vtp mode transparent
  Nexus7K(config)# vtp domain <name>
- Internal VLANs (3968 - 4047, 4094) are reserved; internal VLAN assignment is
  supported in the NX-OS 5.2 release to resolve conflicts with existing VLANs

(Topology: agg1a/agg1b in Off or Transparent mode sit between the VTP server
Acc1 and the VTP client Acc2; in transparent mode, VLAN 1 must be allowed on
the trunks for VTP packets to pass.)
Port-Channel
- Port-channels with all M1/M1-XL member interfaces are supported, as are
  all-F1 port-channels
- A port-channel with M1 ports on one side and F1 ports on the other side is
  supported
- Mixing M1/M1-XL and F1 interfaces in a single port-channel is not allowed
- F1 LCs support up to 16 active member ports; M1 LCs support 8 active member
  ports in a port-channel
- Load balancing can be modified in the default VDC, globally or per module:

Nexus7K(config)# port-channel load-balance ethernet <lb-method>
Nexus7K(config)# port-channel load-balance ethernet <lb-method> module <mod>
Nexus7K# show port-channel load-balance
Nexus7K# show port-channel load-balance forwarding-path interface
  port-channel 1 src-ip 1.1.1.1 dst-ip 2.2.2.2 vlan 2 module 3
Missing params will be substituted by 0's.
Module 3: Load-balance Algorithm: source-dest-ip-vlan
RBH: 0x7 Outgoing port id: Ethernet3/3
Port-Channel
- Understand port-channel failure behaviors: BW and IGP cost for L3 channels
  are recalculated when a member fails, while STP cost for L2 channels does
  not get recalculated
- Recommended to configure a static IGP cost on L3 channels if the default
  convergence behavior is not desired
- ECMP can be used instead of an L3 port-channel (this will increase the
  number of routing peers)
- Utilise LACP to negotiate both L2 and L3 port-channels
- Implement the normal LACP timer (default)
- Implement port-channels with a power-of-2 number of active members for
  optimal traffic distribution

(Diagram: a module failure halves an L3 channel between Aggr1a and the cores;
with ECMP, all links keep a cost of 100 regardless of the lost bandwidth.)
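An LACP-negotiated L3 port-channel with a pinned IGP cost might look like this
sketch (interface range, addressing and cost are example values):

```
feature lacp
!
interface ethernet 1/1-2
  channel-group 10 mode active   ! LACP negotiation on both members
  no shutdown
!
interface port-channel 10
  no switchport
  ip address 10.1.1.1/30
  ip ospf cost 10                ! static cost: a member failure no longer re-costs the channel
```

Pinning the cost trades slower IGP reaction for stable paths when a single
member fails.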
Unidirectional Link Detection (UDLD)
Tx Tx
UDLD is a Layer 2 protocol that detects
Rx Rx
and disables one-way connections
UDLD has two modes of operation: normal and aggressive
Normal: errors are detected by examining the incoming UDLD packets from the
peer port
Aggressive: the port is put into err-disable state upon sudden cessation of
UDLD packets
Recommendation is to enable UDLD normal mode globally
Enabling the UDLD feature is equivalent to configuring UDLD normal mode globally
Nexus7K(config)# feature udld
Default message timer is recommended
Interior Gateway Protocol (IGP)
Leading Practices (1)
Enable NSF/graceful restart
- OSPF IETF NSF is enabled by default in NX-OS. To ensure proper OSPF graceful
  restart and NSF operation with IOS neighbors (e.g., a Catalyst 6500 peering
  with the Nexus 7000), implement a consistent OSPF IETF NSF mode in the
  routing domain:
  IOS(config-router)# nsf ietf

The default OSPF reference BW in NX-OS is 40G, while the default in IOS is
100M. To ensure proper route selection, it is recommended to:
- Configure a consistent OSPF auto-cost reference bandwidth in the routing
  domain
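Making both recommendations concrete, a sketch of matching settings on each
side (the process tag and the 100G reference value are example choices):

```
! NX-OS side (reference bandwidth defaults to 40 Gbps)
router ospf 1
  auto-cost reference-bandwidth 100 Gbps
!
! IOS side (reference bandwidth defaults to 100 Mbps; value given in Mbps)
router ospf 1
 auto-cost reference-bandwidth 100000
 nsf ietf
```

With mismatched reference bandwidths, a 10GE and a 1GE link can end up with
the same cost on one platform but not the other, skewing route selection.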
Interior Gateway Protocol (IGP)
Leading Practices (2)
- Use default IGP timers in a dual-sup system to avoid unnecessary convergence
  with supervisor failover
- Leverage Bidirectional Forwarding Detection (BFD) for fast failure detection
- Advantages of BFD vs. lower routing protocol timers:
  - Reduced control-plane load and link bandwidth usage (BFD is performed by
    the I/O modules)
  - Sub-second failure detection
  - Stateful restart / ISSU
  - Distributed implementation: hellos are sent from the I/O module
- Disable ICMP redirects on all interfaces running BFD sessions using
  'no ip redirects'

Nexus7K(config)# feature bfd
Nexus7K(config)# int e1/1
Nexus7K(config-if)# no ip redirects
Nexus7K(config-if)# ip ospf bfd
(example - enable BFD only on a specific interface)
General Layer-3 Features
Leading Practices
- Configure extended hold timers for HSRP to support NSF during ISSU and
  supervisor switchover
- Don't configure sub-second FHRP timers on a dual-sup system; hello (1s) and
  hold (3s) timers are recommended, and aggressive timers are not necessary
  with vPC
- Configure HSRP preemption delay
- Disable IP proxy ARP to prevent forwarding issues with malfunctioning
  servers (default)
- Configure 'no ip redirects' to disable the supervisor from generating ICMP
  redirects

feature hsrp
feature interface-vlan
!
vlan <vlan>
!
hsrp timers extended-hold <time>
!
interface vlan <vlan>
  description <description>
  no shutdown
  no ip redirects
  ip address <address>/<mask>
  hsrp <group>
    authentication <text>
    preempt delay minimum 180
    priority 110
    timers 1 3
    ip <hsrp address>
BRKDCT-2951 2012 Cisco and/or its affiliates. All rights reserved. Cisco Public 63
Implementation and Leading Practices
OTV Implementation
OTV Terminology
- Edge Device (ED): connects the site to the (WAN/MAN) core; responsible for performing all the OTV functions
- Authoritative Edge Device (AED): elected ED that performs traffic forwarding for a set of VLANs
- Internal interfaces: interfaces of the ED that face the site
- Join interface: interface of the ED that faces the core
- Overlay interface: logical multi-access, multicast-capable interface that encapsulates Layer 2 frames in IP unicast or multicast headers

(Figure: an ED with internal interfaces on the L2 site side, a join interface toward the L3 core, and the overlay interface on top.)
OTV and SVI Routing
- On Nexus 7000, a given VLAN can either be associated with an SVI or extended using OTV
- This would theoretically require a dual-system solution
- The VDC feature allows deploying a dual-VDC solution instead:
  - OTV VDC as an appliance
  - Single L2 internal interface and single Layer 3 join interface

(Physical view: N7K-1 and N7K-2 each host a default VDC plus an OTV VDC, interconnected by an L2 link and an L3 link.)
OTV - Configuration considerations
Internal Interfaces
- Regular Layer 2 interfaces, supported on F1 and M1 cards
- The OTV internal interfaces should carry the Layer 2 VLANs and the site VLAN
- Use port-channels for higher resiliency
- OTV adds an 8-byte shim to the header; it is recommended to increase the MTU size of all the interfaces along the path
- Recommend configuring Root Guard on downstream interfaces leading to the access switches (and to the OTV VDC internal interfaces)

Join Interface
- Use a point-to-point routed interface. Supported only on M1 line cards, not F1 and F2
- Use only one join interface (physical or logical) per overlay
- Use a different join interface per overlay
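Putting the considerations above together, an internal and a join interface might be configured roughly as follows. This is a sketch: the interface numbers, VLAN ranges, addressing and MTU value are assumptions for illustration only:

```
! Internal interface: L2 trunk carrying the extended VLANs plus the site VLAN
interface port-channel10
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 99,100-150

! Join interface: point-to-point routed interface on an M1 card,
! MTU raised to absorb the OTV encapsulation overhead
interface ethernet 1/1
  mtu 9216
  ip address 10.10.10.1/30
  no shutdown
```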
OTV - Configuration considerations
- Site-id is mandatory
  - If not configured, no overlay will come up (it is not generated by default)
  - This holds true for single-homed sites as well
- Edge Devices in the same site MUST be configured with the same site-id
  - If a mismatch is detected, all overlays will come down until this error is fixed
  - Site-id 0 is not acceptable
- Allow the site VLAN through to at least one dual-attached access switch so as to prevent taking down the site VLAN when rebooting one of the aggregation switches
- VLANs are split between local OTV Edge Devices as long as:
  - At least one adjacency is up between them, AND
  - All Edge Devices advertised themselves as AED capable
OTV Control Plane
MAC Address Advertisements (Multicast-Enabled Transport)
- When an Edge Device learns a new MAC address, it advertises it together with its associated VLAN ID and the IP address of the join interface
- A single OTV update can contain multiple MACs from different VLANs
- With a multicast-enabled transport, a single update reaches all neighbors

(Figure: 1. New MACs A, B and C are learned on VLAN 100 at West (IP A). 2. West sends one OTV update. 3. The update is replicated by the core. 4. East and South install VLAN 100 entries for MAC A, B and C, all pointing to IP A.)
OTV Configuration
OTV over a Multicast Transport
Minimal configuration required to get OTV up and running:

West (IP A):
feature otv
otv site-identifier 0x1*
otv site-vlan 99
interface Overlay100
  otv join-interface e1/1
  otv control-group 239.1.1.1
  otv data-group 232.192.1.0/24
  otv extend-vlan 100-150

East (IP B):
feature otv
otv site-identifier 0x3*
otv site-vlan 99
interface Overlay100
  otv join-interface e1/1.10
  otv control-group 239.1.1.1
  otv data-group 232.192.1.0/24
  otv extend-vlan 100-150

South (IP C):
feature otv
otv site-identifier 0x2*
otv site-vlan 99
interface Overlay100
  otv join-interface Po16
  otv control-group 239.1.1.1
  otv data-group 232.192.1.0/24
  otv extend-vlan 100-150

*Introduced from release 5.2
OTV Control Plane (Release 5.2 and above)
Neighbor Discovery (Unicast-Only Transport)
- Ideal for connecting a small number of sites
- With a higher number of sites, a multicast transport is the best choice

Mechanism:
- Edge Devices (EDs) register with an Adjacency Server ED
- EDs receive a full list of neighbors (oNL) from the AS
- OTV hellos and updates are encapsulated in IP and unicast to each neighbor

End result:
- Neighbor discovery is automated by the Adjacency Server
- All signaling must be replicated for each neighbor
- Data traffic must also be replicated at the head-end
OTV Configuration (Release 5.2 and above)
OTV over a Unicast-Only Transport

Primary Adjacency Server (West, IP A = 10.1.1.1):
feature otv
otv site-identifier 0x1
otv site-vlan 99
interface Overlay100
  otv join-interface e1/1
  otv adjacency-server unicast-only
  otv extend-vlan 100-150

Secondary Adjacency Server (East, IP B = 10.2.2.2):
feature otv
otv site-identifier 0x2
otv site-vlan 99
interface Overlay100
  otv join-interface e1/1.10
  otv adjacency-server unicast-only
  otv use-adjacency-server 10.1.1.1 unicast-only
  otv extend-vlan 100-150

Generic OTV Edge Device (South, IP C):
feature otv
otv site-identifier 0x3
otv site-vlan 99
interface Overlay100
  otv join-interface Po16
  otv use-adjacency-server 10.1.1.1 10.2.2.2 unicast-only
Implementation and Leading Practices
Virtual Port-Channel (vPC)
Virtual Port-Channel
vPC Terminology
- vPC peer: a vPC switch, one of a pair
- vPC member port: one of a set of ports (port-channels) that form a vPC
- vPC peer-link (vPC_PL): synchronises state between vPC peer devices (must be a 10GE port-channel)
- vPC peer-keepalive link (vPC_PKL): detects the status of the vPC peer devices
- CFS: Cisco Fabric Services protocol, used for state synchronisation and configuration validation between vPC peer devices
- vPC VLANs: VLANs carried over the peer-link
- Non-vPC VLANs: VLANs not carried over the peer-link
- vPC orphan ports: non-vPC ports that are mapped to the vPC VLANs

(Figure: agg1a and agg1b connected by the vPC_PL and vPC_PKL and running the CFS protocol; a dual-homed access switch terminates vPC member ports, while a stand-alone port-channel device hangs off an orphan port.)
Virtual Port-Channel
STP Leading Practices (1)
- Do not disable STP!
- Configure vPC peers as primary/secondary root (e.g. STP priority 8192 on agg1a and 16384 on agg1b for VLANs 1-4094)
- vPC peer-switch should only be used in a pure vPC topology; both vPC devices will behave as a single STP root
- BA (Bridge Assurance) is enabled by default on the vPC peer-link
- Do not enable Loop Guard and BA on vPC member ports (disabled by default); vPC is loop-free, and this avoids issues with split-brain scenarios
- Enable STP port type edge and port type edge trunk on host ports
- Enable STP BPDU Guard globally on access switches
Virtual Port-Channel
STP Leading Practices (2)
- Run a consistent STP mode to avoid slow STP convergence (30+ secs)
  - RPVST+ is the default in NX-OS; in IOS the default is PVST+
- Utilise MST to scale a large L2 network and overcome the logical port limitation
  - MST supports 90K and RPVST+ supports 16K logical ports (per system)

Nexus7K# show spanning-tree summary total        ! Output is per VDC
----deleted----
Name       Blocking Listening Learning Forwarding STP Active
---------- -------- --------- -------- ---------- ----------
9 vlans    0        0         0        18         18

For MST logical ports: Nexus7K# sh spanning-tree internal info global
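If MST is chosen to scale beyond the RPVST+ logical-port limit, a minimal region configuration could look like the sketch below. The region name, revision and instance mapping are assumptions for illustration; the region parameters must match on every switch in the region:

```
Nexus7K(config)# spanning-tree mode mst
Nexus7K(config)# spanning-tree mst configuration
Nexus7K(config-mst)# name DC1
Nexus7K(config-mst)# revision 1
Nexus7K(config-mst)# instance 1 vlan 1-4094
```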
Virtual Port Channel (vPC)
Leading Practices/Configuration (1)
- Enable the vPC feature:
  Nexus7K(config)# feature vpc
- Define a unique vPC domain ID (1-1000) for each pair of vPC peer devices in the same L2 domain (e.g. DC1 Agg1a/1b = 1, DC1 Acc1a/1b = 2, DC2 Agg1a/1b = 101):
  Nexus7K(config)# vpc domain 1
  Nexus5K(config)# vpc domain 2
- The vPC system ID is derived from the vPC domain ID and is used for vPC control plane PDUs:

Nexus7K# sh vpc role            ! vPC Domain 1
<snip>
vPC system-mac : 00:23:04:ee:be:01
vPC local system-mac : 00:0d:ec:a4:53:3c

Nexus5K# sh vpc role            ! vPC Domain 2
<snip>
vPC system-mac : 00:23:04:ee:be:02
Virtual Port Channel (vPC)
Leading Practices/Configuration (2)
- The peer-keepalive link is used as a heartbeat between the vPC peers (e.g. 10.1.1.1 and 10.1.1.2 in a dedicated VRF)
- Use a dedicated connection for the vPC peer-keepalive link and assign it to a separate VRF
  - A port-channel is recommended but not required
- If the mgmt0 interface is used as the vPC keepalive link, connect it via an OOB mgmt network
- A back-to-back mgmt0 connection should only be used in single-supervisor implementations
- vPC peer-keepalive messages should NOT be routed over the vPC peer-link

Nexus7K-1(config)# vrf context vpc-keepalive
!
int <interface>
  vrf member vpc-keepalive
  ip address 10.1.1.1/30
  no shut
!
vpc domain 1
  peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf vpc-keepalive
Virtual Port Channel (vPC)
Leading Practices/Configuration (3)
- The vPC peer-link carries CFS messages and STP BPDUs
- Utilise diverse 10GE modules to form the vPC peer-link (must be a 10GE port-channel)
- Implement physical vPC peer-link interfaces in dedicated rate-mode for M1 modules
  - Shared mode is supported but not recommended
- The vPC peer-link must be configured as a trunk; pre-configure to allow all vPC VLANs on the peer-link

Nexus7K-1(config-if-range)#      ! e1/1 and e2/1 on each peer
  rate-mode dedicated
  switchport
  switchport mode trunk
  channel-group 1 mode active
  no shut

Nexus7K-1(config)# int port-channel 1
  switchport
  switchport mode trunk
  vpc peer-link
Virtual Port Channel (vPC)
Leading Practices/Configuration (4)
- Clear unnecessary VLANs on trunks
- Match the vPC number with the port-channel number
- Always dual-home all devices to the vPC domain using vPC!
  - Failure of the peer-link can isolate single-attached devices
- If the vPC peer-link fails, the vPC secondary peer suspends its local vPCs and shuts down the SVIs of the vPC VLANs; orphan ports on the secondary become isolated

Nexus7K-1a(config)# int e3/1-2
  switchport
  switchport mode trunk
  channel-group 11 mode active
  no shut
!
int port-channel 11
  switchport
  switchport mode trunk
  switchport trunk allowed vlan remove <vlan>
  vpc 11
Virtual Port Channel (vPC)
Leading Practices/Configuration (5)
- A primary and a secondary vPC peer device are elected by default. For better vPC management, it is recommended to:
  - Assign the vPC primary peer role with the lower role priority (e.g. 8192 on agg1a, 16384 on agg1b)
  - Align the vPC primary peer with the STP primary root, the HSRP active router and the PIM DR
- One vPC peer can be configured as the HSRP active router for all VLANs, since both vPC devices are active forwarders

Nexus7K-1(config-vpc-domain)# role priority 8192
Nexus7K-2(config-vpc-domain)# role priority 16384
Virtual Port Channel (vPC)
Leading Practices/Configuration (6) vPC Domain 10
- Both switches in the vPC domain maintain distinct control planes
- CFS provides protocol state synchronisation between both peers (MAC address table, IGMP state, ...)
- The system configuration must also be kept in sync
- Two types of interface consistency checks:
  - Type 1: puts interfaces into suspend state to prevent invalid forwarding of packets. With graceful consistency check, interfaces are suspended only on the secondary switch
  - Type 2: error messages indicate potential for undesired forwarding behavior
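Graceful consistency check can be confirmed or re-enabled under the vPC domain, and the parameters being compared can be inspected. A sketch assuming vPC domain 1:

```
Nexus7K(config)# vpc domain 1
Nexus7K(config-vpc-domain)# graceful consistency-check

! Verify which global parameters are compared between the peers
Nexus7K# show vpc consistency-parameters global
```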
Virtual Port Channel (vPC)
Leading Practices/Configuration (7)
- If both vPC devices are reloaded, all vPCs are suspended until peer adjacency is re-established. If only one vPC device comes back online, this can lead to an extended outage
- Enable vPC restore on reload to allow one vPC device to assume the STP / vPC primary role and bring up all local vPCs (supported in NX-OS 5.0.2a). In NX-OS 5.2, reload restore is deprecated and auto-recovery is used instead
- It is recommended to configure this on both vPC devices
- vPCs are brought up after the delay timer expires (the default and minimum timer is 240s)
- Dual-active can only be introduced with multiple failures (both vPC devices reload and recover, but both the peer-link and the keepalive link stay down)

Nexus7K-1a(config-vpc-domain)# reload restore
Warning:
Enables restoring of vPCs in a peer-detached state after reload, will wait for 240 seconds (by default) to determine if peer is un-reachable
Nexus7K-1b(config-vpc-domain)# reload restore
Nexus7K-1b(config-vpc-domain)# auto-recovery
Layer 3 and vPC Designs
Layer 3 and vPC Design Recommendation
- Use L3 links to hook up routers and peer with a vPC domain
- Don't use an L2 port-channel to attach routers to a vPC domain unless you statically route to the HSRP address
- If both routed and bridged traffic are required, use individual L3 links for routed traffic and an L2 port-channel for bridged traffic

(Figure: a switch dual-attached to 7k1/7k2 over L2 port-channels; a router attached over individual L3 links with ECMP and a dynamic peering relationship, plus a separate L2 port-channel for bridged traffic.)
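The individual routed link for the dynamic peering can be sketched as follows. The interface number, addressing and OSPF process are assumptions for illustration:

```
! Dedicated L3 link from one vPC peer to the router - kept out of the vPC
Nexus7K-1(config)# interface ethernet 3/3
Nexus7K-1(config-if)# no switchport
Nexus7K-1(config-if)# ip address 10.0.0.1/30
Nexus7K-1(config-if)# ip router ospf 1 area 0.0.0.0
Nexus7K-1(config-if)# no shutdown
```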
Implementation and Leading Practices
FabricPath Implementation
FabricPath
FabricPath connects a group of switches using an arbitrary topology and provides scalability, high bandwidth, high resiliency, L3 integration and L2 integration.

With a simple CLI, aggregate them into a fabric:

Nexus7K(config)# feature fabricpath
Nexus7K(config)# fabricpath switch-id <#>
Nexus7K(config)# interface ethernet 1/1
Nexus7K(config-if)# switchport mode fabricpath
Use Case: High Performance Compute
Building Large Scalable Compute Clusters
- Spine: 16 chassis with 512 10GE FabricPath ports per system
- Edge: 32 chassis with 256 10GE FabricPath ports each, plus open I/O slots for connectivity
- 16-port EtherChannel and 16-way ECMP
- 8,192 10GE ports and 160 Tbps system bandwidth

HPC Requirements:
- HPC clusters require a high density of compute nodes
- Minimal over-subscription
- Low server-to-server latency

FabricPath Benefits for HPC:
- FabricPath enables building a high-density fat-tree network
- Fully non-blocking with FabricPath ECMP & port-channels
FabricPath
- Each switch in a FabricPath fabric is allocated a global switch ID; the default allocation is automatic
- To configure the global switch ID manually:

Nexus7K(config)# fabricpath switch-id <1-4094>

- A vPC+ system also needs a switch ID value. Allocation must be manual, or vPC+ won't come up
- Assign the same emulated switch ID on both peer devices!!!

Nexus7K(config)# vpc domain 100
Nexus7K(config-vpc-domain)# fabricpath switch-id <1-4094>
Implementation and Leading Practices
FEX (Fabric Extenders)
Fabric Extenders (FEX)
Guidelines and Limitations (1)
- FEX is supported only on 32-port M1/M1-XL modules and 48-port 10GE F2 modules
  - NX-OS 6.0 is required for the F2 module
  - An EPLD 5.1.1 upgrade is required for 32-port M1 I/O modules
  Nexus7K# show hardware feature-capability
  Nexus7K# show interface ethernet <mod/port> capabilities
- A Nexus 2000 can only be connected to a single Nexus 7000 (NX-OS 5.1)
- Host port-channel and host vPC are supported in NX-OS 5.2
- vPC from FEX to Nexus 7000 is targeted for a future release
- Local switching is not supported on the Nexus 2000
  - Forwarding is based on the VNTag added to the packet between FEX and Nexus 7000
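Associating a Nexus 2000 to the parent Nexus 7000 follows this general pattern; a sketch assuming FEX ID 101 with fabric uplinks on e2/1-2 (the IDs and ports are illustrative):

```
Nexus7K(config)# install feature-set fex
Nexus7K(config)# feature-set fex
Nexus7K(config)# fex 101
Nexus7K(config-fex)# description rack-101

! Fabric uplinks toward the FEX, bundled in a port-channel
Nexus7K(config)# interface ethernet 2/1-2
Nexus7K(config-if-range)# switchport mode fex-fabric
Nexus7K(config-if-range)# fex associate 101
Nexus7K(config-if-range)# channel-group 101

! FEX host ports then appear as e101/1/x
```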
Fabric Extenders (FEX)
Guidelines and Limitations (2)
- Nexus 7000 10GE ports must be in shared rate-mode (the default)
- Minimize over-subscription by utilising 1-2 ports from the same port-group
- Over-subscription is determined by the number of uplinks and host connections
- All Nexus 2000 host ports are edge ports (STP edge port, BPDU Guard and global BPDU Filter are enabled and can't be disabled); a host port that receives a BPDU is err-disabled
- Diverse I/O modules (FEX fabric uplinks) provide redundancy

(Figure: a 2248TP-1G with uplinks on 1/1 and 2/1 at 2:1 over-subscription vs. a 2248TP-1G with uplinks spread to avoid over-subscription.)
Implementation and
Leading Practices

Control Plane Policing (CoPP)


Hardware Rate-Limiters (HWRL)
Control Plane Policing (CoPP)
- Hardware-based feature that protects the supervisor from DoS attacks
- Configure and monitor from the default VDC
- Performed on a per-forwarding-engine (FE) basis; the maximum traffic received by the supervisor is the total number of FEs multiplied by the allowed rate

Control plane protocols:
- Layer 2: VLAN, PVLAN, STP, LACP, UDLD, CDP, 802.1X, CTS
- Layer 3: OSPF, BGP, EIGRP, PIM, GLBP, HSRP, IGMP, SNMP

(Figure: logical representation - on each of N linecards, the forwarding engine applies CoPP so that only allowed traffic crosses the fabric modules to the supervisor.)
Control Plane Policing (CoPP)
Leading Practices (1)
The default policy has optimised values suitable for basic device operations
It is recommended to use strict CoPP policy (default) and modify the CoPP
policy as per the Data centre application requirements
It is not recommended to disable CoPP
The added / modified policies can be set to monitor mode initially by setting
the violate action to transmit
Because traffic patterns constantly change in a DC, customizing of CoPP is an
ongoing process
Monitor unintended drops and add / modify the default CoPP policy
according to expected traffic patterns
Nexus7K# show policy-map interface control-plane | inc violated
violated 59 bytes; action: drop
.
Nexus7K(config)# policy-map type control-plane copp-system-policy
Nexus7K(config-pmap)# class copp-system-class-monitoring
Nexus7K(config-pmap-c)# police cir 200 kbps bc 1000 ms conform transmit
violate drop
Control Plane Policing (CoPP)
Leading Practices (2)
- Additional traffic classes are added and enhanced in different software releases:
  4.2(1): WCCP, CTS
  4.2(3): DHCP ACL (bootpc, bootps)
  4.2(6): L2 default/No-IP
  5.1(1): L2 un-policed
- Since the running CoPP policy does not automatically update after a software upgrade, it is recommended that after a software upgrade, if major features are added, you run the setup command to apply the default policy
- Any non-default CoPP policies need to be reapplied after setup
- A future enhancement will generate a syslog on changes in the best-practice CoPP policy, with a CLI to see the changes

Nexus7K# setup
----deleted----
Configure best practices CoPP profile (strict/moderate/lenient/none) [strict]:
Control Plane Policing (CoPP)
Customisation Example
- If servers use ICMP pings and ARPs to verify default-gateway reachability from the active NIC (not recommended), a single malfunctioning server can impact all servers in the aggregation block
- CoPP can be customized to limit the impact to individual subnets or groups of subnets (e.g. an ARP class per subnet set, each with its own CIR/Bc, plus an ARP catch-all class alongside the normal class)

Configuration steps:
1) Remove ARP/ICMP from the default classes
2) Create new ARP and ICMP classes based on subnets or groups of subnets
3) Create a catch-all class for ARP and ICMP (make sure all subnets are covered)
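Step 2 could be sketched for ICMP as follows. The ACL name, class name, subnet and policer values are all assumptions for illustration; an equivalent catch-all class (step 3) is still required, and class ordering within the policy should be reviewed before deployment:

```
! Per-subnet ICMP class with its own policer
Nexus7K(config)# ip access-list copp-icmp-net1
Nexus7K(config-acl)# permit icmp any 10.1.1.0/24

Nexus7K(config)# class-map type control-plane match-any copp-class-icmp-net1
Nexus7K(config-cmap)# match access-group name copp-icmp-net1

Nexus7K(config)# policy-map type control-plane copp-system-policy
Nexus7K(config-pmap)# class copp-class-icmp-net1
Nexus7K(config-pmap-c)# police cir 300 kbps bc 200 ms conform transmit violate drop
```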
Hardware Rate-Limiter
- Hardware rate-limiters (HWRL) complement CoPP to protect the supervisor CPU (enabled by default)
- They rate-limit supervisor-bound exception and redirected traffic
- The configured setting is per forwarding engine (FE)
- Configure and monitor from the default VDC

(Figure: traffic toward the supervisor CPU passes through HWRL and then CoPP.)
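Rate-limiters can be inspected and adjusted from the default VDC; a sketch where the glean value shown is illustrative, not a recommendation:

```
! View current rate-limiter settings and drop counters
Nexus7K# show hardware rate-limiter

! Adjust a specific limiter, e.g. Layer 3 glean traffic (packets per second)
Nexus7K(config)# hardware rate-limiter layer-3 glean 100
```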
Conclusion
Key Takeaways
Understand requirements and features available from the product
Choose the topology and use the leading practices to design a solution
that is scalable
Test the solution
Implement the solution

Q&A
Complete Your Online Session Evaluation
Complete your session evaluation:
- Directly from your mobile device by visiting www.ciscoliveaustralia.com/mobile and logging in with your badge ID (located on the front of your badge)
- At one of the Cisco Live internet stations located throughout the venue
- By opening a browser on your own computer to access the Cisco Live onsite portal

Don't forget to activate your Cisco Live Virtual account for access to all session materials, communities, and on-demand and live activities throughout the year. Activate your account at any internet station or visit www.ciscolivevirtual.com.