BRKDCT-2951
www.ciscolivevirtual.com
Session Abstract
Deploying Nexus 7000 in Data Centre Networks
This session is targeted at network administrators and operators who
have deployed or are considering the deployment of the Nexus 7000.
The session starts with an overview of the Nexus 7000 hardware
components, followed by design options, implementation guidance and
best practices for deploying the Nexus 7000 in the data centre. The
presentation covers installation, layer-2, layer-3 and security
features. The session covers the NX-OS CLI, but troubleshooting is
not part of this presentation's scope.
Agenda
Hardware Overview
Design Motivation
Design Options
Implementation and Leading Practices
Hardware Overview
Nexus 7000 Series Flexibility and Scale
Highest 10GbE density in modular switching (Aug '11)

                           Nexus 7009           Nexus 7010         Nexus 7018
Height                     14 RU                21 RU              25 RU
BW per slot with Fabric2   550 Gbps/slot        550 Gbps/slot      550 Gbps/slot
BW per slot with Fabric1   Ships only w/Fabric2 230 Gbps/slot      230 Gbps/slot
Power supplies             2                    Up to 3            Up to 4
Airflow                    Side to side         Front to back      Side to side
Deployment targets         Data centre,         Data centre        Large data centre
                           campus core
Supervisor Engine
Performs control plane and management functions
Dual-core 1.66GHz x86 processor with 8GB DRAM
2MB NVRAM, 2GB internal bootdisk, compact flash slots, USB
Console, aux, and out-of-band management interfaces
Interfaces with I/O modules via 1G switched EOBC
Houses dedicated central arbiter ASIC that controls VOQ
admission/fabric access via dedicated arbitration path to I/O modules (N7K-SUP1)
Deployment Options - M Series I/O Modules

M-series capabilities: L2/L3 forwarding, vPC, VDC, ISSU, large
FIB/MAC/ACL tables, large buffers, 512K NetFlow entries with sampled
NetFlow, MPLS, FEX support, OTV.

M1 Series Modules
- 48-port 10/100/1000 RJ45 (XL)
- 48-port 1GE SFP (XL)
- 32-port 10GE SFP+ (XL)
- 8-port 10GE X2 (XL)
- Up to 80 Gbps local/fabric connectivity

M2 Series Modules
- 6-port 40GE (Quad SFP+, QSFP+)
- 2-port 100GE (C Form-factor Pluggable optic, CFP)
- Up to 550 Gbps fabric connectivity
- Requires NX-OS 6.1 and above
- No FEX support on these line cards
Deployment Options - M1 XL/L I/O Modules (For Your Reference)

                     N7K-M108X2-12L     N7K-M148GS-11L     N7K-M132XP-12L      N7K-M148GT-11L
Description          8-port 10GE (X2)   48-port GE (SFP)   32-port 10GE (SFP+) 48-port 10/100/1000 (RJ45)
Software             5.0(2a) and later  5.0(2a) and later  5.1(1) and later    5.1(1) and later
Performance (IPv4)   2 FE (120 Mpps)    1 FE (60 Mpps)     1 FE (60 Mpps)      1 FE (60 Mpps)
Port-level oversub.  1:1                1:1                4:1                 1:1
vPC peer-link        Yes                No                 Yes                 No
FEX support          No                 No                 Yes                 No

(The vPC peer-link requires 10GE ports; among the M1 modules, FEX is
supported on the N7K-M132XP-12L.)
Deployment Options Comparison - M1 / M1-XL (For Your Reference)

Capability comparison across three column types: M1/M2 modules,
M1/M2-XL modules without the Scalable Feature License, and M1/M2-XL
modules with the license. All provide the same feature set (L2
forwarding, L3 forwarding with ACL and QoS); the XL hardware adds
larger forwarding tables, and the Scalable Feature License unlocks
the full XL TCAM capacity.

Deployment Options - F2 Series Modules
- 48-port 1/10GE SFP+
- 48-port 1/10GBase-T
- 48 ports of 1/10 Gbps with up to 480 Gbps local switching; 230 Gbps fabric connectivity with Fabric1, 480 Gbps with Fabric2
- L2 and L3 forwarding, FCoE and DCB support, FabricPath, IEEE 1588
- Fabric Extender (FEX) support
- 16 SPAN sessions
- Low latency, low power
- Can be used with Fab1 or Fab2 fabric modules
- F2 does not interoperate with other shipping modules - must deploy in an F2-only VDC
Deployment Options - M and F Comparison (For Your Reference)

Capability           M1 Series Module   F2 Series                       M2 Series
10G line-rate ports  Max of 8 ports     48 1/10GE ports                 6x 40GE or 2x 100GE ports
Fabric connection    80 Gbps            480 Gbps per slot for L2 and L3 550 Gbps
Fabric Extenders (FEX)
- Nexus 2000 (FEX) can be considered a remote I/O module for the Nexus 7000
- Provides high-density GE connectivity
- Supports hybrid ToR and EoR network architectures: the FEX physically resides on top of each server rack but logically acts as an end-of-row access device
- Reduces power consumption, CapEx and OpEx
(Diagram: physical view vs. logical view - parent switch fabric ports,
FEX uplinks aggregated in a fabric port-channel, FEX host interfaces;
one FEX software image per system)
A configuration sketch follows.
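A minimal sketch of how a FEX attaches to its parent switch; the
interface, port-channel and FEX numbers here are placeholders. The
fabric ports are bundled into a fabric port-channel and associated
with a FEX ID:

Nexus7K(config)# install feature-set fex
Nexus7K(config)# feature-set fex
Nexus7K(config)# interface ethernet 1/1-2
Nexus7K(config-if-range)# channel-group 101
Nexus7K(config)# interface port-channel 101
Nexus7K(config-if)# switchport
Nexus7K(config-if)# switchport mode fex-fabric
Nexus7K(config-if)# fex associate 101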
Virtual Port-Channel Design Motivations
- Provides multi-chassis EtherChannel capability (L2 port-channel only)
- Eliminates STP blocked ports and reduces STP complexity
- Uses all available uplink bandwidth
- Enables dual-homed servers to operate in active-active mode
- Provides fast convergence upon link/device failure
(Diagram: vPC and double-sided vPC topologies)

Software version    Number of vPCs
Pre-4.2 release     196
4.2(1) and later    256
FabricPath Design Motivations
- Connect anywhere using an arbitrary topology; the fabric uses the best path
- Traffic is evenly redistributed in failure cases
(Diagram: host A on switch S1 port e1/1 reaching host B across the
FabricPath fabric via S3)
Design Options
Data Centre Design Example 1
- Large data centre utilising 3-tier DC design
- Nexus 7000 in core and aggregation
- 10GE/GE ToR and GE MoR access layer switches
- Implement vPC / double-sided vPC to eliminate L2 loops and to support active/active server connections
(Diagram: aggregation pairs agg1a/agg1b through aggNa/aggNb with vPC,
L3/L2 boundary at aggregation, access switches dual-homed via vPC,
servers active/active or active/standby)
Data Centre Design Example 2
- Large data centre utilising 3-tier DC design
- Nexus 7000 in core and aggregation, Nexus 5000 and Nexus 2000 in access layer
- Implement vPC / double-sided vPC to eliminate L2 loops
- Two different vPC redundancy models can be utilised to support active/active or active/standby server connections
(Diagram: Core1/Core2 with L3 links and L3 channels; aggregation pairs
agg1a/agg1b and aggNa/aggNb with vPC at the L3/L2 boundary; Nexus
5000/2000 access with vPC, servers active/standby or active/active)
Data Centre Design Example 3
- Large data centre utilising 3-tier DC design
- Nexus 7000s in core and aggregation
- Utilise VDCs in the aggregation layer to create a non-secured zone and a secured zone
- 10GE/GE ToR and GE MoR access layer switches
- Implement vPC / double-sided vPC to eliminate L2 loops and to support active/active server connections
(Diagram: Core1/Core2; aggregation pairs SW-1a/SW-1b and SW-2a/SW-2b,
each split into VDC2 and VDC3 zones with vPC at the L3/L2 boundary;
access layer dual-homed via vPC, servers active/standby or active/active)
Data Centre Design Example 4
- Small data centre with a virtualised 3-tier DC design
- Utilise VDCs on a single device to create a core and aggregation layer
- GE and 10GE ToR access layer switches
- Implement vPC / double-sided vPC
(Diagram: SW-1a/SW-1b, each with core and aggregation VDCs (VDC2),
interconnected by L3/L2 links and channels)
Data Centre Design Example 5
- OTV at the aggregation with a dedicated VDC
- STP and unknown unicast domains isolated between PODs
- Intra-DC and inter-DC LAN extension provided by OTV
- Ideal for single aggregation block topology
(Diagram: aggregation switches hosting SVIs, each with a dedicated
OTV VDC; access layer connected via vPC)
Data Centre Design Example 6
- OTV at the aggregation with the L3 boundary on the firewalls
- The firewalls host the default gateway
- No SVIs at the aggregation layer
- No need for the OTV VDC
(Diagram: core; aggregation running OTV with the L2 domain below;
firewalls hosting the default gateway; access layer)
Data Centre Design Example 7
- FabricPath high-level design option with centralised routing at aggregation
- Aggregation serves as FabricPath spine as well as the L2/L3 boundary
- Benefits include removal of STP, traffic distribution across all links, active-active gateways, and any VLAN anywhere at the access layer
(Diagram: L3 core; aggregation pair as the L2/L3 boundary; FabricPath
between aggregation and access)
Data Centre Design Example 8
- FabricPath design with centralised aggregation services
- FabricPath spine with F1 or F2 modules provides a transit fabric (no routing, no MAC learning)
- FabricPath core ports provided by F1 or F2 modules
Data Centre Design Example 9
- FabricPath with distributed routing and selective VLAN extension
- vPC+ for active/active server connections
- HSRP at the access pair
- Dedicated VLAN for transit routing
(Diagram: FabricPath spine carrying SVI 100 with HSRP and vPC+;
access pairs with SVIs 10/20, 30 and 40/50, HSRP active/standby per
pair; legend: Layer 2 CE vs. Layer 2 FabricPath links)

Licensing
- Check for features already running under the grace period, e.g. the Advanced LAN licence (CTS, VDC) and the Scalable Feature licence (M1-XL TCAM)
EPLD Upgrade
- EPLD upgrade is used to enhance HW functionality or to resolve known issues
- EPLD upgrade is an independent process from software upgrade and is not dependent on NX-OS
- EPLD upgrade is typically not required
- Performed on all field-replaceable modules
- In a redundant configuration, requires reload of I/O modules

Nexus7K# sh ver <type> <#> epld
Nexus7K# sh ver mod 3 epld
EPLD Device          Version
--------------------------------
Power Manager          4.008
IO                     1.016
Forwarding Engine      1.006
FE Bridge(1)         186.006
FE Bridge(2)         186.006
Linksec Engine(1)      2.006
---deleted---
Linksec Engine(8)      2.006
EPLD Upgrade Best Practices
- Upgrade to the latest EPLD image prior to bringing hardware into a production environment (staging HW, replacement HW, etc.)
- Only use install all epld on non-production systems:
  Nexus7K# install all epld bootflash:<EPLD_image_name>
- In a redundant system, only the EPLD upgrade for I/O modules can disrupt traffic, since the module needs to be power-cycled
A per-module sketch follows.
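On a production system, a less disruptive approach is to upgrade one
module at a time; a minimal sketch, where the module number and image
name are placeholders:

Nexus7K# install module 3 epld bootflash:<EPLD_image_name>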
Hardware Installation Considerations
- Two supervisors for high availability and ISSU
- Two M1 modules in a mixed-mode chassis (M1/F1)
- A minimum of three fabric modules to provide N+1 redundancy for all M1/M1-XL I/O modules
- Use five 2nd-generation fabric modules for full performance from F2 I/O modules
- Perform chassis / system grounding
- Perform additional diagnostics on staged devices before production (see the sketch below):
  - Configure complete boot-up diagnostic level (default)
  - Administratively shut down all ports to run the PortLoopback test overnight
  - Power-cycle after the burn-in period to perform boot-up diagnostics
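A minimal sketch of the staging checks described above:

Nexus7K(config)# diagnostic bootup level complete
Nexus7K# show diagnostic result module all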
Supervisor Memory Guidelines - 8G Memory Upgrade
- Prior to release 5.1, supervisors shipped with 4G memory
- From release 5.1 onwards, supervisors shipped with 8G memory
- Determine whether the 8G memory upgrade is necessary based on the software version and the software features enabled on the system

Nexus7K# show system resources
----deleted----
Memory usage: 4115776K total, 2793428K used, 1322348K free

- Utilise show hardware capacity to determine the system capacity for capacity planning, as sketched below
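A quick sketch of the capacity-planning checks; both are standard
show commands, though the available keywords vary slightly by release:

Nexus7K# show hardware capacity module
Nexus7K# show hardware capacity forwarding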
Implementation and Leading Practices
VDC Implementation

Virtual Device Contexts (VDCs)
- VDCs provide logical separation of control plane, data plane, management, resources, and system processes
- Support up to 4 separate VDCs on a common physical device (e.g. VDC1: Default, VDC2: Secure-Net, VDC3: Non-Secure, VDC4: ...)
- All ports in the same port-group on the 32-port 10GE modules (M1/M1-XL and F1) must be allocated to the same VDC

32-port M1/M1-XL port-groups:
Port-Group 1: ports 1, 3, 5, 7
...
Port-Group 8: ports 26, 28, 30, 32
(The 32-port F1 module has its own port-group to port-number mapping.)
Virtual Device Contexts (VDCs)
Implementation considerations (2)
- The VDC HA policy determines the action the physical device takes when the VDC encounters an unrecoverable event
- The VDC HA policy for a test / lab VDC on a dual-supervisor system can be configured to bringdown to avoid a supervisor switchover (a sketch follows)
- F2 modules cannot be mixed with F1, M1 and M1-XL I/O modules
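A minimal sketch, assuming a lab VDC named test with ID 3 (name and
ID are placeholders; bringdown replaces the default dual-sup
switchover behaviour):

Nexus7K(config)# vdc test id 3
Nexus7K(config-vdc)# ha-policy dual-sup bringdown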
Virtual Device Contexts (VDCs)
Implementation considerations (5)
- Interfaces in port-groups on the 10G M1 and F1 I/O modules cannot be shared across VDCs
- For the 48-port I/O modules, if ports in the same port-group are shared between different VDCs, then during a reload of a VDC, ports in other VDCs sharing the port-group might experience brief traffic disruptions (1 to 2 seconds) on these interfaces
- It is recommended to allocate all ports in the same port-group on the 48-port I/O modules to the same VDC

M1 48-port LC port-groups:
Port-Group 1: ports 1-12
Port-Group 2: ports 13-24
Port-Group 3: ports 25-36
Port-Group 4: ports 37-48

Nexus7K(config)# vdc <VDC-name1> id 2
Nexus7K(config-vdc)# allocate interface e2/1-12
Nexus7K(config)# vdc <VDC-name2> id 3
Nexus7K(config-vdc)# allocate interface e2/13-24
Implementation and Leading Practices
Port-Channels
- STP cost for L2 channels does not get recalculated upon member-link or module failure
- Recommended to configure the IGP cost on L3 channels if the default convergence behaviour is not desired
- ECMP can be used instead of an L3 port-channel (will increase the number of routing peers)
- Utilise LACP to negotiate both L2 and L3 port-channels (a sketch follows)
- Implement the normal LACP timer (default)
- Implement port-channels with a power-of-2 number of active members for optimal traffic distribution
(Diagram: module failure on Aggr1a with ECMP; all links have a cost of 100)
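A minimal LACP sketch; the interface ranges, channel numbers and IP
address are placeholders. Mode active enables LACP negotiation on
both the L2 and the L3 bundle:

! L2 port-channel
Nexus7K(config)# interface ethernet 1/1-2
Nexus7K(config-if-range)# switchport
Nexus7K(config-if-range)# channel-group 10 mode active
! L3 port-channel
Nexus7K(config)# interface ethernet 2/1-2
Nexus7K(config-if-range)# no switchport
Nexus7K(config-if-range)# channel-group 20 mode active
Nexus7K(config)# interface port-channel 20
Nexus7K(config-if)# ip address 10.0.0.1/30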
Unidirectional Link Detection (UDLD)
- UDLD is a Layer 2 protocol that detects and disables one-way connections
- UDLD has two modes of operation: normal and aggressive
  - Normal: errors are detected by examining the incoming UDLD packets from the peer port
  - Aggressive: the port is put into the err-disabled state upon sudden cessation of UDLD packets
- The recommendation is to enable UDLD normal mode globally
- Enabling the UDLD feature is equivalent to configuring UDLD normal mode globally:
  Nexus7K(config)# feature udld
- The default message timer is recommended
(Diagram: Tx/Rx pairs illustrating a one-way link)
Interior Gateway Protocol (IGP)
Leading Practices (1)
- Enable NSF / graceful restart (IETF NSF in NX-OS)
- Configure no ip redirects on L3 interfaces
- Example - enable BFD only on a specific interface:
  Nexus7K(config)# int e1/1
  Nexus7K(config-if)# no ip redirects
  Nexus7K(config-if)# ip ospf bfd
A process-level sketch follows.
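A minimal process-level sketch, assuming OSPF process 1 (graceful
restart is on by default in NX-OS and is shown here only for clarity;
the BFD feature must be enabled before ip ospf bfd takes effect):

Nexus7K(config)# feature ospf
Nexus7K(config)# feature bfd
Nexus7K(config)# router ospf 1
Nexus7K(config-router)# graceful-restart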
General Layer-3 Features
Leading Practices
- Configure extended hold timers for HSRP to support NSF during ISSU and supervisor switchover
- Don't configure sub-second FHRP timers on a dual-sup system
- Hello (1s) and hold (3s) timers are recommended
- Aggressive timers are not necessary with vPC
- Configure HSRP preemption delay
- Disable IP proxy ARP to prevent forwarding issues with malfunctioning servers (default)
- Configure no ip redirects to stop the supervisor from generating ICMP redirects

Nexus7K(config)#
feature hsrp
feature interface-vlan
!
vlan <vlan>
!
hsrp timers extended-hold <time>
!
interface vlan <vlan>
  description <description>
  no shutdown
  no ip redirects
  ip address <address>/<mask>
  hsrp <group>
    authentication <text>
    preempt delay minimum 180
    priority 110
    timers 1 3
    ip <hsrp address>
Implementation and Leading Practices
OTV Implementation

OTV Terminology
- Edge Device (ED): connects the site to the (WAN/MAN) core; responsible for performing all the OTV functions
- Authoritative Edge Device (AED): elected ED that performs traffic forwarding for a set of VLANs
- Internal interfaces: interfaces of the ED that face the site
- Join interface: interface of the ED that faces the core
- Overlay interface: logical multi-access, multicast-capable interface; it encapsulates Layer 2 frames in IP unicast or multicast headers
(Diagram: internal interfaces on the L2 side, join interface toward
the L3 core, overlay interface spanning the overlay)
OTV and SVI Routing
- On Nexus 7000, a given VLAN can either be associated with an SVI or extended using OTV
- This would theoretically require a dual-system solution
- The VDC feature allows deployment of a dual-VDC solution instead, with the OTV VDC acting as an appliance
- Single L2 internal interface and single Layer 3 join interface
(Diagram: physical view - N7K-1 and N7K-2, each with a default VDC
and an OTV VDC connected by L2 and L3 links)
OTV - Configuration Considerations
Internal interfaces
- The OTV internal interfaces should carry the Layer 2 VLANs and the site VLAN
- OTV adds an 8-byte shim to the header; it is recommended to increase the MTU size of all interfaces along the path (a sketch follows)
Join interface
- Use a point-to-point routed interface. Supported only on M1 line cards, not F1 or F2
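A minimal MTU sketch for the transit interfaces; the 9216-byte value
is an illustrative jumbo setting, not a mandated number:

Nexus7K(config)# interface ethernet 1/1
Nexus7K(config-if)# mtu 9216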
OTV - Configuration Considerations
- The site-id is mandatory
- If not configured, no overlay will come up (it is not generated by default)
- This holds true for single-homed sites as well
OTV Configuration
OTV over a Multicast Transport
Minimal configuration required to get OTV up and running:

West (IP A):
feature otv
otv site-identifier 0x1*
otv site-vlan 99
interface Overlay100
  otv join-interface e1/1
  otv control-group 239.1.1.1
  otv data-group 232.192.1.0/24
  otv extend-vlan 100-150

East (IP B):
feature otv
otv site-identifier 0x3*
otv site-vlan 99
interface Overlay100
  otv join-interface e1/1.10
  otv control-group 239.1.1.1
  otv data-group 232.192.1.0/24
  otv extend-vlan 100-150

South (IP C):
feature otv
otv site-identifier 0x2*
otv site-vlan 99
interface Overlay100
  otv join-interface Po16
  otv control-group 239.1.1.1
  otv data-group 232.192.1.0/24
  otv extend-vlan 100-150

Standard verification commands follow.
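Once the overlay is up, a quick verification sketch (exact output
varies by release):

Nexus7K# show otv adjacency
Nexus7K# show otv vlan
Nexus7K# show otv route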
Virtual Port-Channel
STP Leading Practices (1)
- Do not disable STP!
- Configure the vPC peers as primary/secondary root (e.g. STP priority 8192 on agg1a and 16384 on agg1b for VLANs 1-4094)
- vPC peer-switch should only be used in a pure vPC topology - both vPC devices will behave as a single STP root
- Bridge Assurance (BA) is enabled by default on the vPC peer-link
- Do not enable Loopguard and BA on vPCs (disabled by default) - vPC is loop-free and this avoids issues with split-brain scenarios
- Enable STP port type edge and port type edge trunk on host ports
- Enable STP BPDU-guard globally on access switches
A configuration sketch follows.
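A minimal sketch of these STP practices; the VLAN range and
priorities are taken from the slide, the host-port interface is a
placeholder. Apply the edge and BPDU-guard lines on the access
switches:

! On agg1a (primary root)
spanning-tree vlan 1-4094 priority 8192
! On agg1b (secondary root)
spanning-tree vlan 1-4094 priority 16384
! On access-switch host ports
interface ethernet 1/10
  switchport
  spanning-tree port type edge trunk
! Globally on access switches
spanning-tree port type edge bpduguard default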
Virtual Port-Channel
STP Leading Practices (2)
- Run a consistent STP mode across switches to avoid slow STP convergence (30+ seconds)
Virtual Port Channel (vPC)
Leading Practices/Configuration (1)
- vPC peers share a vPC domain ID (1-1000)
- A primary and a secondary vPC peer device are elected by default. For better vPC management, it is recommended to set the role priority explicitly (e.g. 8192 on the vPC primary, 16382 on the vPC secondary) and to align the vPC primary with the STP primary root and the HSRP active router (a sketch follows)
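A minimal vPC domain sketch; the domain ID, port-channel numbers and
keepalive addresses are placeholders:

feature vpc
vpc domain 10
  role priority 8192
  peer-keepalive destination <peer-mgmt-ip> source <own-mgmt-ip> vrf management
interface port-channel 1
  switchport
  vpc peer-link
interface port-channel 20
  switchport
  vpc 20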
Virtual Port Channel (vPC)
Leading Practices/Configuration (6)
- Both switches in the vPC domain maintain distinct control planes
- CFS provides protocol state synchronisation between both peers (MAC address table, IGMP state, ...)
(Diagram: vPC domains 10 and 20; 7k1 and 7k2 form dynamic routing
protocol peerings with an external router over L3 using ECMP)
Implementation and Leading Practices
FabricPath Implementation

FabricPath
- FabricPath connects a group of switches using an arbitrary topology and provides scalability, high bandwidth, high resiliency, L3 integration and L2 integration (a sketch of the base configuration follows)
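A minimal sketch of enabling FabricPath on a switch; the VLAN and
interface numbers are placeholders, and the feature set must be
installed from the default VDC first:

Nexus7K(config)# install feature-set fabricpath
Nexus7K(config)# feature-set fabricpath
Nexus7K(config)# vlan 100
Nexus7K(config-vlan)# mode fabricpath
Nexus7K(config)# interface ethernet 1/1
Nexus7K(config-if)# switchport mode fabricpath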
Use Case: High Performance Compute
Building Large Scalable Compute Clusters
Implementation and Leading Practices
Control Plane Policing (CoPP)
(Diagram: CoPP is enforced independently at each forwarding engine
(FE) across N line cards; each FE polices the control-plane traffic
it allows toward the supervisor)
Control Plane Policing (CoPP)
Leading Practices (1)
- The default policy has optimised values suitable for basic device operations
- It is recommended to use the strict CoPP policy (default) and modify the CoPP policy as per the data centre application requirements
- It is not recommended to disable CoPP
- Added / modified policies can initially be set to monitor mode by setting the violate action to transmit
- Because traffic patterns constantly change in a DC, customising CoPP is an ongoing process
- Monitor unintended drops and add to / modify the default CoPP policy according to expected traffic patterns

Nexus7K# show policy-map interface control-plane | inc violated
violated 59 bytes; action: drop
.
Nexus7K(config)# policy-map type control-plane copp-system-policy
Nexus7K(config-pmap)# class copp-system-class-monitoring
Nexus7K(config-pmap-c)# police cir 200 kbps bc 1000 ms conform transmit violate drop
Control Plane Policing (CoPP)
Leading Practices (2)
- Additional traffic classes are added and enhanced in different software releases
- The running CoPP policy does not automatically update after a software upgrade
- It is therefore recommended that after a software upgrade, if major features are added, you run the setup command to apply the default policy
- Any non-default CoPP policies need to be reapplied after setup
- Future enhancement: generate a syslog on changes to the best-practice CoPP policy, with a CLI to see the changes

SW       Added traffic class
5.1(1)   L2 un-policed
4.2(6)   L2 default / No-IP
4.2(3)   DHCP ACL (Bootpc, Bootps)
4.2(1)   WCCP, CTS

Nexus7K# setup
----deleted----
Configure best practices CoPP profile (strict/moderate/lenient/none) [strict]:
Control Plane Policing (CoPP)
Customisation Example
- If servers use ICMP pings and ARPs to verify default-gateway access from the active NIC (not recommended), a single malfunctioning server can impact all servers in the aggregation block
- CoPP can be customised to limit the impact to individual subnets or groups of subnets
- Configuration steps:
  1) Remove ARP/ICMP from the default classes
  2) Create new ARP and ICMP classes based on subnets or groups of subnets
  3) Create a catch-all class for ARPs and ICMP (make sure all subnets are covered)
(Diagram: per-subnet ARP classes - ARP Set 1 / Net1 and ARP Set 2 /
Net2 - each policed with its own CIR/Bc toward the supervisor, plus
an ARP catch-all class, the normal class and DHCP-snoop)
A sketch of step 2 follows.
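A minimal sketch of a per-subnet ICMP class for step 2; the subnet
10.1.1.0/24, the class/ACL names and the rate values are all
illustrative placeholders:

ip access-list copp-icmp-net1
  permit icmp 10.1.1.0/24 any
class-map type control-plane match-any copp-class-icmp-net1
  match access-group name copp-icmp-net1
policy-map type control-plane copp-system-policy
  class copp-class-icmp-net1
    police cir 300 kbps bc 500 ms conform transmit violate drop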
Hardware Rate-Limiter
- Hardware rate-limiters complement CoPP to protect the supervisor CPU (enabled by default)
- They rate-limit supervisor-bound exception and redirected traffic
- The configured setting is per forwarding engine (FE)
- Configure and monitor from the default VDC (a sketch follows)
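A sketch of monitoring and adjusting a rate-limiter from the default
VDC; the layer-3 glean class and the pps value are illustrative:

Nexus7K# show hardware rate-limiter
Nexus7K(config)# hardware rate-limiter layer-3 glean 100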
Conclusion
Key Takeaways
Understand requirements and features available from the product
Choose the topology and use the leading practices to design a solution
that is scalable
Test the solution
Implement the solution
Q&A

Complete Your Online Session Evaluation
- Complete your session evaluation directly from your mobile device by visiting www.ciscoliveaustralia.com/mobile and logging in with your badge ID (located on the front of your badge)
- Or open a browser on your own computer to access the Cisco Live onsite portal
- Don't forget to activate your Cisco Live Virtual account for access to all session materials, communities, and on-demand and live activities throughout the year. Activate your account at any internet station or visit www.ciscolivevirtual.com.