
Cisco ASR 9000 System

Architecture
Aleksandar Vidakovic; Technical Leader, CCIE 5697 R&S
Agenda
Introduction
Troubleshooting Packet Forwarding Update and
Recap
Understanding TCAM operations and ACLs
ARP
Fragmentation
Load-balancing review
QOS
Tomahawk
Usability

3
Acknowledgements
With contributions from
Xander Thuijs
Sadanande Phadke
Eddie Chami

YOU
Our team really REALLY appreciates the interaction online via the forums and the direct feedback
that you have given.
We hope to have shown you over the last few years that your feedback didn't go unnoticed, with a large
emphasis on usability and ease of use.
Your ratings of this session over the last few years have been overwhelming, which gives us great pride
and joy to continue to improve whatever we need to or can.
So PLEASE keep that input coming; YOUR participation is necessary to build a better product
TOGETHER!

4
Introduction
What is new and what has
changed recently

5
New hardware
ASR9910
Beefed up backplane for higher speed cards (8x100G)
Wildchild
Fixed 40G/56G linecard
40x1G or 2x10G+16x1G SFP
VSM applications
Radware/Arbor
Tomahawk linecards
4x100G/8x100G
RSP880

[Diagram: linecard block layouts showing SFP ports, PHY, NP and FIA blocks connected to the switch fabric]

6
Organizational and Software
No more BU: the Routing segment takes care of all routing platforms
The goods from every group and architecture put together, eliminating the bad from
before
Dedicated dev/test teams focusing on usability
Extreme focus on usability and ease of use
Knobs, tweaks and tunes
Feature packs / service packs
Improving documentation
More forum content
Different model for CCO documentation

[ASR 9000 Product Survey: "Installation: what can be improved?"]

7
Troubleshooting Packet
Forwarding
Update and recap

8
NPU Packet Processing - Ingress
5 stages: Parse, Search, Resolve, Modify, Queueing & Scheduling

Parse: L2/L3 header packet parsing; builds keys for ingress ACL, QoS and forwarding lookups (uCode)
Search: performs QoS and ACL TCAM lookups in TCAM tables; performs L2 and L3 lookups in RLDRAM
Resolve: processes search results: ACL filtering, ingress QoS classification and policing, forwarding (egress SFP determined); performs L2 MAC learning
Modify: adds internal system headers: Egress Control Header (ECH) and Switch Fabric Header (SFH)
Queueing & Scheduling: queuing, shaping and scheduling functions; all packets go through this stage

9
ASR9000 Fully Distributed Control Plane
LPTS (Local Packet Transport Service): control plane policing

RP CPU: Routing, MPLS, IGMP, PIM, HSRP/VRRP, etc.
LC CPU: ARP, ICMP, BFD, NetFlow, OAM, etc.

[Diagram: line card with CPAK 0-7 ports, PHY, NP and FIA blocks; punt FPGA and CPU on the RP; switch fabric (SM15), up to 14x120G per LC]
10
Input Drops Troubleshooting
Troubleshooting this? Piece of cake starting with IOS XR 5.3.3!

GigabitEthernet0/0/1/6.1 is up, line protocol is up
<..output omitted..>
307793 packets input, 313561308 bytes, 227987 total input drops

New packet drop troubleshooting tools in IOS XR 5.3.3 and later:
monitor np interface
show controller np capture
Available on both Tomahawk and Typhoon
monitor np counter still available for all other counters

11
Monitor NP Interface
RP/0/RSP0/CPU0:our9001#monitor np interface g0/0/1/6.1 count 2 time 1 location 0/0/CPU0
Monitor NP counters of GigabitEthernet0_0_1_6.1 for 2 sec

<..output omitted..>
**** Sun Jan 31 22:14:32 2016 ****

Monitor 2 non-zero NP1 counters: GigabitEthernet0_0_1_6.1


Offset Counter FrameValue Rate (pps)
-------------------------------------------------------------------------------
262 RSV_DROP_MPLS_LEAF_NO_MATCH_MONITOR 101 49
1307 PARSE_DROP_IPV4_CHECKSUM_ERROR_MONITOR 101 50

(Count 2 of 2)
RP/0/RSP0/CPU0:our9001#

12
Monitor NP Interface
RP/0/RSP0/CPU0:our9001#monitor np interface g0/0/1/6.1 count 2 time 1 location 0/0/CPU0
Monitor NP counters of GigabitEthernet0_0_1_6.1 for 2 sec

<..output omitted..>
**** Sun Jan 31 22:14:32 2016 ****

Monitor 2 non-zero NP1 counters: GigabitEthernet0_0_1_6.1

Offset Counter                                FrameValue  Rate (pps)
-------------------------------------------------------------------------------
262    RSV_DROP_MPLS_LEAF_NO_MATCH_MONITOR    101         49
1307   PARSE_DROP_IPV4_CHECKSUM_ERROR_MONITOR 101         50

(Count 2 of 2)
RP/0/RSP0/CPU0:our9001#

Temporarily uses a separate memory region for drop counters on the selected interface
Counters are reported with a _MONITOR suffix; they are not added to the global NP counters
By default runs one capture for 5 seconds (count and time are configurable)
One session at a time per LC
Supports physical and BE (sub)interfaces
Physical (sub)interface: monitoring runs on the NP that hosts the interface
BE (sub)interface: monitoring runs on all NPs that host a member
Applicable only to ucode stages where the uidb is known
Works perfectly for input drop troubleshooting and for some output drops

13
Show Controllers NP Capture
RP/0/RSP0/CPU0:our9001#sh controllers np capture np1 location 0/0/CPU0

NP1 capture buffer has seen 426268 packets - displaying 32

Sun Jan 31 22:55:13.935 : RSV_DROP_MPLS_LEAF_NO_MATCH


From GigabitEthernet0_0_1_6: 1222 byte packet on NP1
0000: 84 78 ac 78 ca 3e 30 f7 0d f8 af 81 81 00 03 85
0010: 88 47 05 dc 11 ff 45 00 00 64 01 ae 00 00 ff 01
0020: 62 c3 ac 12 00 02 ac 10 ff 02 00 00 02 3a 00 0a
<..output omitted..>
RP/0/RSP0/CPU0:our9001#sh controllers np capture np1 help location 0/0/CPU0

NP1 Status Capture Counter Name


---------------------+------------------------------
Capturing PARSE_UNKNOWN_DIR_DROP
Capturing PARSE_UNKNOWN_DIR_1
<output omitted..>
RP/0/RSP0/CPU0:our9001#sh controllers np capture np1 filter RSV_DROP_MPLS_LEAF_NO_MATCH disable location 0/0/CPU0

Disable NP1 packet capture for: RSV_DROP_MPLS_LEAF_NO_MATCH

14
Show Controllers NP Capture
RP/0/RSP0/CPU0:our9001#sh controllers np capture np1 location 0/0/CPU0

NP1 capture buffer has seen 426268 packets - displaying 32

Sun Jan 31 22:55:13.935 : RSV_DROP_MPLS_LEAF_NO_MATCH

From GigabitEthernet0_0_1_6: 1222 byte packet on NP1
0000: 84 78 ac 78 ca 3e 30 f7 0d f8 af 81 81 00 03 85
0010: 88 47 05 dc 11 ff 45 00 00 64 01 ae 00 00 ff 01
0020: 62 c3 ac 12 00 02 ac 10 ff 02 00 00 02 3a 00 0a
<..output omitted..>
RP/0/RSP0/CPU0:our9001#sh controllers np capture np1 help location 0/0/CPU0

NP1 Status Capture  Counter Name
---------------------+------------------------------
Capturing            PARSE_UNKNOWN_DIR_DROP
Capturing            PARSE_UNKNOWN_DIR_1
<output omitted..>
RP/0/RSP0/CPU0:our9001#sh controllers np capture np1 filter RSV_DROP_MPLS_LEAF_NO_MATCH disable location 0/0/CPU0

Disable NP1 packet capture for: RSV_DROP_MPLS_LEAF_NO_MATCH

Circular buffer captures the most recently dropped packets (Tomahawk: 128 buffers; Typhoon: 32 buffers)
Enabled by default; no configuration required!
Works at port level
L2 encapsulation is included in the dump; you can decode the sub-interface from the encapsulation
For packets spanning more than one buffer, only the first buffer is captured
Filtering is supported; you can select which drop reasons not to capture
Run the help option to see the eligible counters and their status

15
Show Controllers NP Capture Next Steps
Decoded capture:
Ethernet II, Src: 30:f7:0d:f8:af:81, Dst: 84:78:ac:78:ca:3e
Type: 802.1Q Virtual LAN (0x8100)
802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 901
Type: MPLS label switched packet (0x8847)
MultiProtocol Label Switching Header, Label: 24001, Exp: 0, S: 1, TTL: 255
MPLS Label: 24001
MPLS Experimental Bits: 0
MPLS Bottom Of Label Stack: 1
MPLS TTL: 255
Internet Protocol, Src: 172.18.0.2 (172.18.0.2), Dst: 172.16.255.2 (172.16.255.2)
Internet Control Message Protocol
Type: 0 (Echo (ping) reply)
Code: 0 ()

Matching interface configuration:
interface GigabitEthernet0/0/1/6.1
 ipv4 address 172.18.0.1 255.255.255.0
 encapsulation dot1q 901
!
16
Troubleshooting Input Drops Next Steps
Decoded capture:
Ethernet II, Src: 30:f7:0d:f8:af:81, Dst: 84:78:ac:78:ca:3e
Type: 802.1Q Virtual LAN (0x8100)
802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 901
Type: MPLS label switched packet (0x8847)
MultiProtocol Label Switching Header, Label: 24001, Exp: 0, S: 1, TTL: 255
Internet Protocol, Src: 172.18.0.2 (172.18.0.2), Dst: 172.16.255.2 (172.16.255.2)
Internet Control Message Protocol, Type: 0 (Echo (ping) reply), Code: 0 ()

Next steps:
RP/0/RSP0/CPU0:our9001#sh mpls forwarding labels 24001
RP/0/RSP0/CPU0:our9001#sh mpls ldp bindings local-label 24001
RP/0/RSP0/CPU0:our9001#sh mpls ldp bindings 172.16.255.2/32
172.16.255.2/32, rev 48
  Local binding: label: 24010
  Remote bindings: (1 peers)
    Peer              Label
    ----------------- ---------
    172.16.255.3:0    23
RP/0/RSP0/CPU0:our9001#

Drop reason: the upstream peer is sending packets with a wrong MPLS label because it has a wrong prefix-to-MPLS-label binding!
17
Monitor NP Counter

Available since 4.3.x
An ACL with the capture keyword can be used to filter the packets you want to match
An IPv4/v6 ACL can still be used for matching if the MPLS stack is one level deep
All captured packets are dropped!!!
An NP reset is required upon capture completion
~50ms traffic outage on Typhoon, ~150ms on Tomahawk

18
Monitor NP Counter
RP/0/RSP0/CPU0:our9001#monitor np counter ACL_CAPTURE_NO_SPAN.1 np1 location 0/0/CPU0

Warning: Every packet captured will be dropped! If you use the 'count'
option to capture multiple protocol packets, this could disrupt
protocol sessions (eg, OSPF session flap). So if capturing protocol
packets, capture only 1 at a time.

Additional packets might be dropped in the background during the


capture; up to 1 second in the worst case scenario. In most cases
only the captured packets are dropped.

Warning: A mandatory NP reset will be done after monitor to clean up.


This will cause ~50ms traffic outage. Links will stay Up.
Proceed y/n [y] >
Monitor ACL_CAPTURE_NO_SPAN.1 on NP1 ... (Ctrl-C to quit)

<output omitted..>

Cleanup: Confirm NP reset now (~50ms traffic outage).
Ready? [enter] >
RP/0/RSP0/CPU0:our9001#

Capture ACL and interface configuration:
ipv4 access-list CL16
 10 permit icmp host 172.18.0.2 host 172.16.255.2 capture
!
interface GigabitEthernet0/0/1/6.1
 ipv4 access-group CL16 ingress

RP/0/RSP0/CPU0:our9001#sh controllers np counters np1 location 0/0/CPU0 | i SPAN

483 ACL_CAPTURE_NO_SPAN 14859 3
19
Show drops all enhancement
Supported starting with 5.3.0
Uses a grammar file to combine outputs of other show commands
Easy way to achieve a combined view of relevant aspects (drops are the most obvious
use case)
Grammar file:
Can be modified to suit particular troubleshooting tasks
System will look for it at two locations:
1. disk0a:/usr/packet_drops.list
2. /pkg/etc/packet_drops.list (default)

The show drops all commands option shows the constituent commands that will be called
to produce the combined output
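As a sketch, a trimmed-down grammar file could be staged at the first location, assuming the on-disk format matches the "[module] command" lines that show drops all commands prints (entries below are copied from that output):

```
[np] show controller np counters
[fabric] show controllers fabric fia drops ingress
[fabric] show controllers fabric fia drops egress
```

With only these modules listed, show drops all would skip the ARP, LPTS, NetIO and SPP sections during a fabric/NP-focused troubleshooting session.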

20
Show drops all sample output (1)
RP/0/RP0/CPU0:ios#sh drops all commands
Wed Feb 4 05:27:40.915 UTC
Module CLI
[arp] show arp traffic
[cef] show cef drops
[fabric] show controllers fabric fia drops egress
[fabric] show controllers fabric fia drops ingress
[lpts] show lpts pifib hardware entry statistics
[lpts] show lpts pifib hardware police
[lpts] show lpts pifib hardware static-police
[netio] show netio drops
[netio] fwd_netio_debug
[niantic-driver] show controllers dmac client punt statistics
[np] show controller np counters
[np] show controllers np tm counters all
[spp] show spp node-counters
[spp] show spp client detail
[spp] show spp ioctrl

21
Show drops all sample output (2)
RP/0/RP0/CPU0:ios#sh drops all location 0/5/CPU0
Wed Feb 4 05:26:30.192 UTC

=====================================
Checking for drops on 0/5/CPU0
=====================================

show cef drops:


[cef:0/5/CPU0] Discard drops packets : 5

show controllers fabric fia drops ingress:


[fabric:FIA-0] sp0 crc err: 2746653
[fabric:FIA-0] sp0 bad align: 663
[fabric:FIA-0] sp0 bad code: 2
[fabric:FIA-0] sp0 align fail: 101
<snip>
[fabric:FIA-3] sp1 prot err: 150577

show controller np counters:


[np:NP0] MODIFY_PUNT_REASON_MISS_DROP: 1
[np:NP1] MODIFY_PUNT_REASON_MISS_DROP: 1
[np:NP2] MODIFY_PUNT_REASON_MISS_DROP: 1
[np:NP3] PARSE_ING_DISCARD: 5
[np:NP3] PARSE_DROP_IN_UIDB_DOWN: 5
[np:NP3] MODIFY_PUNT_REASON_MISS_DROP: 1

22
Troubleshooting NP Forwarding
1. Identify interface in question.
2. Identify the mapping from interface to NPU.
3. Examine NP counters.
4. Look for rate counters that match lost traffic rate.
If none of the counters match the expected traffic, check for drops at the interface controller

5. Lookup the counter description.


6. If required, capture the packets hitting the counter (Typhoon/Tomahawk only).
If troubleshooting drops, use the new tools: monitor np interface and show controller np
capture.

7. If packets are forwarded to the fabric, run fabric troubleshooting steps.


8. Identify egress NP and repeat steps 3 to 6.
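Steps 2 and 3 above can be sketched as follows; the location 0/0/CPU0 and NP number are hypothetical:

```
! Step 2: map interfaces to NPs on the line card
RP/0/RSP0/CPU0:router#show controllers np ports all location 0/0/CPU0
! Step 3: examine the counters of the NP found above
RP/0/RSP0/CPU0:router#show controllers np counters np0 location 0/0/CPU0
```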

23
Understanding TCAM and
ACL

24
What is a TCAM
It is memory in reverse:
Normal memory receives an address and provides the data at that location
TCAM receives a data pattern (aka a key) and searches where that content is found
That is GREAT for matching!
Deterministic performance!

But... TCAM is power-hungry, expensive and limited in size


ASR9000 uses TCAM for:
VLAN matching (q or qiq combo matching to an EFP interface)
QOS class-map matching
ACL matching

When the A9K receives a packet, a key is built and passed to the TCAM; this returns
both the ACL match results and the QoS class-map results. The key has a particular
width (e.g. 160 or 640 bits)
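As a hedged pointer for checking per-application TCAM usage on a line card, a summary command along these lines can be used (the exact option set varies by release; np0 and the location are hypothetical):

```
RP/0/RSP0/CPU0:router#show prm server tcam summary all all np0 location 0/0/CPU0
```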
25
TCAM Capacity
ACLs are programmed in TCAM memory in LC
TCAM entries are shared between QOS, netflow, LPTS and ACL
TCAM allocation happens when ACL is applied to an interface

In Typhoon/Tomahawk, 2 different size TCAM keys are defined


160 bit Keys: Used by IPv4 applications and compression level 0 and 1
640 bit Keys: Used by IPv6 applications and ipv4 compression level 3 ACLs
Current CLI commands incorrectly specify 144/576 key sizes (CSCuj12424).
144/576 bit keys couldn't accommodate port ranges (144/576 was used on Trident)

Transitional states: make-before-break method


If an ACL is modified inline, the new ACL will be committed into TCAM and applied before the
old one is removed (requires 2x the space)
For long ACLs the recommendation is to: 1) create a new ACL, 2) apply it to the interface, 3)
remove the old ACL
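The recommended swap for long ACLs can be sketched like this (ACL and interface names are hypothetical):

```
! 1) create the new ACL under a new name
ipv4 access-list ACL-NEW
 10 permit udp net-group CLIENTS port-group NFS_UDP net-group HOST
!
! 2) point the interface at the new ACL (avoids the 2x TCAM
!    transient of an inline edit)
interface TenGigE0/0/0/0
 ipv4 access-group ACL-NEW ingress
!
! 3) once committed, delete the old ACL
no ipv4 access-list ACL-OLD
```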

26
How Much Space Does My ACL Require?
Object groups simplify long ACL configuration

object-group network ipv4 CLIENTS
 10.10.10.0/24
 10.40.40.0/24
!
object-group port NFS_UDP
 eq sunrpc
 eq 635
 range 38465 38467
!
object-group network ipv4 HOST
 192.168.162.140/32
!

ipv4 access-list sample-acl


10 permit udp net-group CLIENTS port-group NFS_UDP net-group HOST
!
interface Bundle-Ether34.41
ipv4 access-group sample-acl ingress

RP/0/RSP0/CPU0:av-asr9001#sh pfilter-ea fea ipv4-acl sample-acl location 0/0/CPU0


Rgn sample-acl, lkup v4, Dir In, Chan 1, acl_id 4, vmr_id 5, num_aces 2, num_tcam_entries 29, refcnt 1.
ACE List for This Region:
seq_num 10, tcam_entries 28, stats_ace_id 0x5304a6 (0x612530) new 0
seq_num 2147483647, tcam_entries 1, stats_ace_id 0x5304a7 (0x612538) new 0
Intf List for This Region:
BE34.41, hw_count 0.

27
TCAM Partitioning
If you have large ACLs and are running out of TCAM space:

Confirm the current partitioning (ods2 == 160 bit key, ods8 == 640 bit key):
RP/0/RSP0/CPU0:nv-cluster-escalation#show prm server tcam partition all location 0/1/CPU0
Node: 0/1/CPU0:
----------------------------------------------------------------
TCAM partition information: 1 ods2 blk = 2048 entries, 1 ods8 blk = 512 entries
NP0 : tot-ods2-blks 47 [60% of ods2+ods8 blks], used-ods2-blks 4 [ 5% of ods2+ods8 blks]
NP0 : tot-ods8-blks 31 [40% of ods2+ods8 blks], used-ods8-blks 2 [ 3% of ods2+ods8 blks]
NP1 : tot-ods2-blks 47 [60% of ods2+ods8 blks], used-ods2-blks 4 [ 5% of ods2+ods8 blks]
NP1 : tot-ods8-blks 31 [40% of ods2+ods8 blks], used-ods8-blks 2 [ 3% of ods2+ods8 blks]

1. Configure the new partitioning:
RP/0/RSP0/CPU0:nv-cluster-escalation(admin-config)#hw-module profile tcam ?
  default          Default tcam partitions ods2:ods8 to 60:40
  tcam-part-30-70  Set tcam partitions ods2:ods8 to 30:70
  tcam-part-40-60  Set tcam partitions ods2:ods8 to 40:60
  tcam-part-50-50  Set tcam partitions ods2:ods8 to 50:50
  tcam-part-70-30  Set tcam partitions ods2:ods8 to 70:30
RP/0/RSP0/CPU0:nv-cluster-escalation(admin-config)#hw-module profile tcam tcam-part-30-70 location 0/1/CPU0
Tue Oct 21 15:03:35.493 UTC
In order to activate this new tcam partition profile, you must manually reload the line card.

2. Reload the line card and confirm the new partitioning:
RP/0/RSP0/CPU0:nv-cluster-escalation#show prm server tcam partition all location 0/1/CPU0
Node: 0/1/CPU0:
----------------------------------------------------------------
TCAM partition information: 1 ods2 blk = 2048 entries, 1 ods8 blk = 512 entries
NP0 : tot-ods2-blks 23 [29% of ods2+ods8 blks], used-ods2-blks 4 [ 5% of ods2+ods8 blks]
NP0 : tot-ods8-blks 55 [71% of ods2+ods8 blks], used-ods8-blks 2 [ 3% of ods2+ods8 blks]
NP1 : tot-ods2-blks 23 [29% of ods2+ods8 blks], used-ods2-blks 4 [ 5% of ods2+ods8 blks]
NP1 : tot-ods8-blks 55 [71% of ods2+ods8 blks], used-ods8-blks 2 [ 3% of ods2+ods8 blks]

28
ARP

29
ARP Architecture
IOS XR:
fully distributed
two-stage forwarding operation (clear separation of ingress and egress feature processing)
Layer 2 header imposition is an egress operation
only the line card that hosts the egress interface needs to know the Layer 2 encapsulation for packets that have to be forwarded out of that interface.
As a consequence, ARP and adjacency tables are local to a line card.

[Diagram: on the RP, LDP, RSVP-TE, BGP, OSPF, static, ISIS and EIGRP feed LSD and RIB; over the internal EOBC the LC runs FIB SW, ARP, adjacency and AIB, programming the NPU FIB. AIB: Adjacency Information Base; RIB: Routing Information Base; FIB: Forwarding Information Base; LSD: Label Switch Database]

Exceptions:
Bundle-Ethernet interfaces: ARP synced b/w all LCs that host the bundle members.
Bridged Virtual Interfaces (BVI): ARP synced b/w all LCs. (Ingress+egress L3 processing on the ingress LC.)

30
High-Scale ARP Deployments
Challenges:
synchronisation of a large number of ARP entries across line cards during large ARP
churn:
ARP storm in the attached subnet
Router starts forwarding to end devices in a large attached subnet, triggering ARP resolution
requests
Running the show arp command or polling ARP via SNMP at very high scale can further
slow down the ARP process
Supported scale:
128k entries per LC tested thoroughly at Cisco
ARP table can grow up to the dynamic memory limit imposed on the ARP process
You can go higher, but make sure you test your deployment scenario

31
ARP Data Plane
Ingress NP classifies the packet to an interface and detects that it's an ARP packet.
Packet is subjected to Local Packet Transport Services (LPTS) processing:
the packet goes through the dedicated policer for ARP packets (all ARP is subject to this policer; the NP ucode doesn't validate the request and punts ARP requests directly to the LC CPU)
if not dropped by the policer, the packet is punted to the local line card CPU
Packet is received by SPP and passed on to the ARP process via spio.
Outgoing ARP packets are injected from the RP CPU to the NP that hosts the egress interface.
BE: the RP performs the load-balancing hash calculation and injects the packet to the selected bundle member.
BVI: the RP sends the ARP packet toward np0 of a LC dedicated to handling packets injected into BVI.

[Diagram: punt path on the line card: PHY -> NP (LPTS) -> FIA / punt switch -> Tsec driver -> SPP -> spio/netio -> ARP process on the LC CPU]
32
ARP Data Plane - Resolution
When the line card that performs the egress IPv4 processing receives a packet for a prefix for which there is no adjacency information, the packet must be punted to the slow path for ARP resolution.
Packet is subjected to Local Packet Transport Services (LPTS) processing:
the packet goes through the dedicated policer for ARP packets (all ARP is subject to this policer; the NP ucode doesn't validate the request and punts ARP requests directly to the LC CPU)
if not dropped by the policer, the packet is punted to the local line card CPU
Packet is received by NetIO (dedicated queue for packets that require ARP resolution).
NetIO triggers an ARP resolution request for the given IPv4 address.
The original IPv4 packet sits in the NetIO queue until ARP resolution is completed and the adjacency is created, after which it's injected towards the NP.
33
ARP Data and Control Plane Commands
NP5 is receiving excessive ARP packets (punted and dropped pps):
RP/0/RSP0/CPU0:ASR9006-H#sh controllers np counters np5 location 0/1/CPU0
<output omitted..>
776 ARP      130760261 929   <- 929 pps punted
777 ARP_EXCD 613523651 4422  <- 4422 pps dropped

993 packets are sitting in NetIO awaiting ARP resolution:
RP/0/RSP0/CPU0:ASR9006-H#sh netio clients location 0/1/CPU0
<output omitted..>
          Input       Punt                XIPC InputQ   XIPC PuntQ
ClientID  Drop/Total  Drop/Total          Cur/High/Max  Cur/High/Max
--------------------------------------------------------------------------------
<output omitted..>
arp       0/0         17025280/138794437  0/0/1000      993/1000/1000

If you want to see which packets are awaiting ARP resolution, check the ARP process job ID on the desired location and dump the packets:
RP/0/RSP0/CPU0:ASR9006-H#sh packet-memory job 115 data 128 location 0/1/CPU0
Fri Jan 29 07:26:03.500 MyZone
Pakhandle   Job Id  Ifinput          Ifoutput  dll/pc      Size  NW Offset
0xdd8e18f8  115     TenGigE0/1/0/17  BVI1051   0x4e78bd78  60    14
0000000: 00082C00 1058C800 00000000 04DA4500 002E0000
0000020: 00003F3D 8BFA1600 00160D00 CC830001 02030405
0000040: 06070809 0A0B0C0D 0E0F1011 12131415 16171819
-----------------------------------------------------------------

34
ARP Configuration Commands
Configure the LPTS ARP policer. The rate is applied to each NP on the LC; if you have 8 NPs on the LC, the max total ARP rate hitting the LC CPU would be 400 x 8 = 3200 pps:
!
lpts punt police location 0/1/CPU0
 protocol arp rate 400
!

Recommended in BNG deployments (available starting from 5.3.3):
!
subscriber arp scale-mode-enable
!

Recommended in HSRP/VRRP deployments; keeps ARP pre-populated on the standby (available starting from 5.3.2):
RP/0/RSP0/CPU0:ASR9006-H(config)#arp redundancy group 1 ?
  interface-list    List of Interfaces for this Group
  peer              Peer Address for this Group
  source-interface  Source interface for Redundancy Peer Communication

Read more on ARP operation and troubleshooting in:


https://supportforums.cisco.com/document/12766486/troubleshooting-arp-asr9000-routers

35
How Do I Tell If ARP Process Is Overloaded?
LC/0/1/CPU0:Jan 27 06:38:12.445 : netio[270]: %PKT_INFRA-PQMON-6-QUEUE_DROP : Taildrop on XIPC queue 2 owned
by arp (jid=115)
NetIO queue is full, packets that require ARP resolution are dropped

RP/0/RSP0/CPU0:ASR9006-H#sh arp trace location 0/1/CPU0 | i GSP send failed


Jan 28 00:18:59.839 ipv4_arp/slow 0/1/CPU0 104# t1 ERR: GSP send failed for 128 arp entries, message ID:
5140529: No space left on device
ARP synchronisation requests are dropped
This is more likely to occur in IRB deployments (BVI interface)

36
High-Scale ARP Deployments Tips
Validate your solution if you're going beyond 128k entries per LC
Tighten the LPTS policer rate.
A safe ball-park number is 1000 per LC (if the LC has 4xNPs, configure 250).
Monitor ARP counters at NP
Monitor NetIO ARP resolution queue
Monitor adjacencies summary info
If using BVI, look for GSP failures in ARP traces
In BNG deployments on IOS XR release 5.3.3 or later, configure "subscriber arp
scale-mode-enable"
Read more on ARP operation and troubleshooting in:
https://supportforums.cisco.com/document/12766486/troubleshooting-arp-asr9000-routers
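The policer tip above, sketched for a 4-NP line card targeting ~1000 pps per LC (the location is hypothetical; syntax as on the configuration slide):

```
lpts punt police location 0/1/CPU0
 protocol arp rate 250
!
```

With 4 NPs each policing at 250 pps, the aggregate ARP rate reaching the LC CPU stays at or below ~1000 pps.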

37
Fragmentation

38
Fragmentation and Reassembly at a glance
All fragmentation is handled on the egress LC CPU (the ingress LC does not do the MTU check)
Fragmentation is outsourced to NetIO (process switching)
No features are applied on fragment injection by NetIO; fragments go directly from the LC CPU to NPU transmit.
BNG has SPP-based fragmentation at 10kpps, with features applied; packets are injected through the fabric
Verify with show spp node-counters
Reassembly is only done for "for us" packets, handled by NetIO; the application receives complete packets
Each grey arrow in the diagram is an IPC (inter-process call). Because XR has a protected/virtual memory space, each process can't look into another's memory, unless shared memory is used, which can be seen by all but is limited in size.

[Diagram: LC CPU memory with processes (BGP, CDP, ICMP, FTP, HDLC, raw, UDP, TCP, BFD, Netflow, NetIO, CpuCntrl) above SPP, with punt queues (high/medium/critical, etc.)]

39
Verifying
Looking at fragmentation at the SPP (the interrupt switching path) level.
RP/0/RSP0/CPU0:A9K-BNG#show spp node-counters | i frag
ipv4_frag
Drop: Dont-fragment set: 3125 <<<< packets that have DF bit set
ipv4-frag: 3854 <<<< packets fragmented

Verifying the NP counters with show controller np counters NP<x> location 0/<y>/CPU0
416 RSV_PUNT_IP_MTU_EXCEEDED 9615 99 <<<< 100pps requiring frag
1048 PPPOE_FRAG_NEEDED_PUNT 9615 99 <<<< on pppoe sessions
842 IPV4_FRAG_NEEDED_PUNT 677 100 <<<< packets punted for FRAG reasons

Note that in both cases the DF bit is NOT assessed in the hardware, this is handled
by the controlling agent (NETIO/SPP)

40
Preventing fragmentation 1: MSS adjust
MSS adjust https://supportforums.cisco.com/document/12417241/tcp-mss-adjust-asr9000
The segment size in TCP is the maximum segment size (as opposed to the window size, which is the amount of data that can be sent before being acknowledged).
The ASR9000 allows for TCP MSS adjusting in hardware (Typhoon and later).
The MSS value needs to be the first option in the TCP header.
The MSS is part of the initial SYN from client to server.
MSS rewrite requires a checksum update.
Needs to be enabled per NP.
Possible values: 1280-1535 (only the 8 MSBs are adjusted to the designated MSS value).

BNG:
subscriber
 pta tcp mss-adjust 1410

Normal L3 routing:
hw-module location 0/0/CPU0 tcp-mss-adjust np 1 value 1300
!
interface Bundle-Ether48.10
 ipv4 address 4.8.10.4 255.255.255.0
 ipv4 tcp-mss-adjust enable

41
Preventing fragmentation 2: MTU tuning
Gotchas:
OSPF
DBD Exchange can get stuck
ISIS
Hello padding

MTU configuration between main and subinterface and packet reception:


If you have a main MTU defined, then the packet received will get dropped when it is over that
configured MTU value.
If you have a main MTU defined of say 8000 and a subif MTU of 2000 and that subif is an
L2transport interface, you WILL be able to receive a 3000 byte packet on your AC.
If the AC is an xconnect with a PW going out a core interface, the core interface MTU on the egress
LC will be enforced and may result in fragmentation.
If you receive a 3000 byte packet on your PW (assuming core can handle that) and needs to go out
the AC with the subif MTU of 2000, this WILL go through.

42
Load-balancing revisited

43
[Topology: Access | L2VPN PE | Core]

Loadbalancing FAQ
If packets are fragmented, L4 is omitted from the hash
calculation
IPv6 flow label addition to the hash (includes the 5-tuple; needs config) is
coming (5.3.2)
show cef exact-route or bundle-hash BE<x> can be used to
feed info and determine the actual path/member, but this is a
shadow calculation that is *supposed* to be the same as the
HW.
Mixing bundle members between Trident/Typhoon/Tomahawk
can be done but is not recommended (though the hash calc is the same).
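The shadow calculation mentioned above can be queried like this; addresses, ports, bundle number and location are hypothetical, and the exact option set varies by release:

```
RP/0/RSP0/CPU0:router#show cef exact-route 10.0.0.1 10.1.1.1 protocol udp source-port 1000 destination-port 2000 location 0/0/CPU0
RP/0/RSP0/CPU0:router#bundle-hash bundle-ether 1 location 0/0/CPU0
```

bundle-hash is interactive and prompts for the flow parameters, then reports the member that the hash is expected to select.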

44
How does ECMP path or LAG member selection work?

Every packet arriving on an NPU will undergo a hash computation. Which
fields are used depends on the encap (see the overview shortly).

[Diagram: L2/L3/L4 payload fields feed a CRC32, producing a 32-bit hash; 8 bits are selected from it, indexing 256 buckets; the buckets are distributed over the available paths (ECMP) or members (LAG)]
45
ECMP Load balancing
A: IPv4 Unicast or IPv4 to MPLS
IPv6 uses the first 64 bits in 4.0 releases, the full 128 bits in 4.2 and later releases
No or unknown Layer 4 protocol: IP SA, DA and Router ID
UDP or TCP: IP SA, DA, Src Port, Dst Port and Router ID
B: IPv4 Multicast
For (S,G): Source IP, Group IP, next-hop of RPF
For (*,G): RP address, Group IP address, next-hop of RPF
C: MPLS to MPLS or MPLS to IPv4
# of labels <= 4: same as IPv4 unicast if the inner packet is IP based; for EoMPLS (an Ethernet header follows), the 4th label + RID
# of labels > 4: 4th label and Router ID

Bundle Load balancing


- L3 bundle uses the 5-tuple, as in A (e.g. an IP-enabled routed bundle interface)
- MPLS-enabled bundle follows C
- L2 access bundle uses the access S/D-MAC + RID, or L3 if configured (under l2vpn)
- L2 access AC to PW over an MPLS-enabled core-facing bundle uses the PW label (not the FAT-PW label, even if configured)
- FAT PW label is only useful for P/core routers

46
MPLS vs IP Based loadbalancing
When a labeled packet arrives on the interface.

The ASR9000 advances a pointer for at max 4 labels.

If the number of labels <=4 and the next nibble seen right after that label is
4: default to IPv4 based balancing
6: default to IPv6 based balancing

This means that if you have a P router that has no knowledge of the MPLS service of the packet, that nibble can
either be the IP version (in MPLS/IP) or the first nibble of the DMAC (in EoMPLS).

RULE: If you have EoMPLS services AND MACs starting with a 4 or 6, you HAVE to use Control-Word.

[Packet layout: L2 | MPLS | MPLS | next nibble: 45 (IPv4 header) vs 0000 (CW) vs 41-22-33 (a MAC such as 4111.0000.xxxx)]

The Control Word inserts additional zeros after the inner label, telling the P nodes to fall back to label-based balancing.

In EoMPLS, the inner label is the VC label, so load-balancing is per VC. A more granular spread for EoMPLS can be achieved with FAT PW
(a label based on the FLOW, inserted by the PE device that owns the service).
Take note of the knob to change the flow-label code: 0x11 (17 decimal, as per the draft specification; the IANA assignment is 0x17).
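A FAT PW configuration sketch (the pw-class name is hypothetical):

```
l2vpn
 pw-class FAT-PW
  encapsulation mpls
   ! push and process the flow label in both directions
   load-balancing
    flow-label both
   !
  !
 !
!
```

Attaching this pw-class to the xconnect makes the PE insert the flow label below the VC label, so core P routers spread individual flows across ECMP paths and bundle members.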

47
Loadbalancing ECMP vs UCMP and polarization
Support for Equal cost and Unequal cost
32 ways for IGP paths
32 ways (Typhoon) for BGP (recursive paths); 8-way on Trident
64 members per LAG
Make sure you reduce the recursiveness of routes as much as possible (static route
misconfigurations)
All load-balancing uses the same hash computation but looks at different bits from that hash.
Use the hash shift knob to prevent polarization.
Adjacent nodes compute the same hash, with little variety if the RID is close
This can result in north-bound or south-bound routing.
Hash shift makes the nodes look at completely different bits and provides more spread.
Trial and error (4-way shift on Trident, 32-way on Typhoon; values >5 on Trident result in modulo)
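The hash shift knob itself is a one-liner; the value 3 is hypothetical and, as noted above, is found by trial and error:

```
RP/0/RSP0/CPU0:router(config)#cef load-balancing algorithm adjust 3
RP/0/RSP0/CPU0:router(config)#commit
```

Configuring a different adjust value on adjacent nodes makes each of them select different bits from the same CRC32 result, breaking the polarization.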

48
Hash shift, what does it do?
[Diagram: the same CRC32 hash is computed over the L2/L3/L4 payload fields, but the hash shift value (e.g. 100 = 8, 010 = 4) selects a different set of 8 bits from the 32-bit result, so the 256 buckets map to different paths (ECMP) / members (LAG) on each node. Buckets are distributed over the available members/paths.]
49
Loadbalancing knobs and what they affect
l2vpn load-balancing flow src-dst-ip
For L2 bundle interfaces: egress out of the AC
FAT label computation on ingress from the AC towards the core
Note: upstream load-balancing out of the core interface does not look at the FAT label (it is inserted after the hash is
computed)

On bundle (sub)interfaces:
Load-balance on src IP, dst IP, src/dst, or a fixed hash value (tie the VLAN to a hash result)
Used to be L2transport only, now also on L3.
GRE (no knob needed anymore)
Encap: based on the incoming IP header
Decap: based on the inner IP header
Transit: based on the inner payload if the inner packet is v4 or v6, otherwise based on the GRE header
So running MPLS over GRE will result in load-balancing skews!
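The per-bundle knob above looks like this; the bundle number and hash choice are hypothetical:

```
interface Bundle-Ether1
 ! pin member selection to the source IP only
 bundle load-balancing hash src-ip
!
```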

50
ASR9000 L2VPN Load-Balancing (cont.)
ASR9000 PE Imposition load-balancing behaviors:
Per-PW, based on the MPLS VC label (default)
Per-Flow, based on L2 payload information, i.e. L2 DMAC / L2 SMAC, RTR ID
Per-Flow, based on L3/L4 payload information, i.e. L3 D_IP / L3 S_IP / L4 D_port / L4 S_port (1), RTR ID

ASR9000 PE Disposition load-balancing behaviors:
Per-Flow load-balancing based on L2 payload information, i.e. L2 DMAC / L2 SMAC (default)
Per-Flow load-balancing based on L3/L4 payload information, i.e. L3 D_IP / L3 S_IP / L4 D_port / L4 S_port

PE Per-Flow load-balance configuration based on L2 payload information:
!
l2vpn
 load-balancing flow src-dst-mac
!

PE Per-Flow load-balance configuration based on L3/L4 payload information:
!
l2vpn
 load-balancing flow src-dst-ip
!

(1) Typhoon/Tomahawk LCs required for L3&L4 hash keys. Trident LCs are only
capable of using L3 keys
51
ASR9000 L2VPN Load-Balancing (cont.)
ASR9000 as P router load-balancing behaviors:
Based on L3/L4 payload information for IP MPLS payloads(1)
Based on the bottom-of-stack label for non-IP MPLS payloads
IP MPLS payloads are identified from the version field value (the first nibble right after the bottom-of-stack label):
Version = 4 for IPv4
Version = 6 for IPv6
Anything else treated as non-IP

L2VPN (PW) traffic is treated as non-IP. For L2VPN, the bottom-of-stack
label could be the PW VC label or the Flow label (when using FAT PWs).
PW Control-Word strongly recommended to avoid erroneous behavior on the P
router when the DMAC starts with 4 or 6.

[Typical EoMPLS frame: RTR DA | RTR SA | MPLS E-Type (0x8847) | PSN MPLS
Label | PW MPLS Label | PW CW (first nibble 0) | DA | SA | 802.1q Tag
(0x8100) C-VID | E-Type (0x0800) | IPv4 Payload]

(1) MPLS encap IP packets with up to four (4) MPLS labels
52
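The version-field check described above can be sketched as a small classifier. This is a simplified model of the P-router decision, not vendor code; the example MAC addresses are made up. It also shows why the control word helps: a DMAC whose first nibble is 4 or 6 would otherwise be misread as an IP header.

```python
def classify_mpls_payload(payload: bytes) -> str:
    """How a P router decides to hash an MPLS payload (simplified model)."""
    first_nibble = payload[0] >> 4
    if first_nibble == 4:
        return "ipv4"    # hash on L3/L4 fields
    if first_nibble == 6:
        return "ipv6"    # hash on L3/L4 fields
    return "non-ip"      # hash on the bottom-of-stack label

# An Ethernet PW payload whose DMAC starts with 0x60 looks like IPv6:
raw_frame = bytes.fromhex("60aabbccddee")
# The 4-byte PW control word always starts with a zero nibble:
cw_frame = bytes([0x00, 0x00, 0x00, 0x00]) + raw_frame
```

With the control word in front, the same frame is correctly treated as non-IP and hashed on the bottom label.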
Flow Aware Transport PWs

Problem: How can LSRs load-balance traffic from flows in a PW across core
ECMPs and Bundle interfaces?
LSRs load-balance traffic based on IP header information (IP payloads) or
based on the bottom-of-stack MPLS label (non-IP payloads)
PW traffic is handled as a non-IP payload

RFC6391 defines a mechanism that introduces a Flow label that allows P
routers to distribute flows within a PW
PEs push / pop the Flow label
P routers are not involved in any signaling / handling / manipulation of
the Flow label

[EoMPLS frame without Flow Label: RTR DA | RTR SA | MPLS E-Type (0x8847) |
PSN MPLS Label | PW MPLS Label | PW CW | DA | SA | 802.1q Tag (0x8100)
C-VID | E-Type (0x0800) | IPv4 Payload.
EoMPLS frame with Flow Label: same, with a Flow MPLS Label inserted
between the PW MPLS Label and the PW CW.]
53
Flow Aware Transport PWs (cont.)
ASR9000 PE is capable of negotiating (via LDP, RFC6391) the handling of PW
Flow labels
ASR9000 is also capable of manually configured imposition and disposition
behaviors for PW Flow labels
Flow label value is based on L2 or L3/L4 PW payload information
ASR9000 PE is capable of load-balancing regardless of the presence of the
Flow label; the Flow label is aimed at assisting P routers

Configuration variants:
!
l2vpn
 pw-class sample-class-1
  encapsulation mpls
   load-balancing flow-label both
!
 pw-class sample-class-1
  encapsulation mpls
   load-balancing flow-label tx
!
 pw-class sample-class-1
  encapsulation mpls
   load-balancing flow-label rx
!
l2vpn
 pw-class sample-class
  encapsulation mpls
   load-balancing flow-label both static
!
54
[Reference topology for the following slides: PE1 -- {P1, P2} -- {P3, P4} --
PE2, carrying PW1 (Service X) and PW2 (Service Y).
PE1: PE router with ECMP (core-facing bundle and non-bundle variants shown).
P1: P router with ECMP and Bundle interfaces.
P2: P router with ECMP and non-bundle interfaces.
P3 / P4: P routers without ECMP, with Bundle interfaces.
PE2: PE router with a Bundle interface as the PW attachment circuit (AC).]
55
L2VPN Load-balancing (Per-VC LB)
Default behavior (per-VC load-balancing), with Service X flows F1x-F4x and
Service Y flows F1y-F4y entering PE1:
Default - ASR9000 PE with ECMP: PE load-balances PW traffic across ECMPs
based on the VC label; i.e. all traffic from a PW is assigned to one ECMP.
Default - ASR9000 PE with core-facing Bundle: PE load-balances traffic
across Bundle members based on the VC label; i.e. all traffic from a PW is
assigned to one member.
Default - ASR9000 P with core-facing Bundle: P router load-balances
traffic across Bundle members based on the VC label; i.e. all traffic from
a PW is assigned to one member.
Default - ASR9000 P with ECMP: P router load-balances traffic across
ECMPs based on the VC label; i.e. all traffic from a PW is assigned to one
ECMP.
Default - ASR9000 PE with AC Bundle: PE load-balances traffic across
Bundle members based on DA/SA MAC.
56
L2VPN Load-balancing (L2/L3 LB)
With the PE L2VPN load-balancing knob configured on the PEs:
l2vpn
 load-balancing flow {src-dst-mac | src-dst-ip}

ASR9000 PE with ECMP: PE now load-balances PW traffic across ECMPs based
on L2 or L3 payload info; i.e. flows from a PW are distributed over ECMPs
(two-stage hash process).
ASR9000 PE with core-facing Bundle: PE now load-balances traffic across
Bundle members based on L2 or L3 payload info; i.e. flows from a PW are
distributed over members.
Default - ASR9000 P: PW load-balancing is still based on the VC label;
only one ECMP and one bundle member is used for all traffic of a PW.
ASR9000 PE with AC Bundle: PE now load-balances traffic across Bundle
members based on L2 or L3 payload info.
57
L2VPN Load-balancing (L2/L3 LB + FAT)
With the PE L2VPN load-balancing knob (same as before) plus FAT PW on the
PEs; no new configuration is required on the P routers:
l2vpn
 pw-class sample-class
  encapsulation mpls
   load-balancing flow-label both

ASR9000 PE: PE now adds Flow labels based on L2 or L3 payload info.
ASR9000 PE with ECMP: PE now load-balances PW traffic across ECMPs based
on L2 or L3 payload info; i.e. flows from a PW are distributed over ECMPs.
ASR9000 PE with core-facing Bundle: PE now load-balances traffic across
Bundle members based on L2 or L3 payload info; i.e. flows from a PW are
distributed over members.
ASR9000 P with ECMP: P router now load-balances traffic across ECMPs
based on the Flow label; i.e. flows from a PW are distributed over ECMPs.
ASR9000 P with core-facing Bundle: PW load-balancing is now based on the
Flow label; i.e. flows from a PW are distributed over bundle members.
ASR9000 PE with AC Bundle: PE load-balances traffic across Bundle members
based on L2 or L3 payload info.
58
L2VPN LB Summary
ASR9000 as L2VPN PE router performs a multi-stage hash algorithm to select
ECMPs / Bundle members
User-configurable hash keys allow the use of L2 fields or L3/L4 fields in the PW
payload in order to perform load-balancing at egress imposition
ASR9000 (as PE) complies with RFC6391 (FAT PW) to push/pop Flow labels
and aid load-balancing in the Core
PE load-balancing is performed irrespective of PW Flow label presence
FAT PW allows for load-balancing of PW traffic in the Core WITHOUT requiring any
HW/SW upgrades on the LSRs
Cisco has prepared a draft to address the current gap of FAT PW for BGP-signaled PWs

ASR9000 as L2VPN P router performs a multi-stage hash algorithm to select
ECMPs / Bundle members
It always uses the bottom-of-stack MPLS label for hashing
The bottom-of-stack label could be the PW VC label or the Flow (FAT) label for L2VPN
59
Significance of PW Control-Word

Problem: DANGER for the LSR. The LSR will confuse the payload as IPv4 (or
IPv6) when the DMAC starts with 4 or 6, and attempt to load-balance based
off incorrect fields.

Solution: Add a PW Control Word in front of the PW payload. This
guarantees that a zero nibble will always be present right after the
bottom-of-stack label, and thus no risk of confusion for the LSR.

[Frame without CW: RTR DA | RTR SA | MPLS E-Type (0x8847) | PSN MPLS
Label | PW MPLS Label | DA starting with nibble 4 or 6 | SA | 802.1q Tag
(0x8100) C-VID | Payload E-Type | non-IP payload.
Frame with CW: same, with "0 | PW CW" inserted before the DA.]
60
QOS:
Queuing and Scheduler
Hierarchy

61
Queuing Hierarchy Of Default Interface Queues
[Diagram: hierarchy of default interface queues. Level 4: the per-port
default queues (P1, P2, P3 and Low priority). These feed the Level 3
schedulers, which feed the Level 2 schedulers and finally the Level 1
(port) scheduler.]
62
Old format was:
show qoshal default-queue subslot 1 port 0 location 0/0/CPU0
Default Interface Queues
RP/0/RSP0/CPU0:A9K#show qoshal default-queue interface Gig0/0/1/0
Thu Apr 3 06:00:08.931 UTC
Interface Default Queues : Subslot 1, Port 0
===============================================================
Port 64 NP 1 TM Port 16
Ingress: QID 0x20000 Entity: 1/0/2/4/0/0 Priority: Priority 1 Qdepth: 0   <-- Qdepth: current number of packets in the queue
StatIDs: commit/fast_commit/drop: 0x690000/0x660/0x690001
Statistics(Pkts/Bytes):
Tx_To_TM 0/0 Fast TX: 425087/91780021   <-- TX statistics
Total Xmt 425087/91780021 Dropped 0/0

Ingress: QID 0x20001 Entity: 1/0/2/4/0/1 Priority: Priority 2 Qdepth: 0
<...>
<...>
Entity format: NP/TM/Chunk/Level/Index/Offset
Egress: QID 0x20020 Entity: 1/0/2/4/4/0 Priority: Priority 1 Qdepth: 0
StatIDs: commit/fast_commit/drop: 0x6900a0/0x663/0x6900a1
Statistics(Pkts/Bytes):
Tx_To_TM 0/0 Fast TX: 412811/68090497
Total Xmt 412811/68090497 Dropped 0/0

Egress: QID 0x20021 Entity: 1/0/2/4/4/1 Priority: Priority 2 Qdepth: 0


<...>
<...>
63
MQC Hierarchy in Queuing ASIC
[Diagram: port default queues P1, P2, P3, L alongside the MQC queues c1,
c2, cd-c at L4; cd-p at L3; L2 and L1 schedulers at the port. The
configured MQC entities are active; unused entities remain inactive.]

policy-map child
 class c1
  priority level 1
  police rate 640 kbps
 class c2
  bandwidth 20 mbps
 class class-default
  bandwidth 1 mbps
!
policy-map parent
 class class-default
  shape average 35 mbps
  service-policy child
!
interface GigabitEthernet0/0/0/0
 service-policy output parent
64
MQC Hierarchy in Queuing ASIC
[Diagram: the same child/parent policy applied to two subinterfaces,
G0/0/0/0.1 and G0/0/0/0.2. Each subinterface gets its own c1, c2, cd-c
queues at L4 and its own cd-p at L3, next to the port default queues,
sharing the L2 and L1 schedulers.]

policy-map child
 class c1
  priority level 1
  police rate 640 kbps
 class c2
  bandwidth 20 mbps
 class class-default
  bandwidth 1 mbps
!
policy-map parent
 class class-default
  shape average 35 mbps
  service-policy child
!
interface GigabitEthernet0/0/0/0.1
 service-policy output parent
!
interface GigabitEthernet0/0/0/0.2
 service-policy output parent
65
MQC Hierarchy in Queuing ASIC
[Diagram: three-level policy on G0/0/0/0.1, next to the port default
queues. c1, c2, cd-c and cd-p-vc queues at L4; c3 and cd-p at L3; cd-gp
at L2; port scheduler at L1.]

policy-map child
 class c1
  priority level 1
  police rate 640 kbps
 class c2
  bandwidth 20 mbps
 class class-default
  bandwidth 1 mbps
!
policy-map parent
 class c3
  shape average 35 mbps
  service-policy child
 class class-default
  bandwidth 1 mbps
!
policy-map grand-parent
 class class-default
  shape average 35 mbps
  service-policy parent
!
interface GigabitEthernet0/0/0/0.1
 service-policy output grand-parent
66
Priority Propagation
All ASR9000 cards support priority propagation
Traffic belonging to a high priority class will trump all other traffic of lower priority throughout the
Queuing ASIC hierarchy, all the way up to the port
Priority propagation is enabled on all queues by default; it cannot be turned off.
The priority keyword can only be specified at the leaf level, not in a middle node of
an MQC policy-map hierarchy.

67
Priority Propagation
[Diagram: child queues Pr1, Pr2, Cl3, Cl4 at L4 feed the parent (min BW
and parent shaper) at L3. P1/P2 commit and excess schedulers feed the
other L3 entities and the grand-parent shaper at L2; priority traffic
bypasses lower-priority traffic at every level.]

policy-map child
 class Pr1
  police rate 64 kbps
  priority level 1
 class Pr2
  police rate 10 mbps
  priority level 2
 class Cl3
  bandwidth 3 mbps
 class Cl4
  bandwidth 1 mbps
!
policy-map parent
 class parent1
  shape average 25 mbps
  service-policy child
 class parent2
  shape average 25 mbps
  service-policy child
!
policy-map grand-parent
 class class-default
  shape average 500 mbps
  service-policy parent
68
QoS:
Queue Limits/Occupancy
Buffer Limits/Occupancy

69
Queue Accounting Vs. Port Accounting
Queue occupancy is accounted in bytes (units of 64 bytes)
Inst-queue-len (Packets) field in show policy-map interface command output.
If there are 2 packets of 80 bytes each in the queue (160 bytes total), Inst-queue-len shows 3 (3 x 64 = 192 bytes).
This value is then compared against the queue-limit and random-detect configured on the
policy-map
Starting with 5.1.1 (5.3.0 for Tomahawk), queue occupancy can instead be calculated in numbers of 256-byte buffers:
hw-module all qos-mode wred-buffer-mode

Port-level occupancy is measured in numbers of 256-byte buffers
There is no check on the sum of the queue-limits configured on all of a port's queues;
the user can over-subscribe the queue-limits on individual queues.
A limit exists on the number of buffers per port, per direction.
The limit itself is configured as a linear curve with Min & Max:
[Graph: drop probability rises linearly from 0 at `min` buffers to 1 at `max` buffers.]
70
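The accounting described above can be expressed as a tiny helper. This is a sketch based on the slide's text: the 64-byte queue units and 256-byte port buffers, and the round-up over the queue total (matching the 2 x 80-byte = 3-unit example), are assumptions drawn from that description.

```python
import math

def queue_units(packet_sizes, unit=64):
    """Occupancy in `unit`-byte units, as compared against queue-limit/WRED."""
    return math.ceil(sum(packet_sizes) / unit)

# Two 80-byte packets occupy 160 bytes -> 3 units of 64 bytes (192 bytes).
# The same queue measured in 256-byte buffers (wred-buffer-mode) holds 1 buffer.
```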
Port Buffer Occupancy
(entity taken from the earlier show qoshal default-queue interface Gig0/0/1/0 output)
RP/0/RSP0/CPU0:A9K#sh qoshal entity np 1 tm 0 chunk 2 level 4 index 4 0 hierarchy loc 0/0/CPU0
<...>
Np: 1 TM: 0 Chunk: 0 Level: 0 Index: 11 Offset 0
<...>
Egress: Entity: 1/0/0/0/11/0 Qdepth: 0   <-- current buffer occupancy in numbers of 256-byte particles
<...>
[D] WRED template - ID: 2 Curve: 0 Min/Mid/Max 657/657/851 MidDrop/MaxDrop 0/102
[D] WRED scale - ID: 3 Value: 132096
[D] WRED template - ID: 2 Curve: 1 Min/Mid/Max 657/657/851 MidDrop/MaxDrop 0/102
[D] WRED scale - ID: 3 Value: 132096
[D] WRED template - ID: 2 Curve: 2 Min/Mid/Max 657/657/851 MidDrop/MaxDrop 0/102
[D] WRED scale - ID: 3 Value: 132096
[D] WRED template - ID: 2 Curve: 3 Min/Mid/Max 657/657/851 MidDrop/MaxDrop 0/102
[D] WRED scale - ID: 3 Value: 132096
Min/Mid/Max: buffer limit in numbers of 256-byte particles.
WRED scale Value: in PPM.
Two curves are used by default:
One for normal priority, and one for high priority (levels 1, 2 and 3).
Per-Priority Buffer-Limits mode (all 4 curves in use) is available from XR Release 5.1.1:
hw-module all qos-mode per-priority-buffer-limit
show qoshal qos-mode location <loc>

71
MQC Queue Occupancy
RP/0/RSP0/CPU0:av-asr9001#sh policy-map interface g0/0/1/0 output
GigabitEthernet0/0/1/0 output: core-parent

Class af21
  Classification statistics (packets/bytes) (rate - kbps)
    Matched : 9443/9480772 0
    Transmitted : 4646/4664584 0
    Total Dropped : 2397/2406588 0
  Policy core-child Class TGN
    Classification statistics (packets/bytes) (rate - kbps)
      Matched : 9443/9480772 0
      Transmitted : 4646/4664584 0
      Total Dropped : 2397/2406588 0
    Queueing statistics
      Queue ID : 131714
      High watermark : N/A
      Inst-queue-len (packets) : 1814   <-- current queue occupancy in 64-byte units
      Avg-queue-len : N/A
      Taildropped(packets/bytes) : 2397/2406588
      Queue(conform) : 4646/4664584 0
      Queue(exceed) : 0/0 0
      RED random drops(packets/bytes) : 0/0
<..>

Reference configuration:
policy-map core-child
 class TGN
  bandwidth percent 10
 class class-default
  bandwidth percent 1
!
policy-map core-parent
 class af21
  service-policy core-child
  shape average 10 mbps
 class class-default
  shape average 100 mbps

72
MQC - What Is Programmed In Hardware
RP/0/RSP0/CPU0:av-asr9001#sh qos interface Gig0/0/1/0 output
Interface: GigabitEthernet0_0_1_0 output
Bandwidth configured: 1000000 kbps Bandwidth programed: 1000000 kbps
ANCP user configured: 0 kbps ANCP programed in HW: 0 kbps
Port Shaper programed in HW: 0 kbps
Policy: core-parent Total number of classes: 4
----------------------------------------------------------------------
Level: 0 Policy: core-parent Class: af21
QueueID: N/A
Shape CIR : NONE
Shape PIR Profile : 2/3(S) Scale: 156 PIR: 9984 kbps PBS: 124800 bytes
WFQ Profile: 2/9 Committed Weight: 10 Excess Weight: 10
Bandwidth: 0 kbps, BW sum for Level 0: 0 kbps, Excess Ratio: 1
----------------------------------------------------------------------
Level: 1 Policy: core-child Class: TGN
Parent Policy: core-parent Class: af21
QueueID: 131746 (Priority Normal)
Queue Limit: 114 kbytes Abs-Index: 27 Template: 0 Curve: 0
Shape CIR Profile: INVALID
WFQ Profile: 2/76 Committed Weight: 91 Excess Weight: 91
Bandwidth: 1000 kbps, BW sum for Level 1: 1100 kbps, Excess Ratio: 1
----------------------------------------------------------------------
Level: 1 Policy: core-child Class: class-default
Parent Policy: core-parent Class: af21
QueueID: 131747 (Priority Normal)
Queue Limit: 12 kbytes Abs-Index: 7 Template: 0 Curve: 0
Shape CIR Profile: INVALID
WFQ Profile: 2/8 Committed Weight: 9 Excess Weight: 9
Bandwidth: 100 kbps, BW sum for Level 1: 1100 kbps, Excess Ratio: 1

Reference configuration:
policy-map core-child
 class TGN
  bandwidth percent 10
 class class-default
  bandwidth percent 1
!
policy-map core-parent
 class af21
  service-policy core-child
  shape average 10 mbps
 class class-default
  shape average 100 mbps
73
[Diagram: the parent shaper bounds the total bandwidth. The guaranteed BW
of class TGN and class-default (child level) is served first; the total
excess BW is then divided between class TGN and class-default in
proportion to their excess weights. The service rate of class TGN is its
guaranteed BW plus its share of the excess BW.]

policy-map core-child
 class TGN
  bandwidth percent 10
 class class-default
  bandwidth percent 1
!
policy-map core-parent
 class af21
  service-policy core-child
  shape average 10 mbps
 class class-default
  shape average 100 mbps
74
Queue Limit and Service Rate
Queue Limit is by default 100ms worth of Service Rate
Service Rate is the sum of the minimum guaranteed bandwidth and the bandwidth
remaining assigned to a given class
Bandwidth remaining = Parent Max Rate - Sum of Guaranteed BW at child level
Parent Max Rate is typically the parent shaper rate; if there is no parent, it is the physical
interface BW
Bandwidth remaining assigned to a given class = Bandwidth remaining * Excess Weight / Sum of
Excess Weights at child level

Service Rate = Guaranteed BW + ( Parent Max Rate - Sum of Guaranteed BW ) * Excess Weight / Sum of Excess Weights

75
Committed vs Excess Weight
Configured BW allocation is always translated into a WFQ weight
Bandwidth Remaining Ratio is directly translated into a WFQ weight
Bandwidth Percentage is translated as a percentage of 1024

The same value is used for Committed Weight and Excess Weight
Excess bandwidth is therefore distributed among classes in the same ratio as the guaranteed
bandwidth
Committed Weight: applies within the parent bandwidth
Excess Weight: applies when exceeding the parent bandwidth, but within the parent shape
Weight values are pre-fit; the one closest to the user-configured ratio is picked

76
Queue Limit Calculation

Level: 0 Policy: core-parent Class: af21
Shape PIR Profile : 2/3(S) Scale: 156 PIR: 9984 kbps PBS: 124800 bytes
----------------------------------------------------------------------
Level: 1 Policy: core-child Class: TGN
Queue Limit: 114 kbytes Abs-Index: 27 Template: 0 Curve: 0
WFQ Profile: 2/76 Committed Weight: 91 Excess Weight: 91
Bandwidth: 1000 kbps, BW sum for Level 1: 1100 kbps, Excess Ratio: 1
----------------------------------------------------------------------
Level: 1 Policy: core-child Class: class-default
WFQ Profile: 2/8 Committed Weight: 9 Excess Weight: 9

Reference configuration:
policy-map core-child
 class TGN
  bandwidth percent 10
 class class-default
  bandwidth percent 1
!
policy-map core-parent
 class af21
  service-policy core-child
  shape average 10 mbps
 class class-default
  shape average 100 mbps
Queue Limit is 100ms worth of Service Rate
Queue Limit = Service Rate * 100ms
Service Rate is the sum of the minimum guaranteed bandwidth and the bandwidth remaining assigned to a given
class
Parent BW = 9984 kbps
Guaranteed BW of class TGN = 1000 kbps
Sum of guaranteed BW at Level1 = 1100 kbps
Total remaining BW at Level1 = 9984 - 1100 = 8884 kbps
Remaining BW of class TGN = ( 91 / ( 91 + 9 ) ) * 8884 kbps = 8084 kbps
Service Rate of class TGN = 1000 + 8084 = 9084 kbps
Queue Limit of class TGN = 9084 kbps * 100 ms = 908400 bits = 113550 Bytes ~= 114 kB

77
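The worked example above can be written as a small calculator. A sketch only: the router additionally snaps the result to the nearest pre-fit hardware value (the 114 kB shown in `show qos`), which is not modeled here.

```python
def service_rate_kbps(guaranteed, sum_guaranteed, parent_kbps,
                      excess_weight, sum_excess_weights):
    """Guaranteed BW plus this class's share of the parent's remaining BW."""
    remaining = parent_kbps - sum_guaranteed
    share = remaining * excess_weight // sum_excess_weights
    return guaranteed + share

def default_queue_limit_bytes(rate_kbps, ms=100):
    # queue-limit defaults to 100 ms worth of service rate;
    # kbps * ms = bits, then /8 for bytes
    return rate_kbps * ms // 8

rate = service_rate_kbps(1000, 1100, 9984, 91, 91 + 9)   # class TGN -> 9084 kbps
limit = default_queue_limit_bytes(rate)                  # 113550 B (~114 kB pre-fit)
```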
Configuring The Queue Limit
Queue Limit can be configured manually
Supported units:
Bytes
Kilobytes
Megabytes
Milliseconds
Packets (default)
For conversion into bytes of queue occupancy, a packet size of 256 bytes is presumed
Microseconds
Queue Limit sizes are pre-fit
The one closest to the calculated queue limit is picked for HW programming

78
Configuring The Queue Limit
RP/0/RSP0/CPU0:av-asr9001#sh qos interface Gig0/0/1/0 output location 0/0/CPU0
<. . .>
Level: 1 Policy: core-child Class: TGN
Parent Policy: core-parent Class: af21
Queue Limit: 2624 kbytes (10000 packets) Abs-Index: 90 Template: 0 Curve: 0
WFQ Profile: 2/76 Committed Weight: 91 Excess Weight: 91
Bandwidth: 1000 kbps, BW sum for Level 1: 1100 kbps, Excess Ratio: 1
<..>
With:
policy-map core-child
 class TGN
  bandwidth percent 10
  queue-limit 10000 packets
!
Presuming 256-byte packet size; the closest pre-fit value is picked.

RP/0/RSP0/CPU0:av-asr9001#sh qos interface Gig0/0/1/0 output location 0/0/CPU0
<. . .>
Level: 1 Policy: core-child Class: TGN
Parent Policy: core-parent Class: af21
Queue Limit: 1120 kbytes (1000 ms) Abs-Index: 90 Template: 0 Curve: 0
WFQ Profile: 2/76 Committed Weight: 91 Excess Weight: 91
Bandwidth: 1000 kbps, BW sum for Level 1: 1100 kbps, Excess Ratio: 1
<..>
With:
policy-map core-child
 class TGN
  bandwidth percent 10
  queue-limit 1000 ms
!
1000ms worth of Service Rate: 9084 [kbps] * 1 [s] / 8 [bit/Byte] = 1135500 Bytes;
the closest pre-fit value is picked.
79
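The two conversions behind the outputs above, as a sketch (the final programmed value is the closest pre-fit size, which is hardware-specific and not modeled):

```python
def packets_to_bytes(packets, presumed_size=256):
    """queue-limit in packets presumes 256-byte packets."""
    return packets * presumed_size

def ms_to_bytes(service_rate_kbps, ms):
    """queue-limit in ms is that much worth of the class service rate."""
    return service_rate_kbps * ms // 8   # kbps * ms = bits; /8 -> bytes

q_pkts = packets_to_bytes(10000)   # 2,560,000 B (programmed as 2624 kB pre-fit)
q_ms = ms_to_bytes(9084, 1000)     # 1,135,500 B (programmed as 1120 kB pre-fit)
```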
QoS Classification Formats
A given QoS policy-map generally classifies based on a single classification format
IPv4 and IPv6 classes can co-exist in the same policy
Format 0 (IPv4):
- IPv4 source address (specific/range)(1)
- IPv4 destination address (specific/range)
- IPv4 protocol
- IPv4 TTL
- IPv4 source port (specific/range)
- IPv4 destination port (specific/range)
- TCP flags
- IP DSCP / TOS / Precedence
- QoS-group (output policy only)
- Discard-class (output policy only)
- EXP

Format 1:
- Outer VLAN/COS/DEI
- Inner VLAN/COS
- IPv4 source address (specific/range)
- IP DSCP / TOS / Precedence
- QoS-group (output policy only)
- Discard-class (output policy only)
- EXP

Format 2:
- Outer VLAN/COS/DEI
- Inner VLAN/COS
- IPv4 destination address (specific/range)
- IP DSCP / TOS / Precedence
- QoS-group (output policy only)
- Discard-class (output policy only)
- EXP

Format 3:
- Outer VLAN/COS/DEI
- Inner VLAN/COS
- MAC destination address
- MAC source address
- QoS-group (output policy only)
- Discard-class (output policy only)

Format 4 (IPv6):
- IPv6 source address (specific/range)
- IPv6 destination address (specific/range)
- IPv6 protocol
- IPv6 TOS / EXP
- IPv6 TTL
- IPv6 source port (specific/range)
- IPv6 destination port (specific/range)
- TCP flags
- Outer VLAN/COS/DEI
- Inner VLAN/COS
- IPv6 header flags
- QoS-group (output policy only)
- Discard-class (output policy only)

(1) All address/port fields marked in green on the original slide are defined using an ACL used for QoS classification.
80
Tomahawk
NP5

81
Line card Generations & Silicon Evolution
1st Gen - Trident class (120G/slot): High Performance
NPU: Trident (90nm, 15 Gbps); FIA: Octopus (130nm, 60 Gbps); Fabric: Santa Cruz (130nm, 90 Gbps); CPU: PowerPC dual core, 1.2 GHz

2nd Gen - Typhoon class (360G/slot): Versatile
NPU: Typhoon (55nm, 60 Gbps); FIA: Skytrain (65nm, 60 Gbps); Fabric: Sacramento (65nm, 220 Gbps); CPU: PowerPC quad core, 1.5 GHz
Integrated 10/40/100GE MAC
Variable port loading for interface density/performance flexibility

3rd Gen - Tomahawk class (800G/slot): Silicon Innovation
NPU: Tomahawk (28nm, 240 Gbps); FIA: Tigershark (28nm, 200 Gbps); Fabric: SM15 (28nm, 1.2 Tbps); CPU: x86 6 core, 2 GHz
240Gbps & 150Mpps per NPU
Ultra-fast 4Tbps on-chip memory; internal TCAM for ACL/QoS
Rich QoS: 1M policers & 1M queues; 64k subscribers/NPU
Pioneer 28nm device; massive power efficiency
Coupled with Silicon Photonics optical technology for size, cost, and power optimization
Flexible 10GE, 40GE, & 100GE
82
ASR 9000 Tomahawk LCs
Tomahawk 8x100GE CPAK Line Card

Tomahawk 4x100GE CPAK Line Card

ASR 9000 MOD-400 GE Line card


20x10GE and 2x100GE Modular Port Adapter

83
Tomahawk Scale in Phase 1
Metric -TR Scale -SE Scale
MPLS Labels 1M
MAC Addresses 2M (6M Future)
FIB Routes (v4/v6) Search Memory 4M(v4) / 2M(v6) XR (10M/5M Future)
Mroute/MFIB (v4/v6) 128k/32k (256k/64k Future)
VRF 8K (16k Future)
Bridge Domains 64K
TCAM (acl space v4/v6) 20Mbit 80Mbit
Packet Buffer 100ms (6G/NPU) 200ms (12G/NPU)
EFPs 16K/LC 128K/LC (64K/NP)
L3 Subif (incl. BNG) 8k 128k (64k/NP)
IP/PPP/LAC subscriber sessions per LC 16k 256k (64k/NP)
Egress Queues 8 Queues / port + nV Sat Qs 1M/NPU (4M for 8x100GE!)
Policers 32K/NPU 512K/NPU
ACL (v4/v6) 16k v4 or 4k v6 ACEs/LC 98k v4 / 16k v6 ACEs

84
V3 Power Supplies

85
Power System Version 3 for ASR9k
Introduction
Power System Version 3 is a next-gen Power Supply and Power Tray for ASR9k
More powerful Power Supplies allow an increase in system power density
Available for ASR9010, ASR9912 & ASR9922
Supported from IOS-XR version 5.3.0 onwards
Two new Power Supplies: 6kW AC, 4.4kW DC
Two new Power Trays: 3xAC V3 (18 kW/Tray), 4xDC V3 (17.6 kW/Tray)
Routers are field-upgradeable to V3 power
Very important: Although the new power supplies are more powerful, all the feed
specifications stay the same as with V2. Therefore no need to change the electrical
installation / cords when using V3 power!
86
Version 3 AC Power Supply
New Power Tray and Power Supply
Front: PWR-6KW-AC-V3
Rear: new AC tray, A9K-AC-PEM-V3

Three 6kW AC PMs per tray
Two logically ANDed single-phase 3kW AC power inputs per PM
If both inputs are active, output is max 6kW
If only one input is active, the Power Supply still works; output will be max 3kW

Input 16/20A @ 230V AC & the same power cords - easily upgradeable!
87
Version 3 DC Power Supply
New Power Tray and Power Supply
Front: PWR-4.4KW-DC-V3
Rear: new DC tray, A9K-DC-PEM-V3

Four 4.4kW DC PMs per tray
Two logically ANDed DC power feeds per PM
If both inputs are active, output is max 4.4kW
If only one input is active, the Power Supply still works; output will be max 2.2kW

Same DC power requirements & DC lug - easily upgradeable, if you like copper
88
ASR9000 Power Supplies

AC V2 AC V3 DC V2 DC V3
Max Power 3 kW 6 kW 2.1 kW 4.4 kW
# of Feeds 1 2 2 2
Feed redundancy in PEM n/a No Yes No
# of PSs per power tray 4 3 4 4
Redundancy scheme1) N+N N+N N+1 N+N
1) number of modules required to protect from feed failure (e.g. power grid outage)

89
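The tray capacities quoted in the V3 slides follow from straight multiplication, plus the single-feed derating of the V3 modules. A quick arithmetic check (module counts and ratings taken from the slides above):

```python
trays = {
    "AC V3": (3, 6.0),   # three 6 kW modules per tray -> 18 kW/tray
    "DC V3": (4, 4.4),   # four 4.4 kW modules per tray -> 17.6 kW/tray
}
capacity_kw = {name: n * kw for name, (n, kw) in trays.items()}

# A V3 module with only one of its two feeds live delivers half its rating:
single_feed_kw = {name: kw / 2 for name, (n, kw) in trays.items()}
```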
Tomahawk Power Budgets

Card Budget
A99-SFC2 165 W
A9K-8X100GE 1150 W
A99-RP2-SE 410 W
ASR-9922-FAN 1100 W

90
Tomahawk LC
Architecture

91
Tomahawk LC: 8x100GE Architecture
Slice-based architecture: four slices, each PHY -> Tomahawk NP (240G) ->
Tigershark FIA (240G) -> central XBAR (SM15); Ivy Bridge x86 LC CPU.
CPAK: 100G, 40G, 10G; MACsec Suite B+, G.709/OTN, clocking.
NP: L2/L3/L4 lookups, all VPN types, all feature processing, multicast
replication, QoS, ACL.
FIA: VoQ buffering, fabric FPOE, auto-credits, multicast hashing, spread,
DWRR, RBH, replication; complex scheduler for fabric and egress port.
92
Tomahawk Flexible MAC PHY
Configurable MAC PHY (MLG) between the optical interface and the NPU:

10GE/40GE/100GE LAN PHY:
100G LR4/ER4 (4x 25.78G) - 1x CAUI (100GE, 802.3ba)
100G SR10 (10x 10G) - 2x CAUI or 2x 10x 10GE XFIs
10x 10G LR - 10x 10GE XFIs
2x 40GE LR4/SR4 - 2x 40GE XLAUIs (2x 40GE, 802.3ba)
10GE WAN PHY:
10x 10GE WANPHY (9.954Gbps WIS) - 10x 10GE XFIs

OTN: OTUk wrapper/mapping for 10GE/100GE MAC PCS:
OTL4.4 (4x 27.95G) - 2x CAUI (2x 100GE)
OTL4.10 (10x 11.18G) - 2x 10x 10GE XFIs
OTU4, OTU-2 (10.7Gbps), OTU-2e (11.096Gbps)
G.709 framing, EFC, EFEC (G.975 I.4) and UFEC (I.7)

More features:
802.1Qbb priority-based Ethernet flow control
Integrated 1588v2 and SyncE timing support
MACsec 802.1AE at 10GE, 40GE, 100GE line rate
128-bit, 256-bit encryption keys, 10,240-octet MTU
Soft loop (when configured via CLI) is closed on the PHY

[Diagram: 8 CPAKs, two per PHY; each PHY feeds an NP + FIA pair into the
switch fabric (SM15) over up to 14x 120G; LC CPU attached.]
93
Tomahawk NPU Architecture
Tomahawk NPU with external TCAM; 4 TM loopback/replication ports at 4x 36Gbps.

[Diagram: packet path: line port input (12x I/F) -> ICU & pre-parse ->
TOP Parse -> TOP Search -> TOP Resolve -> TOP Modify -> line port / FIA
output. TM0/TM1 provide WRED, strict-priority/WRR scheduling and per-flow
queuing; an ICFDQ stage sits in front; 16x FIA input/output interfaces;
bypass paths exist around the TOPs and TMs; internal memory/cache plus
DRAM frame buffers.]

Packet classification: pre-parse completes the frame header fields
HW flow hashing maps traffic flows to ICFDQ group priority queues
ICFDQ: 64 groups x 4 CoS queues, strict-priority round-robin scheduling
HW-based EFD for low-priority packets
ICFDQ multicast replication without pps limitation
(Typhoon ICFDQ replication was <25Mpps)
94
Tomahawk Line Card CPU
CPU Subsystem:
Intel Ivy Bridge EN, 6 cores @ ~2GHz, with 3 DIMMs
Integrated acceleration engine for Crypto, Pattern Matching and Compression
1x 32GB SSD

HW Parameter | Typhoon LC CPU | Tomahawk-SE LC CPU
Processor | P4040, 4 cores @ 1.5GHz | Ivy Bridge EN, 6 cores @ 2GHz
LC CPU Memory | 4GB | 24GB
Cache L1/L2/L3 | - | L1: 32KB instructions, L2: 256KB, L3: 2.5MB per core
95
3rd Generation Fabric

96
ASR9912/9922 Switch Fabric Card (FC2) Overview
6+1 all-active 3-stage fabric planes; scales to 1.6Tbps LCs
Fabric frame format: super-frame
Fabric load-balancing: unicast is per-packet, multicast is per-flow

[Diagram: Tomahawk LC FIAs -- SM15 fabric (SFC v2) -- SM15 fabric --
Tomahawk LC FIAs; each side connected via 5x 2x115G (120G raw)
bi-directional links = 1.15Tbps.]

7 fabric cards: fabric bandwidth 14x 115Gbps ~ 1.61Tbps/slot
6+1 fabric redundancy: bandwidth
97
12x 115Gbps ~ 1.38Tbps/slot
Tomahawk 5/7-Fabric Cards or Typhoon Interop
Any traffic between Tomahawk 5-fabric and 7-fabric cards uses only the
first 5 fabric planes. The same applies between Typhoon and Tomahawk.
Fabric planes 6 & 7 are only used between Tomahawk 7-fabric-plane
linecards.

[Diagram: Typhoon LC (FIAs) -- fabric (SFC v2, SM15) -- Tomahawk LC (FIAs).
Tomahawk side: (5-1)x 2x115G bi-directional = 920Gbps (protected).
Typhoon side: (5-1)x 2x55G bi-directional = 440Gbps (protected).]
98
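The per-slot bandwidth figures on the fabric slides above reduce to link arithmetic. Assumptions taken from the diagrams: 115 Gbps usable per link (120G raw), 2 links per plane per slot for Tomahawk, 55 Gbps per link for Typhoon, and one plane held in reserve in the protected cases:

```python
LINK_GBPS = 115   # usable rate per Tomahawk fabric link (120G raw)

full_7_planes = 7 * 2 * LINK_GBPS           # 1610 Gbps ~ 1.61 Tbps/slot
with_redundancy = 6 * 2 * LINK_GBPS         # 1380 Gbps with 6+1 protection
interop_tomahawk = (5 - 1) * 2 * LINK_GBPS  # 920 Gbps protected, 5 planes only
interop_typhoon = (5 - 1) * 2 * 55          # 440 Gbps protected (55G links)
```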
Fabric Deployment Modes
Each 8x100 LC slot is assigned 96 fabric destinations (VQIs) to operate any CPAK
breakout. 10 LC slots need 960 VQIs and 20 LC slots need 1920 VQIs.
ASR9912 chassis can operate with mix of Typhoon and Tomahawk LCs or Tomahawk
only LCs without requiring explicit fabric mode configuration.
ASR9922 requires explicit fabric mode configuration (and chassis reload) to operate in
Tomahawk only mode in order to enable 2K VQI scale.
Configuration command to enable high-VQI mode:
RP/0/RP0/CPU0:A9K(admin-config)#fabric enable mode highbandwidth

For ASR9922 in mix mode, upon exhausting 1K VQIs, new LCs inserted in the chassis are
kept in IN-RESET state.
Mix mode is default (chassis is limited to 1K VQIs).
All FIAs in the chassis must support high VQI scale, including the RP's (note that RP1 uses a Skytrain FIA)

99
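The VQI budget on this slide is a simple multiplication; a sketch of the decision it implies (the exact mix-mode cap is stated as "1K" on the slide, so 1024 here is an assumption):

```python
VQIS_PER_8X100_SLOT = 96   # fabric destinations reserved per 8x100GE slot

def needs_highbandwidth_mode(lc_slots, mix_mode_cap=1024):
    """True if the slots' VQI demand exceeds the default (mix-mode) cap."""
    return lc_slots * VQIS_PER_8X100_SLOT > mix_mode_cap

# ASR9912: 10 slots -> 960 VQIs, fits the default 1K budget.
# ASR9922: 20 slots -> 1920 VQIs, needs the 2K high-bandwidth mode.
```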
CPAK

101
ASR 9000 Optical Interface Support
Cisco CPAK 100G Pluggable Transceiver

Standards Compliant
Meets IEEE 802.3ba 100GBASE-SR10 / LR4 / ER4 requirements
OTN OTU4 compliant, Electrical interface OIF compliant

Highest Density
1/3 the size of CFP
20% smaller than CFP2
Enables 10+ Pluggable 100G Ports (1Tb/s) per card

Low Power Consumption


Max power dissipation <7.5W for all reaches
CPAK 100G-LR4 nominally dissipates < 5.5W (<1/3 of CFP)
Enabled by Cisco's CMOS Photonics Technology
CMOS Photonics leverages the massive industry investment in CMOS manufacturing

In Production and Shipping since 2013


102
Interface Flexibility
Single CPAK Product ID: three SW-selectable options

Configurable 100GE interconnect:
  Interface HundredGigE 0/x/y/z

10GE interconnect options:
  hw-module 0/x/cpu0 port z breakout 10xTenGigE
  Interface TenGigE 0/x/y/z/0
  Interface TenGigE 0/x/y/z/1
  ...
  Interface TenGigE 0/x/y/z/9

40GE interconnect options:
  hw-module 0/x/cpu0 port z breakout 2xFortyGigE
  Interface FortyGigE 0/x/y/z/0
  Interface FortyGigE 0/x/y/z/1

Breakout interface convention: Rack/Slot/Bay/Port (phy)/Breakout#

Optics cabling (via LGX panel):
  CPAK-100G-LR4: duplex SC to duplex LC/SC/ST, SM
  CPAK-100G-SR10: MPO24 to 10x duplex LC/SC/ST, MM
  CPAK-10X10G-LR: MPO24 to 10x duplex LC/SC/ST, SM
  CPAK-2X40G-LR4: LC to duplex LC/SC/ST, SM

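The Rack/Slot/Bay/Port(phy)/Breakout# naming convention can be illustrated with a small generator (hypothetical helper; the interface kind and counts mirror the breakout modes above):

```python
def breakout_names(kind, rack, slot, bay, port, count):
    # e.g. 10xTenGigE breakout on port 6 of bay 0/5/0
    # yields TenGigE0/5/0/6/0 .. TenGigE0/5/0/6/9
    return [f"{kind}{rack}/{slot}/{bay}/{port}/{i}" for i in range(count)]

tens = breakout_names("TenGigE", 0, 5, 0, 6, 10)
print(tens[0], tens[-1])  # → TenGigE0/5/0/6/0 TenGigE0/5/0/6/9
```

The same function with ("FortyGigE", ..., 2) produces the 2xFortyGigE names.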
103
CPAK Breakout What Does It Mean For
Operations?

CPAK Breakout creates physical controllers


You can run all exec and configuration commands on each one of the individual TenGigE
or FortyGigE interfaces
You can shut down the laser on each one of the individual TenGigE or FortyGigE
interfaces
Expect ~10s time lapse between breakout configuration commit and interface
creation

104
CPAK Breakout
RP/0/RP0/CPU0:ios(config)#hw-module location 0/5/CPU0 port 6 breakout 10xTenGigE
RP/0/RP0/CPU0:ios(config)#commit
Sun Feb 1 12:55:58.344 UTC
RP/0/RP0/CPU0:Feb 1 12:55:59.029 : config[65752]: %MGBL-CONFIG-6-DB_COMMIT : Configuration committed by user 'root'. Use 'show configuration
commit changes 1000000243' to view the changes.
LC/0/5/CPU0:Feb 1 12:56:00.585 : ifmgr[205]: %PKT_INFRA-LINK-3-UPDOWN : Interface HundredGigE0/5/0/6, changed state to Down
LC/0/5/CPU0:Feb 1 12:56:00.585 : ifmgr[205]: %PKT_INFRA-LINEPROTO-5-UPDOWN : Line protocol on Interface HundredGigE0/5/0/6, changed state to Down
RP/0/RP0/CPU0:Feb 1 12:56:00.965 : config[65752]: %MGBL-SYS-5-CONFIG_I : Configured from console by root
LC/0/5/CPU0:Feb 1 12:56:07.778 : ifmgr[205]: %PKT_INFRA-LINK-5-CHANGED : Interface TenGigE0/5/0/6/0, changed state to Administratively Down
LC/0/5/CPU0:Feb 1 12:56:07.778 : ifmgr[205]: %PKT_INFRA-LINK-5-CHANGED : Interface TenGigE0/5/0/6/1, changed state to Administratively Down
LC/0/5/CPU0:Feb 1 12:56:07.778 : ifmgr[205]: %PKT_INFRA-LINK-5-CHANGED : Interface TenGigE0/5/0/6/2, changed state to Administratively Down
LC/0/5/CPU0:Feb 1 12:56:07.778 : ifmgr[205]: %PKT_INFRA-LINK-5-CHANGED : Interface TenGigE0/5/0/6/3, changed state to Administratively Down
LC/0/5/CPU0:Feb 1 12:56:07.778 : ifmgr[205]: %PKT_INFRA-LINK-5-CHANGED : Interface TenGigE0/5/0/6/4, changed state to Administratively Down
LC/0/5/CPU0:Feb 1 12:56:07.779 : ifmgr[205]: %PKT_INFRA-LINK-5-CHANGED : Interface TenGigE0/5/0/6/5, changed state to Administratively Down
LC/0/5/CPU0:Feb 1 12:56:07.779 : ifmgr[205]: %PKT_INFRA-LINK-5-CHANGED : Interface TenGigE0/5/0/6/6, changed state to Administratively Down
LC/0/5/CPU0:Feb 1 12:56:07.779 : ifmgr[205]: %PKT_INFRA-LINK-5-CHANGED : Interface TenGigE0/5/0/6/7, changed state to Administratively Down
LC/0/5/CPU0:Feb 1 12:56:07.779 : ifmgr[205]: %PKT_INFRA-LINK-5-CHANGED : Interface TenGigE0/5/0/6/8, changed state to Administratively Down
LC/0/5/CPU0:Feb 1 12:56:07.779 : ifmgr[205]: %PKT_INFRA-LINK-5-CHANGED : Interface TenGigE0/5/0/6/9, changed state to Administratively Down
LC/0/5/CPU0:Feb 1 12:56:07.779 : cfgmgr-lc[136]: %MGBL-CONFIG-6-OIR_RESTORE : Configuration for node '0/5/0' has been restored.
LC/0/5/CPU0:Feb 1 12:56:19.129 : inv_agent[213]: %PLATFORM-INV_AGENT-6-CPAK_OIRIN : CPAK OIR: 0/5/CPU0/6 Sn: FBN17462015 is inserted, state: 1

~10s time lapse between configuration commit and interface creation
105
CPAK Breakout Configuration Guidelines

Empty slot:
  Use the preconfigured keyword for breakout and interface configuration

LC already inserted:
  Option A (one-step commit):
    Use normal breakout configuration
    Use the preconfigured keyword for interface configuration
    Load breakout and interface configurations in one go
  Option B (delayed commit):
    Use normal breakout configuration
    Wait until all interfaces are created
    Configure interfaces as normal

Changing from 10x10 or 2x40 to 1x100G:
  Remove the 10x10 or 2x40 breakout configuration (1x100G is the default)

Rollback: works as normal

Working with commit replace:
  commit replace wipes out all configuration, including breakout configuration
106
CPAK Breakout Interface Numbering
RP/0/RP0/CPU0:ios#sh int brief location 0/5/CPU0
Sun Feb 1 13:27:18.699 UTC

Intf Intf LineP Encap MTU BW


Name State State Type (byte) (Kbps)
--------------------------------------------------------------------------------
Hu0/5/0/0 up up ARPA 9216 100000000
Hu0/5/0/1 admin-down admin-down ARPA 9216 100000000
Hu0/5/0/2 down down ARPA 1514 100000000
Hu0/5/0/3 up up ARPA 1514 100000000
Hu0/5/0/4 up up ARPA 1514 100000000
Hu0/5/0/5 down down ARPA 1514 100000000
Hu0/5/0/7 up up ARPA 1514 100000000
Te0/5/0/6/0 down down ARPA 1514 10000000
Te0/5/0/6/1 admin-down admin-down ARPA 1514 10000000
Te0/5/0/6/2 admin-down admin-down ARPA 1514 10000000
Te0/5/0/6/3 admin-down admin-down ARPA 1514 10000000
Te0/5/0/6/4 admin-down admin-down ARPA 1514 10000000
Te0/5/0/6/5 admin-down admin-down ARPA 1514 10000000
Te0/5/0/6/6 admin-down admin-down ARPA 1514 10000000
Te0/5/0/6/7 admin-down admin-down ARPA 1514 10000000
Te0/5/0/6/8 admin-down admin-down ARPA 1514 10000000
Te0/5/0/6/9 admin-down admin-down ARPA 1514 10000000

107
Interface Down Troubleshooting
Look for RX_LOS alarm in:
show controller <interface> <location> internal
Sample output:
RP/0/RSP0/CPU0:td02#show controllers HundredGigE 0/0/0/2 internal | include Defects
H/W Link Defects : (0x00001328) HW_LINK LASI RFI TX_FAULT MOD_NR
H/W Raw Link Defects : (0x0400132a) RX_LOS HW_LINK LASI RFI TX_FAULT MOD_NR ADMIN_DOWN

Most common reasons for RX_LOS:


1. CPAK not properly inserted
2. Fiber not properly inserted
3. Far-end could be in shut-down mode.

If RX_LOS is reported:
1. Check if CPAK is inserted correctly
2. Make sure PID is read/detected correctly
RP/0/RSP0/CPU0:td02#show controllers HundredGigE 0/0/0/3 internal | include PID
Pluggable PID : CPAK-100G-SR10
Pluggable PID Supp. : (Service Un) Supported

108
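A captured `show controllers ... internal` output can be scanned for the RX_LOS defect with a few lines of illustrative parsing (hypothetical helper, operating on the sample output shown above):

```python
import re

sample = """H/W Link Defects : (0x00001328) HW_LINK LASI RFI TX_FAULT MOD_NR
H/W Raw Link Defects : (0x0400132a) RX_LOS HW_LINK LASI RFI TX_FAULT MOD_NR ADMIN_DOWN"""

def has_rx_los(output):
    # RX_LOS shows up on the "Raw Link Defects" line when no light is received
    for line in output.splitlines():
        if "Raw Link Defects" in line and re.search(r"\bRX_LOS\b", line):
            return True
    return False

print(has_rx_los(sample))  # → True
```

A True result points at the three causes listed above: CPAK seating, fiber seating, or a shut-down far end.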
Interface Down Troubleshooting
For further troubleshooting collect:
show tech ethernet interface
show controller <interface> internal
show controller <interface> phy
show controller <interface> regs
show controller <interface> xgxs

109
Tomahawk Advanced
Power Management
(APM)

110
Tomahawk APM Overview
Tomahawk line cards support Advanced Power Management (APM), which lets users power a slice up or down without affecting the other slices.
A slice includes ports on Sacramento/SM15, one Tigershark, one Tomahawk NP, two PHYs, Gear Boxes, and two optical modules.

Slice | Ports (100G) | SM15 Interface | Tigershark | NP | PHY | Gear Box | Optics
0     | 0-1          | 0              | 0          | 0  | 0   | 0        | 0-1
1     | 2-3          | 1              | 1          | 1  | 1   | 1        | 2-3
2     | 4-5          | 2              | 2          | 2  | 2   | 2        | 4-5
3     | 6-7          | 3              | 3          | 3  | 3   | 3        | 6-7
111
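The port-to-slice mapping in the table is a simple pairing of consecutive 100G ports, which can be expressed as (hypothetical helper):

```python
def slice_for_port(port):
    # Each Tomahawk slice owns two consecutive 100G ports: 0-1, 2-3, 4-5, 6-7
    return port // 2

print([slice_for_port(p) for p in range(8)])  # → [0, 0, 1, 1, 2, 2, 3, 3]
```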
Tomahawk APM Overview
When a slice is powered down:
All interfaces on the slice are deleted; hence no traffic can pass through this slice;
Optics are powered off; DOM stops monitoring the port sensors;
Envmon stops polling the power and temperature sequencers on this slice;
Online diag is stopped for this slice.

All hardware is powered on after a LC cold reboot


Even if a slice is configured as powered off, hardware on this slice is powered on during LC boot.
Any errors on powered-off slice hardware during LC initialization are detected and logged before
powering the slice off.

Slice 0 is not allowed to be powered off because certain applications require NP0 be up all
the time.
Slice power-on may take ~40 seconds

112
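The slice-0 restriction above can be captured in a small validation sketch (illustrative only; the command string mirrors the admin CLI shown on the next slide):

```python
def power_saving_request(location, slice_id):
    """Guard a slice power-saving request per the rules above."""
    if slice_id == 0:
        # Slice 0 must stay powered: certain applications require NP0 up at all times
        raise ValueError("slice 0 cannot be powered off")
    if slice_id not in (1, 2, 3):
        raise ValueError("Tomahawk LC slices are numbered 0-3")
    return f"hw-module power saving location {location} slice {slice_id}"

print(power_saving_request("0/5/CPU0", 3))
```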
Slice Power Off
RP/0/RP0/CPU0:ios(admin-config)#hw-module power saving location 0/5/CPU0 slice 3

RP/0/RP0/CPU0:ios#show platform slices location 0/5/CPU0


Sun Feb 1 13:54:19.569 UTC
Line Card Slice Config Status
0/5/CPU0 0 Power on Completed
1 Power on Completed
2 Power on Completed
3 Power saving Completed

RP/0/RP0/CPU0:ios(admin)#show apm psm status location 0/5/CPU0


Sun Feb 1 13:48:41.471 UTC
PSM Status
----------
PSM Client Status
ENVMON: Registered
DIAG0: Registered
DIAG1: Not registered
INVMGR: Registered
0/5/CPU0 PSA: Registered

LC Status
---------
Line Card Slice Config Status ENVMON DIAG0 DIAG1 INVMGR PSA
0/5/CPU0 0 On Completed Completed Completed Not registered Completed Completed
1 On Completed Completed Completed Not registered Completed Completed
2 On Completed Completed Completed Not registered Completed Completed
3 Saving Completed Completed Completed Not registered Completed Completed

113
Slice Power Off
RP/0/RP0/CPU0:ios(admin-config)#hw-module power saving location 0/5/CPU0 slice 3

RP/0/RP0/CPU0:ios(admin)#show apm psa status location 0/5/CPU0


Sun Feb 1 13:51:18.002 UTC

0/5/CPU0

PSA Client Status


DIAG ENVMON INVMGR FIA PCIE LDA PRM
Registered Registered Registered Registered Registered Registered Registered

PSA Slice Status


Slice 0: Power On Completed 1: Power On Completed 2: Power On Completed 3: Power Saving Completed
DIAG Completed Completed Completed Completed
ENVMON Completed Completed Completed Completed
INVMGR Completed Completed Completed Completed
FIA Completed Completed Completed Completed
PCIE Completed Completed Completed Completed
LDA Completed Completed Completed Completed
PRM Completed Completed Completed Completed

114
Tomahawk APM Troubleshooting
Commands:
show platform slices
admin show apm psm status
admin show apm psa status
show tech-support apm
show tech ethernet interface

115
Tomahawk Life of a
transient packet

116
Tomahawk Intelligent HW Based EFD
- for guaranteed protection of high-priority traffic

Tomahawk implements HW (ICFDQ) based priority EFD.
The existing Typhoon SW based EFD remains for priority classification and discard logic:
  hw location 0/x/cpu0 early-fast-discard <ip, mpls, vlan cos, i/o encap> val op
Supports IPv4 and IPv6.

HW priority discard criteria:
1. Input frame resource congestion state: ICFDQ global occupation, RFD consumption rate
2. Packet priority as classified by the ICU: 4 classes, 3 actually used - Network control, High priority (ToS/EXP/VLAN CoS/DSCP >= 6) and Low priority

[Slide diagram: line port input I/Fs (x12) and FIA input I/Fs (x16) -> ICU & pre-parse -> SPri RR -> ICFDQ; NPU HW early fast drop]

ICFD/RFD Usage | Line Side Forward Priority (No Drop) | Line Side Early Fast Drop Priority | Line Side Flow Control | Fabric Side Flow Control
>95%           | Control                | High, Low Priority | On* | On
>85%           | Control, High priority | Low priority       | On* | On
>60%           | All                    | None               | On* | On
>40%           | All                    | None               | On* | Off
<40%           | All                    | None               | Off | Off
* If not CLI disabled
117
Tomahawk Multi-Stage Multicast Replication Overview

1. 512k MGID/FGID lookup
2 & 3. Flow based (RBH key calc) multi-link load balance across fabric links
4. 22-bit FGID for slot-bitmask based unique fabric LC replications and FPOE lookup
5. MGID lookup into FPOE table (256k entries) for unique FIA replications
6. MGID in FIA DI table lookup for unique NPU replication
7. MGID lookup for NPU replication to egress OIF interface; flow based load balance across member links

[Slide diagram: RSP880 switch fabric (SM15, up to 14x120G) replicates toward the Tomahawk line cards; on each LC the fabric SM15 feeds the FIAs, NPs, PHYs and CPAK/SFP+ ports]

118
64-way ECMP Load Balancing
XR 5.3.0, Typhoon & Tomahawk.
OSPF/OSPFv3 and ISIS
Higher protocols inherit
CLI:
OSPF/OSPFv3:
router ospf 100 maximum paths <1-128>
router ospfv3 100 maximum paths <1-128>
ISIS:
router isis 1111 address-family ipv4 unicast/mcast maximum-paths <1-64>
router isis 1111 address-family ipv6 unicast/mcast maximum-paths <1-64>

120
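Conceptually, per-flow selection over up to 64 equal-cost paths hashes the flow tuple to a path index (illustrative sketch only; this is not the ASR 9000's actual RBH computation):

```python
import hashlib

def ecmp_pick(flow, n_paths):
    # Stable hash of the flow tuple -> one of n equal-cost paths
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_paths

flow = ("10.0.0.1", "192.168.0.3", 6, 49152, 179)
path = ecmp_pick(flow, 64)
assert 0 <= path < 64
# The same flow always lands on the same path, so packets within a flow never reorder
assert path == ecmp_pick(flow, 64)
```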
Tomahawk Diagnostics
No change in diagnostic
architecture and
troubleshooting compared
to Typhoon(!)
121
NP Diagnostic - Health Monitoring
A single health monitor packet is injected directly into each NP after LC boot.
If this packet is dropped or lost, then a new health monitor packet re-inject is
attempted three times.
Failure detection time is ~12-13 seconds (polling interval + original health drop +
3 re-inject health drops) ( 1 sec + 3 sec + 9 sec).
Error recovery with NP fast reset
Error message:
LC/0/0/CPU0: Jan 20: 11:09:22:122 : prm_server_to: NP-DIAG health monitoring failure on NP1

122
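The ~12-13 second figure falls straight out of the retry schedule quoted above (1 sec polling + 3 sec original drop + 9 sec for the three re-injects):

```python
polling_interval = 1       # seconds
original_drop_timeout = 3  # seconds for the original health packet
reinject_timeouts = 3 * 3  # three re-injected health packets, 3 seconds each

total = polling_interval + original_drop_timeout + reinject_timeouts
print(total)  # → 13 seconds, matching the ~12-13 s failure detection time
```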
NPU Packet Processing

PRM
LC CPU
Health Mon packet PRS_HEALTH_MON Health Mon
counter packet
loopback

PRS LKUP RSV MDF TM

NPU

123
RSP Loopback Diagnostic

[Slide diagram: the RSP CPU sends loopback packets through the Tsec driver and punt switch FPGA, across the RSP switch fabric, out to each line card's FIA/NP/PHY path and back]

124
Online Diagnostic - Linecard NP loopback
To test linecard CPU and NP punt path integrity.
Every minute the LC CPU sends a diagnostic packet over the local host punt
path to each NP
Failure detection time is approximately 3mins.
PFM alarm will be raised.
Error message:
LC/0/5/CPU0:Mar 23 20:09:34 : pfm_node_lc: %PLATFORM-DIAGS-0-
LC_NP_LOOPBACK_FAILED_TX_PATH : Set|online_diag_lc[151635]|Line card NPU
loopback Test(0x2000006)|NP loopback

125
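The ~3 minute detection figure is consistent with a one-packet-per-minute cadence and three consecutive misses before the PFM alarm (the miss count is an assumption inferred from the ~3 min figure, not stated on the slide):

```python
send_interval_min = 1    # one diagnostic packet per minute over the punt path
missed_before_alarm = 3  # assumed: consecutive losses tolerated before the PFM alarm

print(send_interval_min * missed_before_alarm)  # → 3 (minutes)
```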
Line Card Loopback Diagnostic
[Slide diagram: the LC CPU sends loopback packets via the Tsec driver and punt switch FPGA to the local switch fabric ASIC (SM15) and back; the per-slice CPAK/PHY/NP/FIA paths are shown]

126
Punt / forwarding path troubleshooting commands, grouped by component along the path
(Ivy Bridge CPU - 1G/10G DPAA - switch - EPM - NP - FIA - XBAR):

LC CPU / SPP:
  show spp sid stats loc <>
  show spp node-counters loc <>
  show spp interface loc <>
  show controllers dpaa tsec port 9 location 0/3/CPU0
NP:
  show controllers np ports all loc <LC>
  show controllers np counters <> location <LC>
  show controllers np fabric-counters <rx | tx> <np> loc <LC>
  show controllers np tm counters <np> loc <>
  show lpts pifib hardware entry type <ipv4 | ipv6> statis loc <LC>
FIA:
  show controllers fabric fia link-status loc <LC>
  show controllers fabric fia <drops | errors> <ing | egr> loc <LC>
  show controllers fabric fia stats location <LC>
Fabric / XBAR:
  show controllers fabric port-status loc <LC>
  show controllers fabric crossbar instance <> statistics location <LC>
EPM switch:
  show controllers epm-switch mac-stats <> location <LC>
TM / queueing:
  show controllers <interface> stats (MAC counters)
  show qoshal punt-queue np <> loc <LC>
  show qoshal default-queue port <> loc <LC>
127
Usability

128
IOS-XR Usability Initiative Progress
Number of usability related features / featurettes delivered and planned:

Release       | Count | Status
5.1.x / 5.2.x | 183   | Delivered
5.3.0         | 18    | Delivered
5.3.1         | 43    | Delivered
5.3.2         | 20    | Planned
129
Few usability highlights
To that extent, IOS XR 5.3.1 delivers twenty-six (26) new usability related features / featurettes / fixes. Some of the highlights include:

Global Configuration Replace: ever wanted to quickly move interface configuration from one port to another? This new feature allows quick customization of router configuration by match-and-replace based on interface names and/or regular expressions (see presentation below for details)

Non-interactive EXEC commands: ever wanted to initiate a router reload without being asked for confirmation? A new global knob removes user interaction with the parser

BGP advertised prefix count statistics: a new knob provides access to advertised-count stats (something you could do easily in IOS but not in IOS XR)

OSPF post-reload protocol shutdown: a new knob that keeps OSPF in shutdown state after a node reload

Interactive rollback operations: ever issued the wrong rollback ID by mistake? A new knob asks for user confirmation before committing

CLI / XML serviceability enhancements to several platform dependent commands, such as the "show controllers" and "show hw-module fpd" commands

And many more...

130
Some more cool ones
CSCua10564 A 6 1034 Featurette: request auto fpd on newly inserted linecards
CSCuj15553 R 6 532 need provide xml support for "show hw-module fpd location all"
CSCue46774 R 3 757 add xml support for "show controllers tengige/gige/hundredGigE phy"
CSCuo15664 R 6 329 add support "Monitor interface Bundle-Ether * to include error counters
CSCue33274 R 6 759 exec commands must not require interaction over a session
CSCut21292 R 3 2 Need to suppress FRR-Ready syslog
CSCup78779 R 3 248 sh mpls traffic-eng topology srlg output missing local/remote address
CSCuo01750 R 6 256 "sh controllers TenGigE * phy" needs interface name in the output
CSCus28994 R 3 51 Need ability to throttle FRR Ready Message
CSCul81622 R 3 457 show qos for bundle not reporting info for all members
CSCun39879 R 4 326 no support for wildcards for scp
CSCun08218 R 6 310 ASR9K Fabric VOQ Serviceability CLI improvements
CSCup73636 R 3 138 BGP advertised prefixes statistics in IOS XR
CSCup99228 R 6 170 Add CLI to display configs for all the commits

131
Atomic Configuration
Replace Feature

132
Operational and Automation
Enhancements
Atomic Configuration Replace Problem Statement

1 Original Configuration

RP/0/RSP0/CPU0:pE2#sh run int gigabitEthernet 0/0/0/19
Mon Feb 16 13:25:11.142 UTC
interface GigabitEthernet0/0/0/19
 description ***To 7604-2 2/12***
 cdp
 ipv4 address 13.3.6.6 255.255.255.0
 negotiation auto
 load-interval 30
!

2 Target Configuration

RP/0/RSP0/CPU0:pE2(config)#interface GigabitEthernet0/0/0/19
RP/0/RSP0/CPU0:pE2(config-if)# no description ***To 7604-2 2/12***
RP/0/RSP0/CPU0:pE2(config-if)# no cdp
RP/0/RSP0/CPU0:pE2(config-if)# no ipv4 address 13.3.6.6 255.255.255.0
RP/0/RSP0/CPU0:pE2(config-if)# no negotiation auto
RP/0/RSP0/CPU0:pE2(config-if)# no load-interval 30
RP/0/RSP0/CPU0:pE2(config-if)# ipv6 address 2603:10b0:100:10::31/126

Example: consider an interface with a target config where all config lines are new.

Problem statement: it is operationally challenging to expect prior knowledge of the existing config in order to manually remove unwanted items.
133
Atomic Configuration Replace: Current Behavior
What about using "no interface"?

1 Original Configuration

RP/0/RSP0/CPU0:pE2#sh run int gigabitEthernet 0/0/0/19
Mon Feb 16 13:25:11.142 UTC
interface GigabitEthernet0/0/0/19
 description ***To 7604-2 2/12***
 cdp
 ipv4 address 13.3.6.6 255.255.255.0
 negotiation auto
 load-interval 30
!

2 Target Configuration

RP/0/RSP0/CPU0:pE2(config)#no interface GigabitEthernet0/0/0/19
RP/0/RSP0/CPU0:pE2(config)#interface GigabitEthernet0/0/0/19
RP/0/RSP0/CPU0:pE2(config-if)# description ***TEST-after-change***
RP/0/RSP0/CPU0:pE2(config-if)# ipv4 address 13.3.6.6 255.255.255.0
RP/0/RSP0/CPU0:pE2(config-if)# ipv6 address 2603:10b0:100:10::31/126
RP/0/RSP0/CPU0:pE2(config-if)# negotiation auto
RP/0/RSP0/CPU0:pE2(config-if)# load-interval 60
RP/0/RSP0/CPU0:pE2(config-if)# commit

3 Committed Configuration

RP/0/RSP0/CPU0:pE2#show configuration commit changes last 1
Mon Feb 16 13:33:25.972 UTC
Building configuration...
!! IOS XR Configuration 5.1.2
interface GigabitEthernet0/0/0/19
 no description ***To 7604-2 2/12***
 description ***TEST-after-change***
!
no interface GigabitEthernet0/0/0/19
interface GigabitEthernet0/0/0/19
 ipv4 address 13.3.6.6 255.255.255.0
 ipv6 address 2603:10b0:100:10::31/126
 no negotiation auto
 negotiation auto
 no load-interval 30
 load-interval 60
!
end

Example: consider an interface with a new target config where some config lines are untouched and the rest are either deleted, changed or added.

CURRENT behavior: when issuing the "no interface" config submode, the entire interface config is destroyed, to later be re-created. This causes unnecessary interface flaps.

134
Operational and Automation
Enhancements
Atomic Configuration Replace NEW Behavior
1 Original Configuration

RP/0/RSP0/CPU0:PE1#sh run int gigabitEthernet 0/0/0/19
Mon Feb 16 13:00:32.153 UTC
interface GigabitEthernet0/0/0/19
 description ***AAABBBCCC***
 cdp
 ipv4 address 13.3.5.5 255.255.255.0
 negotiation auto
 shutdown
 load-interval 30
!

2 Target Configuration

RP/0/RSP0/CPU0:PE1(config)#no interface GigabitEthernet0/0/0/19
RP/0/RSP0/CPU0:PE1(config)#
RP/0/RSP0/CPU0:PE1(config)#interface GigabitEthernet0/0/0/19
RP/0/RSP0/CPU0:PE1(config-if)# ipv6 address 2603:10b0:100:10::21/126
RP/0/RSP0/CPU0:pE1(config-if)# commit

3 Committed Configuration

RP/0/RSP0/CPU0:PE1#show configuration commit changes last 1
Mon Feb 16 13:15:36.655 UTC
Building configuration...
!! IOS XR Configuration 5.1.2
interface GigabitEthernet0/0/0/19
 no description ***AAABBBCCC***
 no cdp
 no ipv4 address 13.3.5.5 255.255.255.0
 ipv6 address 2603:10b0:100:10::21/126
 no negotiation auto
 no shutdown
 no load-interval 30
!
end

Example: consider an interface with a target config where all config lines are new.

NEW behavior: when issuing the "no interface" config, the system does not destroy the subtree but instead performs a SET of new config and a REMOVE of unwanted config lines.

135
Operational and Automation
Enhancements
Atomic Configuration Replace NEW Behavior
1 Original Configuration

RP/0/RSP0/CPU0:PE1#sh run int gigabitEthernet 0/0/0/19
Mon Feb 16 13:00:32.153 UTC
interface GigabitEthernet0/0/0/19
 description ***AAABBBCCC***
 cdp
 ipv4 address 13.3.5.5 255.255.255.0
 negotiation auto
 load-interval 30
!

2 Target Configuration

RP/0/RSP0/CPU0:PE1(config)#no interface GigabitEthernet0/0/0/19
RP/0/RSP0/CPU0:PE1(config)#interface GigabitEthernet0/0/0/19
RP/0/RSP0/CPU0:PE1(config-if)# description ***TEST-after-change***
RP/0/RSP0/CPU0:PE1(config-if)# ipv4 address 13.3.5.5 255.255.255.0
RP/0/RSP0/CPU0:PE1(config-if)# ipv6 address 2603:10b0:100:10::21/126
RP/0/RSP0/CPU0:PE1(config-if)# negotiation auto
RP/0/RSP0/CPU0:PE1(config-if)# load-interval 60
RP/0/RSP0/CPU0:pE1(config-if)# commit

3 Committed Configuration

RP/0/RSP0/CPU0:PE1#show configuration commit changes last 1
Mon Feb 16 13:15:36.655 UTC
Building configuration...
!! IOS XR Configuration 5.1.2
interface GigabitEthernet0/0/0/19
 description ***TEST-after-change***
 no cdp
 ipv6 address 2603:10b0:100:10::21/126
 load-interval 60
!
end

Example: consider an interface with a new target config where some config lines are untouched and the rest are either deleted, changed or added.

NEW behavior: when issuing the "no interface" config, the system does not destroy the subtree but instead performs a SET of new config and a REMOVE of unwanted config lines. Only the diffs (changes, removals, additions) are applied. No interface flaps.

136
Atomic Configuration Replace: What about other config submodes?

1 Original Configuration

RP/0/RSP0/CPU0:PE1#sh run router bgp 100 neighbor-group NG-test
Tue Mar 3 09:02:34.728 UTC
router bgp 100
 neighbor-group NG-test
  remote-as 100
  description *** TEST description ***
  update-source Loopback0
  address-family l2vpn evpn
  !
 !
!

2 Target Configuration

RP/0/RSP0/CPU0:PE1(config)#router bgp 100
RP/0/RSP0/CPU0:PE1(config-bgp)#no neighbor-group NG-test
RP/0/RSP0/CPU0:PE1(config-bgp)#neighbor-group NG-test
RP/0/RSP0/CPU0:PE1(config-bgp-nbrgrp)#remote-as 100
RP/0/RSP0/CPU0:PE1(config-bgp-nbrgrp)#description *** NEW NEW ***
RP/0/RSP0/CPU0:PE1(config-bgp-nbrgrp)#update-source loopback 0
RP/0/RSP0/CPU0:PE1(config-bgp-nbrgrp)#address-family l2vpn evpn
RP/0/RSP0/CPU0:PE1(config-bgp-nbrgrp-af)#exit
RP/0/RSP0/CPU0:PE1(config-bgp-nbrgrp)#address-family vpnv4 unicast
RP/0/RSP0/CPU0:PE1(config-bgp-nbrgrp)#commit

3 Committed Configuration

RP/0/RSP0/CPU0:PE1#show configuration commit changes last 1
Tue Mar 3 09:05:19.008 UTC
Building configuration...
!! IOS XR Configuration 5.1.2
router bgp 100
 neighbor-group NG-test
  description *** NEW NEW ***
  address-family vpnv4 unicast
  !
 !
!
end

Example: consider a BGP neighbor-group with a new target config where some config lines are untouched and the rest are either changes or additions.

EXISTING behavior: when issuing the "no neighbor-group" config, the system does not destroy the subtree but instead performs a SET of new config and a REMOVE of unwanted config lines. Only the diffs (changes, removals, additions) are applied.
137
Global Configuration
Replace Feature

138
Operational and Automation
Enhancements
Global Configuration Replace

Description / Use Case:
Easy manipulation of router configuration, e.g. moving configuration blocks around.
Want to change all repetitions of a given pattern? Want to move interface config around? e.g. move

  Interface Y
   description my_uplink
   ipv4 address x.x.x.x
   load-interval 30

to

  Interface X
   description my_uplink
   ipv4 address x.x.x.x
   load-interval 30

Release:
Target IOS-XR 5.3.2 (08/2015), CSCte81345

Configuration / Example:
RP/0/0/CPU0:router(config)#replace interface <int_id> with <int_id>
RP/0/0/CPU0:router(config)#replace pattern <regex_1> with <regex_2>

RP/0/0/CPU0:ios(config)#replace interface gigabitEthernet 0/0/0/0 with loopback 450
RP/0/0/CPU0:ios(config)#replace pattern '10\.20\.30\.40' with '100.200.250.225'
RP/0/0/CPU0:ios(config)#replace pattern 'GigabitEthernet0/1/0/([0-4])' with 'TenGigE0/3/0/\1'

139
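The replace-pattern semantics behave like regular-expression substitution over the running config text; the same transformations can be sketched with Python's re module (illustrative analogue, not the XR implementation):

```python
import re

config = "interface GigabitEthernet0/1/0/3\n ipv4 address 23.0.0.13 255.255.0.0"

# Analogue of: replace pattern 'GigabitEthernet0/1/0/([0-4])' with 'TenGigE0/3/0/\1'
renamed = re.sub(r"GigabitEthernet0/1/0/([0-4])", r"TenGigE0/3/0/\1", config)

# Analogue of: replace pattern '10\.20\.30\.40' with '100.200.250.225'
readdr = re.sub(r"10\.20\.30\.40", "100.200.250.225", "description 10.20.30.40")

print(renamed.splitlines()[0])  # → interface TenGigE0/3/0/3
print(readdr)                   # → description 100.200.250.225
```

Note the capture group `([0-4])` and backreference `\1`, exactly as in the CLI example.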
Operational and Automation
Global Configuration Replace Example 1 Enhancements

hostname fella
group test Original
interface 'GigabitEthernet*'
description grouped Running Configuration RP/0/0/CPU0:fella(config)#replace interface gigabitEthernet 0/0/0/0
mtu 500
! with loopback 450
end-group Building configuration...
ipv4 access-list mylist
10 permit tcp 10.20.30.40/16 host 1.2.4.5 Loading.
20 deny ipv4 any 1.2.3.6/16 232 bytes parsed in 1 sec (230)bytes/sec
!
interface GigabitEthernet0/0/0/0
description first RP/0/0/CPU0:fella(config-ospf-ar-if)#show configuration
ipv4 address 10.20.30.40 255.255.0.0
shutdown Wed Feb 25 18:27:16.110 PST
! Building configuration...
interface GigabitEthernet0/0/0/2
description 10.20.30.40 !! IOS XR Configuration 0.0.0
shutdown interface Loopback450
!
interface GigabitEthernet0/0/0/3 description first
description 1020304050607080 ipv4 address 10.20.30.40 255.255.0.0
shutdown
! shutdown
interface GigabitEthernet0/0/0/4 !
description 1.2.3.4.5.6.7.8
shutdown no interface GigabitEthernet0/0/0/0
! router ospf 10
route-policy temp
if ospf-area is 10.20.30.40 or source in (2.3.4.5/20) then area 200
pass interface Loopback450
endif
end-policy transmit-delay 5
! !
router ospf 10
cost 100 no interface GigabitEthernet0/0/0/0
area 200 !
cost 200
interface GigabitEthernet0/0/0/0 !
transmit-delay 5 End
!
!
!
router bgp 1 140
!
end
Operational and Automation
Global Configuration Replace Example 2 Enhancements

hostname fella
group test Original RP/0/0/CPU0:fella(config)#replace pattern '10\.20\.30\.40' with
interface 'GigabitEthernet*' '100.200.250.225'
description grouped Running Configuration Building configuration...
mtu 500
! Loading.
end-group
ipv4 access-list mylist 434 bytes parsed in 1 sec (430)bytes/sec
10 permit tcp 10.20.30.40/16 host 1.2.4.5
20 deny ipv4 any 1.2.3.6/16
! RP/0/0/CPU0:fella(config)#show configuration
interface GigabitEthernet0/0/0/0 Thu Feb 26 09:00:11.180 PST
description first
ipv4 address 10.20.30.40 255.255.0.0 Building configuration...
shutdown !! IOS XR Configuration 0.0.0
!
interface GigabitEthernet0/0/0/2 ipv4 access-list mylist
description 10.20.30.40 no 10
shutdown
! 10 permit tcp 100.200.250.225/16 host 1.2.4.5
interface GigabitEthernet0/0/0/3 !
description 1020304050607080
shutdown interface GigabitEthernet0/0/0/0
! no ipv4 address 10.20.30.40 255.255.0.0
interface GigabitEthernet0/0/0/4
description 1.2.3.4.5.6.7.8 ipv4 address 100.200.250.225 255.255.0.0
shutdown !
!
route-policy temp interface GigabitEthernet0/0/0/2
if ospf-area is 10.20.30.40 or source in (2.3.4.5/20) then no description
pass
endif description 100.200.250.225
end-policy !
!
router ospf 10 !
cost 100 route-policy temp
area 200
cost 200 if ospf-area is 100.200.250.225 or source in (2.3.4.5/20) then
interface GigabitEthernet0/0/0/0 pass
transmit-delay 5
! endif
! end-policy
!
router bgp 1 ! 141
! end
end
Operational and Automation Enhancements
Global Configuration Replace Example 3

Original Running Configuration:

interface GigabitEthernet0/1/0/0
 ipv4 address 20.0.0.10 255.255.0.0
!
interface GigabitEthernet0/1/0/1
 ipv4 address 21.0.0.11 255.255.0.0
!
interface GigabitEthernet0/1/0/2
 ipv4 address 22.0.0.12 255.255.0.0
!
interface GigabitEthernet0/1/0/3
 ipv4 address 23.0.0.13 255.255.0.0
!
interface GigabitEthernet0/1/0/4
 ipv4 address 24.0.0.14 255.255.0.0
!
interface TenGigE0/3/0/0
 shutdown
!
interface TenGigE0/3/0/1
 shutdown
!
interface TenGigE0/3/0/2
 shutdown
!
interface TenGigE0/3/0/3
 shutdown
!
interface TenGigE0/3/0/4
 shutdown
!
interface TenGigE0/3/0/5
 shutdown
!
interface TenGigE0/3/0/6
 shutdown
!
end

Session:

RP/0/0/CPU0:ios(config)#replace pattern 'GigabitEthernet0/1/0/([0-4])' with 'TenGigE0/3/0/\1'
Building configuration...
Loading.
485 bytes parsed in 1 sec (482)bytes/sec

RP/0/0/CPU0:ios(config-if)#show configuration
Fri Feb 27 16:52:56.549 PST
Building configuration...
!! IOS XR Configuration 0.0.0
no interface GigabitEthernet0/1/0/0
no interface GigabitEthernet0/1/0/1
no interface GigabitEthernet0/1/0/2
no interface GigabitEthernet0/1/0/3
no interface GigabitEthernet0/1/0/4
interface TenGigE0/3/0/0
 ipv4 address 20.0.0.10 255.255.0.0
!
interface TenGigE0/3/0/1
 ipv4 address 21.0.0.11 255.255.0.0
!
interface TenGigE0/3/0/2
 ipv4 address 22.0.0.12 255.255.0.0
!
interface TenGigE0/3/0/3
 ipv4 address 23.0.0.13 255.255.0.0
!
interface TenGigE0/3/0/4
 ipv4 address 24.0.0.14 255.255.0.0
!
end

142
MPLS Traffic Engineering

143
TE Tunnel Naming
Requirements:
be able to create named TE tunnels
be able to query state on head/mid/tail via the creation name
be able to run services over named TE tunnel

144
TE Tunnel Naming
Configuration:
mpls traffic-eng RP/0/0/CPU0:rtrA(config)#show run group
auto-tunnel named group TUNS
tunnel-id min 1000 max 1500 mpls traffic-eng
! tunnel-te FROM_NY_TO_.*
tunnel-te FOO destination 192.168.0.3
apply-group TUNS path-option 10 dynamic
! autoroute-announce
tunnel-te WEST !
destination 192.168.0.3 !
autoroute-announce end-group
path-option 10 dynamic
path-option 20 explicit name west-path
!

All existing TE feature CLIs applicable under the named TE tunnel


autoroute announce, fastreroute, logging, etc.
Works seamlessly with apply-groups
An ID space is allocated for named tunnels

145
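The group name `FROM_NY_TO_.*` acts as a pattern over named tunnels: any tunnel-te whose name matches inherits the group's config. Conceptually (illustrative sketch, not XR internals):

```python
import re

group_pattern = re.compile(r"FROM_NY_TO_.*")

def group_applies(tunnel_name):
    # A named tunnel inherits the group's config when its name matches the pattern
    return bool(group_pattern.fullmatch(tunnel_name))

print(group_applies("FROM_NY_TO_SJ"))  # → True
print(group_applies("WEST"))           # → False
```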
TE Tunnel Naming
Operational State
can be queried by head/mid/tail with the creation name using existing CLIs
Also displays (tunnel ID) used in signaling and internally
RP/0/0/CPU0:rtrA#show mpls tr tun name FOO RP/0/0/CPU0:rtrB#show mpls tr tunnels name FOO
Name: tunnel-te1000 Destination: 192.168.0.3 Ifhandle:0x2480 LSP Tunnel 192.168.0.1 1000 [2] is signalled, Signaling State: up
Signalled-Name: FOO Tunnel Name: FOO Tunnel Role: Mid
Status: InLabel: GigabitEthernet0/0/0/0, 24007
Admin: up Oper: up Path: valid Signalling: connected OutLabel: GigabitEthernet0/0/0/2, 24004
Signalling Info:
path option 0, type (Basis for Setup, path weight 3) Src 192.168.0.1 Dst 192.168.0.3, Tun ID 1000, Tun Inst 2, Ext ID
path option 10, type dynamic 192.168.0.1
G-PID: 0x0800 (derived from egress interface properties) Router-IDs: upstream 192.168.0.1
Bandwidth Requested: 0 kbps CT0 local 192.168.0.2
Creation Time: Tue Mar 31 10:41:40 2015 (04:13:44 ago) downstream 192.168.0.4
Config Parameters: Bandwidth: 0 kbps (CT0) Priority: 7 7 DSTE-class: 0
Bandwidth: 0 kbps (CT0) Priority: 7 7 Affinity: 0x0/0xffff Soft Preemption: None
Metric Type: TE (default) SRLGs: not collected
Path Selection Tiebreaker: Min-fill (default) Path Info:
Hop-limit: disabled Incoming Address: 10.10.10.2
Cost-limit: disabled Incoming:
AutoRoute: disabled LockDown: disabled Policy class: Displayed 1 Explicit Route:
up, 0 down, 0 recovering, 0 recovered heads Strict, 10.10.10.2
Strict, 14.14.14.2

146
TE Tunnel Naming
Services over Tunnel
Services configured under a named tunnel work seamlessly, e.g.
  autoroute announce, forwarding-adjacency, forward-class, load-share
  autoroute destination installs a static route over the tunnel
Referencing the tunnel from another protocol (e.g. LDP, static, etc.)
  requires extending the other application to accept a named tunnel
  requires extending the other application to listen to the tunnel name as an IM attribute

Existing:
mpls ldp
 interface tunnel-te10
!
router static
 address-family ipv4 unicast
  3.3.3.3/32 tunnel-te10

New:
mpls ldp
 tunnel-te FOO
!
router static
 address-family ipv4 unicast
  3.3.3.3/32 tunnel-te FOO

147
Per LSP event history

Existing events/syslogs (filtered by name)


RP/0/0/CPU0:rtrA#show logging string EAST-WEST
RP/0/0/CPU0:Apr 2 09:50:05.628 : te_control[1054]: %ROUTING-MPLS_TE-5-LSP_UPDOWN : tunnel-te102 (signalled-name: EAST-WEST, LSP
Id: 5) state changed to down
RP/0/0/CPU0:Apr 2 09:50:08.223 : te_control[1054]: %ROUTING-MPLS_TE-5-LSP_UPDOWN : tunnel-te102 (signalled-name: EAST-WEST, LSP
Id: 6) state changed to up
RP/0/0/CPU0:Apr 2 09:51:12.705 : te_control[1054]: %ROUTING-MPLS_TE-5-LSP_REROUTE_PENDING : tunnel-te102 (signalled-name: EAST-
WEST, LSP Id: 6) is in re-route pending state.
RP/0/0/CPU0:Apr 2 09:51:12.706 : te_control[1054]: %ROUTING-MPLS_TE-5-LSP_REOPT_INIT : Initializing reoptimization for tunnel-
te102 (signalled-name: EAST-WEST, LSP Id: 6); reason: Inuse path-option change.
RP/0/0/CPU0:Apr 2 09:51:32.930 : te_control[1054]: %ROUTING-MPLS_TE-5-LSP_REROUTE_PENDING_CLEAR : tunnel-te102 (signalled-name:
EAST-WEST, old LSP Id: 6, new LSP Id: 7) has been reoptimized.
RP/0/0/CPU0:Apr 2 09:51:32.931 : te_control[1054]: %ROUTING-MPLS_TE-5-LSP_REOPT : tunnel-te102 (signalled-name: EAST-WEST, old
LSP Id: 6, new LSP Id: 7) has been reoptimized; reason: Inuse path-option change.

Do we need a per-process syslog buffer (so other processes' syslogs do not wrap these events)? - CSCuo49107
How much more valuable would a per-LSP event history enhancement be?
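Until a per-LSP history exists on-box, the existing syslogs shown above can be grouped off-box by signalled name. A minimal sketch in Python; the regular expression is derived from the message format shown above, and `group_by_lsp` is a hypothetical helper name:

```python
import re
from collections import defaultdict

# Matches the MPLS-TE syslog format shown above:
#   %ROUTING-MPLS_TE-5-<MNEMONIC> : tunnel-teNNN (signalled-name: NAME, ...)
PATTERN = re.compile(
    r"%ROUTING-MPLS_TE-5-(?P<event>[A-Z_]+)\s*:\s*"
    r"(?P<tunnel>tunnel-te\d+)\s*\(signalled-name:\s*(?P<name>[^,)]+)"
)

def group_by_lsp(lines):
    """Group MPLS-TE syslog events by the tunnel's signalled name."""
    history = defaultdict(list)
    for line in lines:
        m = PATTERN.search(line)
        if m:
            history[m.group("name").strip()].append(m.group("event"))
    return dict(history)

# Sample lines taken from the syslog output above
logs = [
    "RP/0/0/CPU0:Apr 2 09:50:05.628 : te_control[1054]: %ROUTING-MPLS_TE-5-LSP_UPDOWN : tunnel-te102 (signalled-name: EAST-WEST, LSP Id: 5) state changed to down",
    "RP/0/0/CPU0:Apr 2 09:50:08.223 : te_control[1054]: %ROUTING-MPLS_TE-5-LSP_UPDOWN : tunnel-te102 (signalled-name: EAST-WEST, LSP Id: 6) state changed to up",
    "RP/0/0/CPU0:Apr 2 09:51:32.931 : te_control[1054]: %ROUTING-MPLS_TE-5-LSP_REOPT : tunnel-te102 (signalled-name: EAST-WEST, old LSP Id: 6, new LSP Id: 7) has been reoptimized; reason: Inuse path-option change.",
]

print(group_by_lsp(logs))
```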

148
Logging Discriminator

CSCuo49107

149
Operational & Automation
Enhancements

150
Feature Status Notes1

BGP: New show command scale CLI keyword CSCud44259 5.1.0

Flex CLI: exclude-group enhancement CSCuh07131 5.1.1

Flex CLI: relocate inheritance keyword on show running-config <sub-mode> CSCuc74237 5.1.1

Provide commit failure details as part of commit execution output CSCue31515 5.1.1

Interfaces: Shortcut name for bundle-ether to BE CSCuh04526 5.1.3

MPLS-TE: tunnel display using interface's description CSCuo97949 5.1.3

MPLS-TE: reverse DNS (IP to host names) on "show mpls traffic-eng" commands CSCuj69042 5.1.3
IPv6 ND should print interface name when Duplicate Address is configured CSCuo57029 5.1.3
(1) DDTS associated with enhancement (if applicable) and first shipping release Feature shipping / code committed
151
Feature Status Notes1

diff style configuration changes CSCub96653 5.2.0

Prevent destructive edits of route-policy CSCul96628 5.2.0

Add ACL/selective match criteria to LPTS configuration profiles CSCui28897 5.2.2

Grep utility switch "-A" (after context) & "-B" (before context) CSCsy52070 5.2.2

"show configuration failed load" should point out where the syntax error is CSCup55705 5.3.0

commit show-error should remain in config submode after failure CSCur00689 5.3.0

show commit changes diff should remain in config submode CSCur07806 5.3.0

Netflow: records for TE traffic show tunnel as egress interface instead of physical/bundle interface CSCuq41598 5.3.0

152
Feature Status Notes1

BGP: Per-neighbor prefix advertisement counter CSCup73636 5.3.1

Non-interactive EXEC commands CSCue33274, CSCut27602 5.3.1

Global Configuration Replace CSCte81345 5.3.1

ACL: pre-defined "nd-na" and "nd-ns" ICMP keywords CSCue19007 5.3.1

Addition of repeat and df-bit keywords in ping command CSCuq10440 5.3.1

User confirmation before copying file to running-config CSCus05515 5.3.1

User confirmation before committing rollback configuration CSCus08261 5.3.1

Removing location keyword from install operations CSCuh87177 5.3.1


153
Feature Status Notes1

Remove location all keyword from slot-based license configuration CSCty17766 5.3.1

Display configuration changes from ALL commits CSCup99228 5.3.1

Show command to display all interfaces configured for Netflow CSCtc82728 5.3.1


154
Feature Status Notes1

VRF names missing from help in show ipv4 / ipv6 vrf commands CSCut81223 Target 5.3.2 (Aug 15)

VRF names missing from help in show ospf / ospfv3 vrf commands CSCut81235 Target 5.3.2 (Aug 15)

VRF names missing from help in show pim vrf commands CSCut81244 Target 5.3.2 (Aug 15)

VRF names missing from help in ping and traceroute commands CSCus88239 Target 5.3.2 (Aug 15)

OSPF: display knob that combines info from global and all vrf CSCus87758 Target 5.3.2 (Aug 15)
Modify RPL and delete prefix-set in a single commit CSCti50227 Target 5.3.2 (Aug 15)

Support Interface Range command for configuration CSCsm76990 Target 5.3.2 (Aug 15)

Object-Level Configuration Replace CSCue31584 Target 5.3.2 (Aug 15)


155
Feature Status Notes1

Add last link flap timestamp to show interface output CSCul20251 Target 5.3.2 (Aug 15)


156
Manageability &
Instrumentation
Enhancements

157
Feature Status Notes1

Add SCP support for ASR9K CSCuj57256 5.1.1

XML: schema for NP counters CSCuj26374 5.1.1

SSH: Enable command chaining over SSH session CSCue33304 5.1.1

SYSLOG: Lack ability to set IP Precedence TOS / DSCP for outgoing syslog traffic CSCtg87596 5.1.1

FTP: should close socket after keep expiry or hard idle timeout CSCug34777 5.1.1

Pre-socket IPP setting for Telnet CSCtz24243 5.1.1

XML: schema for MPLS-TE LSPs transiting a specific interface CSCup72305 5.1.3

SNMP: MPLS TE needs to populate auto-bw data for tunnel index CSCuq12567 5.1.3

158
Feature Status Notes1

"show inventory" command truncates pluggable optic serial number CSCum12533 5.1.3

MPLS-TE: XML affinity attribute name broken CSCun93296 5.1.3

SCP doesn't show a progress bar CSCum48866 5.1.3

Slow TCP transfer rates affecting ability to upgrade and collect traces CSCuo25887 5.1.3

XML: schema to fetch a summary of transit LSPs CSCup72323 5.2.2

Wildcard support for Secure Copy (SCP) CSCun39879 5.3.1

"show controllers" needs to include interfaces name in the output CSCuo01750 5.3.1

SECURITY-login-6-AUTHEN_SUCCESS needs upper case login log standards CSCub96671 5.3.1
159
Feature Status Notes1

XML: schema for "show hw-module fpd location all" CSCuj15553 5.3.1

XML: schema for show controllers phy CSCue46774 5.3.1

FPD auto-upgrade on newly inserted linecards CSCut95708 Target 5.3.2 (Aug 15)

XML: schema for show processes memory CSCui82933 Target 5.3.2 (Aug 15)

XML: schema for LPTS CSCuj10737 Target 5.3.2 (Aug 15)

XML: schema for show filesystem CSCuj10747 Target 5.3.2 (Aug 15)

SNMP: VRF aware IP-MIB implementation for IOS-XR CSCus89552 Target 5.3.2 (Aug 15)


160
Feature Status Notes1

SNMP: allow configuration of single community associated to separate ipv4 and ipv6 ACLs CSCus89709 Target 5.3.2 (Aug 15)

SYSLOG: configurable UDP port CSCug08226 Target 5.3.2 (Aug 15)


161
Troubleshooting,
Debuggability &
Diagnostics
Enhancements

162
Feature Status Notes1

MPLS-TE: Syslog with the exact link id in LSP Failure scenarios CSCtl12202 5.1.1

MPLS-TE: "show mpls traffic-eng" output must include signaled name in command output CSCue18862 5.1.1
Include MPLS-TE tunnel signaled name in all logging output messages CSCue18859 5.1.1

MPLS-TE auto-bandwidth syslog message output enhancements CSCue31544 5.1.1

RSVP-TE: logging enhancements CSCuh07700 5.1.3

Add support for TE tunnel interfaces to "monitor interface" command CSCuj66947 5.1.3

Lower the severity from critical to warning in PFM for non-cisco SFP CSCuo71034 5.1.3

MPLS-TE: Decode RRO Flags into strings in show mpls traffic-eng tunnels CSCup53206 5.1.3

163
Feature Status Notes1

Support for error counters in "monitor interface Bundle-Ethernet * " CSCuo15664 5.3.1

Add show tech-support for NTP CSCty78821 5.3.1

Add show tech-support watchdog CSCub81362 5.3.1

ASR9000 Fabric VOQ serviceability CLI improvements CSCun08218 5.3.1

MPLS-TE FRR protection logging enhancements CSCut21292, CSCus28994 5.3.1

QoS: ASR9000 Bundle-Ethernet serviceability CLI improvements CSCul81622 5.3.1

MPLS-TE SRLG CLI improvements CSCup78779 5.3.1


164
Feature Status Notes1

Object-group: show config failed to indicate cause of semantic errors CSCuo30396 5.3.1
Enhancements to shared memory usage statistics (CLI / XML) CSCus39159 Target 5.3.2 (Aug 15)

Ltrace: Static memory allocation and knob to limit ltrace buffer size CSCus39188 Target 5.3.2 (Aug 15)

User-defined Logging Files & Logging Discriminators CSCuo49107 Target 6.0 (Nov 15)


165
Call to Action
Visit the World of Solutions for
Cisco Campus
Walk in Labs
Technical Solution Clinics

Meet the Engineer


Lunch and Learn Topics
DevNet zone related sessions

166
Complete Your Online Session Evaluation
Please complete your online session
evaluations after each session.
Complete 4 session evaluations
& the Overall Conference Evaluation
(available from Thursday)
to receive your Cisco Live T-shirt.

All surveys can be completed via


the Cisco Live Mobile App or the
Communication Stations

167
Thank you

168
