
Cisco IOS XE Routers

ASR 1000 & ISR 4000


The Evolution of Converged Network Edge
Architectures
Matt Falkner, Distinguished Technical Marketing Engineer
Mirko Grabel, Technical Marketing Engineer
Simon Herriotts, Technical Leader, CSG BU
David Roten, Technical Marketing Engineer
TECOPT-2401
TECOPT-2401 Agenda
• Introduction: what’s new about IOS XE
• Software Architecture: ASR1000, ISR4000
• QoS: similarities and differences across platforms
• High Availability on ASR1000
• Performance
• DRAM Demystified
• Configuration Specifics
• Deployment use cases
Introducing IOS XE
With ASR1000 and ISR4000 series routers
• 2007
ASR1000 introduced as the first routing
platform using IOS XE software
• 2013
ISR4451 introduced as the first branch
routing platform using IOS XE software
• ISR4000 series routers inherited the new IOS XE architecture and combined it with the earlier innovations from the ISR G2 series of routers
• 2014
Five new ISR4000 series routers introduced to extend coverage across all branch connectivity needs
What is IOS XE
How is it different from Classic IOS at 30,000 feet?
BIG differences!
• Linux is the underlying operating system for the chassis
• IOSd runs as a process in Linux
• Benefit from protected memory and process isolation
• Very familiar CLI (some things are best kept the same)
• Separation of control and data planes into discrete processes
• Multicore support for data plane
• Introduction of services plane in addition to control and data plane
Linux? where?
I don’t see a shell prompt anywhere!
• Linux, yes, but the only interface with the system is via IOSd
• IOSd presents the same CLI interface that everyone loves from
other platforms like 7200, 7600, and ISR G2 routers
• Because IOSd is running as a discrete process it has protected
memory that is isolated from crashes in other processes and
failures in other components in the system.
• Individual software component upgrade opportunity
• With the “service internal” and “request platform
software system shell” commands you can find Linux.
Don’t do it without a good reason. Here be dragons, and you taste
good with ketchup. Requires a one-day license from TAC.
Same CLI
Not like IOS-XR that looks like you understand it until you don’t…
• In general, configurations from Classic IOS platforms move
forward to IOS XE without any changes
• Certain features such as QoS, carrier-grade NAT (CGN),
WAAS, and CME will have slight variations when moved forward,
or need updating to take advantage of new features
• More details on some of these later
• Cisco Active Advisor can analyze configurations from Classic IOS platforms and
provide updated configurations for IOS XE platforms
https://ciscoactiveadvisor.com/
Divided we stand, united we fall!
Wait, isn’t it supposed to be the other way around?
• Classic IOS is a single-threaded, monolithic blob of code that has
served us well for a long time
• Impossible to separate control and data plane
• Processors aren’t getting much faster anymore, but their core
counts keep growing
• Multi-core lets us dedicate certain cores to the control plane and
others to the data plane, i.e. the control plane can no longer starve the data plane
• Furthermore, we can use one chip architecture for the control plane and a
separate one for the data plane, mixing and matching to meet needs
• We have even created a services plane that can run alongside
IOSd without impacting platform performance
Three legged stool
Balances a whole lot better than a two legged stool
• Previous router platforms had only the concept of a control
plane and a data plane
• IOS XE introduced the service plane, which allows rich
appliance-type functions to be provided in the same sheet-metal
chassis as WAN edge functionality
• Consolidates equipment ownership, service contract
management / expense, power, cooling, and space
requirements
Services plane
Coffee, tea, soft drinks, peanuts, watch your elbows please…
• All platforms have multiple cores on the control plane, no truck rolls needed!
• IOSd consumes one core, with occasional use of extra cores for
specific features
• Remaining cores are given to a hypervisor which can run dedicated
applications to provide appliance-like services
• vWAAS
• EnergyWise
• SNORT
• WireShark
• A single memory pool is used for Linux, IOSd, and the services plane
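As an illustrative sketch of hosting a service-plane application (the package name and path are hypothetical, and the service-container CLI varies by release and platform):

```
! Hypothetical package name; actual OVA packages come from Cisco
Router# virtual-service install name snort-svc package bootflash:snort.ova
Router# configure terminal
Router(config)# virtual-service snort-svc
Router(config-virt-serv)# activate
Router(config-virt-serv)# end
Router# show virtual-service list
```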
IOS XE in enterprise next generation networks
[Topology: branches and regional offices connect over the WAN to corporate HQ, the Internet, and the cloud]
• High-speed CPE and high-end branch
• WAN aggregation with DMVPN / GETVPN / FlexVPN for access, IPSec, and zone-based firewall
• Data center interconnect (DCI)
• Internet gateway
• Cloud services edge
IOS XE in service provider next gen networks
[Topology: home offices, branches, and corporate sites reach the IP/MPLS core and Internet via access technologies such as xDSL, ETTx, xPON, WiMAX, CMTS, and 800 series CPE]
• High-end CPE and IPSec aggregator
• Access & aggregation edge: BNG-PPPoE, IPoE, LAC, PTA, ISG; LNS
• WiFi access gateway (WAG)
• Internet peering
• Route reflector
• PE (L2VPN and L3VPN)
• VoIP SBC, content farm
Cisco’s routing portfolio
[Portfolio chart, arranged by scale (20-360 Gbps per system) and service richness]
• Service provider edge: ASR 9000 (200G per slot; carrier Ethernet + BNG; IP RAN; L2/L3 VPNs) and 7600 series (40G per slot; carrier Ethernet; IP RAN; vidmon)
• Enterprise edge / DC: ASR 1000 (managed L2/L3 VPNs; integrated security; application recognition; hosted firewall; SBC/VoIP; IPSec; broadband; route reflector; DPI; distributed PE)
• Branch and prior generations: ISR 4k series, ISR G1 & G2 series, 7200 series
Software architecture
IOS XE software architecture
[Diagram: the control plane runs IOSd (active and standby) over the Platform Adaptation Layer (PAL), alongside the chassis manager and forwarding manager-RP, on a Linux kernel; control messaging connects it to the data plane, which runs the forwarding engine client and driver, chassis manager, and forwarding manager-FP on its own Linux kernel]
• IOS + IOS XE middleware + platform software
• IOS runs as its own Linux process for the control plane
• Linux kernel with multiple processes running in protected memory
• Fault containment, re-startability
• ISSU of individual SW packages
• With redundant data plane hardware, packet loss can be as low as 50 ms at failover
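The per-package ISSU mentioned above is driven from the CLI; as a hedged sketch (the file name is hypothetical and the exact `request platform software package` syntax varies by release and chassis):

```
! Hypothetical sub-package file name
Router# request platform software package install rp 0 file bootflash:asr1000rp2-sipspa.03.10.00.S.pkg
```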
IOS XE software architecture: ASR1000 and ISR4000 implementations
[ASR1000 diagram: the control plane (IOS active/standby, PAL, chassis manager, forwarding manager-RP, Linux kernel) and the data plane (forwarding engine client and driver, chassis manager, forwarding manager-FP, Linux kernel) run on separate hardware, connected by control messaging across the chassis]
[ISR4000 diagram: the same control-plane and data-plane processes run within a single chassis under one Linux kernel, again connected by control messaging]
IOS XE architecture building blocks
• IOS (active / standby): runs the control plane; generates configurations; maintains routing tables (RIB, FIB, …)
• Platform Adaptation Layer (PAL): provides the abstraction layer between hardware and IOS
• Forwarding manager - RP: manages ESP redundancy; maintains a copy of the FIB and interface list; communicates FIB status to the active and standby data-plane FM
• Chassis manager - RP: initialization of RP processes and installed cards; detects and manages OIR of cards; manages system status, environmentals, power, EOBC
• Forwarding manager - FP: maintains a copy of the FIBs; programs the forwarding plane and forwarding engine DRAM; statistics collection and RP communication
• Forwarding engine client and driver: communicate with the forwarding manager in the control plane; provide the interface to the QFP client and driver
• All messaging is done via IP datagrams in the kernel or over the backplane of the chassis
IOS XE Linux kernel
• Control CPUs run a Linux operating system kernel
• handles process scheduling, memory management, interrupts
• modular ASR1000 routers run multiple instances of Linux
• fixed ASR1000s and ISR4000s run a single instance of Linux
• Includes a suite of low-level applications
• Linux console access for debugging
• base software is common, but may vary slightly on different platforms
• Connectivity to other system components via IPC
• device drivers for EOBC, HyperTransport, PCIe
• kernel is responsible for directing IPC messages to the respective software
processes (IOS, chassis manager, etc.)
• Implements punt-path for legacy data packets
• Pre-emptible (can interrupt and prioritize processes)
IOS XE IOSd
• IOSd runs as a process
• IOS timing is governed by Linux kernel scheduling
• Provides virtualized management ports
• no direct hardware component access
• Communicates with other software processes via IPC
• Handles all control plane features
• CLI, configuration processing, SNMP handling
• running routing protocols & computing routes
• session management
• Processes punted packets (legacy protocols, control protocol communications)
• Two IOS processes can run in parallel for software redundancy on non-redundant chassis
• Based on IOS 12.2SR features (includes 12.2SB and 12.4T-based features)
For your reference
IOS XE features
• Routing & MPLS & L2 (IPv4 / IPv6): IPv4 / IPv6 routing; BGP, RIP, IS-IS, OSPF, static routes; GRE; MPLS LDP; MPLS VPN; Inter-AS & CsC; MPLSoGRE; MPLS TE FRR; VRF-aware features; CRoMPLS; EoMPLS; PW redundancy; Ethernet, POS, ATM; MLPPP; GEC; PBR; Netflow (v5, v8, v9); BGP policy accounting; BGP NSF; BGP 4-byte AS (DOT); BGP PIC Core; IPv4 selective download; GLBP, HSRP, VRRP; IP event dampening; BFD for IS-IS, OSPF, static (IPv4 & IPv6); WCCP; 8000 eBGP/iBGP; 4000 VRF; BGP PE-CE optimization; mVPN; half-duplex VRF; BGP PIC Best External; IPv4 over IPv6 tunnels; PfR; L2TPv3
• Broadband: LAC, LNS, PTA; per-user firewall; PPPoE Relay; ISG VRF-transfer; L2TP; ANCP; ISG postpaid, tariff switching; VPDN multihop; 32K sessions with HA & QoS; dynamic policies; ISG flow control; L2TP forwarding of PPP tags; AAA support; BNG clustering; VRF-aware ISG; PPPoEoA; DHCP relay for IPv4 & IPv6; template ACL; ISG-SCE control bus; IPv6 broadband; remote access MPLS; LI: RADIUS & SNMP; RADIUS CoA / PoD; ANCP on ATM; per-session QoS; PPPoE tag support (RID, CID); ISG single sign-on
• Multicast: PIM; IPv6 BSR; multicast NAT; IGMPv2/v3; PIM BiDir; MVPN; multicast CAC; extended ACL for multicast; IPv6 multicast routing; MVPN extranet; MVPN NSF/SSO
• QoS: HQF support; dual/single-rate 3-color policing; bandwidth remaining ratio; ATM service policies (VP/VC); 2 PQs, 128K queues; 16K policy-maps; policy aggregation; NBAR; MQC classification / marking; 1000 class-maps; ATM shaping per VP/VC; FPM; egress queuing; 3-level hierarchical scheduling; egress classification on QoS group
For your reference
IOS XE features (continued)
• Security: hardware-assisted IPSec; zone-based firewall; FIPS compliance; DMVPN hierarchical hub; IPSec VPN 3DES/AES; NAT; IPv6 IPSec static VTI; VRF-aware IPSec; DMVPN; RTSP firewall ALG; VRF-aware zone-based firewall; GETVPN; control plane policing; VRF-aware NAT
• SBC: distributed and integrated SBC; NAPT; twice NAT for IPv4; flexible header manipulation; topology / identity hiding; Megaco/H.248; no NAT for IPv6; privacy header; DoS protection; flow-based QoS control; H.248 ACK 3-way; signaling congestion control; pinhole/filter control; DBE control interface H.248, v4; H.248 interim accounting; IPv6 support; SIP signaling/latching transport (UDP, TCP, etc.); SIP-H.323, H.323-H.323; SBC endpoint switching
• HA: config sync; IPv6 IPSec; stateful SNMP, ARP, NAT; FR, PPP, MLPPP, HDLC, VLAN; MPLS, MPLS-VPN, LDP, VRF-lite; IS-IS; DHCPv4/v6
• Network management: LAN Management Solution; MPLS Diagnostics Expert; Traffic Engineering Manager; syslog; Cisco Information Center; Netflow Collector; MPLS LSP ping / traceroute; VRF-aware NetFlow; QoS Policy Manager; Cisco Security Manager; MIBs; IP Solution Center; Cisco Multicast Manager; SNMP
IOS XE chassis manager
• Initializes hardware and boots other processes
• Manages the EOBC switch on the RP in ASR1000
• Manages ESI links on RP/ESP/SIP in ASR1000
• Manages timing circuitry
• Controls reset and power-down of modules
• Selects active/standby hardware, initiates failover
• Detects OIR
• starts image download and boot process of the respective hardware component
• Communicates with IOS to make it aware of the hardware components
• Monitors environmental variables and alarms
IOS XE forwarding manager
• FM in the control plane communicates with peer FM processes in the data plane
• Distributed control function
• Propagates control plane operations to the data plane
• Exports forwarding information to / from IOS and the data plane (CEF tables, ACLs, NAT, QoS hierarchies, etc.)
• Maintains its own copy of forwarding state tables
• Communicates state information back to the RP
• statistics
• FM on the active control plane maintains state for both active and standby data plane hardware
• Facilitates NSF after re-start with bulk-download of state information
IOS XE forwarding engine control
• Forwarding engine client
• Allocates and manages resources on the forwarding engine (data structures, memory, scheduling hierarchy)
• Receives requests from IOS via FM processes
• Re-initializes FE and memory if a software error occurs
• Forwarding engine driver
• Provides low-level access and control of the FE
• Provides the communication path between the FE client and the actual forwarding engine via IPC
• Forwarding engine (runs μcode)
• ASR1000 QFP implements the data plane on PPEs
• ISR4000 platforms use other multicore chips running the same microcode compiled for an alternate processor
Feature Invocation Array – FIA
• The FIA is the ordered chain of features that the QFP applies to each packet on a given forwarding path (IPv6, IPv4, MPLS, X-connect, L2 switch)
• Paths include IP unicast, IP multicast, load balancing, MPLS imposition / disposition / switching, FRR, AToM disposition, MPLSoGRE
• Example features in the array: L2/L3 classify, IPv4 validation, SSLVPN, Netflow, forwarding, NAT, ISG, ERSPAN, APS, marking, MLP, QPPB, WCCP, policing, IP header compression, QoS classify/police, accounting, VASI, IPSec, TCP MSS adjust, LI, uRPF, firewall, LISP, FPM, PBR, ACL, BDI & bridging, SBC, GEC, IP tunnels, BGP policy accounting, queuing
• Display the FIA for an interface with:
show platform hardware qfp active interface if-name <name>
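For example (the interface name is illustrative, and the feature list in the output depends on configuration and release):

```
Router# show platform hardware qfp active interface if-name GigabitEthernet0/0/0
```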
System Architecture – ASR1000 Modular Platforms
ASR 1000 series
• Compact, powerful router: line-rate performance from 2.5G to 200G+ with services enabled; hardware QoS engine with up to 128K queues per ASIC; investment protection with modular engines, IOS CLI, and SPAs for I/O
• Business-critical resiliency: fully separated control and forwarding planes; hardware and software redundancy; in-service software upgrades
• Instant-on service delivery: integrated firewall, VPN, encryption, NBAR, CUBE; scalable on-chip service provisioning through software licensing
One IOS XE feature set across the family:
• ASR 1001-X: 2.5–20 Gbps
• ASR 1002-X: 2.5–36 Gbps
• ASR 1004: 10–40 Gbps
• ASR 1006: 10–100+ Gbps
• ASR 1013: 10–200 Gbps
ASR1000 building blocks
• Route Processor (RP): handles control plane traffic and manages the system
• Embedded Services Processor (ESP): handles data plane traffic; contains the QFP (PPEs + BQS) and a crypto assist engine, managed by the FECP CPU
• SPA Interface Processor (SIP): houses SPAs, which provide interface connectivity; queues packets in and out; managed by the IOCP and a SPA aggregation ASIC
• Centralized forwarding architecture: all traffic flows through the active ESP; the standby ESP is synchronized with all flow state over a dedicated 10-Gbps link
• Distributed control architecture: all major system components have a powerful control processor dedicated to the control and management planes
[Diagram: two RPs, two ESPs, and three SIPs connected through interconnects across the midplane]
ASR1000 data plane architecture
• Enhanced SerDes Interconnect (ESI): serial communication via the midplane; can run at 11.5 Gbps or 23 Gbps
• Provides data packet communication
• data packets between ESPs and other linecards
• punt/inject traffic to/from the RP
• state synchronization between ESPs
• Two ESI links between each ESP and all linecards; an additional full set of ESI links to the standby ESP
• CRC protection of packet contents
ASR1000 control plane architecture
• Ethernet Out-of-Band Channel (EOBC): a 1Gbps Ethernet bus used by the RP to pass control messages, load images, collect statistics, and program the QFP
• Inter-Integrated Circuit (I2C) bus: slow (a few kbps); used for system monitoring (temperature, OIR, fan speed, …) and EEPROM access; monitors the health of hardware components; controls resets; communicates active/standby state and real-time presence/ready indicators; controls the other RP (reset, power-down, etc.); reports power-supply status
• SPA control links (between each SIP and its SPAs): detect SPA OIR; reset and power-control SPAs (via I2C); read EEPROMs
ASR1000 chassis configuration
• SIP0, SIP1 and SIP2: SPA interface access
• FP0 and FP1: data plane processing
• RP0 and RP1: control plane processing
The ASR1006 supports redundant control and data planes via active/standby hardware. The ASR1004 supports redundant control planes via dual IOS process redundancy on a single RP card.
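On either chassis, SSO redundancy is enabled with the standard IOS redundancy block; a minimal sketch (verification commands and defaults vary by release):

```
Router# configure terminal
Router(config)# redundancy
Router(config-red)# mode sso
Router(config-red)# end
Router# show redundancy states
```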
ASR1000 – power supplies
• ASR1001-X, ASR1002, ASR1004: 3x multispeed fans per PEM, 2 PEMs total
• ASR1006: 3x multispeed fans per PEM, 2 PEMs total
• ASR1013: 3x multispeed fans per PEM, 4 PEMs total
ASR1000 systems
• ASR1001-X: 1 SPA slot + 1 NIM slot; integrated RP, ESP, and SIP; software IOS redundancy; 6 GE + 2 TenGE built-in; 1.75” (1RU); 2.5 to 20 Gbps; 400W max output power; front-to-back airflow
• ASR 1002-X: 3 SPA slots; integrated RP, ESP, and SIP; software IOS redundancy; 6 GE built-in; 3.5” (2RU); 5 to 36 Gbps; 470W; front-to-back airflow
• ASR 1004: 8 SPA slots using 2 SIPs; 1 RP slot, 1 ESP slot; software IOS redundancy; no built-in Ethernet; 7” (4RU); 10 to 40 Gbps; 765W; front-to-back airflow
• ASR 1006: 12 SPA slots using 3 SIPs; 2 RP slots, 2 ESP slots; hardware IOS redundancy; no built-in Ethernet; 10.5” (6RU); 10 to 100 Gbps; 1275W; front-to-back airflow
• ASR 1013: 24 SPA slots using 6 SIPs; 2 RP slots, 2 ESP slots; hardware IOS redundancy; no built-in Ethernet; 22.7” (13RU); 40 to 100+ Gbps; 3200W; front-to-back airflow
ASR1000 SPA interface processor (aka SIP)
• Supports up to 4 SPAs with full OIR support
• Does not participate in forwarding decisions
• Preliminary QoS
• ingress packet classification: high & low priority
• 128MB of ingress over-subscription buffering
• captures stats on dropped packets
• Network clock distribution to SPAs, reference selection from SPAs
• IOCP manages midplane links, SPA OIR, SPA drivers
• SIP40 supports minimally disruptive restart (MDR) for ISSU
• SIP reboot times of 25 seconds or less
• SPA reboot times of 10 seconds or less
ASR1000 SIP40 advantages
• Packet classification enhancements to support additional media: PPP, HDLC, FR, ATM, …
• 96 vs. 64 per-port priority queues for higher-density SPA support
• Full 3-parameter priority scheduler (Strict, Min, Excess) vs. 2-parameter (Min, Excess)
• Addition of per-port and per-VLAN/VC ingress policers
• Network clocking support
• DTI clock distribution to SPAs
• Timestamp and time-of-day clock distribution
SIP40 block diagram
[An IO control processor (IOCP) running Linux manages the card: boot flash (OBFL), JTAG control, SPA reset/power control, temperature sensors, EEPROM, and chassis management links to the RPs. A SPA aggregation ASIC connects the 4 SPAs, classifies ingress traffic into high and low priority with 128MB of input buffering and 8MB of output buffering, and the interconnect guarantees bandwidth to all interfaces. A network clock distribution block handles input and output reference clocks to and from the SPAs. Link legend: ESI at 11.2 Gbps carrying data packets to the RPs, active ESP, and standby ESP; SPA-SPI at 11.2 Gbps; HyperTransport at 10 Gbps; GE at 1 Gbps; I2C; SPA control bus.]
Supported SPAs and SFPs
• WAN optics: SFP-OC3-MM, SFP-OC3-SR, SFP-OC3-IR1, SFP-OC3-LR1, SFP-OC3-LR2, SFP-OC12-MM, SFP-OC12-SR, SFP-OC12-IR1, SFP-OC12-LR1, SFP-OC12-LR2, SFP-OC48-SR, SFP-OC48-IR1, SFP-OC48-LR2, XFP-10GLR-OC192SR, XFP-10GER-OC192IR, XFP-10GZR-OC192LR
• Ethernet optics: SFP-GE-S / GLC-SX-MMD, SFP-GE-L / GLC-LH-SMD, SFP-GE-Z, SFP-GE-T, CWDM, XFP-10GLR-OC192SR / XFP10GLR-192SR-L, XFP-10GER-192IR+ / XFP10GER-192IR-L, XFP-10GZR-OC192LR, XFP-10G-MM-SR, DWDM-XFP (32 fixed channels), GLC-GE-100FX, GLC-BX-U, GLC-BX-D
• POS SPAs: SPA-2XOC3-POS, SPA-4XOC3-POS, SPA-8XOC3-POS, SPA-1XOC12-POS, SPA-2XOC12-POS, SPA-4XOC12-POS, SPA-8XOC12-POS, SPA-1XOC48-POS/RPR, SPA-2XOC48POS/RPR, SPA-4XOC48POS/RPR, SPA-OC192POS-XFP
• Serial SPAs: SPA-4XT-Serial, SPA-8XCHT1/E1, SPA-4XCT3/DS0, SPA-2XCT3/DS0, SPA-1XCHSTM1/OC3, SPA-1XCHOC12/DS0, SPA-2XT3/E3, SPA-4xT3/E3
• Ethernet SPAs: SPA-4X1FE-TX-V2, SPA-8X1FE-TX-V2, SPA-2X1GE-V2, SPA-5X1GE-V2, SPA-8X1GE-V2, SPA-10X1GE-V2, SPA-1X10GE-L-V2, SPA-1X10GE-WL-V2, SPA-2X1GE-SYNCE
• ATM SPAs: SPA-1XOC3-ATM-V2, SPA-3XOC3-ATM-V2, SPA-1XOC12-ATM-V2
• Service SPAs: SPA-WMA-K9, SPA-DSP
• CEoP SPAs: SPA-1CHOC3-CE-ATM, SPA-24CHT1-CE-ATM, SPA-2CHT3-CE-ATM
For your reference
EoS Hardware
End-of-Sale PID | EoS Announce | EoS Date | Replacement PID
SPA-8XCHT1/E1 | 18-Nov-2013 | 19-May-2014 | SPA-8XCHT1/E1-V2
ASR1000-ESP5 | 31-Mar-2015 | 29-Apr-2016 | ASR1000-ESP20 or ASR1002-X
ASR1000-ESP10 | 31-Mar-2015 | 29-Apr-2016 | ASR1000-ESP20 or ASR1002-X
ASR1000-RP1 | 31-Mar-2015 | 29-Apr-2016 | ASR1000-RP2
ASR1000-SIP10 | 31-Mar-2015 | 29-Apr-2016 | ASR1000-SIP40
ASR1001 | 31-Mar-2015 | 29-Apr-2016 | ASR1001-X
For your reference
DSP SPA
SPA-DSP(=)
• Many audio/video codecs supported (100 by default)
• Media pinholes opened for specific bandwidth; the RTP stream is inspected
• DTMF interworking: allows CUBE(SP)-based calls to interwork between DTMF inband tones and RFC 2833 or out-of-band signalling (SIP or H.248)
• On-board transcoding: 588 medium-complexity codec transcoded calls per card
• Multiple DSP SPAs can be used in available SPA slots for a pool of DSP resources

Codec types | Number of calls | Complexity
G.711 | 903 | Low
G.722, G.726, G.729a, G.729ab | 588 | Medium
G.729, G.729b, G.723.1, G.728, AMR, iLBC | 357 | High
For your reference
Synchronous Ethernet SPA
SPA-2X1GE-SYNCE(=)
• 1 external interface to derive synchronization from a BITS/SSU clock
• 2 synchronous Gigabit Ethernet ports
• Supported in all chassis except the ASR1001
• Features supported (same functionality as on the Cisco 7600):
Revertive mode - if a failed (active) source comes back up, it is used as the clock reference again
OOR - if a source clock drifts beyond +/-12 ppm it is declared out of range
Holdover mode - if no clock source is available, all LCs replay the learned clock from the last source
Selectively disabling interfaces - only an entire LC can be excluded from participating in synchronized clocking
High availability - SSO compliant
Hold-off time - ensures short activations of a failed source are not passed to the selection process
Wait-to-restore time - a failed source is considered by the selection process only after it has been fault-free for this duration
Lockout - a particular source is no longer eligible to participate in the source selection algorithm
Manual switch - overrides the currently selected synchronization source
Force switch - overrides the currently selected synchronization source; overrides the manual switch command
Clear lockout/switch - clears the effect of lockout and switch commands
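A minimal configuration sketch (the interface and priority values are illustrative, and SyncE CLI details vary by release):

```
Router# configure terminal
Router(config)# network-clock synchronization automatic
Router(config)# network-clock input-source 1 interface GigabitEthernet0/0/0
Router(config)# end
Router# show network-clocks synchronization
```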
Fixed configuration Ethernet linecards
Support:
• ASR1000-2T+20X1GE – XE3.10
• ASR1000-6TGE – XE3.12
• 40x1GE – future
• Requires a modular chassis (1004, 1006, or 1013)
• Requires RP2 with a minimum of ESP40
Key features and advantages:
• Full feature parity with existing Ethernet interfaces
• Significant price savings versus a fully populated SIP40 with corresponding SPAs
• SyncE, Y.1731 (CFM)
• Line-rate performance for all interfaces concurrently
For your reference
Maximum interface termination
ASR platform 1001 1001-X 1002 1002-X 1004 1006 1013 Comment
# SPAs (single hgt) 1 1 3 3 8 12 24
# NIMs (single width) 0 1 0 0 0 0 0
10GE with SPAs 1 3 3 3 8 12 24 1-port 10GE
10GE with fixed card 12 18 36 6x10GE fixed card
GE with SPAs 12 14 28 30 64 96 192 8-port GE
GE with fixed card 40 60 120 2x10GE +20x1GE fixed card
FE 8 8 24 24 64 96 192 8-port FE
STM-4 1 1 3 3 8 12 24 1-port STM4 POS
STM-1 4 4 12 12 32 48 96 4-port STM1 POS
T3/E3 4 4 12 12 32 48 96 4-port T3/E3
ChT3 @T1 112 112 336 336 896 1344 2688 4-port Channelized T3
ChT3 @DS0 1023 1023 3069 3069 8184 12276 24552 4-port Channelized T3
ChT1 / ChE1 @DS0 192 / 256 * 384 / 512 576 / 768 576 / 768 1536 / 2048 2304 / 3072 4608 / 6144 8-port Channelized T1/E1
V.35/X.21/EIA-232… 4 4 12 12 32 48 96 4-port Serial (12in1)
ChSTM1 @ T3 / E3 3/3 3/3 9/9 9/9 24 / 24 36 / 36 72 / 72 1-port Channelized STM1
ChSTM1 @ T1 / E1 84 / 63 84 / 63 252 / 189 252 / 189 672 / 504 1008 / 756 2016 / 1512 1-port Channelized STM1
ChSTM1 @ DS0 1023 1023 3066 3066 8176 12264 24528 1-port Channelized STM1
STM-64 1 1 4 6 12 1-port OC192 (double-height)
STM-16 4 4 12 12 32 48 96 4-port OC48
* Includes one SPA and one 8xT1/E1 NIM module
Modular route processors: RP1 and RP2
• RP1
• 1.5GHz PowerPC architecture
• Up to 4GB IOS Memory
• 1GB Bootflash
• 33MB NVRAM
• Fixed 40GB Hard Drive

• RP2
• 2.66 GHz Intel dual-core architecture
• 64-bit IOS XE
• Up to 16GB IOS Memory
• 2GB Bootflash (eUSB)
• 33MB NVRAM
• Hot swappable 80GB Hard Drive
RP2 block diagram
[An Intel 2.66 GHz dual-core RP control processor runs IOS and the Linux OS, manages boards and chassis, and holds the RIB, FIB, and other processes; 8 or 16GB of CPU memory determines BGP routing table size; 2GB bootdisk; 33MB NVRAM; boot flash (OBFL); a 2.5” hard disk for system logging and core dumps; management Ethernet, console, aux, and USB ports; BITS timing input & output with a Stratum-3 network clock circuit; I2C chassis management bus and miscellaneous control (reset, output/input clocks); an interconnect for punt-path traffic and an EOBC switch linking to the SIPs and ESPs. No forwarded traffic passes through the RP. Link legend: ESI at 11.2-40 Gbps, SPA-SPI at 11.2 Gbps, HyperTransport at 10 Gbps, GE at 1 Gbps, I2C, SPA control bus.]
Modular route processors: RP1 and RP2
• Internal flash: for code storage, boot, config, logs, etc.; NVRAM: 33MB
• Management: Ethernet management port, auxiliary port, console port
• Storage:
• for core dumps, failure capture, etc.; 40 / 80 GB hard disk drive (mechanical)
• two external USB ports for IOS configs or file transfer
• Multiple communications paths to other cards (for control and for network control packets)
• Miscellaneous control functions for card presence detection, card ID, power/reset control, alarms, redundancy, etc.
• Stratum-3 network clock circuitry and BITS reference input (for synchronizing SONET links, etc.)
• RP2 adds BITS output capability
Modular route processors: RP1 and RP2
• ROMMON can load the IOS XE distribution from multiple sources
• has a v4 IP/UDP stack for tftp booting
• SIPs/ESPs tftpboot over EOBC from the active RP using this same facility
• ftp, rcp not supported (but are supported by IOS XE once booted)
• There is no helper image on the ASR1000
• Normal config-register rules apply
• 0x0 → boot to ROMMON
• 0x2142 → boot and ignore config
• File system support
• Linux Ext2
• FAT16/FAT32 (USB devices)
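For example, the config-register rules above can be exercised directly from ROMMON (an illustrative transcript; the image file name is hypothetical):

```
rommon 1 > confreg 0x2142
rommon 2 > boot bootflash:asr1000rp2-adventerprisek9.03.10.00.S.bin
```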
Route processor overview
• ASR1001: dual core, 2.2GHz; 4GB default memory (4x1GB), upgradable to 8GB (4x2GB) or 16GB (4x4GB); 1GB built-in eUSB bootflash (8GB on ASR1002); external USB storage; 64-bit IOS XE; integrated in chassis
• ASR1001-X: quad core, 2.0GHz; 8GB default (4x2GB), upgradable to 16GB (4x4GB); 8GB bootflash; optional 160GB HDD and external USB; 64-bit IOS XE; integrated
• ASR1002-X: quad core, 2.13GHz; 4GB default, upgradable to 16GB (4x4GB); 8GB bootflash; optional 160GB HDD and external USB; 64-bit IOS XE; integrated
• RP1: single core, 1.5GHz; 2GB default (2x1GB), upgradable to 4GB (2x2GB); 8GB bootflash; 40GB HDD and external USB; 32-bit IOS XE; chassis support: ASR1002 (integrated), ASR1004, ASR1006
• RP2: dual core, 2.66GHz; 8GB default (4x2GB), upgradable to 16GB (4x4GB); 2GB bootflash; 80GB HDD and external USB; 64-bit IOS XE; chassis support: ASR1004, ASR1006, ASR1013
ASR1000 Embedded Services Processor
• Centralized, programmable forwarding engine providing full-packet processing
• Packet buffering and queuing/scheduling (BQS)
• for output traffic to carrier cards/SPAs
• for special features such as traffic shaping, reassembly, replication, punt to RP, cryptography, etc.
• 5 levels of HQoS scheduling, up to 464K queues, priority propagation
• Dedicated crypto co-processor
• Interconnect providing data path links (ESI) to/from other cards over the midplane
• Input scheduler for allocating QFP bandwidth among ESIs
• FECP CPU manages the QFP, crypto device, midplane links, etc.
(ESP40 and ESP100 pictured)
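At runtime, the load on the ESP's QFP can be inspected from IOSd; a sketch (command availability and output format vary by release):

```
Router# show platform hardware qfp active datapath utilization
```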


For your reference
ASR1000 Embedded Services Processor
[ESP board components: SPI MUX, interconnect ASIC, TCAM, crypto engine, QFP subsystem (PPE + BQS), FECP CPU, PPE DRAM, FECP DRAM, and BQS packet DRAM]
ESP5 block diagram
[QFP complex: 40 packet processor engines (PPE1…PPE40) fed by a dispatcher, with BQS for queuing and scheduling; crypto coprocessor with SA table DRAM; FE control processor with DDRAM, boot flash, and JTAG control; TCAM (5 Mbit), resource DRAM (512MB), packet buffer DRAM (64MB), part length / BW SRAM; reset/power control, temperature sensor, EEPROM; interconnect links to the RPs and SIPs. Link legend: ESI at 11.2 Gbps, SPA-SPI at 11.2 Gbps, HyperTransport at 10 Gbps, GE at 1 Gbps, I2C, SPA control bus.]
System bandwidth is 5 Gb/sec full duplex (5 Gb/sec up plus 5 Gb/sec down)
ESP10 block diagram
[Same structure as the ESP5: a QFP complex with 40 PPEs, dispatcher, and BQS; crypto coprocessor with SA table DRAM; FE control processor; SPI mux; TCAM (10 Mbit), resource DRAM (512MB), packet buffer DRAM (128MB), part length / BW SRAM; interconnect links to the RPs, the other ESP, and the SIPs]
System bandwidth is 10 Gb/sec full duplex (10 Gb/sec up plus 10 Gb/sec down)
ESP20 block diagram
[Same structure again: a QFP complex with 40 PPEs, dispatcher, and BQS; crypto coprocessor with SA table DRAM; FE control processor; TCAM (40 Mbit), resource DRAM (1 GB), packet buffer DRAM (128 MB), part length / BW SRAM; interconnect links to the RPs, the other ESP, and the SIPs]
System bandwidth is 20 Gb/sec full duplex (20 Gb/sec up plus 20 Gb/sec down)
ESP40 block diagram
• PPE engines are responsible for all feature implementation
• BQS (Buffering, Queuing & Scheduling) executes complex QoS scheduling; queues and schedules packets
• The Forwarding Engine Control Processor manages the board, programs BQS, the PPEs, and crypto, and runs a Linux kernel
• The crypto assist ASIC is responsible for all encryption functions
• The Quantum Flow Processor (QFP) is responsible for forwarding packets
[Diagram details: TCAM (40 Mbit), resource DRAM (1 GB), packet buffer DRAM (256 MB), part length / BW SRAM; SPI mux; boot flash (OBFL), JTAG control, reset/power control, temperature sensor, EEPROM; interconnects to the RPs, the other ESP, and the SIPs. Link legend: ESI at 23 or 11.2 Gbps, SPA-SPI at 11.2 Gbps, HyperTransport at 10 Gbps, GE at 1 Gbps, I2C, SPA control bus.]
System bandwidth is 40 Gb/sec full duplex (40 Gb/sec up plus 40 Gb/sec down)
ESP100 block diagram
[ESP100 block diagram: two QFP complexes, each with 64 PPEs, dispatcher, and BQS, plus a crypto coprocessor; full mesh between the QFP complexes to exchange forwarded traffic; 80 Mbit TCAM, 1 GB resource DRAM, 1 GB packet buffer DRAM total; system bandwidth is 69 Gb/sec full duplex x2 (69 Gb/sec up plus 69 Gb/sec down) across the interconnects to the RPs, standby ESP, and SIPs]
ESP200 block diagram
[ESP200 block diagram: four QFP complexes, each with 64 PPEs, dispatcher, and BQS, plus crypto coprocessors; full mesh between the QFP complexes to exchange forwarded traffic; 2 x 80 Mbit TCAM, 8 GB resource memory total, 2 GB packet buffer DRAM total; system bandwidth is 69 Gb/sec full duplex x4 (69 Gb/sec up plus 69 Gb/sec down) across the interconnects to the RPs, standby ESP, and SIPs]
For your reference
ASR1000 ESP reference

|                   | ASR1001           | ASR1001-X          | ASR1002-X          | ESP5               | ESP10               | ESP20              | ESP40                  | ESP100            | ESP200                 |
| System bandwidth  | 2.5/5 Gbps        | 2.5/5/10/20 Gbps   | 5/10/20/36 Gbps    | 5 Gbps             | 10 Gbps             | 20 Gbps            | 40 Gbps                | 100 Gbps          | 200 Gbps               |
| Performance       | 8 Mpps            | 17 Mpps            | 30 Mpps            | 8 Mpps             | 17 Mpps             | 24 Mpps            | 24 Mpps                | 59 Mpps           | 113 Mpps               |
| # of processors   | 10/20             | 31                 | 24/32/48/62        | 20                 | 40                  | 40                 | 40                     | 128               | 256                    |
| Clock rate        | 900 MHz           | 1.5 GHz            | 1.2 GHz            | 900 MHz            | 900 MHz             | 1.2 GHz            | 1.2 GHz                | 1.5 GHz           | 1.5 GHz                |
| Crypto BW (1400B) | 1.8 Gbps          | 8 Gbps             | 4 Gbps             | 1.8 Gbps           | 4.4 Gbps            | 8.5 Gbps           | 11 Gbps                | 29 Gbps           | 78 Gbps                |
| QFP resource mem  | 256 MB            | 4 GB (unified)     | 1 GB               | 256 MB             | 512 MB              | 1 GB               | 1 GB                   | 4 GB              | 2 GB/QFP, < 8 GB total |
| Packet buffer     | 64 MB             | 512 MB (unified)   | 512 MB             | 64 MB              | 128 MB              | 256 MB             | 256 MB                 | 1 GB              | 2 GB                   |
| Control CPU       | Dual core 2.2 GHz | Quad core 2.00 GHz | Quad core 2.13 GHz | Single core 800 MHz | Single core 800 MHz | Single core 1.2 GHz | Dual core 1.8 GHz     | Dual core 1.73 GHz | Dual core 1.73 GHz    |
| Control memory    | 4/8/16 GB         | 8 GB               | 4/8/16 GB          | 1 GB               | 2 GB                | 4 GB               | 8 GB                   | 16 GB             | 32 GB                  |
| TCAM              | 5 Mb              | 10 Mb              | 40 Mb              | 5 Mb               | 10 Mb               | 40 Mb              | 40 Mb                  | 80 Mb             | 2 x 80 Mb              |
| Chassis support   | Integrated        | Integrated         | Integrated         | ASR 1002           | ASR 1002, 1004, 1006 | ASR 1004, 1006    | ASR 1004, 1006, 1013   | ASR 1006, 1013    | ASR 1013               |
For your reference
ASR1000 computational reference

| FRU                    | RP1      | RP2      | ASR1001  | ASR1001-X | ASR1002-X | ESP5    | ESP10   | ESP20   | ESP40   | ESP100   | ESP200    |
| Control plane cores    | 1        | 2        | 2        | 4         | 4         | 1       | 1       | 1       | 1       | 2        | 2         |
| Control plane clocking | 1.50 GHz | 2.66 GHz | 2.20 GHz | 2.00 GHz  | 2.13 GHz  | 800 MHz | 800 MHz | 800 MHz | 1.80 GHz | 1.73 GHz | 1.73 GHz |
| Data plane cores       | -        | -        | 20 *     | 31        | 62 *      | 40      | 40      | 40      | 40      | 2 x 62   | 4 x 62    |
| Data plane clocking    | -        | -        | 900 MHz  | 1.5 GHz   | 1.2 GHz   | 900 MHz | 900 MHz | 1.2 GHz | 1.2 GHz | 1.5 GHz  | 1.5 GHz   |
| Control plane SDRAM    | 2/4 GB   | 8/16 GB  | 4/8/16 GB | 8/16 GB  | 4/8/16 GB | 1 GB    | 2 GB    | 4 GB    | 8 GB    | 16 GB    | 32 GB     |
| Bootflash              | 1 GB     | 2 GB     | 8 GB     | 8 GB      | 8 GB      | -       | -       | -       | -       | -        | -         |
| NVRAM                  | 32 MB    | 32 MB    | 32 MB    | 32 MB     | 32 MB     | 32 MB   | 32 MB   | 32 MB   | 32 MB   | 32 MB    | 32 MB     |
| QFP memory             | -        | -        | 256 MB * | 4 GB      | 1 GB      | 32 MB   | 512 MB  | 1 GB    | 1 GB    | 4 GB     | 4 GB      |
| Packet buffer memory   | -        | -        | 64 MB    | 512 MB    | 512 MB    | 64 MB   | 128 MB  | 256 MB  | 256 MB  | 1 GB     | 2 GB      |
| TCAM                   | -        | -        | 5 Mbit   | 10 Mbit   | 40 Mbit   | 10 Mbit | 10 Mbit | 40 Mbit | 40 Mbit | 80 Mbit  | 2 x 80 Mbit |
ESP-100 and QFP Responsibilities
• Each ESP100 uses 2 QFP-NG ASICs to achieve its performance
• Each QFP-NG is associated with a subset of the SPA bays and interfaces
• This should be taken into account for:
  • QoS
  • Multicast
  • NAT
• Egress queueing for an interface is handled by the QFP (QFP0 or QFP1) associated with that interface
ESP-200 and QFP Responsibilities
• Each ESP200 uses 4 QFP ASICs to achieve its performance
• Each QFP is associated with a subset of the SPA bays and interfaces
• SIP40 linecards may be split amongst multiple QFPs
• SIP10 linecards will be serviced entirely by the leftmost QFP indicated per slot
• This should be taken into account for:
  • QoS
  • Multicast
  • NAT
• Egress queuing for an interface is handled by the QFP (QFP0, QFP1, QFP2, or QFP3) associated with that interface
ESP100 / ESP200 – In-service HW upgrade
• Support for ESP40-to-ESP100 upgrade, provided the following criteria are met:
  • The old image already supports ESP100 (XE 3.7 or later)
  • The ESP40 config has fewer than 4000 schedules per hierarchy
  • The ESP40 config has fewer than 116K queues on the interfaces associated with a single QFP on the ESP100 or ESP200
    • Typically only ever a risk in broadband configurations
    • Can be mitigated by spreading the queues across multiple interfaces distributed in the chassis
• Downgrading
  • Need to ensure that the ESP100 config does not exceed the scaling limits of the ESP40 in any respect
For your reference
TCAM Uses

Definition: Ternary Content-Addressable Memory is designed for rapid, hardware-based table lookups of Layer 3 and Layer 4 information. In the TCAM, a single lookup provides all Layer 2 and Layer 3 forwarding information.

Which ASR 1000 features use TCAM?
• Security Access Control Lists (ACL)
• Firewall
• IPSec
• Ethernet Flow Point for Ethernet Virtual Circuits
• Flexible Packet Matching
• Lawful Intercept
• Local Packet Transport Services (LPTS)
• Multi Topology Routing
• NAT
• Policy Based Routing
• QoS
• NBAR / SCE
• Web Cache Communication Protocol
• Edge Switching Services
• Event Monitoring
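The ternary ("don't care") matching a TCAM performs in a single hardware lookup can be sketched in software. This toy model is only an illustration of the lookup semantics (the 8-bit entries are invented, not real ACL entries): each entry is a value/mask pair, masked-out bits match anything, and the first matching entry wins, as with ACL evaluation.

```python
# Toy model of a ternary (value/mask) lookup as a TCAM resolves it in one
# hardware step. A bit position whose mask bit is 0 is a "don't care".
# First match wins, mirroring ACL first-match semantics.
def tcam_lookup(entries, key):
    for value, mask, result in entries:
        if key & mask == value & mask:
            return result
    return "implicit-deny"

# Hypothetical 8-bit keys: match 1010xxxx -> "permit", everything else denied.
entries = [(0b10100000, 0b11110000, "permit")]

print(tcam_lookup(entries, 0b10101111))  # upper nibble matches -> permit
print(tcam_lookup(entries, 0b01101111))  # no match -> implicit-deny
```

The hardware does every entry comparison in parallel rather than iterating, which is why one TCAM lookup resolves an entire ACL.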
Available SIP bandwidth depends on ESP & chassis
• Each ESP has a different Interconnect ASIC with different numbers of ESI ports
• ESP-10: 10G to all slots
  • 1 x 11.5G ESI to each SIP slot (1004, 1006)
• ESP-20: 10G to all slots
  • 1 x 11.5G ESI to each SIP slot (1004, 1006)
• ESP-40: 40G to all slots
  • 2 x 23G ESI to slots 0-2/3 in the 1006 and 1013
  • 1 x 11.5G ESI to slots 4 & 5 in the 1013
• ESP-100/200: 40G to all slots
  • 2 x 23G ESI to all SIP slots in all chassis
[Diagram legend: ESI links are either 11.5G-only or 23G-capable and connect the active and standby ESPs to RP0/RP1 and SIP slots 0-5 in the ASR1004, ASR1006, and ASR1013]
ASR 1000 System Oversubscription
• Total bandwidth of the system is determined by the following factors:
  • Type of forwarding engine: e.g. ESP-10, ESP-20, ESP-40, ESP-100, or ESP-200
  • Type of SIP: SIP10 or SIP40
  • The SIP bandwidth is the bandwidth of the link between one SPA Interface Processor and the ESP
• Step 1: SPA-to-SIP oversubscription
  • Up to 4 x 10 Gbps SPAs per SIP10 = 4:1 oversubscription max
  • No oversubscription for SIP40 = 1:1
  • Calculate your configured SPA bandwidth to SIP capacity ratio
• Step 2: SIP-to-ESP oversubscription
  • Up to 2, 3, or 6 SIPs share the ESP bandwidth, depending on the ASR1000 chassis used
  • Calculate the configured SIP bandwidth to ESP capacity ratio
• Total oversubscription = Step 1 x Step 2
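The two-step arithmetic above can be sketched as a small calculation; the chassis values below are taken from the family oversubscription table, everything else is just the stated ratio math.

```python
# Sketch of the two-step oversubscription calculation described above.
# Step 1: configured SPA bandwidth vs. one SIP's capacity.
# Step 2: aggregate SIP bandwidth vs. ESP capacity.
def oversubscription(spa_gbps_per_sip, sip_capacity_gbps, num_sips, esp_gbps):
    step1 = spa_gbps_per_sip / sip_capacity_gbps
    step2 = (num_sips * sip_capacity_gbps) / esp_gbps
    return step1 * step2

# ASR1006 with ESP-10 and three SIP10s, each fully loaded with 4 x 10G SPAs:
# 4:1 at the SPA-to-SIP step times 3:1 at the SIP-to-ESP step = 12:1 overall.
total = oversubscription(spa_gbps_per_sip=40, sip_capacity_gbps=10,
                         num_sips=3, esp_gbps=10)
print(f"{total:.0f}:1")  # 12:1
```

The same function reproduces the ASR1004/ESP-10 row of the table (two SIP10 slots, 8:1 overall).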
ASR 1000 Family Oversubscription Rates

| Chassis  | ESP version | SIP version | SIP slot number | Max. BW per SIP slot (Gbps) | SIP oversubscription | ESP (system) BW (Gbps) | ESP interconnect oversubscription | System (chassis) oversubscription |
| ASR 1001 | ESP2.5 | n.a.  | n.a.       | n.a. | n.a. | 2.5 | 5.6:1 | 5.6:1 |
| ASR 1002 | ESP5   | n.a.  | n.a.       | n.a. | n.a. | 5   | 6.8:1 | 6.8:1 |
| ASR 1002 | ESP10  | n.a.  | n.a.       | n.a. | n.a. | 10  | 3.4:1 | 3.4:1 |
| ASR 1004 | ESP10  | SIP10 | 1, 2       | 10   | 4:1  | 10  | 2:1   | 8:1   |
| ASR 1004 | ESP20  | SIP10 | 1, 2       | 10   | 4:1  | 20  | 1:1   | 4:1   |
| ASR 1006 | ESP10  | SIP10 | 1, 2, 3    | 10   | 4:1  | 10  | 3:1   | 12:1  |
| ASR 1006 | ESP20  | SIP10 | 1, 2, 3    | 10   | 4:1  | 20  | 3:2   | 6:1   |
| ASR 1006 | ESP40  | SIP10 | 1, 2, 3    | 10   | 4:1  | 40  | 3:4   | 4:1   |
| ASR 1006 | ESP40  | SIP40 | 1, 2, 3    | 40   | 1:1  | 40  | 3:1   | 3:1   |
| ASR 1006 | ESP100 | SIP40 | 1, 2, 3    | 40   | 1:1  | 100 | 1.2:1 | 1.2:1 |
| ASR 1013 | ESP40  | SIP10 | 1-6        | 10   | 4:1  | 40  | 3:2   | 6:1   |
| ASR 1013 | ESP40  | SIP40 | 1, 2, 3, 4 | 40   | 1:1  | 40  | 5:1   | 6:1   |
|          |        | SIP40 | 5, 6       | 10   | 4:1  |     |       |       |
| ASR 1013 | ESP100 | SIP40 | 1, 2, 3, 4 | 40   | 1:1  | 100 | 2.4:1 | 2.4:1 |
|          |        | SIP40 | 5, 6       | 40   | 1:1  |     |       |       |
| ASR 1013 | ESP200 | SIP40 | 1-6        | 40   | 1:1  | 200 | 1.2:1 | 1.2:1 |

ESP and SIP ingress QoS functions were integrated into the ASR 1000 design to deal with this apparent oversubscription.
Quantum Flow Processor – ASR 1000 innovation
1st generation QFP chip set: Cisco QFP packet processor plus Cisco QFP traffic manager (buffering, queueing, scheduling)
• Five-year design and continued evolution; the 3rd generation is in development
• Massively parallel: 64 cores, 4 threads per core, for 256 packets in flight
• QFP architecture designed to scale beyond 100 Gbit/sec
• High-priority traffic path throughout forwarding processing
• Packet replication capabilities for Lawful Intercept
• Full visibility of the entire L2 frame
• Latency: tens of microseconds with features enabled
• On-chip interfaces for an external cryptographic engine
• 2nd generation QFP is capable of 70 Gbit/sec, 32 Mpps processing
• Can cascade 1, 2 or 4 chips to build higher-capacity ESPs
Cisco Quantum Flow Processor
custom versus off the shelf
• Custom design needed for next-gen Network Integrated Services
  • Existing CPUs do not offer the forwarding power required
  • The architecture of general-purpose CPUs relies on larger cache lines (64B/128B), which is inefficient for network features
• QFP uses 16-byte memory access sizes
  • Minimizes wasted memory reads and increases memory access efficiency
  • For the same raw memory bandwidth, a 16B read allows 4-8 times the number of memory accesses/sec as a CPU using 64B/128B accesses
• Preserves C-language programming support
  • Including stacking for nested procedures
  • A differentiator as compared to NPUs
  • Key to feature velocity
  • Support for portable, large-scale development
• Adds hardware assists to further boost performance
  • TCAM, PLU, HMR…
  • Trade-off: power requirement vs. board space
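The 4-8x claim is straightforward arithmetic: for a fixed raw memory bandwidth, the number of accesses per second scales inversely with the access size. The bandwidth figure below is arbitrary, chosen only to show the ratio.

```python
# For a fixed raw memory bandwidth, accesses/sec scales inversely with the
# access size: QFP's 16B accesses vs. a general-purpose CPU's 64B/128B
# cache-line fills. The 100 GB/sec figure is illustrative, not a QFP spec.
raw_bw_bytes_per_sec = 100e9

def accesses_per_sec(access_size_bytes):
    return raw_bw_bytes_per_sec / access_size_bytes

ratio_vs_64B = accesses_per_sec(16) / accesses_per_sec(64)
ratio_vs_128B = accesses_per_sec(16) / accesses_per_sec(128)
print(ratio_vs_64B, ratio_vs_128B)  # 4.0 8.0
```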
Cisco Quantum Flow Processor
2nd generation details
• Used in ASR1002-X, ESP100 & ESP200
• The 2nd gen QFP integrates both the PPE engine and the Traffic Manager into a single ASIC
• 64 PPEs
• 116K queues per 2nd gen QFP (128K queues for the 1st gen QFP)
• Can be used in a matrix of 2 or 4
  • ESP100 has 232K queues
  • ESP200 has 464K queues
• 1st and 2nd gen QFPs run the same code
  • Maintains identical feature behavior (NAT, FW, etc.) between QFP hardware releases
  • Full configuration consistency
• In-service hardware upgrade from ESP40 to ESP100 supported
• Differences
  • Minor behavioral show-command differences
  • Deployment differences in deployments with a large number of BQS schedules
For your
reference
Quantum Flow Processor Video

http://www.cisco.com/cdc_content_elements/flash/netsol/sp/quantum_flow/demo.html
ASR1000 QFP Architecture
• PPE processing array: 40 PPEs (1st gen); 64 PPEs (2nd gen)
  • Tensilica (MIPS-like) instruction set architecture
  • Data cache (1 KB per thread, 16B cache line)
  • Four HW threads per PPE
  • PPEs operate at different speeds on various ESPs
  • Extensive HW assists: ACL, TBM lookup, WRED, flow locks
• Distributor assigns each packet to a PPE/context
  • The QFP is not doing flow-based load-balancing among processors
  • Distribution is to any eligible PPE/context
  • Hardware locks are used for ordering and mutual exclusion
• Memory resources: DRAM0-DRAM7 and INFRA SRAM, with assist blocks (WRC, HMR, TCM, RLB, ARL, PLU, FLB) attached via the resource interconnect
• High-performance memory
  • TCAM4: 200M searches/second with QFP
  • DRAM: 1.6 billion cache-line accesses per second
• Buffering, Queuing, & Scheduling (BQS)
  • HQF/MQC compatible
  • 128K queues
  • Flexible allocation of schedule resources
  • 5+ levels of scheduling hierarchy
• Data path resources: IPM and OPM (SPI/HT interfaces), GPM, Gather, and BQS
QFP Hardware Assists
• RLB = Regular Lock Block
• TCM = TCAM Controller
• ARL = ACL Range Lookup
• INFRA = DMA Engine, HT access, CSR access
• PLU = Pointer Lookup Unit (Tree Bitmap lookup)
• HMR = Hash Mod Read
• WRC = Weighted RED Controller
• Gather = gathers fragments
• FLB = Flow Lock Block
• Packets are given an internal ID based on source/destination interface, packet header fields, etc.
• The ID is then used internally to ensure packet sequencing
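The combination of free packet distribution with per-flow ordering can be sketched as follows. This is a toy model of the idea, not QFP microcode: a flow ID is derived from interface and header fields, and packets of the same flow are released in arrival order even though any worker may have processed them in any order.

```python
# Toy model of flow-lock ordering: packets are dispatched to any free PPE,
# but each carries a flow ID derived from its headers, and packets of the
# same flow are released in their original sequence order.
from collections import defaultdict

def flow_id(src_if, dst_if, headers):
    # Internal ID based on source/destination interface and header fields,
    # e.g. flow_id("Gi0/0/0", "Te0/1/0", ("10.0.0.1", "10.0.0.2", 6)).
    return hash((src_if, dst_if, headers))

def release_in_flow_order(processed):
    """processed: list of (flow, seq) in completion order (any interleaving).
    Returns packets so each flow's packets appear in seq order, while
    different flows may still interleave freely."""
    pending = defaultdict(list)
    for flow, seq in processed:
        pending[flow].append(seq)
    for queue in pending.values():
        queue.sort()
    out = []
    for flow, _ in processed:
        out.append((flow, pending[flow].pop(0)))  # release lowest seq for flow
    return out

# Flow A's packet 2 finished before packet 1; release restores flow order.
completed = [("A", 2), ("B", 1), ("A", 1), ("A", 3)]
print(release_in_flow_order(completed))  # [('A', 1), ('B', 1), ('A', 2), ('A', 3)]
```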
ESI – Enhanced SerDes Interconnect
• Consists of 4 or 8 SerDes in each direction
  • Each SerDes operates at either 3.125 or 6.25 Gbps
  • Provides 11.5 Gb/sec usable bandwidth per SIP10 or 2 x 23 Gb/sec per SIP40
• 24b/26b encoding/framing/scrambler protocol derived from CAT6K Chico EFCP (efficient fabric channel protocol)
• All transmission is in the form of 4 x 26b segments
  • 24 bytes of data per frame, byte-striped across the 4 SerDes lanes
  • 8 bits of control for sync/alignment and state signaling
• Supports 2 priorities of packet frame plus control messages, with interleaving and preemption
• Packets are framed with a 4B ESI header, padding, and a 32-bit CRC
• Control messages are interleaved with packet frames to carry:
  • Egress queue status (QSTAT), from the SIP SPA aggregation to the ESP/QFP
  • Link-level XON/XOFF backpressure (both directions) for high/low priority packets
  • Link-level XON/XOFF backpressure for QSTAT messages
SPI – System Packet Interface
• Optimized to carry packets from many channels
• Up to OC-192 rates
• Offers channelization and programmable burst sizes
• Offers per-channel backpressure (Xon/Xoff)
• Limited reach
• SPI-4.2
  • 16b data bus, 1b control bus
  • DDR clocking
  • FIFO buffer status portion (2b status channel)
  • Up to 256 port addresses with independent flow control for each
Hypertransport (HT)
• Bi-directional serial/parallel high-bandwidth, low-latency point-to-point link
• Maximum of 3.2 GHz (so up to 25.6 Gbps)
• Packet based
• Can mix internal links together
Interlaken (I/L)
• Interlaken characteristics
  • High-bandwidth reliable packet transfer bus
  • Builds on SPI-4.2 but reduces pin requirements (similar control word structure)
  • Uses high-speed SerDes (can bundle multiple serial links to create a logical connection with multiple channels)
  • Offers backpressure with Xon/Xoff flow control
  • Offers data integrity with 64B/67B encoding
  • Up to 6 Gbps per lane or pin
  • Designed to scale up to 100 Gbps (scales with the number of lanes)
  • Up to 256 communication channels, or up to 64K with channel extension
  • Continuous meta frame of programmable frequency (guarantees lane alignment, etc.)
• A packet is segmented into bursts, and each burst is bounded by 2 control words (before and after)
  • Fields indicate start-of-packet, end-of-packet, error detection, etc.
  • Bursts are associated with a logical channel (e.g. a port)
  • Allows interleaving of data from different channels / low-latency operations
• Reference: http://www.interlakenalliance.com/Interlaken_Protocol_Definition_v1.2.pdf
System Architecture –
ISR4000
Introducing the Cisco ISR 4000 Family
Enabling Branch Services for the 21st Century Network
Delivering the Ultimate Application Experience Over Any Connection

Revolutionary Architecture
 4-10 times faster, at the same price
 Deterministic performance with services
 Pay as you grow

Service Innovation
 Native Layer 2 – 7 services
 Converged network, compute, storage
 Simple, scalable WAN path control
 Best-of-breed security: Sourcefire® IDS
 Virtualized network function
ISR4000 system specification

|                                   | 4321              | 4331              | 4351              | 4431              | 4451              |
| CPU architecture                  | 4 core            | 8 core            | 8 core            | 4 core CP/SP + 6 core DP | 4 core CP/SP + 10 core DP |
| Network Interface Modules (NIMs)  | 2                 | 2                 | 3                 | 3                 | 3                 |
| Enhanced Service Modules (SM-X)   | 0                 | 1                 | 2                 | 0                 | 2                 |
| Front-panel Ethernet              | 1 x dual PHY (RJ45 & SFP), 1 RJ45 | 1 x dual PHY (RJ45 & SFP), 1 RJ45, 1 SFP | 3 x dual PHY (RJ45 & SFP) | 4 x dual PHY (RJ45 & SFP) | 4 x dual PHY (RJ45 & SFP) |
| Performance (default/max)         | 50 / 100 Mbit/s   | 100 / 300 Mbit/s  | 200 / 400 Mbit/s  | 500 / 1000 Mbit/s | 1000 / 2000 Mbit/s |
| Power supply                      | Single external AC | Single internal AC | Single internal AC or DC | Dual internal AC or DC | Dual internal AC or DC |
| Default/max DRAM                  | 4 / 8 GB for CP/SP/DP | 4 / 16 GB for CP/SP/DP | 4 / 16 GB for CP/SP/DP | 4 / 16 GB for CP/SP, 2 GB for DP | 4 / 16 GB for CP/SP, 2 GB for DP |
| Default/max flash                 | 4 / 8 GB          | 4 / 16 GB         | 4 / 16 GB         | 8 / 32 GB         | 8 / 32 GB         |
| Management Ethernet               | 1 Gbit/s          | 1 Gbit/s          | 1 Gbit/s          | 1 Gbit/s          | 1 Gbit/s          |

CP = Control Plane, DP = Data Plane, SP = Services Plane


Cisco ISR 4400 Architecture
[Block diagram: the control plane (1 core) and services plane (3 cores) share DRAM and run IOSd plus service containers (WAAS, KVM, EnergyWise); the data plane (6 or 10 cores) has its own DRAM; the FPGE front-panel GE ports attach via 4 x 1 Gb/sec SGMII and 4 x PCIe; a 10G XAUI link joins the data plane to the Multigigabit Fabric, which provides 10 Gb/sec per SM-X slot, 2 Gb/sec per NIM slot, and 1 Gb/sec SGMII; management Ethernet, console/aux, USB hub, flash, FPGA, and the platform system controller attach to the control plane]
Cisco ISR 4300 Architecture
[Block diagram: the control plane (1 core) and services plane (3 cores) share DRAM and run IOSd plus service containers (WAAS, KVM, EnergyWise); the FPGE front-panel GE ports attach via 3 x 1 Gb/sec SGMII and 4 x PCIe; a 10G XAUI link joins the forwarding complex to the Multigigabit Fabric, which provides 10 Gb/sec per SM-X slot, 2 Gb/sec per NIM slot, and 1 Gb/sec SGMII; management Ethernet, console/aux, USB hub, flash, mSATA, FPGA, and the platform system controller attach to the control plane]
Note: the 4321 uses 2 DP, 1 CP & 1 SP cores
Maximum interface termination

|                      | ISR4321 | ISR4331  | ISR4351  | ISR4431 | ISR4451  | Comment                                        |
| SM-X (single width)  | 0       | 1        | 2        | 0       | 2        |                                                |
| NIMs (single width)  | 2       | 2 (3)    | 3 (5)    | 3       | 3 (5)    | With SM-X-NIM-ADPTR each (SM-X = NIM)          |
| 10GE routed          | 0       | 1        | 2        | 0       | 2        | With SM-X-4X1G-1X10G                           |
| 1 GE routed          | 2+4 = 6 | 3+10 = 13 | 3+18 = 21 | 4+6 = 10 | 4+18 = 22 | With onboard + SM-X-6X1G & NIM-2GE-CU-SFP    |
| 1 GE switched        | 16      | 40       | 72       | 24      | 72       | With NIM-ES2-8-P & SM-X-ES3-24-P               |
| T3/E3 clear channel  | 0       | 1        | 2        | 0       | 2        | With SM-X-1T3/E3                               |
| T1/E1 clear channel  | 8       | 24       | 40       | 24      | 40       | With NIM-8MFT-T1/E1 or NIM-4MFT-T1/E1 (4321)   |
| T1/E1 channelized    | 4       | 24       | 40       | 24      | 40       | With NIM-8CE1T1-PR or NIM-2CE1T1-PR (4321)     |
| FXS                  | 8       | 12       | 20       | 12      | 20       | With NIM-4FXS                                  |
| FXO                  | 8       | 12       | 20       | 12      | 20       | With NIM-4FXO                                  |
| Serial               | 4       | 12       | 20       | 12      | 20       | With NIM-4T or NIM-2T (4321)                   |
| VA DSL               | 2       | 3        | 5        | 3       | 5        | With NIM-VAB-A or NIM-VA-B or NIM-VAB-M        |
Service Virtualization for networking
Service Containers
 Dedicated virtualized compute resources
 CPU, disk, memory for each service
 Easily repurpose resources
 Industry-standard hypervisor (e.g. VM 1: WAAS, VM 2: EnergyWise, VM 3: future apps)

Benefits
 Better performing network services
 Ease of deployment with zero footprint; no truck roll
 Greater security through fault isolation
 High reliability
 Flexibility to upgrade network services independent of router IOS® Software
Future service virtualization
• Process hosting: an embedded network services feature runs in a container within Cisco IOS XE
• Blade hosting: a feature or application runs in a container on a blade, alongside Cisco IOS XE
• End-point hosting: network services and applications run on an external server, alongside Cisco IOS XE
Quality of Service
ISR4000 only

ISR4000 overall forwarding path with QoS
[Forwarding-path diagram; key annotations: SW pattern matching in the forwarding engine; packet buffers used by the forwarding engine; IOS process buffers; advanced classification, policing, and WRED in the forwarding engine; hierarchical egress packet scheduling; a non-blocking MultiGig Fabric provides internal transport between the FPGE ports, NIMs, UCS-E, and SM-X modules]
ASR1000 only

ASR1000 overall forwarding path with QoS
[Forwarding-path diagram; key annotations: TCAM and packet buffers used by the QFP; IOS process buffers; advanced classification, policing, and WRED in the QFP; hierarchical egress packet scheduling; port rate limiting and weighting for forwarding to the ESP across the interconnect; ingress packet buffering and basic ingress classification on the SIP (per SPA); egress SIP packet buffering]
ASR1000 only

ASR 1000 QoS – SIP ingress path
• Ingress packet priority classification
  • Classification based on: 802.1p, IPv4 TOS, IPv6 TC, MPLS EXP
  • Configurable per port or VLAN
• Ingress SIP buffering
  • 128 MB input buffer
  • 2 queues, high & low, per port
• Ingress SIP scheduler
  • Defaults to weighted fair amongst ingress ports
  • Excess bandwidth is shared
  • Excess weight per port is configurable
ASR1000 only

ASR 1000 SIP ingress path (For your reference)
• Packets are classified as high or low priority when they arrive from the wire
• High and low priority packet queues are maintained for each ingress port
• By default all ports have a weight proportional to the interface bandwidth
• High priority packets from all ports will be sent to the ESP for processing before the low priority queues are handled
• By default there is no rate limiting of high priority traffic, which could potentially starve low priority traffic if there is more SPA than SIP bandwidth (e.g., a SIP-10 with two TenGigE interfaces)
ASR1000 only

ASR 1000 SIP ingress path QoS config (For your reference)
• plim qos input policer bandwidth X strict-priority
  • Limits the amount of high-priority traffic accepted on an interface
  • X is expressed in kilobits per second
• plim qos input queue [ 0 | strict-priority ] pause enable
  • Enables the generation of Ethernet pause frames when the low / high priority packet depth hits a certain threshold
• plim qos input queue [ 0 | strict-priority ] pause threshold X
  • Defines the threshold at which to generate an Ethernet pause frame back to the remote device
  • X is expressed in percent of the queue limit
• plim qos input weight X
  • Defines the weight of the ingress interface's traffic when scheduling traffic to be sent from the SIP to the ESP for forwarding
ASR1000 only

ASR 1000 SIP ingress path QoS config (For your reference)
• plim qos input map [ ip | ipv6 | mpls | cos ] … queue [ 0 | strict-priority ]
  • CLI to map specific IPv4 TOS, IPv6 traffic class, MPLS EXP, or 802.1p values to the high or low priority queue
  • It is possible to classify the various encapsulations simultaneously
  • The cos option is only available on Ethernet subinterfaces
  • Enabling cos matching will override any main-interface matching for traffic on the specific VLAN(s)
• By default the following traffic classes are considered high priority:
  • IPv4: precedences 6 & 7, DSCP values cs6, cs7
  • IPv6: traffic class ef (46)
  • MPLS: EXP values 6 & 7
  • 802.1p: values 6 & 7
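Putting the plim commands above together, a sketch of an ingress SIP QoS tune follows. The interface, DSCP keyword, rate, and weight values are illustrative only, and exact plim syntax varies by SPA type and software release, so verify against your platform before use.

```
interface TenGigabitEthernet0/0/0
 ! Send EF-marked IPv4 traffic to the strict-priority ingress queue
 plim qos input map ip dscp ef queue strict-priority
 ! Police the high-priority ingress queue to 1 Gb/sec (value in kb/sec)
 plim qos input policer bandwidth 1000000 strict-priority
 ! Increase this port's excess-bandwidth weight toward the ESP
 plim qos input weight 20
```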
ASR1000 only

ASR 1000 SIP ingress path QoS stats (For your reference)
• show platform hard port x/y/z plim qos …
  • Provides details on the QoS configuration for SIP forwarding to the ESP
• show platform hard port x/y/z plim buffer settings detail
  • Provides details on SIP buffer utilization in the transmit and receive directions
• show platform hard interface A x/y/z plim qos input map
  • Provides details on packet classification for the high and low precedence ingress queues on the SIP
ASR 1000 QoS – SIP egress path
• 2 Mbyte of egress buffering per SIP card
• No need for additional SIP-based classification or queuing
  • The heavy lifting is already done by the QFP engine
• The egress SIP has high and low priority buffers in case there is backpressure from a SPA
For your
reference
ASR1000 SIP egress path
• Egress path on SIP has two queues per interface, high and low priorities
• All packets in high priority queue for an interface must be drained before any low priority
packets will be sent to the SPA for egress
• show platform hard slot X plim buffer settings detail
• Provides details on egress buffer utilization on SIP. These parameters are not user configurable.
ISR4000 only

ISR4000 QoS forwarding path
A. Ingress packets arrive from the MGF and FPGE and are temporarily stored in a small internal packet buffer until processed.
B. An available core is allocated for the packet and begins processing (security ACLs, ingress QoS, etc.).
C. Dataplane cores use SW pattern matching to perform lookups for the features enabled for this packet, update statistics, update state for stateful features, forward to the crypto engine, find the egress interface, etc.
[Diagram: dispatcher/buffers feed the multicore dataplane complex, which uses resource memory and hands off to the scheduling complex with its BQS packet buffers]
ASR1000 only

ASR1000 QoS ESP forwarding path
A. Ingress packets arrive through the interconnect and are temporarily stored in a small internal packet buffer until processed.
B. An available QFP PPE is allocated for the packet and begins processing (security ACLs, ingress QoS, etc.).
C. QFP PPEs use DRAM and TCAM to perform lookups for the features enabled for this packet, update statistics, update state for stateful features, forward to the crypto engine, find the egress interface, etc.
[Diagram: dispatcher/buffers feed the QFP forwarding complex, which uses resource memory, TCAM, and the policer assist, and hands off to the scheduling complex with its BQS packet buffers]
ASR1000 only

ASR 1000 ESP Interconnect scheduling


• ESP Interconnect scheduling ensures fair access by each SIP to the Cisco
QFP
• By default each SIP is allocated:
• a minimum of approximately 50 Mb/sec of high priority traffic to the Cisco QFP
• an equal weight for any excess bandwidth beyond the guaranteed minimum

• All high priority traffic from all SIPs is processed before low priority traffic is
handed to the Cisco QFP
• These parameters are not user-configurable.
ASR1000 only

ASR1000 ESP interconnect status For your


• show plat hard slot X serdes qos reference
• Where X is F0 or F1
• This shows how minimum bandwidth is allocated on the ESP forwarding card for incoming traffic for
various linecards in the chassis.
IOS XE Packet Processing Engines for QoS
• Packets are accepted into the forwarding engine and allocated a free core to handle the
packet
• Multiple packets are handled simultaneously in the forwarding engine
• The following QoS functions are handled by forwarding engine:
• Classification
• Marking
• Policing
• WRED

• After all the above QoS functions (along with other packet forwarding features such as NAT, Netflow, etc.) are handled, the packet is placed in packet buffer memory and handed off to the Traffic Manager
ISR4000 only

ISR4000 QoS forwarding path
D. Once packet processing is complete and the packet has been modified, the packet is given to the scheduler.
E. Based on default and user configurations, packets are scheduled for transmission based on the egress physical interface.
F. After the packet is released for egress, it is sent to the MGF or PCIe bus for transmission out the physical interface.
ASR1000 only

ASR1000 QoS ESP traffic manager path
D. Once packet processing is complete and the packet has been modified, the packet is given to the scheduler.
E. Based on default and user configurations, packets are scheduled for transmission based on the egress physical interface.
F. After the packet is released for egress, it is sent to the interconnect and then to the SIP card for egress from a physical interface.
Traffic Manager processing
• The Traffic Manager performs all packet scheduling decisions.
• Packets move through the QoS hierarchy even if MQC QoS is not configured.
• The Traffic Manager implements a 3-parameter scheduler, which gives advanced flexibility:
  • Minimum: bandwidth
  • Excess: bandwidth remaining
  • Maximum: shape
• Priority propagation (via the minimum parameter) ensures that high priority packets are forwarded first without loss.
Traffic Manager Processing
• Various models of hardware have differing amounts of packet memory
• Packet memory is one large pool
• Interfaces do not reserve a specific amount of packet memory
• Packet memory may be over-provisioned
• Out-of-resources (memory exhaustion) behavior:
  • Non-priority user data is dropped at 85% packet memory utilization
  • Priority user data is dropped at 97% packet memory utilization
  • PAK_PRI and internal control packets are only dropped at 100% memory utilization
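The tiered admission thresholds above can be sketched as a simple check. This is a toy model of the stated policy, not the actual BQS implementation: non-priority data is refused first, then priority data, and PAK_PRI/control packets only when memory is fully exhausted.

```python
# Toy model of the tiered packet-memory admission policy: each traffic class
# is admitted only while utilization is below its threshold.
THRESHOLDS = {"non-priority": 0.85, "priority": 0.97, "pak_pri": 1.00}

def admit(packet_class, memory_utilization):
    return memory_utilization < THRESHOLDS[packet_class]

print(admit("non-priority", 0.90))  # False: beyond the 85% threshold
print(admit("priority", 0.90))      # True: still under 97%
print(admit("pak_pri", 0.99))       # True: control traffic admitted to 100%
```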
For your
reference
Traffic Manager statistics
• show plat hard qfp active stat drop all | inc BqsOor
• This gives a counter which shows if any packets have been dropped because of packet buffer
memory exhaustion.
• show plat hard qfp active infra bqs status
• Gives metrics on how many active queues and schedules are in use. Also gives statistics on QFP
QoS hierarchies that are under transition.
• show plat hard qfp active bqs 0 packet-buffer util
• Gives metrics on current utilization of packet buffer memory
For your
reference
ESP specific specifications
Card Packet memory Maximum queues TCAM

ASR 1001 64 MB 16,000 5 Mb


ASR 1001-X 512 MB 16,000 10 Mb
ASR 1002-X 512MB 116,000 40 Mb
ESP-5 64 MB 64,000 5 Mb
ESP-10 128 MB 128,000 10 Mb
ESP-20 256 MB 128,000 40 Mb
ESP-40 256 MB 128,000 40 Mb
ESP-100 1GB 232,000 80 Mb
ESP-200 2GB 464,000 160Mb
For your reference
ESP specific specifications

| Card       | Minimum IOS software | Supported chassis |
| ASR 1001   | XE 3.2               | built-in          |
| ASR 1001-X | XE 3.12              | built-in          |
| ASR 1002-X | XE 3.7               | built-in          |
| ESP-5      | XE 2.1               | 1002              |
| ESP-10     | XE 2.1               | 1002, 1004, 1006  |
| ESP-20     | XE 2.2               | 1004, 1006        |
| ESP-40     | XE 3.1 *             | 1004, 1006, 1013  |
| ESP-100    | XE 3.7               | 1006, 1013        |
| ESP-200    | XE 3.10              | 1013              |

* ESP-40 in a Cisco ASR 1004 requires IOS XE 3.2
IOS XE – non-queuing highlights
• Classification
• IPv4 precedence/DSCP, IPv6 precedence/DSCP, MPLS EXP, FR-DE, ACL, packet-length, ATM
CLP, COS, inner/outer COS (QinQ), vlan, input-interface, qos-group, discard-class
• QFP is assisted in hardware by TCAM on ASR1000, optimized software matching on ISR4000

• Marking
• IPv4 precedence/DSCP, IPv6 precedence/DSCP, MPLS EXP, FR-DE, discard-class, qos-group,
ATM CLP, COS, inner/outer COS

• Detailed match and marker stats may be enabled with a global configuration option
• platform qos marker-statistics
• platform qos match-statistics per-filter
• platform qos match-statistics per-ace
• Detailed statistics will show per-line match statistics in class-maps. For marking, the detailed stats show the number of packets marked per action.
IOS XE QoS – non-queuing
• Policing
• 1R2C – 1 rate 2 color
• 1R3C – 1 rate 3 color
• 2R2C – 2 rate 2 color
• 2R3C – 2 rate 3 color
• color-blind and color-aware modes in XE 3.2 and higher software
• supports RFC 2697 and RFC 2698
• explicit rate and percent based configuration
• dedicated policer block in QFP hardware on ASR1000
• Policing order of operation (not configurable)
• XE 3.1 and earlier software evaluates from the parent down to the child
• XE 3.2 and later software evaluates from the child up through to the parent (the same as queuing functions)
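As a sketch of a 2R3C (RFC 2698) policer expressed in MQC, the rates, DSCP value, and class below are illustrative, not a recommended policy:

```
policy-map BRANCH-POLICE
 class class-default
  ! Two-rate, three-color policer: conform / exceed / violate actions
  police cir 50000000 pir 100000000
   conform-action transmit
   exceed-action set-dscp-transmit af11
   violate-action drop
```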
IOS XE QoS – non-queuing
• WRED
• precedence (implicit MPLS EXP), dscp, and discard-class based
• ECN marking
• byte, packet, and time based CLI on ASR1000
• packet based only on ISR4000
• packet based configurations limited to exponential constant values 1 through 6 on
ASR1000
• dedicated WRED block in QFP hardware on ASR1000
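A minimal DSCP-based WRED sketch with ECN marking follows; the class, thresholds, and DSCP value are illustrative (default units are packets here, though on ASR1000 thresholds could equally be expressed in bytes or time):

```
policy-map WAN-OUT
 class class-default
  bandwidth remaining ratio 1
  ! DSCP-based WRED with explicit congestion notification
  random-detect dscp-based
  random-detect dscp af21 100 200
  random-detect ecn
```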
IOS XE QoS – queuing
• Up to 3 layers of queuing configured with MQC QoS
• Two levels of priority traffic (1 and 2), followed by non-priority traffic
• Strict and conditional priority rate limiting
• 3 parameter scheduler (minimum, maximum, & excess)
• Priority propagation (via minimum) to ensure no loss priority forwarding via minimum
parameter
• burst parameters are accepted but not used by scheduler
• Backpressure mechanism between hardware components to deal with external flow
control
• fair-queue consumes 16 queues for each class configured with it
• Allows configuration of aggregate queue depth and per-flow queue depth
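A minimal flow-based fair-queue sketch (names and values are illustrative); the class consumes 16 flow queues, and queue-limit bounds queue depth:

```
policy-map FQ-OUT
 class class-default
  shape average 100000000
  fair-queue
  queue-limit 512 packets
```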
IOS XE QoS – queuing
• Queue-limit may be manually configured with various units on ASR1000
• packets, time, or bytes (packets only on ISR4000)

• Within a policy-map, all classes must use the same type of units for all features
• Using packets based queue-limit deals well with bursts of variable size packets while
providing a maximum limit to introduced latency when all packets are MTU sized
• Using time or byte based queue-limits provides more exact control over maximum latency
but will hold a variable number of packets based on the size of the packets enqueued
• Simplifies use of the same policy-map on interfaces of different speeds
• Time based configuration results in bytes programmed in hardware when policy-map is attached to
egress interface
ASR1000 only
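An ASR1000-only sketch of time-based queue-limits (class names and values illustrative); note that all classes in the policy-map must use the same units:

```
policy-map Q-TIME
 class critical
  bandwidth 20000
  queue-limit 20 ms
 class class-default
  queue-limit 10 ms
```

When this policy-map is attached to an egress interface, the time values are converted to bytes in hardware based on the interface speed.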

IOS XE QoS – queuing


• Packet Buffer DRAM treated as one large buffer pool
• Two main building blocks for packet buffer DRAM
• Block: each queue gets 1 Kbyte blocks of memory
• Particle: packets are divided into 16 byte particles and linked together

• Advantages to such an implementation:


• Less complex than buffer carving schemes
• Fragmentation is minimal & predictable due to small sized blocks & particles

• Thresholds exist to protect internal control traffic and priority traffic


• Queue-limit considerations
• In scaled configurations, tuning of queue-limit parameter may be necessary to avoid scenarios where
a large number of smaller queues can overprovision packet memory
IOS XE QoS – queue limit management
• Without MQC QoS configuration, interface default queues have 50 ms of buffering in a
packets based configuration (except on ESP-40, which uses 25 ms)

  queue_limit (packets) = ( interface_speed (bits/sec) × 0.050 sec ) / ( interface_mtu (bytes/packet) × 8 (bits/byte) )
• MQC created queues have 50 ms of buffering by default in a packets based configuration
(on all ESP cards)

  queue_limit (packets) = ( interface_speed (bits/sec) × 0.050 sec ) / ( interface_mtu (bytes/packet) × 8 (bits/byte) )
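Worked example for a 1 Gb/s interface with a 1500 byte MTU:

```
queue_limit = (1,000,000,000 bits/sec × 0.050 sec) / (1500 bytes/packet × 8 bits/byte)
            = 50,000,000 bits / 12,000 bits/packet
            ≈ 4166 packets
```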
IOS XE QoS – 3 parameter scheduler
• ASR 1000 QFP provides an advanced 3 parameter scheduler
• Minimum - bandwidth
• Excess - bandwidth remaining
• Maximum - shape

• 3 parameter schedulers share excess bandwidth equally in default configuration


• versus 2 parameter schedulers, which share excess bandwidth proportional to the minimum
configuration
• bandwidth and bandwidth remaining may not be configured in the same policy-
map or class
IOS XE MQC – 3 parameter scheduler
• The bandwidth remaining command is used to adjust sharing of excess bandwidth
• Two options are available
• bandwidth remaining ratio X, where X ranges from 1 to 1000
• bandwidth remaining percent Y, where Y ranges from 1 to 100

• bandwidth remaining percent based allocations remain the same as classes are added to
a policy
• base is a consistent value of 100

• bandwidth remaining ratio based allocations adjust as more classes are added to
a policy with their own remaining ratio values
• base changes as new classes are added with their own ratios defined or with a default of 1

• By default, all classes have a bandwidth remaining ratio or percent value of 1
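A sketch contrasting the two behaviors (class names hypothetical): with ratio, a class's share is its ratio divided by the sum of all configured ratios, so shares shift as classes are added; with percent, shares are always out of a fixed base of 100:

```
policy-map EXCESS-RATIO
 class gold
  bandwidth remaining ratio 4    ! 4/6 of excess with the three classes shown
 class silver
  bandwidth remaining ratio 1    ! 1/6 of excess
 class class-default
  bandwidth remaining ratio 1    ! 1/6; adding a fourth class with ratio 2 would shrink every share
```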


3 parameter scheduler – minimum
policy-map child
 class voice
  priority level 1
  police cir 1000000 (bit/sec)
 class critical_services
  bandwidth 10000 (kbit/sec)
 class internal_services
  bandwidth 10000 (kbit/sec)
 class class-default
  bandwidth 1000 (kbit/sec)
!
policy-map parent
 class class-default
  shape average 25000000
  service-policy child

(Diagram, showing injected traffic → satisfy mins → calculate excess sharing: each class is offered 10 Mb/s into the 25 Mb/s parent shaper. The policer limits voice to 1 Mb/s, dropping 9 Mb/s. Minimums deliver 1 + 10 + 10 + 1 = 22 Mb/s; the remaining 3 Mb/s of excess all goes to class-default, which transmits 4 Mb/s and drops 6 Mb/s.)
3 parameter scheduler – minimum (example 2)
policy-map child
 class voice
  priority level 1
  police cir 1000000 (bit/sec)
 class critical_services
  bandwidth 10000 (kbit/sec)
 class internal_services
  bandwidth 10000 (kbit/sec)
 class class-default
  bandwidth 1000 (kbit/sec)
!
policy-map parent
 class class-default
  shape average 25000000
  service-policy child

(Diagram: voice is offered 10 Mb/s and policed to 1 Mb/s (9 Mb/s dropped); critical_services is offered 5 Mb/s, internal_services 15 Mb/s, and class-default 10 Mb/s. Minimums consume 1 + 5 + 10 + 1 = 17 Mb/s, leaving 8 Mb/s of excess shared equally: internal_services and class-default each receive 4 Mb/s more, so internal_services transmits 14 Mb/s (drops 1 Mb/s) and class-default transmits 5 Mb/s (drops 5 Mb/s).)
3 parameter scheduler – excess
policy-map child
 class voice
  priority level 1
  police cir 1000000 (bit/sec)
 class critical_services
  bandwidth remaining ratio 4
 class internal_services
  bandwidth remaining ratio 1
 class class-default
  bandwidth remaining ratio 1
!
policy-map parent
 class class-default
  shape average 25000000
  service-policy child

(Diagram: voice is offered 10 Mb/s and policed to 1 Mb/s (9 Mb/s dropped); critical_services is offered 5 Mb/s, internal_services 15 Mb/s, and class-default 10 Mb/s. The 24 Mb/s of excess is shared 4:1:1, but critical_services only needs 5 Mb/s, so the remaining 19 Mb/s is split equally: internal_services transmits 9.5 Mb/s (drops 5.5 Mb/s) and class-default transmits 9.5 Mb/s (drops 0.5 Mb/s).)
3 parameter scheduler – excess (percent)
policy-map child
 class voice
  priority level 1
  police cir 1000000 (bit/sec)
 class critical_services
  bandwidth remaining percent 40
 class internal_services
  bandwidth remaining percent 10
 class class-default
!
policy-map parent
 class class-default
  shape average 25000000
  service-policy child

(Diagram: voice is policed to 1 Mb/s (9 Mb/s dropped); critical_services transmits its offered 5 Mb/s, internal_services transmits 9 Mb/s of its offered 15 Mb/s (drops 6 Mb/s), and class-default transmits 10 Mb/s (drops 10 Mb/s), filling the 25 Mb/s parent shaper.)
Bandwidth Remaining Ratio Example
interface FastEthernet1/1/3.100
 encapsulation dot1Q 100
 service-policy output vlan100
!
policy-map vlan100
 class class-default
  bandwidth remaining ratio 89
!
interface FastEthernet1/1/3.101
 encapsulation dot1Q 101
 service-policy output vlan101
!
policy-map vlan101
 class class-default
  bandwidth remaining ratio 8
!
interface FastEthernet1/1/3.102
 encapsulation dot1Q 102
 service-policy output vlan102
!
policy-map vlan102
 class class-default
  bandwidth remaining ratio 1

(Chart: egress throughput in packets per second over time for the three sub-interfaces; under congestion the link bandwidth divides among vlan 100, 101, and 102 in the configured 89:8:1 ratio.)
IOS XE QoS hierarchies
• Generally, MQC based policy-maps with queuing functions may be attached to a physical
interface or sub-interface
• It is possible to attach a non-queuing policy-map to one location and then a queuing
policy-map to the other
• Some scenarios are supported with 2 level hierarchical policy-maps on tunnels and a
class-default shaper on the physical interface
• Broadband applications have their own set of supported scenarios which support queuing
policy-maps on sub-interfaces and then on the dynamically created sessions which
traverse that sub-interface
• Innovative hierarchies which move beyond strict parent-child hierarchies can be built using
service-fragment CLI
IOS XE QoS hierarchy
policy-map level1
 class class-default
  shape average 100000000
!
policy-map level2
 class user1
  shape average 60000000
  service-policy level3
 class class-default
  shape average 60000000
  service-policy level3
!
policy-map level3
 class prec0
  priority
  police cir 10000000
 class class-default
!
interface gig0/0/0.2
!
interface gig0/0/0.3

(Diagram: with no policy attached, traffic uses the interface default queue, which feeds the interface schedule and then the SIP root schedule.)
IOS XE QoS hierarchy
policy-map level1
 class class-default
  shape average 100000000
!
policy-map level2
 class user1
  shape average 60000000
  service-policy level3
 class class-default
  shape average 60000000
  service-policy level3
!
policy-map level3
 class prec0
  priority
  police cir 10000000
 class class-default
!
interface gig0/0/0.2
 service-policy out level1
!
interface gig0/0/0.3

(Diagram: attaching level1 to gig0/0/0.2 creates a queue under a level1 schedule, which feeds the interface schedule and then the SIP root schedule; gig0/0/0.3 still uses the interface default queue.)
IOS XE QoS hierarchy
policy-map level1
 class class-default
  shape average 100000000
  service-policy level2
!
policy-map level2
 class user1
  shape average 60000000
  service-policy level3
 class class-default
  shape average 60000000
  service-policy level3
!
policy-map level3
 class prec0
  priority
  police cir 10000000
 class class-default
!
interface gig0/0/0.2
 service-policy out level1
!
interface gig0/0/0.3

(Diagram: the 3-level policy on gig0/0/0.2 builds level3 queues feeding level2 schedules, which feed the level1 schedule, the interface schedule, and finally the SIP root schedule.)
ASR 1000 QoS hierarchy
policy-map level1
 class class-default
  shape average 100000000
  service-policy level2
!
policy-map level2
 class user1
  shape average 60000000
  service-policy level3
 class class-default
  shape average 60000000
  service-policy level3
!
policy-map level3
 class prec0
  priority
  police cir 10000000
 class class-default
!
interface gig0/0/0.2
 service-policy out level1
!
interface gig0/0/0.3
 service-policy out level2

(Diagram: gig0/0/0.2 carries the full level1 → level2 → level3 hierarchy; gig0/0/0.3 attaches level2 directly, so its level3 queues feed level2 schedules that feed the interface schedule and the SIP root schedule.)
ASR 1000 QoS service-fragments 4
policy-map sub-int-mod4
 class EF
  police 1000000
 class AF4
  police 2000000
 class class-default fragment BE
  shape average 75000000
!
policy-map int-mod4
 class data service-fragment BE
  shape average 128000000
 class EF
  priority level 1
 class AF4
  priority level 2
 class AF1
  shape average 50000000
  random-detect
!
interface GigabitEthernet0/0/0
 service-policy output int-mod4
!
interface GigabitEthernet0/0/0.2
 service-policy output sub-int-mod4

(Diagram: multiple instances of the sub-interface "fragment BE" class-default queues aggregate into the "service-fragment BE" class on the main interface, alongside the interface priority queues and the interface class-default queue, all feeding the interface schedule and then the SIP root schedule.)
ASR 1000 QoS service-fragments 3
policy-map int-mod3
 class data service-fragment BE
  shape average 128000000
 class class-default
  shape average 100000000
!
policy-map sub-int-mod3
 class EF
  police 1000000
  priority level 1
 class AF4
  police 2000000
  priority level 2
 class class-default fragment BE
  shape average 115000000
  service-policy sub-int-child-mod3
!
policy-map sub-int-child-mod3
 class AF1
  shape average 100000000
 class class-default
  shape average 25000000

(Diagram: the "service-fragment BE" class on the interface collects the sub-interface fragment queues and sits alongside the interface class-default queue under the interface schedule, which feeds the SIP root schedule.)
ASR 1000 QoS service-fragments 3 (continued)

(Diagram builds, using the same int-mod3 / sub-int-mod3 / sub-int-child-mod3 configuration: each sub-interface contributes its own priority queues (EF, AF4) and its own instance of the fragment BE hierarchy; the multiple instances of fragment BE across sub-interfaces aggregate into the single service-fragment BE class at the interface schedule, alongside the interface class-default queue, before the SIP root schedule.)
IOS XE observed Packet and frame sizes
• The traffic manager calculates the ethernet packet size as everything from the MAC L2
header through the end of the payload
• IFG, preamble, and FCS are not included
• For queuing features, the packet size can be adjusted manually

shape average 1000000 account user-defined -4

• atm cell overhead compensation is available so that ATM L2 links downstream are not overdriven

shape average 1000000 account user-defined 24 atm

• Traffic manager includes the 4 byte CRC in packet sizes for frame-relay
• MQC based QoS for ATM performs L3 shaping and only compensates for atm cell
overhead with the above atm directive in shaped classes
• ATM vc rate configurations are ATM L2 based shaping
For your
reference

Selected ASR1000 scalability attributes

Attribute                       RP2      RP1      ASR      ASR      ASR      CSR
                                                  1001     1001-X   1002-X   1000V
Configured class-maps           4,096    4,096    4,096    4,096    4,096    256
Configured policy-maps          16,000   4,096    4,096    1,000    4,096    30
Class-maps per policy-map       1,000 *  1,000 *  1,000 *  256      1,000 *  32
Match rules per class-map       32       32       32       16       32       8
Policer / shaper granularity    8 Kb/s (all platforms)
Policer / shaper accuracy       +/- 1% (all platforms)

* Subject to certain restrictions in hierarchical policies and broadband scalability scenarios
§ Current as of XE3.11 software
IOS XE Etherchannel QoS support
• With VLAN based load balancing
1. Egress MQC queuing configuration on Port-channel sub-interfaces
2. Egress MQC queuing configuration on Port-channel member link
3. Policy Aggregation – Egress MQC queuing on sub-interface
4. Ingress policing and marking on Port-channel sub-interface
5. Egress policing and marking on Port-channel member link
6. Policy Aggregation – Egress MQC queuing on main-interface (XE2.6 and higher)

• Active/standby with LACP (1+1)


7. Egress MQC queuing configuration on Port-channel member link (XE2.4 and higher)
9. Egress MQC queuing configuration on PPPoE sessions, model D.2 (XE3.7 and higher)
10. Egress MQC queuing configuration on PPPoE sessions, model F (XE3.8 and higher)

• Etherchannel with LACP and load balancing (active/active)


8. Egress MQC queuing configuration supported on Port-channel member link (XE2.5 and higher)
11. General MQC QoS support on Port-channel main-interface (XE3.12 and higher)
Aggregate Etherchannel QoS
with LACP and flow based load balancing
• Requires that aggregate Port-channel interfaces be identified before creation with the
platform qos port-channel-aggregate X command
• Up to four member links per aggregate Port-channel are supported
• FastEthernet and GigabitEthernet interfaces are supported
• TenGigabitEthernet is not supported at this time
• All member links in a port-channel must be the same speed

• Policy-maps may be applied to the aggregate Port-channel main interface only


• No member link QoS, no sub-interface QoS

• For vlan based QoS, a policy-map using VLAN classification should be applied to the
aggregate Port-channel main-interface
• Supports 3 levels of hierarchical policy-maps
• Including 3 levels of policers and/or queueing
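A minimal aggregate Port-channel QoS sketch (names and rates illustrative); per the restriction above, the platform command must be entered before the Port-channel interface is created:

```
platform qos port-channel-aggregate 1
!
policy-map AGG-POLICY
 class class-default
  shape average 1800000000
!
interface Port-channel1
 service-policy output AGG-POLICY
```

The policy shapes the aggregate of all member links to 1.8 Gb/s regardless of which physical interface each flow hashes to.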
Aggregate Etherchannel QoS
with LACP and flow based load balancing
• Up to 8 aggregate Port-channel interfaces supported
• If flow based load balancing overloads a given physical interface, backpressure will be
exerted to the aggregate hierarchy
• Causes queue build-up for the entire hierarchy across all member-links in the aggregate Port-channel
• Important that there be a variety of flows running over the interface so hashing algorithm can
distribute traffic amongst all interfaces

• Additional prohibited feature combinations:


• Service instances
• Broadband sessions
• VPLS with QoS
• Xconnect with QoS
• DMVPN tunnels with QoS
• Possible to match IP addresses in ACLs for Port-channel policy-map for workaround in certain situations with
known stable IP addressing
Hierarchical QoS with GRE Tunnels
policy-map PARENT
 class class-default
  shape average 20000000
  service-policy CHILD
!
policy-map CHILD
 class voice
  priority level 1
 class video
  priority level 2
 class best_effort
  bandwidth remaining ratio 9
 class class-default
  bandwidth remaining ratio 1
!
interface tunnel 0
 service-policy output PARENT
!
interface tunnel 1
 service-policy output PARENT

(Diagram: voice, video, best effort, and scavenger service levels in VRF RED ride GRE Tunnel 0, and the same service levels in VRF GREEN ride GRE Tunnel 1; both tunnels exit Gig 0/1.1001, each shaped to 20 Mbps.)
Egress GRE tunnel QoS Details
Interface and Tunnel Policy Combination Use Cases
• Tunnel policy-map with queuing actions
  • Physical interface QoS policy with queuing actions: only a class-default-only policy-map on the physical
    interface is supported; tunnel packets bypass the interface policy-map
  • Physical interface QoS policy without queuing actions: packets are rate limited by the tunnel policy, then
    by the interface policy
• Tunnel policy-map without queuing actions
  • Physical interface QoS policy with queuing actions: tunnel packets go through the tunnel policy-map fully
    and then through the interface policy-map
  • Physical interface QoS policy without queuing actions: tunnel packets go through the tunnel policy-map
    fully and then through the interface policy-map (interface default queue)

• Maximum of two level policy-map hierarchies allowed on GRE tunnels.


• Encryption is executed prior to egress queuing.
• In XE2.5 software and later, traffic is classified and prioritized by the tunnel QoS policy-map before
being enqueued for cryptography. Packets marked as priority levels 1 and 2 will move through a high
priority queue for the crypto hardware.
shape average percent details
• New CLI to define the QoS reference bandwidth
  • bandwidth qos-reference <rate>

• This command should be configured on a physical interface.

• Logical interfaces with percent based configurations (such as shape average percent
X) will derive their bandwidths from this value instead of the physical interface speed.
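A sketch (interfaces and values illustrative; the rate is assumed to be in kb/s, like the bandwidth command, and Tunnel0 is assumed to egress the configured physical interface): the tunnel's percent-based shaper is derived from the qos-reference value rather than the 1 Gb/s link speed:

```
interface GigabitEthernet0/0/1
 bandwidth qos-reference 50000
!
policy-map TUNNEL-SHAPE
 class class-default
  shape average percent 50      ! 50% of the 50 Mb/s reference = 25 Mb/s
!
interface Tunnel0
 service-policy output TUNNEL-SHAPE
```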
QoS-Group Based LLQ for IPSec
• What is Crypto LLQ?
• throughput of the crypto co-processor is less than QFP forwarding throughput
• high priority queue from QFP to crypto co-processor is used to protect high priority traffic in
oversubscription scenario
• all encryption traffic is sent through a single pair of high / low priority queues to the crypto co-processor

• In IOS XE 2.5S and later, IPSec uses the LLQ configuration on the GRE/sVTI tunnel if one is
present; otherwise it uses the LLQ configuration on the matching egress interface
QoS-Group Based LLQ for IPSec Example
platform ipsec llq qos-group 5
platform ipsec llq qos-group 6
platform ipsec llq qos-group 8
!
! Same class-maps used to classify high
! priority traffic in the egress QoS policy
!
class-map match-all c1
 match precedence 5 6 7
class-map match-all c2
 match precedence 0 1 2 3
class-map match-all default
!
policy-map input-policy-1
 class c1
  set qos-group 5
 class c2
  set qos-group 8
!
policy-map input-policy-2
 class default
  set qos-group 6
!
interface TenGigabitEthernet0/2/0
 service-policy input input-policy-1
!
interface TenGigabitEthernet0/3/0
 service-policy input input-policy-2
!
interface TenGigabitEthernet0/1/0
 service-policy output egress-policy-1
For Your
Reference

QoS Aware Accounting
• QoS Aware Accounting reports QoS policy statistics of successfully transmitted packets to the RADIUS server
in AAA accounting packets for PPP sessions

• It can be used as an alternative to ISG Traffic Class accounting – Traffic Class classification is currently
based on IP ACLs, so it cannot classify IPv6 traffic or classify packets on a LAC; QoS, on the other hand, is
supported for IPv6 traffic and on a LAC, and therefore provides unique use cases in these scenarios

• Accounting statistics can be sent on a per QoS class basis or for a group of QoS classes. This introduces
the notion of class grouping on a per session basis. A group is an entity for which a single accounting record
is provided, so that the customer can group a number of QoS classes (such as voice and voice-control)
belonging to the same service (such as voice) into the same accounting records.

• Accounting packet is standard RADIUS accounting message containing:


1. Acct-Session-Id : This should be populated as normal.
2. Parent Session Id: The parent session id will be included in VSAs
3. Policy name, plus the class name (when per-class accounting is enabled) or the group name (when grouping is enabled)
4. Ingress and Egress Packets/Bytes/Gigawords, packets and bytes values are the successfully transmitted packets excluding
dropped packets.
5. Username: username in QoS accounting records

• Ability to send accounting START or STOP record for a class/group when the feature is enabled or disabled
in a QoS class/group.

• Supports interim accounting


QoS Aware Accounting vs. ISG Accounting

Traffic being counted
 • QoS Aware Accounting: traffic matching a class or a group of classes of the QoS policy applied on the session
 • ISG Traffic Class Accounting: traffic matching TC ACLs since service activation
 • ISG Service Accounting: all traffic on the session since service activation
Session type
 • QoS Aware Accounting: PPP only · ISG Traffic Class Accounting: PPP/IP · ISG Service Accounting: PPP/IP
IPv6 traffic
 • QoS Aware Accounting: supported · ISG Traffic Class Accounting: not supported · ISG Service Accounting: not supported
ASR1K role
 • QoS Aware Accounting: PTA / LAC / LNS · ISG Traffic Class Accounting: PTA / LNS · ISG Service Accounting: PTA / LNS
Scalability on RP2+ESP20
 • QoS Aware Accounting: 32K sessions with 4 accounting groups per session · ISG Traffic Class Accounting: 96K traffic classes · ISG Service Accounting: 32K sessions
QoS Aware Accounting: Configuration
aaa accounting network ACCT-LIST1
 action-type start-stop periodic interval 1
 group rad-svr-grp1
!
accounting group ACCT-GRP1 list ACCT-LIST1
accounting group ACCT-GRP2 list ACCT-LIST2
accounting group ACCT-GRP3 list ACCT-LIST3
!
policy-map pm_in
 class voip
  police rate 100000
  aaa-accounting group ACCT-GRP1
 class video
  police rate 80000
  aaa-accounting list ACCT-LIST2
 class gaming
  police 50000
 class class-default
  police 30000
!
policy-map pm_out_child
 class voip
  priority level 1
  police cir 100000
  aaa-accounting group ACCT-GRP1
 class video
  priority level 2
  police cir 80000
  aaa-accounting list ACCT-LIST3
 class gaming
  bandwidth remaining ratio 4
 class class-default
  bandwidth remaining ratio 1
!
policy-map pm_out_parent
 class class-default
  bandwidth remaining ratio 10
  shape average 300000
  aaa-accounting list ACCT-LIST1
  service-policy pm_out_child
!
interface Virtual-Template1
 ip unnumbered Loopback1
 no logging event link-status
 no snmp trap link-status
 keepalive 60
 ppp mtu adaptive
 ppp authentication pap
 ppp timeout authentication 90
 service-policy input pm_in
 service-policy output pm_out_parent
High Availability
High-Availability on the ASR 1000
• Redundant ESP / RP on ASR 1006 & ASR 1013
• Software Redundancy on ASR 1001, 1002 & 1004
• Max 50 ms loss on ESP fail-over
• Zero packet loss on RP fail-over
• Intra-chassis Stateful Switchover (SSO)
  • Stateful features: PPPoX, AAA, DHCP, IPSec, NAT, Firewall
• IOS XE also provides full support for Network Resiliency
  • NSF/GR for BGP, OSPFv2/v3, IS-IS, EIGRP, LDP
  • IP Event Dampening; BFD (BGP, IS-IS, OSPF)
  • first hop redundancy protocols: GLBP, HSRP, VRRP
• Support for ISSU super and sub-package upgrades
• Stateful inter-chassis redundancy available for NAT, SBC, Firewall

(Diagram: a chassis with redundant RPs (each with CPU), redundant ESPs (each with FECP, crypto assist, and QFP PPE/BQS blocks), and multiple SIPs (each with IOCP and SPA aggregation) hosting SPAs.)
ASR1000 data plane redundancy

(Diagram: both RPs and both ESPs connect through interconnects across the midplane to every SIP. The active ESP's ESI links to the SIPs actively forward traffic, while the standby ESP's links are backup links, pre-negotiated and ready to forward.)
ASR1000 control plane architecture

(Diagram: RPs, ESPs, and SIPs exchange control traffic over an Ethernet out-of-band channel (EoBC) across the midplane; an I2C bus carries low-level management signaling, and the SPA bus connects the SPAs to their SIP.)
ASR 1006 High Availability Infrastructure

(Diagram: the active and standby RPs each run IOS (with RIB, MRIB, IDB, and RT state synchronized over an IPC transport), a Platform Adaptation Layer (PAL), a chassis manager, and a forwarding manager (holding the FIB and MFIB) on top of a Linux kernel. Each ESP runs a QFP client and QFP driver plus its own chassis manager and forwarding manager on a Linux kernel.)
Which Events Trigger Failovers?
• The following events may trigger failovers on the RP/ESP:
• Hardware component failures
• Software component failures
• Online Insertion and Removal (OIR)
• CLI-initiated failover (e.g. reload command, force-switchover command)
Failover Triggers: Hardware Failures
• What hardware failures?
  • CPUs: RP-CPU, QFP, FECP, IOCP, interconnect CPU, I2C Mux, ESP Crypto Chip, heat sinks, …
  • Memory: NVRAM, TCAM, Bootflash, RP SDRAM, FECP SDRAM, resource DRAM, Packet buffer DRAM,
    particle length DRAM, IOCP SDRAM, …
  • Interconnects: ESI links, I2C links, EOBC links, SPA-SPI bus, local RP bus, local FP bus, …
• Detected using
  • Software crashes: software running on the failed hardware will crash
  • Watchdog timers: low level watchdogs that can time out
  • Notification from other field-replaceable units (FRUs)
  • CPLD interrupts / register bits controlled by CMRP
  • These initiate fail-over events
• Hardware failures are typically fatal, such that modules need to be replaced
• JTAG: the RP can program the CPLD on other modules to test interconnects and other boards
  (primarily for RMA-ed hardware)

(Diagram: RP with CPU, memories, bootflash, and I2C mux; ESP with FECP, QFP, TCAM, and crypto; SIP with IOCP and interconnect CPU; all linked by EOBC, SPA-SPI, and I2C.)
Failover Triggers: Software Failures
• What software failures?
  • Kernel: Linux on RP / ESP / SIP
  • Middleware: chassis manager, forwarding manager
  • IOS, SPA drivers
• Detected using the process manager (PMAN)
  • PMAN: every software process has a corresponding PMAN process to check its health
  • if a software process crashes, PMAN will detect it via a signal from the kernel
  • IPC: between the 2 IOS instances (and only for IOS)
  • Hardware watchdog timers supervise Linux and the software stack
  • The kernel will take the module down in a controlled manner
• IOS, CMESP, CMSIP, FMESP, and QFP Driver/Client are not re-startable
  • PMAN-initiated failover using CPLD register bits for ESP or RP (failover within 3 ms)
• Some processes are re-startable (CMRP, FMRP, SSH, telnet, …)
  • The kernel will try to re-start the process in this case
  • If unsuccessful, the process will be held down and a console message logged
RPact Failover Procedure (1 of 2)
(Sequence across the SIPs, active/standby ESPs, and active/standby RPs:)
1. RPact fails; the standby RP and the ESPs detect the failure.
2. The ESPs close their ESI links toward the failed RP.
3. ESI links are established with the new RPact and the new RPact information is distributed.
4. State and status information is exchanged; if not received in time, a restart message is sent.
5. The hardware component file system is updated and ESI link status is confirmed.
RPact Failover Procedure (2 of 2)
1. The new RPact takes over control using checkpointed state and distributes forwarding state information.
2. The ESPs and SIPs check the new state and discard the old state; service is restored.
3. The failed RP restarts: hardware initialization, EOBC initialization, kernel start, then chassis manager,
   forwarding manager, and IOS start.
4. The restarting RP syncs information with the other RP, runs mastership arbitration, and is detected as RPstby.
5. ESI link status and forwarding state information are synchronized to the new standby.
ESPact Failover Procedure (1 of 2)
1. ESPact fails; the RPact detects the failure and collects state information of the failed ESP.
2. The SIPs disable the ESI links to the failed ESP and reconfigure the ESI links with the RPs.
3. The ESI links to the new ESPact are enabled and link status is confirmed.
4. Service is recovered with momentary packet loss; the RP resends state information to the new ESPact.
ESPact Failover Procedure (2 of 2)
1. The failed ESP restarts: hardware initialization and EOBC initialization, then it waits for and detects the RPact.
2. The ESI link is activated and software packages are downloaded from the RP.
3. The kernel starts, followed by the forwarding manager and chassis manager.
4. The ESP learns other-ESP information (e.g. mastership), registers with CMRP, and becomes the standby.
IOS High Availability – Steady State
• In steady state:
• Routing state is synchronized between IOSact and IOSstby
• Routing processes have been created on IOSstby
• If non-stop routing, checkpointed current neighbor state and routing DB
• If graceful restart, protocols have the config from the config sync, but no neighbor DB and no routing DB (needs to
be re-acquired from neighbors)
• RIB is empty on IOSstby
• CEF is stateful

• Interfaces are logically in down state on IOSstby


• QoS synchronized via config-sync
• Statistic counters are 0 in general
• Exception e.g. for Broadband, where statistics counters are synchronized
IOS High Availability – RP Failover Events
• RF on the newly active RP will see the former active RP as not available
• Notification to IOSstby from CMRP triggers a state change for IOS to become IOSact

• RF is the master of state changes, informs its clients about the state change
• Clients are e.g. network (i.e. interfaces), OSPF, IS-IS, EIGRP, multicast

• Interfaces are brought back up


• RF clients are notified about state change in sequence
• Clients (e.g. routing protocols) then work on their state transition
• Clients acknowledge the state change back to RF
• RF then notifies the next client
IOS High Availability – RP Failover Events
• For routing processes, after state change notification:
• Start sending Hellos
• If Graceful Restart, then re-acquire state from neighbors
• If NSR, then verify the routing database
• SPF algorithm computed to update the RIB and enter into a converged state
• FIB updated and old / stale routes are flushed in IOS FIB
• CEF Is updated and updates sent to forwarding plane

• Time
  • Interface transitions from DOWN to UP after switchover can take on the order of 10 seconds
  • OSPF cannot send hellos until it is told that the corresponding interface state is UP
  • Total time may be on the order of minutes at large scale
Stateful Application Inter-Chassis Redundancy
• Current intra-chassis HA typically protects against
  • Control plane (RP) failures
  • Forwarding plane (ESP) failures
  • Interface failures can be mitigated using link bundling (e.g. GEC)
• Any other failures may result in extended recovery times
• Inter-chassis redundancy provides additional resilience against
  • Interface failures
  • System failures
  • Site failures (allowing for geographic redundancy)

(Diagram: a single chassis survives an RP crash or an ESP crash through redundant pairs, but a SIP going down or a whole-system failure takes the box out of service.)
System level redundancy

VSS, nV
• Failover granularity at the system level
• Control-plane active-standby
  • Active RP considers ‘remote’ linecards under its control
• Forwarding-plane active-active
• No application granularity for failover
• Need to ensure all features are SSO capable

RG Infra
• Failover granularity at the application level (NAT, Firewall, SBC etc)
• Control plane active-active
  • Each RP only considers its own linecards, but synchronizes application state
• Forwarding-plane active-active
• Can have one set of firewall services resilient, and another set of firewall services non-resilient
Introduction to RG-Infra
• RG Infra is the IOS Redundancy Group Infrastructure to enable the synchronization of
application state data between different physical systems
• Does the job of RF/CF between chassis
• Infrastructure provides the functions to
• Pair two instances of RG configured on different chassis for application redundancy purposes
• Determine active/standby state of each RG instance
• Exchange application state data (e.g. for NAT/Firewall)
• Detect failures in the local system
• Initiate & manage failover (based on RG priorities, allows for pre-emption)
• Assumptions
• Application state has to be supported by RG infra (ASR 1000 currently supports NAT, Firewall, SBC)
• Connectivity redundancy solved at the architectural level (need to ‘externalize’ the redundant ESI
links of the intra-chassis redundancy solution)
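The priority-based active/standby determination with pre-emption described above can be sketched as follows. This is an illustrative model only, not the actual RG Infra implementation; the class and function names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class RgInstance:
    name: str
    priority: int        # higher configured priority wins the active role
    failed: bool = False # set when a local failure is detected

def elect_active(local: RgInstance, peer: RgInstance) -> str:
    """Illustrative election between two paired RG instances.

    Returns the name of the instance that should be active. With
    pre-emption enabled, this comparison is simply re-run when a failed
    higher-priority instance recovers, so it takes the active role back.
    """
    if local.failed and peer.failed:
        raise RuntimeError("both RG instances failed")
    if local.failed:
        return peer.name
    if peer.failed:
        return local.name
    # Healthy pair: higher priority becomes active.
    return local.name if local.priority >= peer.priority else peer.name
```

A guaranteed control channel between the two chassis (the RG hellos mentioned later) is what keeps both sides agreeing on the same election result and avoids a split-brain condition.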
Redundancy Groups Functions
• Registers applications as clients
• Registers (sub)interfaces / {SA/DA}-tuplets in case of firewall
• Determines if traffic needs to be processed or not
• Communicates control information between RGs using a redundancy group protocol
• Advertisement of RGs and RG state
• Determination of peer IP address
• Determination of presence of active RG
• Synchronizes application state data using a transport protocol
• Manages failovers!
(Diagram: two chassis exchanging RG state and RG control traffic between active and standby RG Infra instances)
Physical Topology Requirements
• 2 ASR 1000 chassis with single RP / single ESP
• Co-existence of inter- and intra-chassis redundancy is NOT supported
• Maximum of 2 cluster members
• Physical connectivity to both members from adjacent routers / switches
• Need to direct traffic to either member system in case of failover (HSRP, VRRP)
• L1/L2 connectivity between the member systems for RG control traffic
• RG instances exchange control traffic (RG hellos, state, fail-over signaling etc.)
• Guaranteed communication required to avoid split-brain condition
• L3 connectivity on roadmap
• L1/L2 connectivity between member systems for application state data
• Synchronization of NAT/Firewall/SBC state tables
• FIBs are NOT synchronized by RG Infra
ISSU
IOS XE Software packaging - terminology
• IOS XE software for ASR 1000 is released every 4 months, 3 times a year
• Software that is posted in CISCO.COM is called ‘Consolidated Package’
• Consolidated Package contains several ‘sub-packages’ which are extracted from the
Consolidated Package
• The sub-packages can be used to individually upgrade a specific software component of
the ASR 1000
ASR1000 Software Packaging Overview
• ASR1000 software images are dependent on the Route Processor
• 3 sets of images - IPBase, AIS and AES - are available for each of the different RPs
• RP1
• RP2
• Universal images are provided for the fixed configuration chassis; features are license based
• ASR1001 (integrated RP)
• ASR1001-X (integrated RP)
• ASR1002-X (integrated RP)
• The slides in this section cover images/packaging related to the modular RP1 and RP2 systems
• A following section covers the unique packaging for ASR1001, ASR1001-X, and ASR1002-X
Software Sub-packages
• RPBase: RP OS
• Upgrading of the OS will require a reload of the RP; expect minimal changes
• RPIOS: IOS
• Facilitates the Software Redundancy feature
• RPAccess (K9 & non-K9):
• Software required for router access; 2 versions will be available:
one that contains open SSH & SSL and one without
• To facilitate software packaging for export-restricted countries
• RPControl:
• Control plane processes that interface between IOS and the rest of the platform
• IOS XE Middleware
• ESPBase:
• ESP OS + control processes + QFP client/driver/ucode
• Any software upgrade of the ESP requires a reload of the ESP
• SIPBase:
• SIP OS + control processes
• OS upgrade requires a reload of the SIP
• SIPSPA:
• SPA drivers and FPD (SPA FPGA image)
• Facilitates SPA driver upgrade of specific SPA slots
(Diagram: RP running active/standby IOS, Platform Adaptation Layer (PAL), chassis and forwarding managers on a Linux kernel; ESP running QFP client/driver and QFP code; SIP running SPA drivers - connected by control messaging)
Consolidated images sub-packages
Each RP1 consolidated package contains the sub-packages RPBase, RPControl, RPIOS, RPAccess (K9 for crypto images, non-K9 otherwise), SIPBase, SIPSPA and ESPBase. The RPIOS feature set matches the package:

Consolidated Package                                                Part Number      RPIOS feature set
Cisco ASR1000 Series RP1 ADVANCED ENTERPRISE SERVICES               SASR1R1-AESK9    Advanced Enterprise Services
Cisco ASR1000 Series RP1 ADVANCED ENTERPRISE SERVICES W/O CRYPTO    SASR1R1-AES      Advanced Enterprise Services w/o crypto
Cisco ASR1000 Series RP1 ADVANCED IP SERVICES                       SASR1R1-AISK9    Advanced IP Services
Cisco ASR1000 Series RP1 ADVANCED IP SERVICES W/O CRYPTO            SASR1R1-AIS      Advanced IP Services w/o crypto
Cisco ASR1000 Series RP1 IP BASE                                    SASR1R1-IPBK9    IP Base
Cisco ASR1000 Series RP1 IP BASE W/O CRYPTO                         SASR1R1-IPB      IP Base w/o crypto
Cisco IOS XE Images content
IP Base: ACL, BGP, EIGRP, ISIS, OSPF, RIP, BFD, EEM, ERSPAN, ISSU, HSRP, VRRP, GLBP, Multicast, NAT, NBAR, Netflow, PPPoE client, SNMP, TACACS, IPSLA (all interfaces), IPv6 parity to IPv4 features; K9 images add SSH and SSL
Advanced IP Services: all IP Base features plus Broadband (BNG / ISG), CUBE (SP), Firewall, MPLS, PfR, LI, IPSec
Advanced Enterprise Services: all IP Services features plus DECnet V, IPX, CUBE (Ent), L2 & L3 VPN, OTV, LISP, EVC/BDI, E-OAM

Some of the features require Feature Licenses in addition to the software image. Data current to IOS XE 3.7; always check Cisco Feature Navigator for the most up to date information regarding features included in releases and feature sets.
Universal image licensing
• ASR1001 has its own IOS XE software release
• For customers to get any of the following 6 feature sets
• IPB, IPBK9, AIS, AISK9, AES, AESK9
• Customer needs to order one of the 3 universal images listed in table A below and the
respective IPB, AIS or AES feature license, see table B
• The feature set licenses are enforced via software activation prior to XE3.6; starting with XE3.6 they are changed to monitoring
• The performance upgrade licenses (2.5 Gbps - 5 Gbps) are enforced; starting with XE3.7 they are changed to monitoring mode

Table A:
IOS XE   PID                Description                                               List Price
3.2.0S   SASR1001U-32S      Cisco ASR 1001 IOS XE UNIVERSAL                           $0
3.2.0S   SASR1001NPEK9-32S  Cisco ASR 1001 IOS XE - NO PAYLOAD ENCRYPTION UNIVERSAL   $0
3.2.0S   SASR1001UK9-32S    Cisco ASR 1001 IOS XE - ENCRYPTION UNIVERSAL              $0

Table B:
PID           Description                                   List Price
SLASR1-IPB    Cisco ASR 1000 IP BASE License                $5,000
SLASR1-AIS    Cisco ASR 1000 Advanced IP Services License   $10,000
SLASR1-AES    Cisco ASR 1000 Advanced Services License      $10,000
ASR 1001 Licenses - “Feature Sets”
For the equivalent feature set on ASR 1000 Series modular platforms (1002-F, 1002, 1004, 1006, 1013):

Technology Package Combination                       Universal Software Image + License Part Number
IP Base without crypto (IPB)                         SASR1001U + SLASR1-IPB
IP Base (IPBK9)                                      SASR1001NPEK9 + SLASR1-IPB
Advanced IP Services without crypto (AIS)            SASR1001U + SLASR1-AIS
Advanced IP Services (AISK9)                         SASR1001UK9 + SLASR1-AIS
Advanced Enterprise Services without crypto (AES)    SASR1001U + SLASR1-AES
Advanced Enterprise Services (AESK9)                 SASR1001UK9 + SLASR1-AES
ASR 1000 ISSU Innovation
• Ability to perform software upgrade of the IOS image on the single-engine systems
• Support for in-service software downgrade
• “In Service” component upgrades (SIP-Base, SIP-SPA, ESP-Base) without reboot to the system
• Hitless upgrade of some of the software packages in a single engine system
• Hitless upgrade of some software packages in the active RP of a redundant engine system
• Pre-provisioning capability
• RP portability: installing & configuring hardware that is physically not present in the chassis
• Allows configuration of an RP in one system (e.g. a 1004) and then moving it to another system (e.g. a fully populated 1006)
From \ To   3.1.0          3.1.1          3.1.2       3.2.1          3.2.2
3.1.0       N/A            SSO Tested     SSO         SSO via 3.1.2  SSO via 3.1.2
3.1.1       SSO Tested     N/A            SSO Tested  SSO via 3.1.2  SSO via 3.1.2
3.1.2       SSO            SSO Tested     N/A         SSO Tested     SSO Tested
3.2.1       SSO via 3.1.2  SSO via 3.1.2  SSO Tested  N/A            SSO Tested
3.2.2       SSO via 3.1.2  SSO via 3.1.2  SSO Tested  SSO Tested     N/A
ISSU upgrade methods
Procedure: Consolidated package mode
• Intended use: easy upgrade of a redundant 6RU
• Prerequisites: homogeneous build / standby HOT; 6RU with redundant hardware & new consolidated package on both active/standby RPs
• High-level procedure:
1. ISSU loadversion standby RP
2. ISSU runversion
3. ISSU acceptversion (optional)
4. ISSU commitversion
5. hw-module slot RP-slot reload
• Impact: 100 sec traffic loss for all cases

Procedure: Sub-package mode 1
• Intended use: sliding upgrade, minimum disruption to redundant 6RU chassis
• Prerequisites: homogeneous build / standby HOT; RPs booted in sub-package mode & new packages expanded
• High-level procedure and impact:
1. Upgrade all standby RP sub-packages (0 traffic loss)
2. Rolling upgrade of SIP slots (traffic loss per SIP: 100 sec without MDR, 0 sec with MDR)
3. Rolling upgrade of ESPs (50 ms traffic loss)
4. Upgrade active RP & switchover (0 traffic loss)

Procedure: Sub-package mode 2
• Intended use: SPA-FIRST upgrade of a redundant 6RU chassis
• Prerequisites: homogeneous build / standby HOT; RPs booted in sub-package mode & new packages expanded
• High-level procedure and impact:
1. Upgrade selective SPA (30 sec traffic loss)
2. Rolling upgrade of ESPs (50 ms traffic loss)
3. Rolling upgrade of SIP slots (traffic loss per SIP: 100 sec without MDR, 0 sec with MDR)
4. Upgrade all standby RP sub-packages (0 traffic loss)
5. Upgrade active RP & switchover (0 traffic loss)

Procedure: Sub-package mode 3
• Intended use: PSIRT upgrade of RPIOS only, on any chassis type
• Prerequisites: homogeneous build / standby HOT; booted in sub-package mode
• High-level procedure: 1. Upgrade standby RPIOS sub-package & switchover (end here)
• Impact: 0 packet loss
ISSU Compatibility Summary
• ISSU compatibility is determined by the CONTENT of what went into a release, not the
type of release (i.e. release, rebuild, etc.)
• ISSU supported: across IOS XE rebuilds (example: 2.1.0 to 2.1.1)
• ISSU goal: ISSU to work across IOS XE feature releases (example: 2.1.x to 2.2.x)
• Compatibility is both forward and backward, if applicable (assuming config compatibility)
• Skipping of releases will be allowed, if the 2 releases are deemed ISSU compatible and
stated as such in the Compatibility Matrix
• Compatibility is only supported between like IOS XE images. Both images need to have
the same feature set of the RP-IOS sub-package. For example:
• From IPBase-K9 to IPBase-K9
• From AIS-non-K9 to AIS-non-K9
• Like-to-like transitions only; no changes in feature set or encryption (K9)
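The rules above (like-to-like only, plus direct vs. two-step "SSO via X" paths) can be modeled as a simple matrix lookup. The matrix entries below mirror the example tables in this deck; the function and variable names are invented for the sketch, and the real source of truth is the published compatibility matrix on cisco.com.

```python
# Illustrative ISSU compatibility lookup based on this deck's example matrix.
MATRIX = {
    ("2.1.0", "2.1.1"): "SSO Tested",
    ("2.1.0", "2.1.2"): "SSO",
    ("2.1.0", "2.2.1"): "SSO via 2.1.2",
    ("2.1.2", "2.2.1"): "SSO Tested",
}

def issu_path(src: str, dst: str, src_set: str, dst_set: str):
    """Return the list of upgrade hops, or None if ISSU is not supported.

    Like-to-like only: the RP-IOS feature set (and K9/non-K9 flavor)
    must match. An 'SSO via X' entry means a 2-step upgrade through
    the intermediate release X.
    """
    if src_set != dst_set:
        return None  # no feature-set or crypto changes during ISSU
    entry = MATRIX.get((src, dst))
    if entry is None:
        return None
    if entry.startswith("SSO via "):
        mid = entry.split()[-1]
        return [(src, mid), (mid, dst)]
    return [(src, dst)]
```

For example, 2.1.0 to 2.2.1 resolves to two hops through 2.1.2, while a feature-set change (e.g. AISK9 to IPBK9) is rejected outright.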
ISSU Support Criteria
• When a new IOS XE feature release or rebuild is released, a compatibility matrix will be
published identifying all the previous releases and rebuilds that release has been tested
for ISSU compatibility.
• The matrix will be available as part of the cisco.com documentation - ASR1000 Configuration Guide
• The compatibility matrix will refer to IOS XE releases and rebuilds using ‘Consolidated Packages’ only
• Heterogeneous packages are not used for ISSU compatibility testing
ISSU upgrade path - Example
Legend: SSO Tested = explicitly tested; SSO = compatible, not explicitly tested; SSO via 2.1.2 = requires a 2-step upgrade

Release  2.1.0          2.1.1          2.1.2       2.2.1          2.2.2          2.2.3          2.3.0          2.3.1
2.1.0    N/A            SSO Tested     SSO         SSO via 2.1.2  SSO via 2.1.2  SSO via 2.1.2  SSO via 2.1.2  SSO via 2.1.2
2.1.1    SSO Tested     N/A            SSO Tested  SSO via 2.1.2  SSO via 2.1.2  SSO via 2.1.2  SSO via 2.1.2  SSO via 2.1.2
2.1.2    SSO            SSO Tested     N/A         SSO Tested     SSO Tested     SSO Tested     SSO Tested     SSO Tested
2.2.1    SSO via 2.1.2  SSO via 2.1.2  SSO Tested  N/A            SSO Tested     SSO            SSO            SSO
2.2.2    SSO via 2.1.2  SSO via 2.1.2  SSO Tested  SSO Tested     N/A            SSO Tested     SSO            SSO
2.2.3    SSO via 2.1.2  SSO via 2.1.2  SSO Tested  SSO            SSO Tested     N/A            SSO Tested     SSO Tested
2.3.0    SSO via 2.1.2  SSO via 2.1.2  SSO Tested  SSO            SSO            SSO Tested     N/A            SSO Tested
2.3.1    SSO via 2.1.2  SSO via 2.1.2  SSO Tested  SSO            SSO            SSO Tested     SSO Tested     N/A

(Timeline: IOS XE releases 2.1.0 through 2.3.1, May 2008 to May 2009)
ISSU for new hardware boards
• ISSU for new hardware revisions of boards will be supported when the change is
incremental and uses the same image
• e.g. ESP10 -> ESP20: same chipsets, same images
• e.g. ESP40 -> ESP100 is supported across QFP generations
• Main differences between ESP10 and ESP20 are the clock speeds of the chips and the amount of
memory populated on the boards
• Generally, ISSU support is planned for this type of hardware difference
• When there is new Architecture/Chipset then a separate image for new hardware is
required and ISSU will be supported for each architecture separately, but not between
architectures. Example: RP1 to RP2
• RP2 uses x86 chipset instead of PPC
• Addressing of all software running on RP2 is 64-bit instead of 32-bit of RP1
Advantages of using Software Redundancy
• Software failure
• Software redundancy helps when there is a RP-IOS failure/crash; the active process will switchover to the standby,
while forwarding continues with zero packet loss
• Other software crashes (example: SIP or ESP) cannot benefit from Software redundancy
• Software upgrade
• The software upgrade procedure for ASR1002/ASR1004 allows customers to upgrade the RP-IOS package only
and defer all the other steps to a later time, e.g. a maintenance window
• This allows customers to take advantage of any bug fixes of RP-IOS (or in the case of a PSIRT) available in the next
rebuild while keeping the router in service
• The heterogeneous configuration of RP-IOS in one version versus the rest of the sub-packages in a different version
is a supported configuration. It is however required that the configuration become homogeneous (i.e. all sub-packages
in the same version) before upgrading to the next software version
Procedure: Sub-package mode 1
• Intended use: sliding upgrade, minimum disruption to software-redundant 2/4RU chassis
• Prerequisites: homogeneous build / standby HOT; RP booted in sub-package mode & new packages expanded
• High-level procedure and impact:
1. Upgrade standby ‘bay’ & switchover (0 traffic loss)
2. Rolling upgrade of SIPs, if possible (100 sec traffic loss per SIP)
3. Upgrade ESP - you take a hit (100 sec traffic loss)
4. Upgrade remaining RP sub-packages (X sec, depends on configuration)

Procedure: Sub-package mode 2
• Intended use: PSIRT upgrade of RPIOS only, on any chassis type
• Prerequisites: homogeneous build / standby HOT; booted in sub-package mode
• High-level procedure: 1. Upgrade standby bay RPIOS sub-package & switchover (end here)
• Impact: 0 packet loss
One shot ISSU procedure
• The existing ISSU procedure is a multiple-step process. This enhancement greatly simplifies
the ISSU process: a single CLI executes the multiple steps
• CLI: request platform software package install node file <filename> sip-delay <1-172800>
• sip-delay allows a delay for each SIP upgrade in sub-package mode
• When this command is executed, it automatically adapts to the ‘consolidated mode’ or
‘sub-package mode’ running in the system
• In sub-package mode, this CLI executes the step-by-step procedure documented on CISCO.COM
• This table summarizes the support matrix of one shot ISSU in terms of ASR 1000 platform
and package mode running in the system:

Platform            Consolidated package one shot   Sub-packages one shot
ASR 1013            Support                         Support
ASR 1006            Support                         Support
ASR 1004            N/A                             Not Supported
ASR 1002, 1002-X    N/A                             Not Supported
ASR 1001, 1001-X    N/A                             Not Supported
Minimum Disruptive Restart (MDR)
• Non-MDR upgrade causes 100 s packet loss due to the re-boot of SIP/SPAs
• MDR reboot time is 25 s for a SIP and 10 s for SPAs
• SIP/SPA software upgrade can be done with minimal interruption of packet flow
• During the MDR period, some functions are disabled
• OIR (SPA or transceiver), APS, interface configuration changes, line alarms
• Phase 1 supports MDR for all GE SPAs
• Requirements / Caveats
• Hardware (RP, ESP) redundancy
• Supported for SIP40 (SIP10 does not support MDR)
• CPLD or FPGA upgrades require a full reload of the SPA
• ‘from’ and ‘to’ software versions must support MDR
• Statistics counters will be re-set after the software upgrade
(Diagram: RP with active/standby IOS, PAL, chassis and forwarding managers; ESP with QFP client/driver and QFP code; SIP with SPA drivers - connected by control messaging)
MDR Demonstration Results – Only 4 Packets Lost!
Performance
CPU Evolution
What happened in 2005? Multi-core, great for the general user.
What about Moore’s Law? How does IOS deal with this change?
What makes up performance
System aspects:
• Available purpose-built processors / chips
• BW of physical interfaces
• Platform internal architecture (i.e. MGF)
• Operating system architecture
Test aspects:
• Traffic profile (frame size & traffic type)
• Enabled features
• Test methodology (NDR)
(Diagram: performance as a function of configuration, traffic profile and test methodology)
Traffic Profile Overview
• Stateless
• Stateful
Impact of Packet size
• One route decision = one packet served
• Routing capacity = number of packets per second served for a given service
• Big packets
• Many bits switched for each route decision = high Mbps number
• Small packets
• Few bits switched for each route decision = low Mbps number
Mbps or PPS?
Example: 1941 with stateless firewall configuration and different frame sizes

                     Mbps                     PPS
Platform    64      IMIX    1518      64      IMIX    1518
1941        19.0    108.5   450.8     37,201  37,493  37,120

• Min. frame size (64 byte) yields 23 times fewer Mbit/s than max. frame size (1518)
• Across different frame sizes the pps is constant, even though Mbps varies
• Packets per second = the true routing capacity, and hard to skew
• Only applies until the interface or performance license limit is reached
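The relationship between the pps and Mbps columns above is plain arithmetic (throughput = pps x frame size x 8). A quick sketch using the 1941 firewall numbers from the table shows that a constant ~37k pps reproduces both Mbps figures:

```python
def mbps(pps: int, frame_bytes: int) -> float:
    """Convert a packets-per-second rate to Mbit/s for a given frame size."""
    return pps * frame_bytes * 8 / 1e6

# Constant pps, wildly different Mbps - exactly the table above.
print(round(mbps(37_201, 64), 1))    # 64-byte frames  -> ~19.0 Mbps
print(round(mbps(37_120, 1518), 1))  # 1518-byte frames -> ~450.8 Mbps
```

This is why pps is the number that is hard to skew: quoting Mbps without the frame size says almost nothing.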
Which number should I believe in?
WAN access speeds with services
• 25 Mbps or 2.8 Gbps - which one is true?
• Answer: Both. It depends on how it was tested.
(Chart: WAN access speed with services, 10 Mb to 350 Mb, mapping access technologies such as EFM/sub-rate FE, VDSL2+ and N x FE line rate to ISR platforms 800 through 3945/3925E)
How to report the result?
• Performance data is usually referred to as either ”uni-directional” or ”bi-directional”
• Uni-directional: a traffic flow going to or from a device, not in both directions
• Bi-directional: can mean one of two things depending on who’s using the term
1. The sum of flows in both directions, hence the term bi-directional:
200 Mbps bi-directional = 100 Mbps down + 100 Mbps up (Tester - UUT - Tester, UDP)
2. Bandwidth expected in both directions, hence the term bi-directional:
200 Mbps bi-directional = 200 Mbps down + 200 Mbps up (Tester - UUT - Tester)
#2 is thus twice as high as #1
Reporting results unambiguously
• ”Aggregate” = the sum of all traffic flows going to and from a device
• Why we report aggregate numbers:
• represents total performance capacity
• the router’s CPU doesn’t care which way a packet is going
• the traffic generator aggregates all traffic it detects in all flows
• whether it’s a ratio of 90% down and 10% up, or a perfect 50/50 split, doesn’t matter
Pay-As-You-Grow performance with ISRs & ASRs
Investment protection without oversubscription
• ASR1002-X: 5-36 Gbps
• ASR1001-X: 2.5-20 Gbps
• ISR 4451: 1-2 Gbps
• ISR 4431: 500-1000 Mbps
• ISR 4351: 200-400 Mbps
• ISR 4331: 100-300 Mbps
• ISR 4321: 50-100 Mbps
• 4-10X faster; add performance and services anytime; flexible consumption options
Performance license limit - ISR4k example
• Notice that many of the results are at the exact licensed max limit
• This means the router hit the shaper before bottoming out
• How much CPU is then left?
(Chart: CPU utilization at the licensed limit, ranging from roughly 20% to 89% depending on platform and license level)
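The "how much CPU is left" question invites a back-of-the-envelope extrapolation. This sketch assumes throughput scales linearly with data-plane CPU load, which is a simplification (real scaling is not perfectly linear), so treat the result as a rough sanity check rather than a guaranteed ceiling:

```python
def cpu_bound_estimate(throughput_mbps: float, cpu_pct: float) -> float:
    """First-order estimate of where the CPU (not the license shaper)
    would become the bottleneck, assuming linear scaling of throughput
    with CPU utilization."""
    return throughput_mbps * 100.0 / cpu_pct

# e.g. a platform shaped at its licensed 100 Mbps while only at 22% CPU
# still has several times that rate in raw CPU headroom.
estimate = cpu_bound_estimate(100, 22)   # roughly 455 Mbps CPU-bound
```

This is why a license upgrade alone can unlock substantially more throughput on the same hardware, as long as the measured CPU load at the current limit is low.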
ISR Portfolio Performance Overview
Aggregate throughput in Mbit/s:

Feature     c880/c819   c890   4321   4331   4351   4431   4451
CEF only    190         920    100    300    400    1000   2000
NAT         148         192    100    300    400    1000   2000
IPSEC       80          106    100    300    400    1000   2000

• The ISR4000 results are capped at the performance license limit, at CPU utilizations ranging from roughly 16% to 99%, while the c880/c819 and c890 results are CPU-bound at 99%
• c880/c819 and c890 are ideal as “CPE Lite”; the ISR4000 series is ideal as service-rich CPE
For your reference
How many Advanced Services can we pile on?

                       FW + NAT + QoS + IPSEC    AVC              AVC + NBAR + QoS + IPSEC
Platform    License    Mbps     CPU %            Mbps    CPU %    Mbps    CPU %
ISR 4321    50         42       31               45      32       42      54
ISR 4321    100        85       62               67      98       77      99
ISR 4331    100        86       53               97      54       91      86
ISR 4331    300        216      95               238     98       138     99
ISR 4351    200        175      87               200     91       124     99
ISR 4351    400        272      98               282     98       164     99
ISR 4431    500        322      99               279     99       174     99
ISR 4431    1000       545      99               482     99       302     99
ISR 4451    1000       545      99               491     99       305     99
ISR 4451    2000       959      99               850     99       540     99
For your reference
WAAS Performance
ISR-WAAS profiles (TCP connections): 200, 750, 1300, 2500
vWAAS via UCS-E profiles (TCP connections): 200, 750, 1300, 2500, 6000

ISR 4451
• ISR-WAAS: WAN (Mbps) 75 / 100 / 150; LAN (Mbps) 200 / 300 / 400
• vWAAS: WAN (Mbps) 20 / 50 / 80 / 150 / 200; LAN (Mbps) 100 / 250 / 300 / 400 / 400
ISR 4431
• ISR-WAAS: WAN (Mbps) 25 / 50; LAN (Mbps) 200 / 250
ISR 4351
• ISR-WAAS: WAN (Mbps) 25; LAN (Mbps) 100
• vWAAS: WAN (Mbps) 20 / 50 / 80 / 150 / 200; LAN (Mbps) 100 / 250 / 300 / 400 / 400
ISR 4331
• ISR-WAAS: WAN (Mbps) 25; LAN (Mbps) 100
• vWAAS: WAN (Mbps) 20 / 50 / 80 / 150 / 200; LAN (Mbps) 100 / 250 / 300 / 400 / 400
ISR 4321
• ISR-WAAS: WAN (Mbps) 15; LAN (Mbps) 50
CSR 1000V Performance
Single feature tests
(Chart: throughput in Mbit/s for CEF, ACL, QoS, NAT, L4 FW and IPSec with 1, 2 and 4 vCPUs; roughly 500-1000 Mbit/s with 1 vCPU up to ~3000 Mbit/s with 4 vCPUs, at CPU utilizations between 46% and 100%)
Challenge: single features don’t load-balance well across multiple CPUs
Testing parameters:
• IMIX traffic at 0.01% drop rate
• IOS-XE image 3.14
• Platform: UCSC-C240-M3S with Intel Xeon E5-2643 v2 running ESXi 5.5
• VM-FEX results are on average 17% higher
ASR1000 Performance
(Chart: throughput in Gbit/s for CEF, NAT, FW and IPSEC on ASR1001X, ASR1002X, ESP40, ESP100 and ESP200; up to ~250 Gbit/s for CEF on ESP200, with IPSEC plotted on a separate 0-70 Gbit/s axis)
ASR1ks perform to their advertised limits in all single feature tests except IPSEC
ASR1000 Performance
(Chart: bandwidth in Gb/s and millions of pps versus packet size from 76 to 1518 bytes, comparing baseline IPv4 with QoS, Netflow, ACL, uRPF and PR2650 feature tests)
• Individual features have small impact with small packet sizes (76B and 132B)
• Individual features have very little impact at large packet sizes (above 260B)
• QFP has excellent behavior even with combined features for larger packet sizes!
How to verify current CPU Load
IOS Router with single CPU
IOS-ROUTER#sh proc cpu his

IOS-ROUTER 04:03:37 AM Monday Dec 22 2014 UTC

11111
88888999999999999999000009999999999888883333399999
222222222277777999999999999999000009999999999000005555599999
100 ****************************** *****
90 *********************************** *****
80 **************************************** *****
70 **************************************** *****
60 **************************************** *****
50 **************************************** *****
40 **************************************************
30 **************************************************
20 **************************************************
10 **************************************************
0....5....1....1....2....2....3....3....4....4....5....5....6
0 5 0 5 0 5 0 5 0 5 0
CPU% per second (last 60 seconds)

Simple command showing overall CPU utilization
How to verify current CPU Load
IOS-XE router with combined CP/SP and DP CPUs (4300 Series)

The classic command shows average utilization:
ISR4351#sh proc cpu his
(60-second CPU history, sustained at ~99-100%)

The IOS-XE command shows per-core utilization:
ISR4351#show platform software status control-processor brief
Load Average
Slot Status 1-Min 5-Min 15-Min
RP0 Unknown 3.10 1.84 1.31

Memory (kB)
Slot Status Total Used (Pct) Free (Pct) Committed (Pct)
RP0 Healthy 3950588 3920636 (99%) 29952 ( 1%) 3382708 (86%)

CPU Utilization
Slot CPU User System Nice Idle IRQ SIRQ IOwait
RP0 0 85.38 2.20 0.00 12.01 0.00 0.00 0.40
    1 3.70 5.60 0.00 89.48 0.00 0.00 1.20
    2 81.88 6.40 0.00 11.21 0.00 0.30 0.20
    3 83.01 5.99 0.00 10.78 0.00 0.09 0.09
    4 0.09 0.00 0.00 99.90 0.00 0.00 0.00
    5 0.00 0.00 0.00 100.00 0.00 0.00 0.00
    6 23.00 77.00 0.00 0.00 0.00 0.00 0.00
    7 0.00 0.00 0.00 100.00 0.00 0.00 0.00

4351: 1 CPU with 4 cores, each with 2 hyper-threads, hence we see 8 “CPUs”
Cores can be used for CP, DP or SP; there is no dedication, just scheduling reservation
How to verify current CPU Load
IOS-XE router with dedicated CP/SP and DP CPUs (ISR4400 & ASR1k Series)

The classic command shows average CP/SP utilization:
ISR4451#sh proc cpu his
(60-second CPU history of the control plane)

The IOS-XE command shows per-core utilization - this is the control-plane utilization:
ISR4451#show platform software status control-processor brief
Load Average
Slot Status 1-Min 5-Min 15-Min
RP0 Healthy 0.02 0.25 0.15

Memory (kB)
Slot Status Total Used (Pct) Free (Pct) Committed (Pct)
RP0 Healthy 3972052 3928184 (99%) 43868 ( 1%) 2584140 (65%)

CPU Utilization
Slot CPU User System Nice Idle IRQ SIRQ IOwait
RP0 0 1.10 1.10 0.00 97.70 0.00 0.10 0.00
    1 0.70 3.50 0.00 95.80 0.00 0.00 0.00
    2 0.30 1.70 0.00 98.00 0.00 0.00 0.00
    3 0.30 0.70 0.00 98.99 0.00 0.00 0.00
    4 0.50 0.30 0.00 99.20 0.00 0.00 0.00
    5 3.10 1.90 0.00 95.00 0.00 0.00 0.00
How to verify current CPU Load
IOS-XE router with dedicated CP/SP and DP CPUs (ISR4400 & ASR1k Series)

ISR4451#show platform hardware qfp active datapath utilization
CPP 0: Subdev 0 5 secs 1 min 5 min 60 min
Input: Priority (pps) 0 0 0 0
(bps) 0 0 0 0
Non-Priority (pps) 3 3 3 3
(bps) 2224 2384 2384 2392
Total (pps) 3 3 3 3
(bps) 2224 2384 2384 2392
Output: Priority (pps) 0 0 0 0
(bps) 0 0 0 0
Non-Priority (pps) 3 3 3 3
(bps) 13056 9080 9080 9104
Total (pps) 3 3 3 3
(bps) 13056 9080 9080 9104
Processing: Load (pct) 2 2 2 2

ISR4400 and ASR1k have a second command to monitor the data plane cores:
show platform hardware qfp active datapath utilization
We only see average DP utilization, no breakdown per core.
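If this output is collected programmatically (e.g. over SSH), the data-plane load can be pulled out with a small parser. The regex-based sketch below is ours, not a Cisco tool, and assumes the output format shown above:

```python
import re

# A trimmed sample of the CLI output shown above.
SAMPLE = """\
CPP 0: Subdev 0             5 secs   1 min   5 min  60 min
Input:  Priority (pps)           0       0       0       0
Output: Priority (pps)           0       0       0       0
Processing: Load (pct)           2       2       2       2
"""

def qfp_load(output: str) -> dict:
    """Extract the 5s/1m/5m/60m data-plane load percentages from
    'show platform hardware qfp active datapath utilization' output."""
    m = re.search(r"Processing:\s+Load \(pct\)\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+)",
                  output)
    if not m:
        raise ValueError("no 'Processing: Load' line found")
    return dict(zip(("5s", "1m", "5m", "60m"), (int(x) for x in m.groups())))
```

Since only the average DP utilization is exposed, this single percentage is the number to trend when capacity planning the data plane.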
For your reference
Uni-dimensional scale for select features

Feature                               ASR1001    ASR1001-X  ASR1002-X  RP1/ESP10   RP2/ESP20   RP2/ESP40   RP2/ESP100  RP2/ESP200
VLANs (per port/SPA/system)           4K/8K/16K  4K/8K/16K  4K/8K/16K  4K/32K/32K  4K/32K/64K  4K/32K/64K  4K/32K/64K  4K/32K/64K
IPv4 routes                           1M         1M         3.5M       1.0M        4M          4M          4M          4M
IPv6 routes                           1M         1M         3M         0.5M        4M          4M          4M          4M
Sessions                              8K         not avail  29K        24K         32K         64K         58K         58K
L2TP tunnels                          4K         4K         4K         12K         16K         16K         16K         16K
Session setup rate (PTA/L2TP) in cps  150/100    150/100    150/100    100/50      150/100     150/100     150/100     150/100
BGP neighbors                         8K         8K         8K         4K          8K          8K          8K          8K
OSPF neighbors                        1K         1K         2K         1K          2K          2K          2K          2K
Unique QoS policy-/class-maps         1K/4K      1K/4K      4K/4K      1K/4K       4K/4K       4K/4K       4K/4K       4K/4K
ACL/ACE                               4K/25K     4K/50K     4K/120K    4K/50K      4K/100K     4K/100K     4K/400K     4K/400K
Multicast groups                      1000       2000       4000       1000        4000        4000        44K         44K
IPv4/IPv6 mroutes                     64K        64K        64K        64K         100K        100K        100K        100K
Firewall sessions                     250K       2M         2M         1M          2M          2M          6M          6M
NAT + firewall sessions               125K       2M         1M         500K        1M          1M          6M          6M
Netflow cache entries                 500K       2M         2M         1M          2M          2M          2M          2M
VRFs                                  4K         4K         8K         1K          8K          8K          8K          8K
BFD sessions (offloaded)              4095       4095       4095       2047        4095        4095        4095        4095
AVC throughput (Mpps/Gbps)            1.3/5      not avail  6/20       2.5/10      3/20        3.4/20      3.6/40      not avail
ISR DRAM Memory Demystified
What is DP and CP memory used for?
• Control Plane memory:
• Used for the IOS daemon
• This daemon holds the IOS system as well as control plane tables (i.e. the Routing Information Base)
• Used for Linux
• Linux manages the entire device and also allocates memory to service containers
• The Linux portion grows when IOS grows, due to information replication into other processes
• Data Plane memory:
• Used exclusively by IOS for data plane services
• Packet buffering
• System internal processes
• EX memory; this grows when scalable features are used (Forwarding Information Base, NAT table etc.)
* Allocation will vary by IOS-XE release
How Memory is allocated (4400, 4GB CP, 2GB DP, IOS-XE 3.13.1)
4 GB Control Plane (total: ~62.5% free):
• 3.25 GB Linux: 750 MB Linux OS (used), 750 MB free, 750 MB Linux cache (free), 1000 MB IOS dHeap (free)
• 750 MB reserved for IOSd: 470 MB free, 280 MB used (IOSd total: ~67.5% free)
2 GB Data Plane (total: ~18% free):
• 1.5 GB system reserved: 750 MB packet buffer, 750 MB system
• 512 MB EXMEM allocated: 40 MB used, 472 MB free
How to monitor CP and DP (4400, 4GB CP, 2GB DP, IOS-XE 3.13.1)
ISR4451#show version
<snip>
System image file is "bootflash:/isr4400-universalk9.03.13.01.S.154-3.S1-ext.SPA.bin"
<snip>
cisco ISR4451-X/K9 (2RU) processor with 1687854K/6147K bytes of memory.   <- reserved IOS memory
Processor board ID FGL165210MU
4 Gigabit Ethernet interfaces
32768K bytes of non-volatile configuration memory.
4194304K bytes of physical memory.                                        <- total CP memory
7393215K bytes of flash memory at bootflash:.                             <- total flash memory

ISR4451#show platform resources
**State Acronym: H - Healthy, W - Warning, C - Critical
Resource               Usage            Max        Warning  Critical  State
----------------------------------------------------------------------------------------------------
RP0 (ok, active)                                                      H
 Control Processor     2.40%            100%       90%      95%       H
 DRAM                  3180MB(82%)      3878MB     90%      95%       H    <- reserved / usable CP memory
ESP0(ok, active)                                                      H
 QFP                                                                  H
  DRAM                 1589776KB(75%)   2097152KB  80%      90%       H    <- reserved / total DP memory
  IRAM                 0KB(0%)          0KB        80%      90%       H
How to monitor CP and DP (4400, 4GB CP, 2GB DP, IOS-XE 3.13.1)
The “show memory” command is executed inside IOSd, therefore it will only show what is available to the IOSd process.

ISR4451#show memory
            Head           Total(b)    Used(b)    Free(b)     Lowest(b)  Largest(b)
Processor   7F4A5B545010   1728363504  284041616  1444321888  679710664  1048575908   <- total reserved IOS memory (includes dHeap) / used / free
 lsmpi_io   7F4A5AE431A8   6295128     6294304    824         824        412
Dynamic heap limit(MB) 1000  Use(MB) 0                                                <- total available dHeap / dHeap used

ISR4451#show platform software status control-processor brief
Load Average
Slot Status 1-Min 5-Min 15-Min
RP0 Healthy 0.00 0.04 0.06

Memory (kB)
Slot Status Total Used (Pct) Free (Pct) Committed (Pct)
RP0 Healthy 3972052 3259444 (82%) 712608 (18%) 1506452 (38%)    <- total used memory (excludes cache & dHeap, includes the full 750 MB IOS)

CPU Utilization
Slot CPU User System Nice Idle IRQ SIRQ IOwait
RP0 0 2.39 0.39 0.00 97.00 0.09 0.09 0.00
    1 0.40 0.10 0.00 99.49 0.00 0.00 0.00
    2 0.80 0.30 0.00 98.90 0.00 0.00 0.00
    3 1.80 3.50 0.00 94.70 0.00 0.00 0.00
    4 0.60 1.80 0.00 97.50 0.00 0.10 0.00
    5 0.20 0.40 0.00 99.39 0.00 0.00 0.00
How to monitor CP and DP (4400, 4GB CP, 2GB DP, IOS-XE 3.13.1)
ISR4451#show platform hardware qfp active infrastructure exmem statistics
QFP exmem statistics

Type: Name: DRAM, QFP: 0


Total: 2147483648 Total Physical DP Memory
InUse: 1648148480 DP Memory used by System (750 MB), Buffer (756MB) and EX (~20MB)
Free: 499335168 Free DP memory (used by EX only!)
Lowest free water mark: 432488448
<snip>

75% of memory appear to be used


These are reserved for packet buffers and system internals
The EX part (that scales with features like the RIB) has 499 MB free out of 512 MB, hence
it’s utilization is only 2%
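The interpretation above can be reproduced with a little arithmetic on the counters. This sketch is ours: the byte values come from the output above, the "used" fraction compares against the total DP DRAM, and the free-EX figure is reported in binary megabytes (the deck quotes ~499 MB using decimal units; exact accounting varies by IOS-XE release):

```python
def exmem_view(total_b: int, in_use_b: int, free_b: int) -> dict:
    """Headline vs. effective utilization of QFP exmem DRAM.

    The InUse counter includes the fixed packet-buffer and system
    reservations, so the headline percentage looks alarming; the free
    counter is EX memory only, which is what actually limits feature
    scale (FIB, NAT table, etc.).
    """
    mb = 1024 * 1024
    return {
        "overall_used_pct": round(100 * in_use_b / total_b),  # looks alarming
        "ex_free_mb": round(free_b / mb),                     # what matters for scale
    }

# Values from the ISR4451 output above.
view = exmem_view(2147483648, 1648148480, 499335168)
```

Running this shows roughly 77% of DP DRAM "in use" overall, yet around 476 MiB of EX memory still free, which is the point of the slide: the headline percentage alone is misleading.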
How Memory is allocated (4300, 4GB CP&DP, IOS-XE 3.13.1)

(memory-bar diagram; values recovered from the slide, 4 GB shared Control & Data Plane)

 Linux:  950 MB Linux OS used | 750 MB Linux cache | 300 MB packet buffer | 100 MB free
 IOSd:   1000 MB IOS dHeap free | 530 MB IOSd free | 220 MB IOSd used
 EXMEM:  256 MB total (236 MB EXMEM free | 20 MB EXMEM used)

(slide totals: Linux ~2% free, IOSd ~42% free, EXMEM ~65% free)
How to monitor CP and DP (4300, 4GB CP&DP, IOS-XE 3.13.1)
ISR4351#show version
<snip>
System image file is "bootflash:/isr4300-universalk9.03.13.01.S.154-3.S1-ext.SPA.bin"
<snip>
cisco ISR4351/K9 (2RU) processor with 1687854K/6147K bytes of memory.
Processor board ID FDO1826A00H
3 Gigabit Ethernet interfaces
32768K bytes of non-volatile configuration memory.
4194304K bytes of physical memory.
3223551K bytes of flash memory at bootflash:.
(slide callouts: 1687854K = reserved IOS memory; 4194304K = total CP&DP memory;
3223551K = total flash memory)

ISR4351#show platform resources


**State Acronym: H - Healthy, W - Warning, C - Critical
Resource Usage Max Warning Critical State
----------------------------------------------------------------------------------------------------
RP0 (ok, active)                                                  C
 Control Processor    6.81%          100%        90%      95%     H
 DRAM            3790MB(98%)       3857MB        90%      95%     C
ESP0 (ok, active)                                                 H
 QFP                                                              H
  DRAM          22890KB(8%)      262144KB        80%      90%     H
  IRAM            107KB(5%)        2048KB        80%      90%     H
(slide callouts: RP DRAM = reserved vs. usable CP memory; QFP DRAM = used vs.
total DP memory for EX)
How to monitor CP and DP (4300, 4GB CP&DP, IOS-XE 3.13.1)
The “Show Memory” command is executed inside IOSd, therefore it will only show what is available to the IOSd process.
ISR4351#show memory
                Head           Total(b)     Used(b)      Free(b)   Lowest(b)  Largest(b)
Processor   7F3CADE75010    1728363504   228806748   1499556756   679710664  1048575908
 lsmpi_io   7F3CAD7731A8       6295128     6294304          824         824         412
Dynamic heap limit(MB) 1000  Use(MB) 0
(slide callouts: Total(b) = total reserved IOS memory, including dHeap;
Used(b)/Free(b) = total used/free IOS memory; "Dynamic heap limit" = total
available dHeap, "Use" = dHeap used)
ISR4351#show platform software status control-processor brief
Load Average
Slot Status 1-Min 5-Min 15-Min
RP0 Unknown 1.00 1.00 1.00

Memory (kB)
Slot Status Total Used (Pct) Free (Pct) Committed (Pct)
RP0 Healthy 3950588 3881932 (98%) 68656 ( 2%) 2302648 (58%)

CPU Utilization
Slot CPU User System Nice Idle IRQ SIRQ IOwait
RP0    0    1.29    1.59  0.00   97.10  0.00  0.00    0.00
       1    0.10    0.20  0.00   99.70  0.00  0.00    0.00
       2    2.70    9.00  0.00   88.28  0.00  0.00    0.00
       3    0.30    0.80  0.00   98.89  0.00  0.00    0.00
       4    0.50    0.00  0.00   99.49  0.00  0.00    0.00
       5    0.00    0.00  0.00  100.00  0.00  0.00    0.00
       6   23.27   76.72  0.00    0.00  0.00  0.00    0.00
       7    0.00    0.00  0.00  100.00  0.00  0.00    0.00
(slide callout on the Used column above: total used memory; excludes cache & dHeap,
includes the full 750 MB IOS reservation)
How to monitor CP and DP (4300, 4GB CP&DP, IOS-XE 3.13.1)
ISR4351#show platform hardware qfp active infrastructure exmem statistics
QFP exmem statistics

Type: Name: DRAM, QFP: 0


Total: 268435456 DP Memory reserved for EX
InUse: 23439360 DP Memory used by EX
Free: 244996096 Free DP memory
Lowest free water mark: 244996096
<snip>
How to monitor CP and DP (1002-X, 16GB CP, 2GB DP, IOS-XE 3.13.1)
ASR1002-X#show version
<snip>
System image file is "bootflash:asr1002x-universalk9.03.13.01.S.154-3.S1-ext.SPA.bin"
<snip>
cisco ASR1002-X (2RU-X) processor (revision 2KP) with 4726803K/6147K bytes of memory.
Processor board ID SSI1646079K
6 Gigabit Ethernet interfaces
32768K bytes of non-volatile configuration memory.
16777216K bytes of physical memory.
6684671K bytes of eUSB flash at bootflash:.
(slide callouts: 4726803K = reserved IOS memory; 16777216K = total CP memory;
6684671K = total flash memory)
Configuration register is 0x2102

ASR1002-X# show platform resources


**State Acronym: H - Healthy, W - Warning, C - Critical
Resource Usage Max Warning Critical State
----------------------------------------------------------------------------------------------------
RP0 (ok, active)                                                  H
 Control Processor    5.60%          100%        90%      95%     H
 DRAM            3448MB(21%)      15954MB        90%      95%     H
ESP0 (ok, active)                                                 H
 QFP                                                              H
  TCAM            8cells(0%)   524288cells       45%      55%     H
  DRAM         198879KB(18%)    1048576KB        80%      90%     H
  IRAM            8561KB(6%)     131072KB        80%      90%     H
(slide callouts: RP DRAM = reserved vs. usable CP memory; QFP DRAM = reserved vs.
total DP memory)
How to monitor CP and DP (1002-X, 16GB CP, 2GB DP, IOS-XE 3.13.1)
The “Show Memory” command is executed inside IOSd, therefore it will only show what is available to the IOSd process.
ASR1002-X#show memory
                Head           Total(b)     Used(b)      Free(b)    Lowest(b)   Largest(b)
Processor   7F0615A74010    4840049792   555705528   4284344264  4284328728   4284338796
 lsmpi_io   7F06152711A8       6295128     6294304          824          824          412
(slide callouts: Total(b) = total reserved IOS memory; Used(b)/Free(b) = total
used/free IOS memory)
ASR1002-X#show platform software status control-processor brief
Load Average
Slot Status 1-Min 5-Min 15-Min
RP0 Healthy 0.00 0.00 0.00

Memory (kB)
Slot Status Total Used (Pct) Free (Pct) Committed (Pct)
RP0 Healthy 16337120 3547568 (22%) 12789552 (78%) 6015204 (37%)

CPU Utilization
Slot CPU User System Nice Idle IRQ SIRQ IOwait
RP0 0 2.29 3.19 0.00 94.40 0.00 0.09 0.00
1 0.80 0.90 0.00 98.30 0.00 0.00 0.00
2 0.20 0.50 0.00 99.30 0.00 0.00 0.00
3 0.19 0.39 0.00 99.40 0.00 0.00 0.00
How to monitor CP and DP (1002-X, 16GB CP, 2GB DP, IOS-XE 3.13.1)
ASR1002-X# show platform hardware qfp active infrastructure exmem statistics
QFP exmem statistics

Type: Name: DRAM, QFP: 0


Total: 1073741824 Total Physical DP Memory
InUse: 205749248 DP Memory used
Free: 867992576 Free DP memory
Lowest free water mark: 867992576
<snip>
Differences between Platforms
ISR4300:
• CP IOS monitoring: IOS can grow into dHeap
• CP memory monitoring: "used" shows allocated memory; "committed" shows used memory
• DP memory monitoring: DP memory shows only EXMEM

ISR4400:
• CP IOS monitoring: IOS can grow into dHeap
• CP memory monitoring: "used" shows allocated memory; "committed" shows used memory
• DP memory monitoring: DP memory includes system, buffer and EXMEM (starts at 75% utilization)

ASR1k:
• CP IOS monitoring: IOS has a fixed 4 GB
• CP memory monitoring: "used" shows used memory; "committed" shows allocated memory
• DP memory monitoring: DP memory shows only EXMEM
Routing Scale Test (ISR C892FSP, 1G DRAM, IOS 15.4.3M1)
show memory
IPv4 BGP Routes Total used Total Free
0 52MB 759MB
100000 134MB 677MB
200000 216MB 594MB
300000 298MB 512MB
400000 381MB 430MB
500000 463MB 348MB
600000 545MB 266MB
700000 627MB 184MB
800000 709MB 102MB
900000 791MB 20MB
1000000 Unsupported

Total IOSd Memory Available: 811 MB
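The table above grows almost linearly, which makes a per-route memory cost easy to estimate. A quick sketch (illustrative Python; the helper name is ours and the numbers are the measured "Total used" values from this table, not a Cisco tool):

```python
# "Total used" samples from the scale table above (MB), keyed by route count.
used_mb = {0: 52, 100000: 134, 500000: 463, 900000: 791}

def bytes_per_route(routes_a, routes_b):
    """Average IOS memory growth per IPv4 BGP route between two samples."""
    delta_bytes = (used_mb[routes_b] - used_mb[routes_a]) * 1024 * 1024
    return delta_bytes / (routes_b - routes_a)

# Roughly 860 bytes of IOS memory per route on this platform/release.
per_route = bytes_per_route(0, 900000)
```

With ~811 MB of total IOSd memory available, this rate is consistent with the ~900K route ceiling shown in the table.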


Routing Scale Test (4400, 4GB CP, 2GB DP, IOS-XE 3.13.1)
Columns, by source command:
• show platform resources: Reserved CP, Reserved DP
• show memory: Total used, Free, Heap Used
• show platform software status control-processor brief: used, free, committed
• show platform hardware qfp active infrastructure exmem statistics: InUse, Free

IPv4 BGP  Reserved CP  Reserved DP  Total   Free   Heap  used        free       committed   InUse  Free
Routes                              used           Used
0 3233MB(83%) 1591MB(75%) 290MB 1411MB 0MB 3312MB (83%) 659MB (17%) 1506MB (38%) 1648MB 499MB
100000 3523MB(90%) 1617MB(77%) 431MB 1296MB 0MB 3603MB (91%) 368MB ( 9%) 1661MB (42%) 1656MB 490MB
200000 3819MB(98%) 1627MB(77%) 569MB 1158MB 0MB 3907MB (98%) 64MB ( 2%) 1813MB (46%) 1667MB 480MB
300000 3854MB(99%) 1636MB(78%) 707MB 1020MB 48MB 3945MB (99%) 26MB ( 1%) 1998MB (50%) 1675MB 472MB
400000 3779MB(97%) 1646MB(78%) 845MB 882MB 160MB 3870MB (97%) 101MB ( 3%) 2282MB (57%) 1685MB 461MB
500000 3851MB(99%) 1654MB(78%) 984MB 744MB 304MB 3943MB (99%) 28MB ( 1%) 2580MB (65%) 1694MB 453MB
600000 3853MB(99%) 1664MB(79%) 1122MB 606MB 448MB 3946MB (99%) 25MB ( 1%) 2882MB (73%) 1704MB 442MB
700000 3851MB(99%) 1674MB(79%) 1260MB 467MB 576MB 3943MB (99%) 28MB ( 1%) 3165MB (80%) 1713MB 434MB
800000 3850MB(99%) 1683MB(80%) 1398MB 330MB 688MB 3942MB (99%) 29MB ( 1%) 3430MB (86%) 1723MB 423MB
900000
Unsupported
1000000
Free Memory and Committed Memory should be monitored closely.
Other data is misleading due to the inclusion of heap and cache.
Routing Scale Test (4300, 4GB CP&DP, IOS-XE 3.13.1)
Columns, by source command:
• show platform resources: Reserved CP, Reserved DP
• show memory: Total used, Free, Heap Used
• show platform software status control-processor brief: used, free, committed
• show platform hardware qfp active infrastructure exmem statistics: InUse, Free

IPv4 BGP  Reserved CP  Reserved DP  Total   Free   Heap  used        free       committed   InUse  Free
Routes                              used           Used
0 3773MB(97%) 22MB(8%) 229MB 1498MB 0MB 3888MB (98%) 61MB ( 2%) 2302MB (58%) 23MB 244MB
100000 3830MB(99%) 49MB(18%) 366MB 1362MB 0MB 3920MB (99%) 29MB ( 1%) 2457MB (62%) 50MB 218MB
200000 3830MB(99%) 59MB(22%) 507MB 1220MB 0MB 3920MB (99%) 29MB ( 1%) 2609MB (66%) 60MB 207MB
300000 3830MB(99%) 67MB(25%) 641MB 1087MB 0MB 3920MB (99%) 29MB ( 1%) 2762MB (70%) 69MB 199MB
400000 3829MB(99%) 77MB(29%) 782MB 946MB 112MB 3920MB (99%) 29MB ( 1%) 3030MB (77%) 79MB 188MB
500000 3828MB(99%) 86MB(33%) 919MB 808MB 240MB 3921MB (99%) 29MB ( 1%) 3313MB (84%) 88MB 179MB
600000 3828MB(99%) 96MB(36%) 1056MB 671MB 368MB 3921MB (99%) 29MB ( 1%) 3514MB (91%) 98MB 170MB
700000
800000
Unsupported
900000
1000000

In comparison to the 4400, the IOSd memory limit was probably not reached here.
The overall memory consumption, identified by "committed memory", is the limitation.
Conclusion
There are 3 possible memory bottlenecks:
• 1. IOSd Memory
• Even including dHeap there is a limit to how big IOSd can grow
• 2. Overall Linux Memory
• Because Linux grows at about the same rate as IOSd and constantly reduces its cache,
the absence of cache eventually becomes an issue
• 3. EXMEM (Data Plane)
• This is unrelated to the CP memory but still can pose a limitation, especially as it can’t
be increased as of 3.13.1
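A monitoring script could watch all three bottlenecks at once. The following sketch is illustrative; the function name and exact thresholds are our assumptions, not Cisco-recommended values:

```python
# Illustrative combined check for the three bottlenecks above.
def check_bottlenecks(iosd_used_mb, iosd_limit_mb,
                      committed_pct, exmem_free_bytes, exmem_total_bytes):
    """Return a list of warning strings, one per bottleneck that looks tight."""
    warnings = []
    if iosd_used_mb / iosd_limit_mb > 0.90:           # 1. IOSd memory (incl. dHeap)
        warnings.append("IOSd memory above 90%")
    if committed_pct > 90:                            # 2. overall Linux memory
        warnings.append("Linux committed memory above 90%")
    if exmem_free_bytes / exmem_total_bytes < 0.10:   # 3. EXMEM (data plane)
        warnings.append("EXMEM below 10% free")
    return warnings

# Healthy 4300 baseline from the tables above: no warnings expected.
ok = check_bottlenecks(229, 1750, 58, 244996096, 268435456)
```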
Bottlenecks in graphics
ISR4400, 4GB CP, 2 GB DP, IOS-XE 3.13.1

 CP (4 GB):  750 MB Linux OS used | 750 MB Linux cache | 750 MB free |
             1000 MB IOS dHeap free | 470 MB IOSd free | 280 MB IOSd used
 DP (2 GB):  750 MB packet buffer | 750 MB system used | 40 MB EXMEM used |
             472 MB EXMEM free

ISR4300, 4GB CP & DP, IOS-XE 3.13.1

 CP & DP (4 GB):  950 MB Linux OS used | 750 MB Linux cache | 1000 MB IOS dHeap free |
                  530 MB IOSd free | 220 MB IOSd used | 300 MB packet buffer |
                  100 MB free | 236 MB EXMEM free | 20 MB EXMEM used
Scaling up with bigger memory (IOS-XE 3.13.1)
Platform    CP & DP Memory   Linux     IOS dHeap  IOS static  EXMEM     Service Containers
4400        4GB, 2GB         2.25 GB   1 GB       750 MB      512 MB    0 GB
4400        8GB, 2GB         4.25 GB   3 GB       750 MB      512 MB    4 GB
4400        16GB, 2GB        8.25 GB   7 GB       750 MB      512 MB    8 GB
4300        4GB              1.6 GB    1 GB       750 MB      256 MB    0 GB
4300        8GB              3.6 GB    3 GB       750 MB      256 MB    4 GB
4300        16GB             7.6 GB    7 GB       750 MB      256 MB    8 GB
ASR1002-X   16GB, 2GB        12 GB     N/A        4 GB        1024 MB   0 GB


ISR 4k DRAM Variants
ISR 4400 Series: 1333 DDR3 ECC DRAM DIMMs, separate modules for Control Plane (CP)
or Data Plane (DP)
ISR 4300 Series: 1600 DDR3 ECC DRAM DIMM or 1600 DDR3 ECC onboard DRAM, combined
Control Plane and Data Plane module
For your reference
DRAM options per Platform
ISR4321 ISR4331 ISR4351 ISR4431 ISR4451

CP 1333 DIMMs 0 0 0 2 2

Min/Max CP DIMM Size N/A N/A N/A 2GB-8GB 2GB-8GB

DP 1333 DIMMs 0 0 0 1 1

Min/Max DP DIMM Size N/A N/A N/A 2GB 2GB

CP+DP 1600 DIMMs 1 2 2 0 0

Min/Max CP+DP DIMM Size 4GB 2GB-8GB 2GB-8GB N/A N/A

CP+DP 1600 onboard 4GB 0 0 0 0


DRAM on Motherboard
(board photos: ISR 4400 Series with 2 x CP 1333 DIMM slots and 1 DP 1333 DIMM slot;
ISR 4321 with a CP + DP 1600 DIMM slot plus CP + DP 1600 onboard DRAM)
Spare DRAM Product IDs
Product ID Description
MEM-4400-2G= 2G DRAM (1 DIMM) 1333 for Cisco ISR 4400
MEM-4400-4G= 4G DRAM (1 DIMM) 1333 for Cisco ISR 4400
MEM-4400-8G= 8G DRAM (1 DIMM) 1333 for Cisco ISR 4400
MEM-4300-2G= 2G DRAM (1 DIMM) 1600 for Cisco ISR 4300
MEM-4300-4G= 4G DRAM (1 DIMM) 1600 for Cisco ISR 4300
MEM-4300-8G= 8G DRAM (1 DIMM) 1600 for Cisco ISR 4300
DRAM Compatibility
ISR4321 ISR4331 ISR4351 ISR4431 ISR4451

Type CP & DP CP & DP CP & DP CP DP CP DP

MEM-4400-2G= 2x 1x 2x 1x

MEM-4400-4G= 2x (1x)* 2x (1x)*

MEM-4400-8G= 2x 2x

MEM-4300-2G= 2x 2x

MEM-4300-4G= 1x 2x 2x

MEM-4300-8G= 2x 2x
* Official Support pending
Configuration specifics
Management Ethernet
• ASR1000 and ISR4000 have dedicated GigE Management Ethernet
• Not usable for ‘normal’ traffic
• Supports only basic ACLs
• Most forwarding features do not work on this port (traffic not processed by QFP)
• Intended for out of band router access—has SW support for rate limiting but that takes CPU cycles to
drop packets

• Don’t connect to the ‘outside’ world


• Always configured in dedicated VRF
• VRF cannot be removed from interface
TFTP Package to the RP from ROMMON
• SET the following variables within the ROMMON
• RP does not have full RxBoot environment
ROMMON is basically beefed up to support TFTP
rommon 2 > set
IP_SUBNET_MASK=255.255.0.0
TFTP_SERVER=2.8.54.2
TFTP_FILE=mcpude_12_18.bin
DEFAULT_GATEWAY=2.1.0.1
IP_ADDRESS=2.1.35.52
• Connect the GE management port on the RP to your management VLAN
• access the TFTP server where the “consolidated” package is located
• Issue the following command at ROMMON:
boot tftp:
• Image will be transferred directly to the RP DRAM for execution
Initial RP config in IOS for normal operation
• First thing that you will notice here is the default definition of “Mgmt-intf” VRF (case-
sensitive), which includes RP Mgmt. Gi0 port
Router#sh ip vrf interfaces
Interface IP-Address VRF Protocol
Gi0 unassigned Mgmt-intf up

• Assign the Gi0 interface an IP address, and set the default route in the VRF
ip route vrf Mgmt-intf 0.0.0.0 0.0.0.0 <gateway_ip_address>

• Set the TFTP source interface to Gi0 for file transfers:


ip tftp source-interface gigabitEthernet 0

• Multiple options for file storage and booting when transferring images to the RP
• bootflash: 1-8GB — recommended, larger on systems without harddisk:
• harddisk: 40-80GB — not on all platforms
Configuring Management Ethernet

vrf definition Mgmt-intf


!
address-family ipv4
exit-address-family
!
address-family ipv6
exit-address-family
!
ip domain name vrf Mgmt-intf cisco.com
ip name-server vrf Mgmt-intf 171.70.168.183
ip route vrf Mgmt-intf 0.0.0.0 0.0.0.0 172.27.55.129
!
interface GigabitEthernet0
vrf forwarding Mgmt-intf
ip address 172.27.55.210 255.255.255.128
speed auto
duplex auto
negotiation auto
Filesystem Specifics
• All media shows up as type ‘disk’ regardless of type of media (SATA disk, USB flash, etc)
• harddisk: and bootflash: always formatted as ext2
• External usb0:, usb1: can be formatted as FAT16, FAT32, or ext2
• No support for multiple partitions at this time, only first partition on each device is visible
• fsck supported for all file system types; /automatic is implicit
• IOS does not control these devices directly
• no flash driver in IOS
• no SATA driver in IOS
• Linux has the drivers, does the mount/umount under the covers
Core dumps, crashinfo
• Core dumps for all processes (IOS, cmand, fman_rp, …) and kernel all get written to
• harddisk:core/
• or bootflash:/core when no harddisk is present.

• File name pattern:


<hostname>_<FRU type>_<unit>_<process>_<time>.core.gz
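The naming pattern lends itself to simple parsing, for example when sweeping a core directory. The regex below is an illustrative assumption based on the pattern shown, not an official format specification:

```python
import re

# Hypothetical parser for core file names of the form
# <hostname>_<FRU type>_<unit>_<process>_<time>.core.gz
CORE_RE = re.compile(
    r"(?P<hostname>.+)_(?P<fru>RP|ESP|SIP)_(?P<unit>\d+)_"
    r"(?P<process>.+)_(?P<time>\d+)\.core\.gz$")

# Example file name (made up for illustration):
m = CORE_RE.match("Router_RP_0_iosd_20140627165844.core.gz")
if m:
    fru, process = m.group("fru"), m.group("process")
```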

• IOSd generates crashinfo files into bootflash: when it crashes—like other IOS based
platforms
IOS XE System Health Monitoring
• Standard IOS CPU utilization and memory usage commands (e.g. "show process cpu")
are not sufficient to determine ASR1000 health
• Monitoring the CPU and memory utilization of the following system elements is
strongly recommended:
• RP CPU and Memory Utilization
• ESP CPU and Memory Utilization
• QFP Utilization
• NOTE: On fixed configuration platforms it is critical to understand that the
RP/ESP/SIP are actually sharing the same CPU and memory. Therefore checking the RP
values reports for all three.
• Relevant MIBs:
• CISCO-PROCESS-MIB
• CISCO-ENTITY-QFP-MIB
(architecture diagram: RP control processor running the Linux kernel; ESP with QFP,
crypto engine, packet buffer and FECP control plane memory; SIP with IOCP for SPA
aggregation; all joined by the interconnect)
IOS XE Control-Processor
Key data to monitor for BB/ISG deployments:
• RP/ESP Load Averages
• Committed Memory
• RP/ESP CPU Utilization
All key data is retrievable via SNMP.

ASR1000# show platform software status control-processor brief
Memory (kB)
Slot  Status    Total      Used (Pct)      Free (Pct)       Committed (Pct)
RP0   Healthy   16343792   4509516 (28%)   11834276 (72%)   11627180 (71%)
RP1   Healthy   16343792   3962260 (24%)   12381532 (76%)   11621352 (71%)
ESP0  Healthy   16338208    990200 ( 6%)   15348008 (94%)     484804 ( 3%)
ESP1  Healthy   16338208   1450756 ( 9%)   14887452 (91%)    1094048 ( 7%)
SIP0  Healthy     449336    350208 (78%)      99128 (22%)     359060 (80%)
SIP1  Healthy     449336    281628 (63%)     167708 (37%)     250948 (56%)

CPU Utilization
Slot CPU User System Nice Idle IRQ SIRQ IOwait
RP0 0 1.39 1.09 0.00 97.50 0.00 0.00 0.00
      1    0.29   0.39  0.00   99.30  0.00  0.00   0.00
RP1   0    0.50   0.80  0.00   98.60  0.00  0.10   0.00
      1    0.00   0.30  0.00   99.69  0.00  0.00   0.00
ESP0  0    0.00   0.00  0.00  100.00  0.00  0.00   0.00
      1    0.00   0.10  0.00   99.89  0.00  0.00   0.00
ESP1  0    0.10   0.80  0.00   99.09  0.00  0.00   0.00
      1    0.00   0.00  0.00  100.00  0.00  0.00   0.00
SIP0  0    2.80   1.30  0.00   95.89  0.00  0.00   0.00
SIP1  0    6.20   9.60  0.00   84.18  0.00  0.00   0.00

Due to the Linux cache mechanism, the Used and Free memory percentages are not
always accurate: the cache gets counted as used when it is really potentially free.
The critical item to view in this output is the "Healthy" status. To see accurate
Used/Free/Cache usage, use the 'monitor' command on the next slide.

The Committed value on the ASR differs from the ISR platforms:
• ISR: represents actual memory in use.
• ASR: represents maximum potential memory usage.
IOS XE Linux top
• Captures actual 'top' output from RP/ESP/SIP. This can be used to determine
Used/Free/Cache memory usage and CPU usage. More accurate than the 'status
control-processor brief' command but obviously more in-depth.

ASR# show platform software process slot RP active monitor cycles 2 interval 10
<snip>
top - 14:19:18 up 1 day, 22:09, 0 users, load average: 0.80, 0.53, 0.42
Tasks: 227 total, 2 running, 225 sleeping, 0 stopped, 0 zombie
Cpu(s): 1.7%us, 0.6%sy, 0.0%ni, 97.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8067244k total, 2697464k used, 5369780k free, 169760k buffers
Swap: 0k total, 0k used, 0k free, 1394588k cached
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
29060 root  20   0 99848  91m 5656 S    2  1.2  0:10.87  smand
29638 root  20   0 4820m 641m 206m S    0  8.1 11:08.68  linux_iosd-imag
 9055 root  20   0  178m  53m  31m R    0  0.7  2:56.29  mcpcc-lc-ms
<snip>
• It is IMPORTANT to use 2 cycles at a reasonable interval (5-10 sec) and IGNORE the
CPU values from the first output. The first output averages over a very small
timeframe and the CPU reports are invalid. Only the 2nd cycle output averages over
the desired interval and provides accurate results. This is a Linux limitation, not
an IOS issue.
• In this example, Used Mem = 2.7G, but 1.4G of this is cached. Therefore available
memory = Free 5.4G + Cache 1.4G = 6.8G
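In code form, the arithmetic from this example looks like this (a minimal sketch using the kB values from the sample `top` output above):

```python
# kB values from the Mem/Swap lines of the sample `top` output.
mem_used_kb = 2697464
mem_free_kb = 5369780
cached_kb   = 1394588

# Cache is reclaimable, so it counts toward available memory.
available_kb  = mem_free_kb + cached_kb     # what the system can still hand out
truly_used_kb = mem_used_kb - cached_kb     # memory genuinely in use
```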
IOS XE BQS queue and schedule count
ASR1001# show platform hardware qfp active infrastructure bqs status
BQS-RM Status :
=============================================
Current SW Memory Size: 4000
Object Counts:
Recycle Object Count: 91
Recycle Schedule Count: 15
Recycle Queue Count: 52
# of Active Queues: 144
# of Active Schedules: 56
# of Active Roots: 14
<snip>

• This command has a large amount of output related to QoS actions and events.
• The elements to look for are in the summary table listing the number of active queues and
schedules in the system.
IOS XE QoS sorter memory
ASR1001# show platform hardware qfp active infrastructure bqs sorter memory available
Level:Class Total Available Remaining
============= ====== ========= =========
ROOT:ONCHIP 64 64 100%
ROOT:COS_L2 448 448 100%
ROOT:NORMAL 0 0 0%
BRANCH:ONCHIP 128 122 95%
BRANCH:COS_L2 384 384 100%
BRANCH:NORMAL 15872 15872 100%
STEM:ONCHIP 992 877 88%
STEM:COS_L2 1024 1024 100%
STEM:NORMAL 260064 259934 99%

• Shows memory utilization by all the active elements in the BQS system, primarily
used for QoS.
• The last line, "STEM:NORMAL", is the primary element to monitor. Keep the %
Remaining at a reasonable level (> 10%) to leave headroom for dynamic system events.
• Note: This command is dependent on an actual BQS ASIC being present and as such is
not operational on ISR or CSR platforms.
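A simple threshold check on the STEM:NORMAL row could look like this (illustrative sketch; the function name is an assumption, the 10% floor follows the guidance above):

```python
# Threshold check on a BQS sorter memory row.
def stem_normal_ok(total, available, min_remaining_pct=10):
    """True while remaining sorter memory stays above the given percentage."""
    return available * 100 / total > min_remaining_pct

# Sample STEM:NORMAL values from the output above: 259934 of 260064 remaining (~99%).
healthy = stem_normal_ok(260064, 259934)
```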
IOS XE QFP memory statistics
• This command shows the specific QFP memory usage.
• The SRAM memory is fixed and should never change.
• The DRAM memory is the main memory used; when this reaches near 100% the IRAM
memory will increase to handle the extra requirements.
• The IRAM should be monitored for a reasonable free amount (20-30% free) to handle
dynamic events.

ASR1001# show platform hardware qfp active infrastructure exmem statistics
QFP exmem statistics
 Type: Name: DRAM, QFP: 0
  Total: 268435456
  InUse: 96961536
  Free: 171473920
  Lowest free water mark: 171438080
 Type: Name: SRAM, QFP: 0
  Total: 32768
  InUse: 14880
  Free: 17888
  Lowest free water mark: 17888
 Type: Name: IRAM, QFP: 0
  Total: 134217728
  InUse: 7027712
  Free: 127190016
  Lowest free water mark: 127190016
• This memory is used for ALL the features that are processed by the QFP.
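The free-percentage checks described above are simple ratios. An illustrative sketch using the sample counters (the helper name is ours):

```python
# Free-percentage helper for the exmem counters shown above.
def free_pct(total, free):
    """Percentage of an exmem region that is still free."""
    return free * 100 / total

# Values from the sample output:
dram_free = free_pct(268435456, 171473920)   # ~64% free
iram_free = free_pct(134217728, 127190016)   # ~95% free

# Flag IRAM when it drops below the 20-30% cushion suggested above.
iram_low = iram_free < 20
```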
IOS XE datapath utilization
• This output shows the actual processing load at the QFP from all interfaces.
• The Input/Output Priority/Non-Priority pps and bps counts should be the aggregate
from all interfaces.
• The Processing Load (pct) needs to be monitored. A consistent load below 90% is
expected.

ASR1001# show platform hardware qfp active datapath utilization
  CPP 0: Subdev 0        5 secs   1 min   5 min   60 min
Input:  Priority (pps)        1       1       1        1
                 (bps)      680    1160    1144     1152
    Non-Priority (pps)        1       4       4        4
                 (bps)      584    3040    2992     3000
           Total (pps)        2       5       5        5
                 (bps)     1264    4200    4136     4152
Output: Priority (pps)        0       1       1        1
                 (bps)      496     864     856      856
    Non-Priority (pps)        1       4       4        4
                 (bps)     3184    9168    9064     9200
           Total (pps)        1       5       5        5
                 (bps)     3680   10032    9920    10056
Processing: Load (pct)        0       0       0        0


Once the load goes above 95% there can be ingress packet drops due to processing
backpressure (Flow-Control).
• Note that the QFP also has to process all the inter-chassis control packets also which
adds to the processing load independent of actual traffic.
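The load thresholds above translate into a simple classification (illustrative sketch; the state names are our assumption):

```python
# Classify a QFP "Processing: Load (pct)" sample per the thresholds above.
def qfp_load_state(load_pct):
    """Map a processing-load percentage to a monitoring state."""
    if load_pct > 95:
        return "critical"   # ingress drops from flow-control become likely
    if load_pct > 90:
        return "warning"    # sustained load above the expected ceiling
    return "ok"
```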
IOS XE datapath utilization summary
• This output shows the same details as the previous command but combines multiple
CPP subdevs and priority/non-priority details into a shorter version. Mainly useful
on ESP100 / ESP200 where multiple QFP ASICs are used.

ASR1001# show platform hardware qfp active datapath utilization summary
  CPP 0:                 5 secs     1 min        5 min       60 min
Input:  Total (pps)        7262      7264         7264         2875
              (bps)    59458736  59462160  29265240496   4891165824
Output: Total (pps)           4         5            5            2
              (bps)        8600     15536        15840         6168
Processing: Load (pct)        1         1            1            1
IOS XE datapath drops
ASR1001# show platform hardware qfp active statistics drop
-------------------------------------------------------------------------
Global Drop Stats Packets Octets
-------------------------------------------------------------------------
Ipv4NoRoute 10 644
Wred 19 1392

• This output shows the reason for any drops in the QFP complex. There are many
reasons for drops, but the command only shows non-zero statistics (use the "all"
keyword to see all reasons)
• If there are drops outside the QFP they will show up in other places
• “show interface” output
• queue overload
• “show controller” output
• due to flow-control because of the ingress overflow
IOS XE platform shell
• Used when there is not enough information from the IOS CLI
• Fully functional shell as 'root'; you can see/break everything from here
• Shell session is recorded and sent to syslog when done
• "service internal" and "platform shell" are required on all platforms and some may
also require a license to be installed.

asr1000# request platform software system shell r0
Activity within this shell can jeopardize the functioning of the system.
Are you sure you want to continue? [y/n] y
2009/06/27 16:58:44 : Shell access was granted to user <anon>; Trace file: ,
/harddisk/tracelogs/system_shell_R0.log.20090627165844
**********************************************************************
Activity within this shell can jeopardize the functioning
of the system.
Use this functionality only under supervision of Cisco Support.

Session will be logged to:
harddisk:tracelogs/system_shell_R0.log.20090627165844
**********************************************************************
Terminal type 'network' unknown. Assuming vt100
Command List
• Summary of RP/ESP/SIP CPU and Memory
show platform software status control-processor brief
• Linux level RP/ESP/SIP CPU,Memory and Process list (top command)
show platform software process slot RP active monitor cycles 2 interval 10
• QoS Queue/Scheduler counts
show platform hardware qfp active infrastructure bqs status
• QoS Resource usage (only on systems with BQS ASIC, ie ASR1k)
show platform hardware qfp active infrastructure bqs sorter memory available
• QFP Memory Usage
show platform hardware qfp active infrastructure exmem statistics
• QFP Datapath Processing
show platform hardware qfp active datapath utilization <summary>
show version command updates
Previous versions
Image: asr1000rp1-advipservicesk9.03.08.00.S.153-1.S.bin
Show version output:
  Cisco IOS Software, IOS-XE Software (PPC_LINUX_IOSD-ADVENTERPRISEK9-M),
  Version 15.3(1)S, RELEASE SOFTWARE (fc4)
  Technical Support: http://www.cisco.com/techsupport
  Copyright (c) 1986-2012 by Cisco Systems, Inc.
  Compiled Tue 27-Nov-12 11:05 by mcpre

  IOS XE Version: 03.08.00.S

  Cisco IOS-XE software, Copyright (c) 2005-2012 by cisco Systems, Inc.

IOS XE 3.10 and later
Images:
  Extended Support Release: asr1000rp1-advipservicesk9.03.10.01.S.153-3.S-ext.bin
  Standard Support Release: asr1000rp1-advipservicesk9.03.11.01.S.153-4.S-std.bin
Show version output:
  Venice#sh ver
  Cisco IOS XE Software, Version 03.12.00.S - Standard Support Release
  Cisco IOS Software, ASR1000 Software (X86_64_LINUX_IOSD-ADVIPSERVICESK9-M),
  Version 15.4(2)S, RELEASE SOFTWARE (fc2)
ESP 100/200 show command differences
• show platform hardware qfp active infrastructure exmem statistics
• On ESP 100 the SRAM reports 0 values (no SRAM)
• show platform hardware qfp active datapath utilization
• must be executed twice on ESP100/200, once for each 2nd gen QFP
• Use the summary option to see multiple QFP ASIC details compressed into one output.
• show platform hardware qfp active infrastructure bqs sorter memory
[active, free, available, utilization]
• different output due to two 2nd gen QFP but same fundamental info for active, free, available
• Utilization is not implemented for 2nd gen QFP
• show platform hardware qfp active infrastructure bqs status
• slightly different, ESP100/200 does not report memory size
IOS XE packet tracing
• Introduced in XE3.10 as part of the IOS-XE serviceability initiative.
• Pactrac provides visibility into the treatment of packets on an IOS-XE platform with simple
to use commands. It is intended to be used externally (TAC, customers) and internally
(DE, DT) to troubleshoot, diagnose or gain a deeper understanding of the actions taken on
a packet during packet processing.
• Pactrac limits its inspection to the packets matched by the debug platform condition
statements making it a viable option even under heavy traffic situations seen in customer
environments.
• Three specific levels of inspection are provided by pactrac: accounting, per packet
summary and per packet path data. Each level adds a deeper look into the packet
processing at the expense of some packet processing capability.
Packet-Trace: Configuration Example
• The following shows how one would trace the first 128 packets entering
GigabitEthernet0/0/0 including FIA trace and a copy of up to the first 2048 octets of the
input packet.
debug platform condition interface g0/0/0 ingress
debug platform packet-trace enable
debug platform packet-trace packet 128 fia-trace
debug platform packet-trace copy packet input size 2048
debug platform condition start
Packet-Trace: Configuration Highlights
• Be mindful of how much QFP DRAM memory a config needs and how much memory is available
• memory needed = (stats overhead) + num pkts * (summary size + path data size + copy size)
• Stats overhead and summary size are fixed and about 2KB and 128B respectively
• Path data size and copy size (in/out/both) are user configurable
• Configure as much detail as you want…more detail…more performance impact for matched packets
(reading/writing memory costs!)
• Each config change temporarily disables pactrac and clears counts/buffers
• “Cheap” way of ‘debug plat cond stop’, ‘clear plat pack stats’ and ‘debug plat cond
start’
• Some configs require a ‘stop’ in order to display summary or per packet data
• Currently circular and drop tracing
• REMINDER: Conditions define where and when filters are applied to a packet
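The memory formula above can be sketched as follows. The ~2 KB stats overhead and ~128 B summary size come from the slide; the path-data size used in the example is a hypothetical value chosen for illustration:

```python
# Worked example of the packet-trace memory formula above.
STATS_OVERHEAD = 2 * 1024   # fixed, ~2KB
SUMMARY_SIZE = 128          # fixed, ~128B per packet

def pactrac_memory(num_pkts, path_data_size, copy_size):
    """QFP DRAM (bytes) a packet-trace configuration needs."""
    return STATS_OVERHEAD + num_pkts * (SUMMARY_SIZE + path_data_size + copy_size)

# 128 packets with an assumed 2048-byte path-data buffer and a 2048-byte input copy:
needed = pactrac_memory(128, 2048, 2048)   # 542720 bytes, ~530 KB
```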
Packet-Trace: Show Commands
• Show commands are used to display pactrac configuration and each level of data:
• show platform packet-trace configuration
• Displays packet-trace configuration including any defaults
• show platform packet-trace statistics
• Displays accounting data for all pactrac packets
• show platform packet-trace summary
• Displays summary data for the number of packets specified by debug platform packet-trace packet
• show platform packet-trace packet { all | <pkt-num>} [decode]*
• Displays all path data for all packets or the packet specified
• Decode attempts to display packets captured by debug platform packet-trace copy in user
friendly way
• Only a few protocol headers are supported initially (ARPA, IP, TCP, UDP, ICMP)
• decode was introduced in XE3.11
Deployment use cases
IOS XE for Intelligent WAN
(topology diagram: branch connected over WAN (IP-VPN) and the Internet to private
cloud, virtual private cloud, and public cloud)
Intelligent WAN – Leveraging the Internet
• Internet as WAN at five-nines reliability
• SLAs for business critical applications
• Centralized security policy for Internet access
• Dramatically lower WAN costs without compromise
(topology diagram: branch over WAN (IP-VPN) and the Internet to private cloud,
virtual private cloud, and public cloud)
Intelligent WAN Solution Components
(diagram: branch with AVC, WAAS and PfR, connected over 3G/4G-LTE, MPLS and the
Internet to private cloud, virtual private cloud, and public cloud)

• Transport Independent: consistent operational model; simple provider migrations;
scalable and modular design; DMVPN IPsec overlay design
• Intelligent Path Control: application best path based on delay, loss, jitter,
path preference; load balancing for full utilization of all bandwidth; improved
network availability; Performance Routing (PfR)
• Application Optimization: application monitoring with Application Visibility &
Control (AVC); application acceleration and bandwidth savings with WAAS
• Secure Connectivity: certified strong encryption; comprehensive threat defense
with ASA & IOS Firewall/IPS; Cloud Web Security (CWS) for scalable secure direct
Internet access
Cisco Intelligent WAN
Solution Components
• Transport Independence: provider flexibility; modular design; common operational model
• Intelligent Path Control: load balancing; policy-based path selection; network availability
• Secure Connectivity: scalable, strong encryption; app-aware threat defense; Cloud Web Security
• Application Optimization: application visibility; app acceleration; intelligent caching

Application Experience / IT Simplicity / Lower WAN Costs
SD WAN (IWAN): Hybrid WAN Transport
(diagram: branch with an MPLS path ($$$) to private cloud / virtual private cloud,
Internet backhaul, and a direct Internet path ($) with Cisco Cloud Web Security for
public cloud and Internet access)
• Secure WAN transport across MPLS and/or Internet for private cloud / DC access
• Leverage the local Internet path for public cloud and Internet access
Increase WAN Capacity | Improve App Performance | Scale Security at the Branch
Currently Deployed Virtualization Solutions
• Control Plane (Virtual): RR, LISP MS/MR, ...
• Private Cloud / DC and Public Cloud: CE/PE functionality via CSR 1000V
(diagram: campus and branch ISR/ASR routers over WAN and Internet reaching shared
services (vWLC, vRR, vMS/MR, vMC) in the VPC/vDC and CSR 1000V instances in the
public cloud VPC1/VPC2)
ASR 1000 as Services PE
Functions:
• ASR 1006 / 1013 as MSE: L3VPN / L2VPN / VPLS, CsC, Extranet
• ASR 1002-X as PE+LNS
• Hierarchical QoS
• High Availability / ISSU, FRR, fast convergence, PE-CE BFD + NSR
• Routed PW into VRF
• EOAM / SLA
Services:
• Dual-stack
• Multicast / mVPN
• EVC
• RA-MPLS with MLP
• Firewall / CGN / NAT64
• IPSec
Scalability*:
• 2.5 – 100 Gbps
• 4M routes
• 8000 VRFs
• 16000 PW / L2TP sessions
• 8000 PPP / 1000 MLP sessions
• 2000 eBGP sessions + NSR
[Diagram: access over Ethernet, FR/Serial, POS, and ATM terminates on the MSE; bridge domains and pseudowires feed VRFs, with PE+LNS terminating L2TP from the LAC/LTS and IPsec]
* uni-dimensional. Scalability may vary with feature combinations
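As a rough illustration of the L3VPN role above, a minimal VRF definition on an IOS XE PE might look like this sketch. The VRF name, RD/RT values, interface, and AS number are hypothetical:

```
! One customer VRF on a Services PE (sketch only)
vrf definition CUST-A
 rd 65000:100
 address-family ipv4
  route-target export 65000:100
  route-target import 65000:100
 exit-address-family
!
! PE-CE attachment circuit placed into the VRF
interface GigabitEthernet0/0/1
 vrf forwarding CUST-A
 ip address 192.0.2.1 255.255.255.252
!
router bgp 65000
 address-family ipv4 vrf CUST-A
  redistribute connected
 exit-address-family
```

Scaling to the 8000-VRF figure cited above is largely a matter of repeating this per-customer template; the uni-dimensional caveat still applies.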


ASR 1000 as Internet Edge
Functions:
• ASR 1006 RP1 / ESP10 providing IP Edge functions
• Over 3000 systems deployed
• 3-play IP Edge, no MPLS
• Extreme focus on HA & ISSU; multiple ISSU upgrades already performed
Services:
• HSI with LAC model, 1/2/4 Gbps services
• VoIP and VVoIP using distributed SBC
• Multicast with 3 levels of QoS
• GEC VLAN load balancing
Scalability*:
• Up to 10K residential subscribers
• 64000 SIP sessions
• 1000 MLD / IGMP
• 100K mroutes
• Oversubscribed system with 10GE redundant uplinks
[Diagram: residential subscribers behind DSLAM/OLT VLANs terminate on the ASR 1000, which applies NAT/NAPT, QoS, SIP-ALG, and L2TP for HSI, and connects via H.248 to the RACS and to TV/VOD/SIP content farms as an integrated Ethernet/MPLS/IP service]
ASR 1000 as Enterprise Edge
Functions:
• ASR 1001 / 1002 / 1006 as WAN Edge: secure VPN functions, Internet Edge
• H-QoS
• IPSec: S2S, DMVPN, GETVPN
Services:
• FNF
• PfR
• Multilink FR / PPP
• VRF-aware PBR
• NAT / Firewall with inter-chassis redundancy
• USGv6
• TrustSec
• Application Visibility & Control
Scalability*:
• 10 Gbps encryption + 30 Gbps clear traffic
• 4000+ IPSec tunnels
• 60K ACEs in 4K ACLs
• 4K policy maps in 4K class maps
• 4K GRE tunnels
[Diagram: branch sites reach the corporate network across the Internet / MPLS VPN through the ASR 1000, which applies NAT, QoS, firewall, IPsec, and GRE]
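The H-QoS function listed above can be sketched with a two-level MQC policy: a parent shaper matching the carrier's access rate, with a child policy queuing inside it. Class names, DSCP matches, and rates are hypothetical:

```
! Hierarchical QoS at the WAN edge (sketch only)
class-map match-any VOICE
 match dscp ef
class-map match-any CRITICAL-DATA
 match dscp af31 af32
!
! Child policy: queuing within the shaped rate
policy-map CHILD-QOS
 class VOICE
  priority level 1
  police cir percent 10
 class CRITICAL-DATA
  bandwidth remaining percent 40
 class class-default
  fair-queue
!
! Parent policy: shape to a (hypothetical) 50 Mbps access rate
policy-map PARENT-SHAPER
 class class-default
  shape average 50000000
  service-policy CHILD-QOS
!
interface GigabitEthernet0/0/0
 service-policy output PARENT-SHAPER
```

The parent shaper creates back-pressure so the child classes, not the provider's policer, decide which traffic is dropped during congestion.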
ASR1000 with AppNav-XE
AppNav-XE virtualizes WAN optimization resources into pools of elastic resources with business-driven bindings, greatly simplifying deployment and management of WAAS.
• Application persistence
• Path affinity
• Custom affinity rules
• WAAS I/O: load and status feedback from WAAS devices
• WAAS high availability
[Diagram: AppNav on the ASR 1000 distributes traffic for optimization across vWAAS and WAVE nodes in two regions based on their load and status]
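The AppNav-XE binding described above is configured through the service-insertion CLI on the ASR 1000. The sketch below shows the general shape; the group names, addresses, and policy name are hypothetical, and exact syntax varies by IOS XE release:

```
! AppNav-XE: bind a controller group and a WAAS node pool (sketch only)
service-insertion service-node-group SNG1
 ! vWAAS / WAVE nodes that will receive redirected traffic
 service-node 10.10.10.2
 service-node 10.10.10.3
!
service-insertion appnav-controller-group ACG1
 ! the ASR 1000's own AppNav controller address
 appnav-controller 10.10.10.1
!
service-insertion service-context waas/1
 appnav-controller-group ACG1
 service-node-group SNG1
 ! assumes an AppNav policy named APPNAV-POLICY is defined elsewhere
 service-policy APPNAV-POLICY
 enable
```

Because the flow distribution decision lives in the router, WAAS nodes can be added to or drained from the pool without touching branch configurations.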
Summary
IOS XE summary
• IOS XE is an evolution of IOS
• Provides operational continuity
  • Configurations move forward
  • IOS protocol troubleshooting moves forward
• Data / control / service plane separation
  • Functionality isolation, DoS protection
  • Improved and predictable performance
  • Cost efficiencies
IOS XE summary
• Operational excellence
  • QoS, high availability, easy service enablement
• Platform management
  • Multiple processors, memories, and busses to be monitored
• Common code and feature sets across multiple locations in the network
  • Eases deployments, decreases incompatibilities
Thank you
Complete Your Online Session Evaluation
• Give us your feedback to be
entered into a Daily Survey
Drawing. A daily winner
will receive a $750 Amazon
gift card.
• Complete your session surveys
through the Cisco Live mobile
app or your computer on
Cisco Live Connect.
Don’t forget: Cisco Live sessions will be available
for viewing on-demand after the event at
CiscoLive.com/Online
Continue Your Education
• Demos in the Cisco campus
• Walk-in Self-Paced Labs
• Table Topics
• Meet the Engineer 1:1 meetings
• Related sessions