
Juniper Networks

SDN and NFV Products
for Service Provider Networks
Evgeny Bugakov
Senior Systems Engineer, JNCIE-SP
21 April 2015
Moscow, Russia

AGENDA

Virtualization strategy and goals

vMX product overview and performance

vMX Use cases and deployment models

vMX Roadmap and licensing

NorthStar WAN SDN Controller


Virtualization strategy and goals


MX Virtualization Strategy

Software
- Control Plane and OS: Virtual JUNOS; Forwarding Plane: Virtualized Trio

Applications
- Virtual Routing Engine, Virtual Route Reflector
- Virtual PE, Virtual BNG/LNS, hardware virtualization
- vCPE, Enterprise Router
- MX SDN Gateway

[Diagram: target domains and roles - Enterprise Edge/Mobile Edge, Aggregation/Metro/Metro Core, Service Provider Edge/Core and EPC, Data Center/Central Office; Cell Site Router, Aggregation Router/Metro Core, Carrier Ethernet Switch, DC/CO Edge Router, Service Edge Router, Branch Office/HQ, Core; vBNG, vPE, vCPE, Mobile & Packet GWs.]

Leverage R&D effort and JUNOS feature velocity across all physical & virtualization initiatives.

Physical vs. Virtual

Each option has its own strengths and is created with a different focus.

Physical | Virtual
High throughput, high density | Flexibility to reach higher scale in control plane and service plane
Guarantee of SLA | Agile, quick to start
Low power consumption per throughput | Low power consumption per control plane and service
Scale up | Scale out
Higher entry cost in $ and longer time to deploy | Lower entry cost in $ and shorter time to deploy
Distributed or centralized model | Optimal in centralized cloud-centric deployment
Well-developed network mgmt system, OSS/BSS | Same platform mgmt as physical, plus same VM mgmt as any SW on a server in the cloud
Variety of network interfaces for flexibility | Cloud-centric, Ethernet-only
Excellent price per throughput ratio | Ability to apply a pay-as-you-grow model


Type of deployments with virtual platform

[Diagram: example deployments for a virtual platform, grouped along the axes "Traditional function, 1:1 form replacement", "A whole new approach to a traditional concept", "New applications where physical is not feasible or ideal", and "Multi-function, multi-layer integration w/ routing as a plug-in".]

Examples: Route Reflector, Wireless LAN GW, Branch Router, Cloud CPE, Cloud-based VPN, Service Chaining GW, Mobile Sec GW, CPE, PE, Lab & POC, DC GW, Mobile GW, services appliances, SDN GW, Virtual Private Cloud GW.

vMX Product Overview


vMX goals

Agile and Scalable
- Scale-out elasticity by spinning up new instances
- Faster time-to-market offering

Orchestrated
- vMX treated similar to a cloud-based application
- Ability to add new services via service chaining

Leverage JUNOS and Trio
- Leverages the forwarding feature set of Trio
- Leverages the control-plane features of JUNOS


Virtual and Physical MX

[Diagram: the control plane is shared; in the data plane the Trio microcode runs as TRIO UCODE on the PFE ASIC/hardware and is cross-compiled to x86 instructions for the VFP.]

Cross-compilation creates high leverage of features between Virtual and Physical with minimal re-work.


Virtualization techniques: deployment with hypervisors

Para-virtualization (VirtIO, VMXNET3)
[Diagram: Guest VM#1 and Guest VM#2, each with an application and virtual NICs backed by VirtIO drivers; device emulation in the hypervisor (KVM, XEN, VMware ESXi); physical NICs in the physical layer.]
- Guest and hypervisor work together to make emulation efficient
- Offers flexibility for multi-tenancy, but with lower I/O performance
- NIC resource is not tied to any one application and can be shared across multiple applications
- vMotion-like functionality possible

PCI pass-through with SR-IOV
[Diagram: Guest VM#1 and Guest VM#2, each with an application and virtual NICs; PCI pass-through/SR-IOV gives the guests direct access to the physical NICs, bypassing the hypervisor's device emulation.]
- Device drivers exist in user space
- Best for I/O performance, but has a dependency on NIC type
- Direct I/O path between NIC and user-space application, bypassing the hypervisor
- vMotion-like functionality not possible
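To make the two attachment models concrete, here is a minimal sketch using the libvirt Python bindings to attach a NIC to a KVM guest either as a para-virtualized VirtIO device or as an SR-IOV virtual function. The guest name, bridge name and PCI address are illustrative assumptions, not values from this deck.

# Minimal sketch: attach a NIC to a running KVM guest either as a VirtIO
# device or as an SR-IOV virtual function (PCI pass-through).
# Assumes libvirt-python is installed; all names/addresses are examples only.
import libvirt

VIRTIO_NIC_XML = """
<interface type='bridge'>
  <source bridge='br-ext'/>      <!-- host bridge (example name) -->
  <model type='virtio'/>         <!-- para-virtualized VirtIO model -->
</interface>
"""

SRIOV_VF_XML = """
<interface type='hostdev' managed='yes'>
  <!-- PCI address of an SR-IOV virtual function on the host (example) -->
  <source>
    <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
  </source>
</interface>
"""

def attach_nic(domain_name: str, use_sriov: bool) -> None:
    conn = libvirt.open("qemu:///system")          # local KVM hypervisor
    try:
        dom = conn.lookupByName(domain_name)       # e.g. the VFP guest VM
        xml = SRIOV_VF_XML if use_sriov else VIRTIO_NIC_XML
        # Apply to the persistent config and the live guest (guest must be running).
        flags = libvirt.VIR_DOMAIN_AFFECT_CONFIG | libvirt.VIR_DOMAIN_AFFECT_LIVE
        dom.attachDeviceFlags(xml, flags)
    finally:
        conn.close()

if __name__ == "__main__":
    attach_nic("vfp-guest", use_sriov=True)        # SR-IOV for best I/O performance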

Virtualization techniques: deployment in containers

Containers (Docker, LXC)
[Diagram: Application 1 and Application 2, each with virtual NICs, running on a container engine (Docker, LXC) directly above the physical layer and physical NICs.]
- No hypervisor layer, so much less memory and compute resource overhead
- No need for PCI pass-through or special NIC emulation
- Offers high I/O performance
- Offers flexibility for multi-tenancy
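For contrast with the hypervisor-based options, a minimal sketch using the Docker SDK for Python; the image name, host-networking choice and capabilities are illustrative assumptions, not part of any Juniper product description.

# Minimal sketch: start an application container directly on the host
# (no hypervisor layer). Assumes the Docker SDK for Python ("docker" package).
# The image name and options are examples only.
import docker

client = docker.from_env()                     # talk to the local Docker daemon

container = client.containers.run(
    "example/forwarding-app:latest",           # hypothetical application image
    detach=True,                               # run in the background
    network_mode="host",                       # share the host NICs for high I/O performance
    cap_add=["NET_ADMIN"],                     # allow the app to manage interfaces
    name="forwarding-app-1",
)

print(container.status)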

vMX overview

- Efficient separation of control and data plane
- Data packets are switched within vTRIO
- Multi-threaded SMP implementation allows core elasticity
- Only control packets are forwarded to JUNOS
- Feature parity with JUNOS (CLI, interface model, service configuration)
- NIC interfaces (eth0) are mapped to JUNOS interfaces (ge-0/0/0)

[Diagram: the VCP guest OS (JUNOS) runs the control-plane daemons (SNMP, DCD, CHASSISD, RPD); the VFP guest OS (Linux) runs Virtual TRIO on top of the Intel DPDK and an LC kernel; both guests run on a hypervisor over x86 hardware.]

Virtual TRIO Packet Flow

[Diagram: in the VCP, rpd and chassisd run above vre0/vre1; fxp0 is the management interface and em1 (172.16.0.1) connects to the internal path. In the VFP, the VMXT microkernel runs vTRIO over DPDK; vpfe0 (eth0, 172.16.0.2) attaches to the internal bridge br-int (172.16.0.3) and vpfe1 (eth1) to the external bridge br-ext. Virtual NICs are mapped to the physical NICs.]


vMX Orchestration

- OpenStack/scripts for VM management
- Optimized data path from physical NIC to vNIC via SR-IOV (Single Root I/O Virtualization)
- vSwitch for VFP-to-VCP communication (internal host path)

[Diagram: the VFP guest VM (Linux + DPDK) and the VCP guest VM (FreeBSD) run on a KVM hypervisor; revenue traffic reaches the VFP virtual NICs via SR-IOV, while a bridge/vSwitch carries the VFP-VCP and management traffic; the physical layer provides cores, memory and physical NICs.]


vMX Performance


vMX Environment

Sample system configuration:

Description | Value
Sample system configuration | Intel Xeon E5-2667 v2 @ 3.30 GHz, 25 MB cache. NIC: Intel 82599 (for SR-IOV only)
Memory | Minimum: 8 GB (2 GB for vRE, 4 GB for vPFE, 2 GB for host OS)
Storage | Local or NAS

Sample configuration for number of CPUs:

Use case | Requirement
vMX with up to 100 Mbps performance | Min # of vCPUs: 4 (1 vCPU for VCP and 3 vCPUs for VFP). Min # of cores: 2 (1 core for VFP and 1 core for VCP). Min memory 8 GB. VirtIO NIC only.
vMX with up to 3G of performance @ 512 bytes | Min # of vCPUs: 4 (1 vCPU for VCP and 3 vCPUs for VFP). Min # of cores: 4 (2 cores for VFP, 1 core for host, 1 core for VCP). Min memory 8 GB. VirtIO or SR-IOV NIC.
vMX with 10G and beyond (assuming min 2 ports of 10G) | Min # of vCPUs: 5 (1 vCPU for VCP and 4 vCPUs for VFP). Min # of cores: 5 (3 cores for VFP, 1 core for host, 1 core for VCP). Min memory 8 GB. SR-IOV NIC only.
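A small sketch that encodes the sizing table above as data and picks the minimum vMX footprint for a target throughput; the profile names and the helper itself are illustrative, not part of the product.

# Minimal sketch: choose a vMX sizing profile from the table above.
# Thresholds and resource figures come from the sample configurations;
# the function/profile names are illustrative only.
from dataclasses import dataclass

@dataclass
class VmxProfile:
    name: str
    max_gbps: float      # upper bound of the use case
    vcpus: int           # 1 vCPU for VCP + the rest for VFP
    cores: int           # includes the host core where listed
    memory_gb: int
    nic: str

PROFILES = [
    VmxProfile("up to 100 Mbps", 0.1, vcpus=4, cores=2, memory_gb=8, nic="VirtIO only"),
    VmxProfile("up to 3 Gbps @ 512B", 3.0, vcpus=4, cores=4, memory_gb=8, nic="VirtIO or SR-IOV"),
    VmxProfile("10 Gbps and beyond", float("inf"), vcpus=5, cores=5, memory_gb=8, nic="SR-IOV only"),
]

def pick_profile(target_gbps: float) -> VmxProfile:
    for profile in PROFILES:
        if target_gbps <= profile.max_gbps:
            return profile
    return PROFILES[-1]

if __name__ == "__main__":
    p = pick_profile(2.0)
    print(f"{p.name}: {p.vcpus} vCPUs, {p.cores} cores, {p.memory_gb} GB RAM, NIC: {p.nic}")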


vMX Baseline Performance

vMX throughput in Gbps by frame size, as the number of cores used for packet processing* is scaled, for 2/4/6/8 x 10G port configurations. Values are the measured range from the lowest to the highest core count tested:

2 x 10G ports:  256B: 3.8-12.6;  512B: 3.7-19.8;  1500B: 10.7-20
4 x 10G ports:  256B: 2.1-13.3;  512B: 4.0-26;    1500B: 11.3-40
6 x 10G ports:  256B: 2.2-9.8;   512B: 4.1-27.5;  1500B: 11.5-60
8 x 10G ports:  66B: 4.8;  128B: 8.3;  256B: 14.4;  512B: 31;  1500B: 78.5;  IMIX: 35.3

*Number of cores includes cores for packet processing and associated host functionality. For each 10G port there is a dedicated core not included in this number.

vMX Performance improvement

[Diagram: today's vMX uses a higher degree of Trio ASIC emulation and therefore more x86 instructions per packet; the future vMX reduces the degree of emulation and the instructions needed per packet.]

The vMX roadmap is to reduce the number of instructions used per packet by running some parts of the forwarding plane natively on x86, without emulation. The number of instructions per packet is inversely related to packet performance: fewer instructions per packet means more packets per second per core.
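A back-of-envelope illustration of that inverse relation; the clock rate, IPC and instruction counts below are made-up example numbers, not vMX measurements.

# Back-of-envelope sketch: packet rate is inversely proportional to the
# number of x86 instructions spent per packet. All numbers are illustrative.
def packets_per_second(core_ghz: float, instructions_per_packet: int,
                       instructions_per_cycle: float = 1.0) -> float:
    cycles_per_second = core_ghz * 1e9
    return cycles_per_second * instructions_per_cycle / instructions_per_packet

def throughput_gbps(pps: float, frame_bytes: int) -> float:
    return pps * frame_bytes * 8 / 1e9

CORE_GHZ = 3.3                       # e.g. the Xeon E5-2667 v2 cited earlier
for ipp in (3000, 2000, 1500):       # hypothetical instructions per packet
    pps = packets_per_second(CORE_GHZ, ipp)
    print(f"{ipp} instr/pkt -> {pps/1e6:.2f} Mpps/core, "
          f"{throughput_gbps(pps, 512):.1f} Gbps/core @ 512B frames")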

vMX use cases and deployment models

Service Provider vMX use case: virtual PE (vPE)

Market Requirement
- Scale-out deployment scenarios
- Low-bandwidth, high control-plane-scale customers
- Dedicated PE for new services and faster time-to-market

[Diagram: branch office and SMB CPEs connect over pseudowire, L3VPN or IPsec/overlay technology through L2/L3 PEs and the provider MPLS cloud to the DC/CO gateway and DC/CO fabric hosting the vPE, with Internet and peering exits.]

vMX Value Proposition
- vMX is a virtual extension of a physical MX PE
- Orchestration and management capabilities inherent to any virtualized application apply

vMX as a DC Gateway: virtual USGW

Market Requirement
- Service providers need a gateway router to connect the virtual networks to the physical network
- The gateway should be capable of supporting the different DC overlay, DC Interconnect and L2 technologies in the DC, such as GRE, VXLAN, VPLS and EVPN

[Diagram: a non-virtualized environment (L2 ToR) and virtualized servers (VTEPs, VMs in virtual networks A and B) in the data center/central office connect through the vMX, which acts as a VPN gateway (L3VPN, VRFs A/B) towards the MPLS cloud and as a VXLAN gateway (VTEP) towards the IP ToR.]

vMX Value Proposition
- vMX supports all the overlay, DCI and L2 technologies available on MX
- Scale-out control plane to scale up VRF instances and the number of VPN routes


vMX to offer managed CPE / centralized CPE

Market Requirement
- Service providers want to offer a managed CPE service and centralize the CPE functionality to avoid truck rolls
- Large enterprises want a centralized CPE offering to manage all their branch sites
- Both SPs and enterprises want the ability to offer new services without changing the CPE device

vMX Value Proposition
- vMX with service chaining can offer best-of-breed routing and L4-L7 functionality
- Service chaining offers the flexibility to add new services in a scale-out manner

[Diagram: branch offices connect via L2 PEs and the provider MPLS cloud to the DC/CO gateway; inside the DC/CO fabric with Contrail overlay, a Contrail controller chains vMX as vCPE (IPsec, NAT), vSRX (firewall) and vMX as vPE towards the Internet.]


Reflection from physical to virtual world

Proof-of-concept lab validation or SW certification

- Perfect mirroring effect between the carrier-grade physical platform and the virtual router
- Can provide a reflection of an actual deployment in a virtual environment
- Ideal to support:
  - Proof-of-concept lab
  - New service configuration/operation preparation
  - SW release validation for an actual deployment
  - Training lab for the operational team
  - Troubleshooting environment for a real network issue
- CAPEX and OPEX reduction for the lab
- Quick turnaround when lab network scale is required

[Diagram: the virtual environment mirrors the physical deployment.]

Service Agility: bring up a new service in a POP

1. Install a new vMX to start offering a new service without impact to the existing platform.
2. Scale out the service with vMX quickly if the traffic profile fits the requirements.
3. Add the service directly to the physical MX GW, or add more physical MX, if the service is successful and there is more demand with significant traffic growth.
4. Integrate the new service into the existing PE when the service is mature.

[Diagram: a POP with vMX instances alongside the physical MX/PE, connected to L3 CPEs and the SP network for VPN service.]


vBNG: what is it?

- Runs on x86 inside a virtual machine
- Two virtual machines needed: one for forwarding and one for the control plane
- First iteration supports KVM for the hypervisor and OpenStack for orchestration
  - VMware support planned
- Based on the same code base and architecture as Juniper's vMX
- Runs Junos
  - Full featured and constantly improving
  - Some features, scale and performance of vBNG will differ from pBNG
  - Easy migration from pBNG
- Supports multiple BB models
  - vLNS
  - BNG based on PPP, DHCP, C-VLAN and PWHT connection types


Virtual BNG cluster in a data center

vMX as vBNG

[Diagram: a BNG cluster of vMX instances in the data center or CO, serving 10K-100K subscribers.]

- Potentially the BNG function can be virtualized, and vMX can help form a BNG cluster at the DC or CO (roadmap item, not at FRS)
- Suitable for heavy-load BNG control-plane work where little bandwidth is needed
- Pay-as-you-grow model
- Rapid deployment of a new BNG router when needed
- Scale-out works well due to the S-MPLS architecture; leverages Inter-Domain L2VPN, L3VPN and VPLS

vMX Route Reflector feature set

- Route Reflectors are characterized by RIB scale (available memory) and BGP performance (policy computation, route resolution, network I/O - determined by CPU speed)
- Memory drives route reflector scaling
  - Larger memory means that RRs can hold more RIB routes
  - With higher memory an RR can control larger network segments, so fewer RRs are required in the network
- CPU speed drives faster BGP performance
  - Faster CPU clock means faster convergence
  - Faster RR CPUs allow larger network segments to be controlled by one RR, so fewer RRs are required in the network
- The vRR product addresses these pain points by running a Junos image as an RR application on faster CPUs and with more memory on standard servers/appliances

Juniper vRR development strategy

vRR development follows a three-pronged approach:
1. Evolve platform capabilities using virtualization technologies
   - Allow instantiation of a Junos image on non-RE hardware
   - Any Intel Architecture blade server / server
2. Evolve Junos OS and RPD capabilities
   - 64-bit Junos kernel
   - 64-bit RPD improvements for increased scale
   - RPD modularity / multi-threading for better convergence performance
3. Evolve Junos BGP capabilities for the RR application
   - BGP resilience and reliability improvements
   - BGP monitoring protocol
   - BGP-driven application control: DDoS prevention via FlowSpec

vRR Scaling Results

Tested with a 32G vRR instance:

Address family | # of advertising peers | Active routes | Total routes | Memory utilization (to receive all routes) | Time to receive all routes | # of receiving peers | Time to advertise the routes (mem. util.)
IPv4  | 600 | 4.2 million | 42 Mil (10 paths) | 60% | 11 min | 600 | 20 min (62%)
IPv4  | 600 | 2 million   | 20 Mil (10 paths) | 33% | 6 min  | 600 | 6 min (33%)
IPv6  | 600 | 4 million   | 40 Mil (10 paths) | 68% | 26 min | 600 | 26 min (68%)
VPNv4 | 600 | 2 Mil       | 4 Mil (2 paths)   | 13% | 3 min  | 600 | 3 min (13%)
VPNv4 | 600 | 4.2 Mil     | 8.4 Mil (2 paths) | 19% | 5 min  | 600 | 23 min (24%)
VPNv4 | 600 | 6 Mil       | 12 Mil (2 paths)  | 24% | 8 min  | 600 | 36 min (32%)
VPNv6 | 600 | 6 Mil       | 12 Mil (2 paths)  | 30% | 11 min | 600 | 11 min (30%)
VPNv6 | 600 | 4.2 Mil     | 8.4 Mil (2 paths) | 22% | 8 min  | 600 | 8 min (22%)

* The convergence numbers also improve with a higher-clock CPU.


Network-based Virtual Route Reflector design

[Diagram: Junos vRRs on VMs on standard servers, with iBGP sessions to clients 1..n.]

- vRRs can be deployed in the same locations in the network as today's RRs
- Same connectivity paradigm between vRRs and clients as today's RRs and clients
- vRR instantiation and connectivity (underlay) provided by OpenStack

Cloud-based Virtual Route Reflector design

Solving the best-path selection problem for a cloud virtual route reflector

[Diagram: vRRs hosted as applications in a data center cloud (overlay with Contrail or VMware); VRR 1 serves Region 1 and VRR 2 serves Region 2, reaching the regional networks over GRE/IGP through routers R1 and R2; clients 1-3 peer with the vRRs over iBGP. VRR 1 selects paths based on R1's view, VRR 2 based on R2's view.]

- vRR as an application hosted in the DC
- The GRE tunnel is originated from gre.X (control-plane interface)
- The vRR behaves as if it were locally attached to R1 (requires resolution RIB config)

EVOLVING SERVICE DELIVERY to bring cloud properties to managed BUSINESS services

- 30 Mbps firewall
- Remote access for 40 employees
- Application reporting
- Application acceleration

"There is an app for that"


Cloud-Based CPE with vMX

- Simplify the device required on the customer premises
- Centralize key CPE functions and integrate them into the network edge (BNG / PE in the SP network)

Typical CPE functions: routing / IP forwarding, NAT, firewall, DHCP, voice, access point, switch, modem/ONT, MoCA/HPAV/HPNA3.

A simplified L2 CPE keeps only the local functions (voice, access point, switch, modem/ONT), while routing / IP forwarding, DHCP, firewall and NAT move into the network.

A simplified CPE
- Removes CPE barriers to service innovation
- Lower complexity & cost

Direct connect
- Extend reach & visibility into the home
- Per-device awareness & state
- Simplified user experience

In-network CPE functions
- Leverage & integrate with other network services
- Centralize & consolidate
- Seamlessly integrate with mobile & cloud-based services

Cloud CPE scenario A: integrated v-branch router

- L2 CPE on site (optionally with L3 awareness for QoS and assurance): an Ethernet NID or switch with smart SFP, providing LAG, VRRP, OAM, L2 filters, ...
- vCPE instance = a VPN routing instance (Cloud CPE context) on the Juniper MX edge router, providing addressing, routing, Internet & VPN, QoS, NAT, firewall, IDP and DHCP
- JS Self-Care App and NID partners complement the solution
- Statistics and monitoring per vCPE

Pros
- Simplest on-site CPE
- Limited investments
- LAN extension
- Device visibility

Cons
- Access network impact
- Limited services
- Management impact

Cloud CPE scenario B: overlay v-branch router

- Lightweight L3 CPE on site
- vCPE instance = a virtual router on a VM; VMs can be shared across sites
- (Un)secure tunnel over L2 or L3 transport between the CPE and the vCPE VMs
- Juniper Firefly and Virtual Director

Pros
- No domain constraint
- Operational isolation
- VM flexibility
- Transparent to the existing network

Cons
- Prerequisites on the CPE
- Blindsided edge
- Virtualization tax

BROADBAND DEVICE VISIBILITY

Example: parental control based on per-device policies (the CPE acts as an L2 bridge, so devices on the home network are individually visible)

- Time of day: "Internet access from this device is not permitted between 7pm and 7am. Try again tomorrow." (e.g., little Jimmy's desktop)
- Content filter: "You have tried to access www.iwishiwere21.com. This site is filtered in order to protect you."
- Activity reporting: per-device volumes and content (Facebook.com, Twitter.com, Hulu.com, Wikipedia.com, Iwishiwere21.com) via a self-care & reporting portal / mobile app


More use cases? The limit is our imagination

The virtual platform is one more tool for the network provider, and the use cases are up to users to define:
- VPC GW for private, public and hybrid cloud
- Cloud-based VPN
- NFV plug-in for multi-function consolidation
- Virtual BNG cluster
- Virtual Route Reflector
- Virtual mobile service control GW
- Distributed NFV service complex
- SW certification, lab validation, network planning & troubleshooting, proof of concept
- vGW for service chaining
- And more...


vMX FRS features


vMX Products family

Trial
- Characteristics: up to 90-day trial; no limit on capacity; inclusive of all features
- Target customer: potential customers who want to try out vMX in their lab or qualify vMX
- Availability: early availability by end of Feb 2015

Lab simulation / Education
- Characteristics: no time limit enforced; forwarding plane limited to 50 Mbps; inclusive of all features
- Target customer: customers who want to simulate a production network in the lab; new customers gaining JUNOS and MX experience
- Availability: early availability by end of Feb 2015

GA product
- Characteristics: bandwidth-driven licenses; two modes for features: BASE or ADVANCE/PREMIUM
- Availability: production deployment with vMX 14.1R6 (June 2015)


vMX FRS product

Official FRS for vMX Phase 1 is targeted for Q1 2015 with JUNOS release 14.1R6.

High-level overview of the FRS product:
- DPDK integration; min 80G throughput per vMX instance
- OpenStack integration
- 1:1 mapping between VFP and VCP
- Hypervisor support: KVM, VMware ESXi, Xen

High-level feature support for FRS:
- Full IP capabilities
- MPLS: LDP, RSVP
- MPLS applications: L3VPN, L2VPN, L2Circuit
- IP and MPLS multicast
- Tunneling: GRE, LT
- OAM: BFD
- QoS: Intel DPDK QoS feature set


vMX Roadmap


vMX with vRouter and Orchestration

vMX with vRouter integration
- VirtIO used for para-virtualized drivers

Contrail OpenStack for
- VM management
- Setting up the overlay network

NFV orchestrator (OpenStack HEAT templates)
- Used with template-based config to easily create and replicate vMX instances via the Contrail controller


Physical & Virtual MX

- Offer a scale-out model across both physical and virtual resources
- The NFV orchestrator uses template-based config (BW per instance, memory, # of WAN ports)
- Depending on the type of customer and service offering, the NFV orchestrator decides whether to provision the customer on a physical or a virtual resource

[Diagram: the NFV orchestrator and Contrail controller drive a Virtual Routing Engine over an L2 interconnect to vMX1/vMX2 virtual forwarding resources and to physical forwarding resources.]


vMX roadmap

1H2015
- Features: vMX 90-day trial and lab test/simulation platform; vMX FRS (JUNOS 14.1R6) with full IP capabilities, MPLS applications (L3VPN, L2VPN, L2Circuit), IP and MPLS multicast, tunneling (GRE, LT), OAM (BFD), Intel DPDK QoS feature set
- Hypervisor: KVM with SR-IOV and VirtIO
- Performance: max vanilla IP performance with 20 cores @ 1500 bytes: 80G; with IMIX: 36G

2H2015
- Features (post-FRS, target release 15.1Rx): L2 (bridging & IRB, VPLS, VXLAN, EVPN); inline services (jflow, IPFIX); vRR application; vMX live migration and HA architectures; vMX in CSPs (Amazon) as Virtual Private Cloud gateway; inline site-to-site IPsec
- Hypervisor: Xen; Docker & LXC for the VFP; VMware ESXi with VMXNET3 and SR-IOV
- Orchestration/Management: vMX bring-up with OpenStack utilizing HEAT templates; vMX Neutron L3 plugin; vMX working with Contrail vRouter and integration into Contrail OpenStack
- Performance: improvements for higher PPS per core (1.5-2 Mpps/core, vHypermode); vMX scale-out architectures

2016
- Features: L4-L7 feature integration - NAPT, dynamic NAT, NAT64, lw4o6, dynamic multipoint IPsec VPN
- Hypervisor: Microsoft Hyper-V
- Licensing: enhanced license-management software with on-site server or call-home functionality for vMX license management

vMX Licensing


vMX Pricing philosophy

Value-based pricing
- Priced as a platform, not just on the cost of bandwidth
- Each vMX instance is a router with its own control plane, data plane and administrative domain
- The value lies in the ability to instantiate routers easily

Elastic pricing model
- Bandwidth-based pricing
- Pay-as-you-grow model


vMX License structure

Three application packages
- BASE: basic IP routing, no VPN capabilities
- ADVANCED: same functionality as -IR mode MPCs
- PREMIUM: same functionality as -R mode MPCs

Capacity-based licensing
- Each application package offers capacity-based SKUs
- Per-instance license

Payment options
- Licenses will have a perpetual and a subscription option

Application package functionality mapping

BASE
- Functionality: IP routing with 32K IP routes in FIB; basic L2 functionality (L2 bridging and switching); no VPN capabilities (no L2VPN, VPLS, EVPN or L3VPN)
- Use cases: low-end CPE or Layer 3 gateway

ADVANCED (-IR)
- Functionality: BASE plus full IP FIB; full L2 capabilities including L2VPN, VPLS, L2Circuit; VXLAN; EVPN; IP multicast
- Use cases: L2 vPE, full IP vPE, virtual DC GW

PREMIUM (-R)
- Functionality: ADVANCED plus L3VPN for IP and multicast
- Use cases: L3VPN vPE, Virtual Private Cloud GW

Note: application packages exclude IPsec, BNG and vRR functionality.



Bandwidth License SKUs

Bandwidth-based licenses are offered for each application package at the following processing-capacity limits: 100M, 250M, 500M, 1G, 5G, 10G, 40G. Note: for 100M, 250M and 500M there is a single combined SKU with all application packages included.

100M / 250M / 500M: combined SKU (all packages)
BASE: 1G BASE, 5G BASE, 10G BASE, 40G BASE
ADVANCE: 1G ADV, 5G ADV, 10G ADV, 40G ADV
PREMIUM: 1G PRM, 5G PRM, 10G PRM, 40G PRM

Application tiers are additive, i.e. the ADV tier encompasses BASE functionality.

vMX software license SKUs

SKU | Description
VMX-100M | 100M perpetual license. Includes all features at full scale
VMX-250M | 250M perpetual license. Includes all features at full scale
VMX-500M | 500M perpetual license. Includes all features at full scale
VMX-BASE-1G | 1G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features
VMX-BASE-5G | 5G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features
VMX-BASE-10G | 10G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features
VMX-BASE-40G | 40G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features
VMX-ADV-1G | 1G perpetual license. Includes full-scale L2/L2.5 and L3 features, EVPN and VXLAN. Only 16 L3VPN instances
VMX-ADV-5G | 5G perpetual license. Includes full-scale L2/L2.5 and L3 features, EVPN and VXLAN. Only 16 L3VPN instances
VMX-ADV-10G | 10G perpetual license. Includes full-scale L2/L2.5 and L3 features, EVPN and VXLAN. Only 16 L3VPN instances
VMX-ADV-40G | 40G perpetual license. Includes full-scale L2/L2.5 and L3 features, EVPN and VXLAN. Only 16 L3VPN instances
VMX-PRM-1G | 1G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full-scale L3VPN features
VMX-PRM-5G | 5G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full-scale L3VPN features
VMX-PRM-10G | 10G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full-scale L3VPN features
VMX-PRM-40G | 40G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full-scale L3VPN features


Juniper NorthStar Controller


CHALLENGES WITH CURRENT NETWORKS

How to make the best use of the installed infrastructure?

1. How do I use my network resources efficiently?
2. How can I make my network application-aware?
3. How do I get complete & real-time visibility?


PCE ARCHITECTURE

A standards-based approach for carrier SDN

What is it?
A Path Computation Element (PCE) is a system component, application, or network node that is capable of determining and finding a suitable route for conveying data between a source and a destination.

What are the components?
- Path Computation Element (PCE): computes the path
- Path Computation Client (PCC): receives the path and applies it in the network. Paths are still signaled with RSVP-TE.
- PCE Protocol (PCEP): protocol for PCE/PCC communication

[Diagram: one PCE with PCEP sessions to multiple PCCs.]


PCE: EVOLUTIONARY APPROACH

Active stateful PCE extensions:
- Real-time awareness of LSP & network state: the PCE dynamically learns the network topology, and PCCs report the LSP state to the PCE
- LSP attribute updates: via PCEP, the PCE can update LSP bandwidth & path attributes, if the LSP is *controlled*
- Create & tear down LSPs: the PCE can *create* LSPs on the PCC ephemerally (no persistent configuration is present on the PCC)
- Harder problems offloaded from the network element: P2MP LSP path computation & P2MP tree diversity; disjoint SRC/DST LSP path diversity; multi-layer & multiple constraints



ACTIVE STATEFUL PCE

A centralized network controller

The original PCE drafts (of the mid-2000s) were mainly focused on passive, stateless PCE architectures. More recently there has been a need for a more active and stateful PCE. NorthStar is an active stateful PCE; this fits well with the SDN paradigm of a centralized network controller.

What makes an active stateful PCE different:
- The PCE is synchronized, in real time, with the network via standard networking protocols: IGP, PCEP
- The PCE has visibility into the network state: bandwidth availability, LSP attributes
- The PCE can take control and create state within the MPLS network
- The PCE dictates the order of operations network-wide

[Diagram: NorthStar both reports LSP state from, and creates LSP state in, the MPLS network.]

NORTHSTAR COMPONENTS & WORKFLOW

Open APIs over software-driven policy: Analyze, Optimize, Virtualize.

- Topology discovery: TED discovery via IGP-TE or BGP-LS; LSDB discovery (OSPF, ISIS); TE LSP discovery via PCEP
- Path computation: routing with application-specific algorithms
- State installation: PCEP to create/modify TE LSPs, one session per LER (PCC); paths are then signaled with RSVP-TE


NORTHSTAR MAJOR COMPONENTS

NorthStar consists of several major components:
- JUNOS Virtual Machine (VM)
- Path Computation Server (PCS)
- Topology server
- REST server

Component functional responsibilities:
- The JUNOS VM is used to collect the TE database & LSDB; a new JUNOS daemon, NTAD, remotely flashes the lsdist0 table to the PCS
- The PCS has multiple functions: it peers with each PCC using PCEP for LSP state collection & modification, and runs application-specific algorithms for computing LSP paths
- The REST server is the interface to the APIs

[Diagram: the JUNOS VM (RPD, NTAD) and the PCS, topology server and REST server run on a KVM hypervisor on CentOS 6.5; BGP-LS/IGP and PCEP connect NorthStar to the PCCs in the MPLS network.]

NORTHSTAR AS A BLACK-BOX

- The JunosVM is used to peer with the network for topology acquisition using: BGP-LS; a direct ISIS or OSPF adjacency; or an ISIS or OSPF adjacency over a GRE tunnel
- PCCs connect to the PCE server via PCEP for LSP reporting
- PCEP sessions are established from each LSP head-end to the PCE server

[Diagram: 3rd-party applications and the user interface reach the web server and REST server over HTTP/TCP; behind them sit the auth module, PCS, PCE server and JUNOS VM (RPD), which connect to the MPLS network via BGP-LS/IGP and PCEP.]

NORTHSTAR NORTHBOUND API

Integration with 3rd-party tools and custom applications

- Standard, custom & 3rd-party applications, plus NorthStar pre-packaged applications (bandwidth calendaring, path diversity, premium path, auto-bandwidth / TE++, etc.)
- REST APIs: topology API, path computation API, path provisioning API
- Underneath: topology discovery (IGP-TE / BGP-LS), path computation, path installation (PCEP) and application-specific algorithms
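A minimal sketch of what a client of the northbound REST interface could look like, using the Python requests library. The URL paths, port, credentials and JSON fields are illustrative placeholders, not the documented NorthStar API.

# Minimal sketch of a northbound REST client. Endpoint paths, port and
# payload fields are hypothetical placeholders for illustration only.
import requests

NORTHSTAR = "https://northstar.example.net:8443"   # controller address (example)
AUTH = ("admin", "password")                        # example credentials

# 1) Read the acquired topology (nodes/links learned via BGP-LS / IGP-TE).
topology = requests.get(f"{NORTHSTAR}/api/topology", auth=AUTH, verify=False).json()
print("nodes:", len(topology.get("nodes", [])))

# 2) Ask the controller to provision a PCE-initiated LSP between two PCCs.
lsp_request = {
    "name": "pe1-to-pe2-gold",
    "from": "10.0.0.1",          # head-end (PCC) loopback
    "to": "10.0.0.2",            # tail-end loopback
    "bandwidth": "200M",
    "diversity-group": "gold",   # e.g. request link/SRLG-diverse placement
}
resp = requests.post(f"{NORTHSTAR}/api/lsps", json=lsp_request, auth=AUTH, verify=False)
resp.raise_for_status()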


NORTHSTAR 1.0 HIGH AVAILABILITY (HA)

Active / standby for delegated LSPs

- NorthStar 1.0 supports a high-availability model only for delegated LSPs; controllers are not actively synced with each other
- Active / standby PCE model with up to 16 backup controllers
  - pce-group: all PCEs belonging to the same group
- LSPs are delegated to the primary PCE
  - The primary PCE is the controller with the highest delegation priority
  - Other controllers cannot make changes to the LSPs
  - If a PCC loses the connection to its primary PCE, it immediately uses the PCE with the next-highest delegation priority as its new primary PCE
  - ALL PCCs MUST use the same primary PCE

[Diagram: the PCC has PCEP sessions to both controllers, jnc1 (delegation priority 100) and jnc2 (delegation priority 50).]

[configuration protocols pcep]
pce-group pce {
    pce-type active stateful;
    lsp-provisioning;
    delegation-cleanup-timeout 600;
}
pce jnc1 {
    pce-group pce;
    delegation-priority 100;
}
pce jnc2 {
    pce-group pce;
    delegation-priority 50;
}


TOPOLOGY ACQUISITION: BGP-LS

Various deployment options are supported.

Using BGP-LS allows an operator to tap into all of BGP's deployment & policy flexibility to support network architectures of all types:
- Supports various inter-area and inter-domain deployment options
- Allows for fewer topology-acquisition sessions with NorthStar

[Diagram: NorthStar takes BGP-LS session(s) either from a BGP-LS speaker/hierarchy or directly from ASBRs/ABRs.]

TOPOLOGY ACQUISITION: ISIS, OSPF & GRE TUNNELING

Native protocol topology acquisition

NorthStar can also be deployed where it peers with the network via its native IGP:
- ISIS and OSPFv2 are supported
- GRE tunneling is also supported to increase deployment flexibility
- Multi-area, multi-level & multi-domain networks MAY require many IGP adjacencies & GRE tunnels

[Diagram: NorthStar forms IGP adjacencies with redundant IGP speakers, or IGP adjacencies over GRE tunnels to ASBRs/ABRs.]

cbarth@vrr-84# show interfaces gre
unit 0 {
    tunnel {
        source 84.105.199.2;
        destination 84.0.0.101;
    }
    family inet {
        address 2.2.2.2/30;
    }
    family iso;
    family mpls;
}

cbarth@vrr-84# show protocols isis
interface gre.0 {
    point-to-point;
    level 2 metric 50000;
}
interface lo0.0;

JUNOS PCE CLIENT IMPLEMENTATION

A new JUNOS daemon, pccd

- Enables a PCE application to set parameters for traditionally configured TE LSPs and to create ephemeral LSPs
- PCCD is the relay/message translator between the PCE & RPD
- LSP parameters, such as the path & bandwidth, and LSP creation instructions received from the PCE are communicated to RPD via PCCD
- RPD then signals the LSP using RSVP-TE

[Diagram: PCE <-PCEP-> PCCD <-JUNOS IPC-> RPD, which signals RSVP-TE into the MPLS network.]


NORTHSTAR SIMULATION MODE

NorthStar vs. IP/MPLSview

Capabilities compared: topology discovery, MPLS capacity planning, full offline network planning, LSP control/modification, exhaustive failure analysis, FCAPS (PM, CM, FM).

NorthStar: real-time network functions
- Dynamic topology updates via BGP-LS / IGP-TE
- Dynamic LSP state updates via PCEP
- Real-time modification of LSP attributes via PCEP (ERO, B/W, pre-emption, ...)

NorthStar Simulation: MPLS LSP planning & design
- Topology acquisition via the NorthStar REST API (snapshot)
- LSP provisioning via the REST API
- Exhaustive failure analysis & capacity planning for MPLS LSPs
- MPLS LSP design (P2MP, FRR, JUNOS configlet, ...)

IP/MPLSview: offline network planning & management
- Topology acquisition & equipment discovery via CLI, SNMP, NorthStar REST API
- Exhaustive failure analysis & capacity planning (IP & MPLS)
- Inventory, provisioning, & performance management

DIVERSE PATH COMPUTATION

Automated computation of end-to-end diverse paths

Network-wide visibility allows NorthStar to support end-to-end LSP path diversity:
- Wholly disjoint path computations; options for link, node and SRLG diversity
- Pairs of diverse LSPs with the same end-points or with different end-points
- SRLG information learned from the IGP dynamically
- Supported for PCE-created LSPs (at provisioning time) and delegated LSPs (through manual creation of a diversity group)

[Diagram: a shared-risk condition between the primary and secondary links is detected ("Warning! Shared risk") and the secondary path is re-placed so the shared risk is eliminated.]


PCE CREATED SYMMETRIC LSPS

Local association of the LSP symmetry constraint

NorthStar supports creating symmetric LSPs:
- Does not leverage GMPLS extensions for co-routed or associated bidirectional LSPs
- Unidirectional LSPs (with identical names) are created from nodeA to nodeZ and from nodeZ to nodeA
- The symmetry constraint is maintained locally on NorthStar (attribute: pair=<value>)

[Diagram: NorthStar creates the two symmetric LSPs in opposite directions.]

MAINTENANCE-MODE RE-ROUTING

Automated path re-computation, re-signaling and restoration

Automate re-routing of traffic before a scheduled maintenance window:
- Simplifies planning and preparation before and during a maintenance window
- Eliminates the risk that traffic is mistakenly affected when a node/link goes into maintenance mode
- Reduces the need for spare capacity through optimum use of the resources available during the maintenance window
- After the maintenance window has finished, paths are automatically restored to the (new) optimum path

Workflow:
1. Maintenance mode tagged: LSP paths are re-computed assuming the affected resources are not available
2. In maintenance mode: LSP paths are automatically re-signaled (make-before-break)
3. Maintenance mode removed: all LSP paths are restored to their (new) optimal path


BANDWIDTH CALENDARING

Time-based LSP provisioning

Bandwidth calendaring allows network operators to schedule the creation/deletion/modification of an LSP:
- An LSP may be scheduled for creation or deletion at some point in the future
- An LSP may be scheduled for modification at some point in the future
- B/W calendaring is built into all the LSP add/modify UIs

Example (see the sketch below):
1. The operator pre-provisions a calendar event, either through the calendaring function native to NorthStar or through the path provisioning API
2. NorthStar schedules the LSP provisioning event
3. The LSP path is calculated at the scheduled point in time and the path is provisioned in the network
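A sketch of step 1 above through the path provisioning API, again with the requests library; the endpoint path and the schedule fields are hypothetical placeholders, not the documented API.

# Minimal sketch: pre-provision a calendared LSP via the path provisioning API.
# The endpoint path and JSON fields are hypothetical placeholders.
from datetime import datetime, timedelta, timezone
import requests

NORTHSTAR = "https://northstar.example.net:8443"
AUTH = ("admin", "password")

start = datetime.now(timezone.utc) + timedelta(hours=12)      # window start (example)
end = start + timedelta(hours=4)

calendared_lsp = {
    "name": "backup-transfer-lsp",
    "from": "10.0.0.1",
    "to": "10.0.0.3",
    "bandwidth": "2G",
    "schedule": {                       # the calendar event for this LSP
        "start": start.isoformat(),
        "end": end.isoformat(),
    },
}

resp = requests.post(f"{NORTHSTAR}/api/lsps", json=calendared_lsp, auth=AUTH, verify=False)
resp.raise_for_status()
print("scheduled:", resp.status_code)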



GLOBAL CONCURRENT OPTIMIZATION

Optimized LSP placement

NorthStar enhances traffic engineering through LSP placement based on network-wide visibility of the topology and LSP parameters:
- CSPF ordering can be user-defined, i.e. the operator can select which parameters, such as LSP priority and LSP bandwidth, influence the order of placement

[Diagram: a bandwidth bottleneck causes a CSPF failure for a new low-priority LSP path request; global re-optimization finds a new path for it while the high-priority LSP stays in place.]

Net Groom:
- Triggered on demand
- The user can choose the LSPs to be optimized
- LSP priority is not taken into account
- No pre-emption

Path optimization:
- Triggered on demand or at scheduled intervals (with the optimization timer)
- Global re-optimization toward all LSPs
- LSP priority is taken into account
- Pre-emption may happen



MPLS AUTO-BANDWIDTH

Auto-bandwidth example:
1. The JUNOS PCC collects auto-bandwidth LSP statistics (B/W samples)
2. Every adjustment interval, the PCC sends a PcRpt message with an LSP bandwidth request (e.g. b/w=12m, 14m, 16m, 15m)
3. NorthStar computes a new ERO for the requested B/W
4. NorthStar sends a PcUpdate message with the new ERO & bandwidth
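The adjustment-interval logic in step 2 can be illustrated with a small sketch; the sampling period, adjustment interval and the choice of taking the maximum sample are assumptions for illustration, not the Junos auto-bandwidth implementation.

# Conceptual sketch of the auto-bandwidth adjustment loop on the PCC side.
# Sampling period, adjustment interval and max-sample policy are illustrative.
from typing import Iterable, List

ADJUST_SAMPLES = 4                      # samples per adjustment interval (example)

def adjustment_requests(samples_mbps: Iterable[float]) -> List[float]:
    """Group B/W samples into adjustment intervals and emit one request per interval."""
    requests, window = [], []
    for sample in samples_mbps:
        window.append(sample)
        if len(window) == ADJUST_SAMPLES:
            requests.append(max(window))   # request enough B/W for the peak seen
            window.clear()
    return requests

# Example run: measured LSP utilisation in Mbps, sampled periodically.
samples = [10, 12, 11, 12, 13, 14, 16, 15]
for i, bw in enumerate(adjustment_requests(samples), start=1):
    print(f"adjustment {i}: PcRpt with bandwidth request {bw} Mbps")
    # NorthStar would answer with a PcUpdate carrying a new ERO for this bandwidth.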


INTER-DOMAIN TRAFFIC-ENGINEERING

Optimal path computation & LSP placement

- LSP delegation, creation and optimization of inter-domain LSPs
- Single active PCE across domains, BGP-LS for topology acquisition
- JUNOS inter-AS requirements & constraints:
  http://www.juniper.net/techpubs/en_US/junos13.3/topics/usage-guidelines/mpls-enabling-inter-as-traffic-engineering-forlsps.html

[Diagram: one NorthStar instance performing inter-AS traffic engineering between AS 100 and AS 200, and another performing inter-area traffic engineering across Area 0, Area 1, Area 2 and Area 3.]

NORTHSTAR SIMULATION MODE

Offline network planning & modeling

NorthStar builds a near real-time network model for visualization and offline planning through dynamic topology / LSP acquisition:
- Export of topology and LSP state to NorthStar simulation mode for offline MPLS network modeling
- Add/delete links/nodes/LSPs for future network planning
- Exhaustive failure analysis, P2MP LSP design/planning, LSP design/planning, FRR design/planning
- JUNOS LSP configlet generation

[Diagram: the acquired Year-1 network is extended into Year-3 and Year-5 planning scenarios in NorthStar Simulation.]


A REAL CUSTOMER EXAMPLE: PCE VALUE

Centralized vs. distributed path computation

Up to 15% reduction in RSVP reserved B/W.

[Chart: link utilization (%) across roughly 172 links, comparing distributed CSPF with PCE centralized CSPF.]

Distributed CSPF assumptions:
- TE-LSP operational routes are used for distributed CSPF
- RSVP-TE maximum reservable BW set to 92%
- Modeling was performed with the exact operational LSP paths

Centralized path calculation assumptions:
- Convert all TE-LSPs to EROs via the PCE design action
- The objective function is min-max link utilization
- Only primary EROs & online bypass LSPs
- Modeling was performed with 100% of TE LSPs being computed by the PCE

NORTHSTAR 1.0: FRS DELIVERY

NorthStar FRS is targeted for March 23rd:
- (Beta) trials / evaluations already ongoing
- First customer wins in place

Target JUNOS releases:
- 14.2R3 Special*
- 14.2R4* / 15.1R1* / 15.2R1*
* Pending TRD process

NorthStar packaging & platform:
- Bare-metal application only; no VM support at FRS
- Runs on any x86 64-bit machine that is supported by Red Hat 6 or CentOS 6
- Single hybrid ISO for installation
- Based on Juniper SCL 6.5R3.0

Supported platforms at FRS:
- PTX (3K, 5K)
- MX (80, 104, 240/480/960, 2010/2020, vMX)
- Additional platform support in NorthStar 2.0

Recommended minimum hardware requirements:
- 64-bit dual x86 processor or dual 1.8 GHz Intel Xeon E5 family equivalent
- 32 GB RAM
- 1 TB storage
- 2 x 1G/10G network interfaces


NORTHSTAR 1.0 H/W REQUIREMENTS

- Subscription-based pricing for NorthStar
- There is no dependency on motherboard, NIC cards etc., since CentOS 6.5 is supported as the host OS; verify hardware against the CentOS 6.5 supported-hardware portal
- No vendor preference

Small (1-50 nodes)
- CPU: 64-bit dual 1.8 GHz Intel Xeon E5 family equivalent
- RAM: 16 GB
- Hard drive: 250 GB
- Network port: 1/10GE
- (CSE2k matches this spec)

Medium (50-250 nodes)
- CPU: 64-bit quad Intel Xeon processor E5520 (2.26 GHz, 8 MB L3 cache) equivalent
- RAM: 64 GB
- Hard drive: 500 GB
- Network port: 1/10GE

Large (250+ nodes)
- CPU: 64-bit quad-core Intel Xeon processor X5570 (2.93 GHz, 8 MB L3 cache) equivalent
- RAM: 128 GB
- Hard drive: 1 TB
- Network port: 1/10GE


Thank You!