
NFV FOUNDATIONS WORKSHOP (2-DAY)

SEPTEMBER 03-04, 2020 CISCO INDIA VC

DHANUNJAYA VUDATHA
SENIOR SOLUTION ARCHITECT

Criterion Networks Overview
Cloud-based Enablement Solutions for Network Transformation
Accelerate Network Transformation with Criterion SDCloud®
Network Virtualization and Automation
SDDC | SD-WAN | SDA | Custom Solutions Development | Qualification | Learning Labs | Proof-of-Concept

Enablement Solution Provider
▪ Network Virtualization, Cloud and Automation Focus

Learning Labs
▪ Cisco SD-WAN (Viptela) Primer
▪ Virtual Sandboxes
▪ Self-Paced Learning Labs for hands-on learning and skills development
▪ Cisco SD-WAN Instructor-Led Workshops

PoC Labs
▪ PoV Consulting
▪ Managed PoV
▪ Migration Use-cases
www.criterionnetworks.com | 408.715.7754
Trainer Profile
Dhanunjaya Vudatha – Senior Solution Architect – Solution Development, Consultancy, Training

Key Expertise
• Provides consultancy for SDN/NFV/networking-related deployments
• Deployment knowledge of IP networking technologies, CORD, and SDN operator networks
• Deep knowledge of software and hardware architectures, gained from working on industry-leading next-generation products

Experience (20+ years)
• Voice telephony, ATM, IP networking, MPLS, optical networking, SDN/NFV
• SDN applications, NFV PoCs
• Central Office Re-architected as a Datacenter (CORD) – R-CORD, M-CORD
• Work experience: Criterion Networks, Radisys, Motorola, Samsung, Fujitsu Network Communications, C-DOT
Education

• Master's Degree, Electronic Instrumentation, NIT Warangal, AP, India
• Bachelor's Degree, Electronics and Communications Engineering, Sri Venkateswara University College of Engineering, Thirupathi, AP, India
Contact Details
•Email: dhanunjaya@criterionnetworks.com
•Phone : +1-408.715.7754
Introductions
• Please share the following for the benefit of
the rest of the class
• Your Name
• Your Experience (in the networking domain)
• SDN/NFV/Openstack exposure thus far
• Your Role in the company ( Engineering, Sales, Marketing,
Product Line, etc)

• I am taking this session from home due to the COVID-19 lockdown; kindly bear with some background noise.
Workshop Objective
Network Transformation and Essential Skills for Next Generation Network
Engineers

At the conclusion of this session you will:

• Know the challenges with traditional networking
• Understand open-source NFV
• Understand Cisco NFV
• Understand real-life NFV use cases and deployments
  • SP use case
  • Enterprise use case

Workshop Outline
Case for NFV
➢ Physical Appliance Challenges
➢ Introduction to NFV
➢ Goals of NFV
➢ How can NFV solve Challenges?
➢ Cloud computing architectures / Business Models
➢ Server and Network Virtualization
➢ Service Provider Guidelines

NFV Architecture, Functions, Interfaces
➢ ETSI NFV Reference Architecture
➢ NFV Terminology
➢ NFV Infrastructure
▪ Compute domain
▪ Hypervisor domain
▪ Infrastructure Network domain
➢ NFV Virtual Infrastructure Management (VIM)
▪ VIM – Functionality
▪ OpenStack Overview
▪ OpenStack Networking
➢ VNF
▪ VNF Interfaces
▪ VNF Design Patterns
▪ VNF Update and Upgrade
▪ VNF State Transitions
➢ VNF Life Cycle Management
▪ VNF Onboarding, Instantiation
▪ VNF Scaling, VNF Forwarding Graph
➢ NFV VNF Management (VNFM)
▪ VNFM – Functionality, VNFM Interfaces
➢ MANO – Management and Orchestration
▪ OSS/BSS Interfaces, NFV Orchestration
▪ NFVO – Functionality, NFVO Interfaces
▪ Service Graphs, VNFD and Network Service Descriptors
▪ Network Service Catalog, Open Source Orchestrators (Tacker, ONAP), TOSCA

Hands-on Lab Sessions



What is SDN and what is NFV?
Is SDN = NFV?

If not, what is the difference?

What is SDN?

What is NFV?

Do you need Big Data too?

Then what else?

Where does OpenStack fit?


SDN
SDN can range from networks made up of SDN-enabled/OpenFlow network elements with a custom OS and protocols, forming an "SDN controller" that uses OpenFlow to control and configure southbound elements. It may also encompass the orchestration, granular control, and activation of services, their flows, and logical configurations for SDN- and NFV-based network elements, including their relationships within service chains along forwarding graphs.

NFV
NFV encompasses the virtualization of Physical Network Functions (PNFs) and the orchestration of workloads (virtual functions) and resources on shared commodity hardware (COTS, racks, spine/leaf).

SDN+NFV
Service providers' virtual service delivery platforms will span from the core of the network to the customer premises, with SDN and NFV principles built in.

Therefore, SDN and NFV are two complementary technologies for network transformation.

Network Device Functionality Abstraction
Figure: Network device functionality abstraction. Management layers (OSS/BSS, NMS, EMS) sit above the device layers (management plane, control plane, data plane) of a network device.
Networking Planes – 3-Plane Architecture (Traditional)

Figure: A multi-vendor network (Vendor-1, Vendor-2, Vendor-3) with distributed control and complex operations.


Network Function Virtualization
Router | Firewall | IDS/IPS | Load Balancer
VM | VM | VM | VM
Hypervisor
Commodity Hardware (x86)
(HP, Dell, IBM, … Cisco UCS, ENCS)

COTS
Commercial off-the-shelf (COTS) refers to any product or service that is developed and marketed commercially. COTS hardware refers to general-purpose computing, storage, and networking gear that is built and sold for any use case requiring these resources; it does not require the use of proprietary hardware or software.
Virtualization
Virtualization is the technology used to run multiple operating systems (OSs) or applications on top of a single physical infrastructure by giving each of them an abstract view of the hardware. It enables these applications or OSs to run in isolation while sharing the same hardware resources.
Linux Network Interfaces



GPON Based Broadband Service Provider Network - Traditional
Figure: Traditional GPON-based broadband service provider network.
• Services: IGMP multicast (live broadcast IPTV), VoIP, HSI (high-speed Internet)
• Access network: home network (RG/ONT), 1:32 / 1:64 splitter, OLT (Adtran TA5K, Calix C7, ALU), 10/100/1000 Ethernet / MoE
• Aggregation/distribution: Q-in-Q/S-VLAN, IGMP snooping, VPLS, pseudowires; 1:1 and N:1 VLAN aggregation with DHCP/PPPoE-based subscriber access
• BNG and subscriber management: Cisco ASR 9K, ALU 7750SR
• IP/MPLS core (MX-960) carrying unicast (VoIP, HSI) and PIM multicast from the video head end; IPTV multicast over MPLS; E1/T1 to the PSTN for VoIP
• Management: TR-069, OMCI, PLOAM; OSS/BSS, NMS, SNMP, ACS, RADIUS (AAA), DHCP, NTP, policy
Virtual Service Delivery Platforms
LGI, CenturyLink, AT&T Domain2.0…

SDN Controllers | OpenStack

The Modern Networking Stack

Figure: NFV addresses L4–L7 network functions; SDN addresses L1–L3 connectivity.
SDN? Or NFV?

SDN? NFV?

NE DESIGN AND DEPLOYMENT PHILOSOPHY
- TRADITIONAL
Basic Terminology
Figure: Network device functionality abstraction. Management layers (OSS/BSS, NMS, EMS) sit above the device layers (management plane, control plane, data plane) of a network device.
Basic Terminology
Management/Policy Plane (M.P)
• Used to configure the control plane
• Monitors the device and its operation, interface counters, etc.
• CLI/SNMP/NETCONF

Control Plane (C.P)
• Runs on the switch/router CPU
• Processing speeds of thousands of packets/sec
• Processes such as STP and routing protocols

Data Plane (D.P)
• Processing speeds of millions or billions of packets/sec
• Data plane functionality such as Layer 2/3 forwarding, QoS, NetFlow, ACLs
• Dedicated ASIC, FPGA, or CPU
Traditional Network Design
Management Plane Management Plane Management Plane

Control Plane Control Plane Control Plane

Data Plane (ASIC) Data Plane (FPGA) Data Plane (NPU)

Device A Device B Device C

Figure: Today’s network design with distributed control plane and per device
management; Various Data Plane implementations
Focusing on Device C appliances
Management Plane Management Plane Management Plane

Control Plane Control Plane Control Plane


(Router) (Firewall) (DPI)

Data Plane (CPU) Data Plane (CPU) Data Plane (CPU)

Device C1 Device C2 Device C3

Figure: Low/mid-range network devices with CPU-based forwarding on different physical appliances
Physical Appliance Challenges

One Physical Node per Role

Physical Installation per site

More time-to-market or deployment

More redundant components. Increase in Complexity

Inability to handle spikes in workloads

Less flexibility and Agility


Case Study 1 – Physical Nodes Per Role/Site

Physical provisioning of nodes at Site A, Site B, and Site C:
• More time-to-market
• Less flexible and less agile
• More Opex costs
• Less resource utilization
• More time to deploy and validate
Case Study 2 – Redundant Components

Active Firewall / Spare Firewall, Active LB / Spare LB:
• Not all networks might require 99.999% uptime
• More devices, more costs
• How frequent are failures? Once a year?


Case Study 3 – Inability to Handle Peak Workloads

• So far, 2G/3G speeds were modest; recently, 4G/5G and many more mobile users!
• How many devices can be planned upfront for exponential traffic growth?
• How can we solve this problem of auto-scaling based on traffic?
Limitations of traditional networking devices- I
• Flexibility Limitations
o Vendors design and develop their equipment with a generic set of requirements and offer the functionality as a combination of
specific hardware and software.
o Hardware and Software are packaged as a unit and limited to the vendor’s implementation which restricts the choices of feature
combinations and hardware capabilities.
• Scalability Constraints
o Physical network devices have scalability limitations in both hardware and software.
o The hardware requires power and space, which can become a constraint in densely populated areas.
o On the software side, these traditional devices may not be able to keep up with the scale of changes in the data network, such as
number of routes or labels.
o Because each device is designed to handle a limited scale, the operator has very limited options aside from upgrading the device.
• Time-to-Market Challenges
o Service providers are often delayed in offering new services that meet shifting market requirements.
o Implementing new services requires upgrading the networking equipment.
o This may involve re-evaluating equipment, redesigning the network, or possibly selecting new vendors better suited to the new needs.
o The result is a longer timeline to offer new services to customers, causing loss of business and revenue.
• Manageability Issues

Limitations of traditional networking devices-II
• High Operational Costs
o Device by device provisioning, Multi-vendor purpose built distributed control
• Capacity Over-Provisioning
o Short- and long-term network capacity demands are hard to predict, and as a result networks are built with excess capacity and
are often more than 50% undersubscribed.
o Underutilized and overprovisioned networks result in lower return on investment.

Data Center Needs
► Automation
▪ Agility, the ability to dynamically instantiate networks and to disable
them when they are no longer needed

► Scalability
▪ The use of tunnels and virtual networks can contain the number of
devices in a broadcast domain to a reasonable number.

► Multipathing
▪ Application Aware Routing, SLAs, Transport Independence

► Multitenancy
▪ Hosting dozens, or even hundreds or thousands of customers or
tenants in the same physical data center has become a requirement.
▪ The data center has to provide each of its multiple tenants with their
own (virtual) network that they can manage in a manner similar to
the way that they would manage a physical network.

► Network Virtualization
► Service Insertion
A CASE FOR NFV
Before Software Defined Networking

Servers vs Networking

COMPUTE
EVOLUTION

NETWORKING
EVOLUTION
NETWORK FUNCTION VIRTUALIZATION
Evolution of Application Deployment

• Containers contain only user-space code
• Containers share the same server OS
• Container technology is OS-level virtualization!
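A quick way to see this on any host with Docker installed (the alpine image is just an example):

# Containers share the host kernel: both commands print the same kernel version
uname -r
docker run --rm alpine uname -r
# Only user space differs: the container sees Alpine's userland, not the host's
docker run --rm alpine cat /etc/os-release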
Network Function Virtualization
Router | Firewall | IDS/IPS | Load Balancer
VM | VM | VM | VM
Hypervisor
Commodity Hardware (x86)
(HP, Dell, IBM, … Cisco UCS, ENCS)

COTS
Commercial off-the-shelf (COTS) refers to any product or service that is developed and marketed commercially. COTS hardware refers to general-purpose computing, storage, and networking gear that is built and sold for any use case requiring these resources; it does not require the use of proprietary hardware or software.
Virtualization
Virtualization is the technology used to run multiple operating systems (OSs) or applications on top of a single physical infrastructure by giving each of them an abstract view of the hardware. It enables these applications or OSs to run in isolation while sharing the same hardware resources.
Hypervisor based Virtualization

Figure: Hypervisor-based virtualization. Each application runs in its own VM with a guest operating system; the VMs run on a hypervisor over the host operating system and physical server.
• 1 server: multiple apps
• Each app runs in a VM
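A minimal sketch of creating such a VM on a Linux/KVM host, assuming libvirt and virt-install are available (names, sizes, and the ISO file are illustrative):

# Each application gets its own guest OS, vCPUs, memory, and disk
sudo virt-install --name app-vm1 --memory 2048 --vcpus 2 \
     --disk size=10 --cdrom ubuntu-20.04-live-server-amd64.iso
# List all guests sharing this physical server
virsh list --all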
Containers - Docker

Comparing Containers and VMs

Figure: Containers stack apps and bins/libs on Docker over the host OS and infrastructure; VMs stack apps, bins/libs, and a guest OS on a hypervisor over the infrastructure.

CONTAINER | VM
Shared resources | Isolated resources
Lighter weight | Full OS + application
Faster installation | Several minutes to boot
No hypervisor; Linux and Windows | Hypervisor-based; no underlying OS (Type 1)
Containers are an app-level construct | VMs are an infrastructure to turn one machine into many servers
Containers and VMs Together

Monolithic apps: deployed on VMs (server/hypervisor), with dependencies, often stateful, typically built with waterfall development.
Cloud-native apps: deployed on server clusters, containers, or bare metal; microservices; easy upgrades; typically built with agile development.

Containers and VMs together provide a tremendous amount of flexibility for IT to optimally deploy and manage apps.
Docker Host Architecture
• Container runtime: containerd and runc
  • runc: low-level functionality
  • containerd: provides higher-level functionality; open source under the CNCF
  • Responsible for lifecycle management of a container
• Workflow:
  • Pull a container image (from a registry)
  • Create a container from that image
  • Initialize and run the container
  • Stop and remove the container if desired
• The Docker engine provides additional functionality, such as network libraries and support for plugins
• It provides a REST interface (IF) for automation of container operations
• The Docker CLI consumes this REST interface
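The same workflow can be exercised from the Docker CLI; under the hood each command goes through the engine's REST interface (the nginx image is only an example):

docker pull nginx:alpine                 # pull a container image from a registry
docker create --name web nginx:alpine    # create a container from that image
docker start web                         # initialize and run the container
docker stop web && docker rm web         # stop and remove it when no longer needed
# The CLI is just a REST client; the same data is available directly from the API:
curl --unix-socket /var/run/docker.sock http://localhost/containers/json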

Reasons Why Containers are Good for NFV
• Lower Overhead.
• No guest OS
• Containers have a far smaller memory footprint than virtual machines

• Startup speed.
• Virtual machine images are large because they include a complete guest operating system
• Time taken to start a new VM is largely dictated by the time taken to copy its image to the host on which it is to
run, which may take many seconds.
• By contrast, container images tend to be very small, and they can often start up in less than 50 ms.

• Reduced maintenance.
• Virtual machines contain guest operating systems, and these must be maintained, for example to apply security
patches to protect against recently discovered vulnerabilities.
• Containers require no equivalent maintenance.
• Ease of deployment.
• Containers provide a high degree of portability across operating environments
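The startup-speed difference is easy to measure on a Docker host (timings vary by system; the alpine image is an example):

# Full create/start/stop/remove cycle of a container, typically well under a second
time docker run --rm alpine /bin/true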

NFV Approach

Upgrade Strategy

NFV Drivers
• SERVICE VELOCITY
▪ Ability to launch/create a service faster (means for faster revenue generation opportunities)
▪ Automation of service launch, capacity increase
• MULTI-VENDOR & MULTI-DOMAIN SUPPORT
▪ Ability to do mix and match with the network elements/components
▪ Simpler unified provisioning
▪ Moving away from vendor lock-in
• CAPEX & OPEX REDUCTION

• INNOVATIVE BUSINESS MODELS (NFVIaaS, VNFaaS)

• NFV & SDN – 95% of operators have confirmed a roadmap (source: Infonetics Research, a telco market research and consulting firm)

• The NFV & SDN market size is projected to reach 11 billion USD over the next four years (2015 to 2020) (source: Infonetics Research)

How can NFV solve previous
challenges?

Case Study 1 – Virtual Nodes Per Role/Site

Virtual provisioning of nodes at Site A, Site B, and Site C:
• Less time-to-market
• More flexible and more agile
• Less Opex costs
• More resource utilization
• Less time to deploy and validate
Case Study 2 – No Redundant Components

Active Firewall and Active LB remain; spares are replaced by virtual devices:
• Not all networks might require 99.999% uptime
• No spare physical devices: launch a virtual device only when required
• How frequent are failures? Once a year?

Case Study 3 – Efficiently Handle Peak Workloads

• So far, 2G/3G speeds were modest; recently, 4G/5G and many more mobile users!
• Virtual devices increase or decrease as per traffic requirements (workload 1 through workload 4, around the average load)
• No need to purchase many specialized devices upfront; easy planning
Is there any need for Physical appliances?
• Some roles still require packet forwarding at line rate
• Servers are typically designed to be replaced every 3 years
• Networking devices are designed to be more robust than servers
• Easy split of management responsibilities
  • Server admins
  • Network admins
• Sales models from network vendors need to adapt

Service Provider Guidelines for SDN/NFV
Case study - ATT domain 2.0
ATT Domain 2.0 Architecture
• Network traffic increased by 150,000% between 2007 and 2015
• 60% of that traffic is Video
• IoT, Virtual and Augmented reality expected to push more !
Figure: AT&T Domain 2.0 architecture.
• Elastic network capabilities for customers, partners, and 3rd-party provider tenants of commercial clouds; tenant applications and virtual machines run in commercial cloud computing environments
• Network Function Virtualization Infrastructure: a cloud distributed where needed to optimize characteristics such as latency and cost, with control, orchestration, and management capabilities for real-time, automated operations (APIs and dynamic policy control)
• Network function software evolving from its current form, embedded in network appliances, to software (re)designed for cloud computing (virtual applications over wireless access and broadband/fiber access)
• Packet and optical transport evolving from today's integrated TCP/IP control/data-plane routers towards SDN, where controllers external to a packet switch provide the forwarding rules
Domain 2.0 Principles

Open API

Simple

Scale

Domain 2.0 Journey
• Domain 2.0 White Paper in Nov, 2013
• Announced the Domain 2.0 suppliers list
• Suppliers: Tail-f (Cisco), Ericsson, Juniper, Nokia, Metaswitch etc
• In 2014, launched User Defined Network Cloud (Network on Demand)
• Virtualizing the Mobile Packet Core (Connected Car Apps)
• Virtualized Universal Service Platform (Enterprise VOIP)
• ATT Integrated Cloud (AIC) sites across Central offices
• Total of 150 Network functions
• Virtualize 5% by 2015
• Virtualize 30% by 2016
• Virtualize 75% by 2020
• Software Development
  • ECOMP (Enhanced Control, Orchestration, Management & Policy) software, recently open-sourced to the Linux Foundation (ECOMP + OPEN-O = ONAP)
  • Orange is leveraging ECOMP for testing

ATT Domain 2.0 Virtual Function (VF) Guidelines

➢ VFs should be deployable on the AT&T Integrated Cloud (AIC) platform, which is an open-source/OpenStack-based platform.
➢ VFs should be a disaggregation of existing or new network functions into granular, smaller reusable VFs.
➢ VFs should be designed for cloud-based elasticity to scale horizontally, with both scale-up and scale-down.
➢ VFs should be agnostic of the underlying hardware and infrastructure and run without modifications on the AT&T standard OS images.
➢ VFs should be designed in a manner that allows for implementation with high availability and no single points of failure.
➢ VFs should be managed via AT&T's Software Defined Network (SDN) and application controllers and orchestration software.
➢ VF lifecycle management will be through AT&T’s Enhanced Control,
Orchestration, Management and Policy (ECOMP) framework.
➢ VFs should comply with the Domain 2.0 Security Framework and guidelines

OEM Guidelines for SDN/NFV
Case study – CISCO DNA
Cisco DNA Vision

Source: http://www.cisco.com/c/en/us/solutions/enterprise-networks/digital-network-architecture/index.html

Example: Draw a Square!

NFV Architecture
ETSI NFV Architecture

Carrier Grade NFVI

Source: http://www.cisco.com/c/dam/m/fr_fr/events/2015/cisco_day/pdf/4-ciscoday-10june2016-nfvi.pdf
vCPE (Residential/Business)
Figure: vCPE with Cisco VNFs and VNFs from multiple partners, deployed over the enterprise fabric.

NFV Concepts
• Network Function (NF): A functional building block with well-defined interfaces and well-defined functional behavior
• Virtualized Network Function (VNF): A software implementation of an NF that can be deployed on a virtualized infrastructure, replacing a vendor's specialized hardware with systems performing the same function but running on generic hardware
• VNF Set: A collection of VNFs with unspecified connectivity between them
• NFVI: The hardware and software required to deploy, manage, and execute VNFs, including compute, storage, and network
• NFVI PoP: The location of an NFVI

NFV Concepts (Cont..)

• VNF Manager(VNFM): VNF life cycle management


• Instantiation, Upgrade
• Scaling, Query, Monitoring
• Diagnostics, Healing, Termination
• Virtualized Infrastructure Manager(VIM): Management of
Computing, Network, Storage, software resources
• Network Service (NS): A composition of network functions, defined by its functional and behavioral specification
• NFV Service: A network service using NFs, at least one of which is a VNF

NFV Concepts (Cont..)
• User Services: Services offered to end customers/users/subscribers
• Deployment Behavior: Deployment resources that NFVI requires
• Number of VMs, memory, disk, images, bandwidth, latency
• Operational Behavior: VNF instance topology and life cycle operations
• Start, Stop, Pause, Migrate
• VNF Descriptor (VNFD): Deployment behavior + operational behavior
• NFV Orchestrator (NFVO): Automates network service deployment, operation, management, and coordination of VNFs and the NFVI
• VNF Forwarding Graph (VNFFG): A service chain where network connectivity is important

Network Forwarding Graph

Key Components

Cloud Infrastructure (NFVI)

Network Functions (VNF)

Orchestration System (MANO)

Traffic steering and Connectivity (SDN)

Integration to other Systems (OSS/BSS)

End-to-End Flow in the ETSI NFV Framework
Step 1.
The full view of the end-to-end topology is visible to the NFVO.

Step 2.
The NFVO instantiates the required VNFs and communicates this to the VNFM.

Step 3. The VNFM determines the number of VMs needed, as well as the resources each of them will need, and responds to the NFVO with these requirements so that the VNF creation can be fulfilled.

Step 4. Because NFVO has information about the hardware resources, it


validates if there are enough resources available for the VMs to be created. The
NFVO now needs to initiate a request to have these VMs created.

Step 5. NFVO sends request to VIM to create the VMs and allocate the
necessary resources to those VMs.
Step 6. VIM asks the virtualization layer to create these VMs.

Step 7. Once the VMs are successfully created, VIM acknowledges this back to
NFVO.
Step 8. NFVO notifies VNFM that the VMs it needs are available to bring up the
VNFs.
Step 9. VNFM now configures the VNFs with any specific parameters.
Step 10. Upon successful configuration of the VNFs, VNFM communicates to
NFVO that the VNFs are ready, configured, and available to use.
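As a rough, hedged illustration of this flow using the open-source Tacker orchestrator mentioned in the outline: the VNFD below is a minimal single-VDU descriptor, and the image, flavor, and network names are assumptions about the target OpenStack VIM.

# Onboard a VNF descriptor (VNFD) into the catalog, then instantiate a VNF from it
cat > sample-vnfd.yaml <<'EOF'
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Minimal illustrative VNFD with one VDU
topology_template:
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      properties:
        image: cirros-0.4.0
        flavor: m1.tiny
    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
        order: 0
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1
    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net_mgmt
        vendor: Tacker
EOF

tacker vnfd-create --vnfd-file sample-vnfd.yaml sample-vnfd   # VNFD into the catalog
tacker vnf-create --vnfd-name sample-vnfd vnf1                # Steps 2-9: NFVO/VNFM/VIM create the VM on OpenStack
tacker vnf-list                                               # Step 10: VNF reported ACTIVE and ready to use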
NFV – CISCO
Network Functions Virtualization Infrastructure

Orchestration and Management (MANO): NSO with vBranch/SD-WAN Core Function Pack

VNFs: Virtual Router (ISRv), Virtual Router (vEdge), Virtual Firewall (ASAv), Virtual WAN Optimization (vWAAS), Virtual Wireless LAN Controller (vWLC), and 3rd-party VNFs

Network Functions Virtualization Infrastructure Software (NFVIS)

Enterprise Network Compute System (ENCS), UCS E-Series, UCS C-Series, COTS
Extending Orchestration to the Datacenter for NFV
OSS Systems
Network Services Orchestrator (NSO) as the NFVO
VNF Manager (ESC)
Virtualized Infrastructure Manager
Physical networks, virtual networks, and compute platforms spanning branch, cloud, and remote sites

• VNFM: vEdge Cloud + other VNFs
• NFVIS: NFVIS 3.7.1
• ENCS: ENCS5104 (4-core), ENCS5406 (6-core), ENCS5408 (8-core), ENCS5412 (12-core)
• WAN edge router, firewall, VPN, DHCP, DNS servers (universal CPE)
• Consistent functionality in software across branch and cloud sites
• Simple and automated software upgrades
Overlay Networking
ETSI NFV Architecture Revisited

Shortcomings of the VLAN technology
• We can have a maximum of 4,096 VLANs; remove some administrative and pre-assigned ones, and we are left with just over 4,000 VLANs (see the quick illustration after this list).
• This becomes a problem if we have, say, 500 customers in our cloud and each of them uses about 10 VLANs: we can very quickly run out of VLANs.
• VLANs need to be configured on all the devices in the Layer 2 (Switching) domain for them to work.
• When we use VLANs, we will need to use Spanning Tree Protocol (STP) for loop protection, and thereby we lose
a lot of multipathing ability (as most multi-path abilities are L3 upwards and not so much on the Layer 2
network).
• VLANs are site-specific, and they are not generally extended between two datacenters.
• In the cloud world, where we don't care where our computing resources stay, we would like to have access to
the same networks, say for a disaster recovery (DR) kind of scenario.
• One of the methods that can alleviate some of the aforementioned problems is the use of an overlay network.
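A quick illustration of the 12-bit 802.1Q limit with iproute2 (interface names are examples):

# A VLAN sub-interface carries a 12-bit tag, so only IDs 1-4094 are usable
sudo ip link add link eth0 name eth0.100 type vlan id 100     # accepted
sudo ip link add link eth0 name eth0.5000 type vlan id 5000   # rejected: ID out of range
# VXLAN (covered below) uses a 24-bit VNI instead, allowing ~16 million segments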

What is an overlay network?
• An overlay network is a network running on top of
another network, called the underlay network.

• The different components or nodes in this kind of


network are connected using virtual links rather than
physical ones.

• The diagram shows the concept of overlay networking


between three datacenters connected by an ISP.

• An overlay network generally works by encapsulating


the data in a format that the underlay network
transports transparently.

Overlay technologies
• Generic Routing Encapsulation (GRE) is one of the first overlay technologies that existed.

• GRE encapsulates the Layer 3 payload by setting the destination to the address of the remote tunnel endpoint and then sends it
down the wire; it performs the opposite operation on the IP packet at the other end.

• This way, the underlay network sees the packet as a general IP packet and routes it accordingly.

• Virtual Extensible LAN (VXLAN) is an advancement of the VLAN technology itself.

• Number of VXLANs possible: Theoretically, this has been beefed up to 16 million VXLANs in a network, thereby giving ample
room for growth.

• Virtual tunnel endpoint (VTEP): VXLAN also supports VTEP, which can be used to create a Layer-2 overlay network atop the
Layer 3 endpoints.
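Both overlays can be created directly with iproute2; a minimal sketch, where the local/remote addresses, VNI, and interface names are placeholders:

# GRE: encapsulate traffic towards a remote tunnel endpoint
sudo ip link add gre1 type gretap local 192.0.2.1 remote 198.51.100.1
sudo ip link set gre1 up
# VXLAN: 24-bit VNI (here 100), UDP port 4789, eth0 acting as the local VTEP
sudo ip link add vxlan100 type vxlan id 100 local 192.0.2.1 remote 198.51.100.1 \
     dstport 4789 dev eth0
sudo ip link set vxlan100 up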

Figure: Two datacenters (DC-1 and DC-2) connected over VXLAN between TOR1 and TOR2. In each datacenter, TOM and JERRY (10.1.1.1 in DC-1, 10.1.1.2 in DC-2) attach through veth pairs (vnet0/vnet1, vnet2/vnet3) to a vswitch, using local tags 1, 2, and 10.
Virtual Network Inside the Server
The Virtual Machines are connected to a Virtual
Switch inside the Compute Node (or server).

The traffic is secured using virtual routers and


firewalls.

The Compute Node is connected to a Physical Switch,


which is the entry point into the physical network.

• VM to VM within the same compute node
• VM to VM between two compute nodes
• VM to external networks
• East-west vs. north-south traffic



Linux Network Interfaces



When a VM is launched…
• When a VM is launched, OpenStack creates a virtual interface and attaches it to the OVS instance on the Hypervisor through a
Linux bridge.
• The OVS instance on the Hypervisor has two bridges, br-int for communication in the Hypervisor and br-tun, which is used to
communicate with the other Hypervisors using the VXLAN tunnels.

• The OVS bridge, br-int, uses VLANs to segregate the traffic in the Hypervisors.
• These VLANs are locally significant to the Hypervisor.
• Neutron allocates a unique VNI for every virtual network.
• For any packet leaving the Hypervisor, OVS replaces the VLAN tag with the VNI in the encapsulation header.
• OVS uses local_ip from the plugin configuration as the source VTEP IP for the VXLAN packet.
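These mappings can be inspected on a compute node; a sketch assuming the standard OVS/ML2 layout described above (the config file path varies by OpenStack release):

sudo ovs-vsctl show                # br-int and br-tun, joined by patch ports
sudo ovs-ofctl dump-flows br-tun   # flows that swap the local VLAN tag for the VXLAN VNI (tunnel ID)
grep local_ip /etc/neutron/plugins/ml2/*.ini   # source VTEP IP used for outgoing VXLAN packets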

Open vSwitch Architecture

• Tap devices
• Linux bridges
• Virtual ethernet cables
• OVS bridges
• OVS patch ports

ovs-vsctl show: Prints a brief overview of the switch database configuration, including ports, VLANs, and so on
ovs-vsctl list-br: Prints a list of configured bridges
ovs-vsctl list-ports <bridge>: Prints a list of ports on the specified bridge
ovs-vsctl list interface: Prints a list of interfaces along with statistics and other data

Exercises
Step 1: Create an OVS bridge "br-test" on the network node
Step 2: Create veth pair {vnet0, vnet1} on the network node
Step 3: Create veth pair {vnet2, vnet3} on the network node
Step 4: Create two namespaces, "tom" and "jerry", on the network node
• Namespaces enable multiple instances of a routing table to co-exist within the same Linux box
• Network namespaces make it possible to separate network domains (network interfaces, routing tables, iptables) into completely separate and independent domains.
• L3 agent: the neutron-l3-agent is designed to use network namespaces to provide multiple independent virtual routers per node that do not interfere with each other or with the routing of the compute node on which they are hosted
Step 5: Add vnet1 to "tom" and vnet3 to "jerry"
Step 6: Assign IP 10.1.1.1/24 to vnet1 and vnet3 on the network node
Step 7: Repeat steps 1 to 5 on the compute node
Step 8: Assign IP 10.1.1.2/24 to vnet1 and vnet3 on the compute node
Step 9: Create a VXLAN tunnel port and add it to "br-test" on each node, using the peer node's address on the 172.16.4.0/24 network as the remote IP
Step 10: Add flows from the flows.txt file in the home directory
Step 11: Ping across the interfaces present in the tom namespace
Step 12: Ping across the interfaces present in the jerry namespace
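A command sketch for the network-node side of these steps (the compute node mirrors it with 10.1.1.2/24; the peer VTEP address on the 172.16.4.0/24 network and the contents of flows.txt come from the lab guide):

sudo ovs-vsctl add-br br-test                              # Step 1
sudo ip link add vnet0 type veth peer name vnet1           # Step 2
sudo ip link add vnet2 type veth peer name vnet3           # Step 3
sudo ip netns add tom                                      # Step 4
sudo ip netns add jerry
sudo ip link set vnet1 netns tom                           # Step 5
sudo ip link set vnet3 netns jerry
sudo ovs-vsctl add-port br-test vnet0                      # host ends attach to the bridge
sudo ovs-vsctl add-port br-test vnet2
sudo ip link set vnet0 up
sudo ip link set vnet2 up
sudo ip netns exec tom ip addr add 10.1.1.1/24 dev vnet1   # Step 6
sudo ip netns exec tom ip link set vnet1 up
sudo ip netns exec jerry ip addr add 10.1.1.1/24 dev vnet3
sudo ip netns exec jerry ip link set vnet3 up
sudo ovs-vsctl add-port br-test vxlan0 -- set interface vxlan0 type=vxlan \
     options:remote_ip=<peer-172.16.4.x-address>           # Step 9
sudo ovs-ofctl add-flows br-test flows.txt                 # Step 10
sudo ip netns exec tom ping -c 3 10.1.1.2                  # Steps 11-12 (likewise from jerry)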

OVS Management
Base commands
OVS is feature rich with different configuration commands, but the majority of your
configuration and troubleshooting can be accomplished with the following 4 commands:
• ovs-vsctl : Used for configuring the ovs-vswitchd configuration database (known as ovs-db)
• ovs-ofctl : A command line tool for monitoring and administering OpenFlow switches
• ovs-dpctl : Used to administer Open vSwitch datapaths
• ovs-appctl : Used for querying and controlling Open vSwitch daemons
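Typical invocations of each (bridge names are examples):

sudo ovs-vsctl show                  # ovs-db view: bridges, ports, interfaces
sudo ovs-ofctl dump-flows br-int     # OpenFlow tables programmed on a bridge
sudo ovs-dpctl dump-flows            # flows currently cached in the kernel datapath
sudo ovs-appctl fdb/show br-int      # query ovs-vswitchd, e.g. its MAC learning table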

Figure: oscontrol and osnetwork nodes, each with TOM and JERRY namespaces (10.1.1.1 on oscontrol, 10.1.1.2 on osnetwork) attached through veth pairs (vnet0/vnet1, vnet2/vnet3) to br-test, with the two br-test bridges connected over a VXLAN tunnel.
sudo ovs-vsctl show


sudo ovs-ofctl show br-test
sudo ovs-ofctl dump-flows br-test
sudo ip netns list
Figure: The same topology with the TOM and JERRY namespaces on oscontrol and osnetwork attached to a vswitch, and the two nodes (TOR1, TOR2) connected over VXLAN.
# ip netns exec ns1 ping -c 3 192.168.0.2
# ip netns exec ns2 ping -c 3 192.168.0.1
ETSI NFV Architecture Revisited

NFV MANO
VIM – OpenStack
