
VNF Characterization with Yardstick

Ross Brattain
Dec 2017
NSB Methodology: VNF performance benchmarking

• VNF performance benchmarking in three environments:
  – Native Linux environment
  – Standalone virtualized environment
  – Managed virtualized environment (e.g. OpenStack)
• Collect KPIs: network performance KPIs, VNF KPIs and NFVi KPIs during the test
• Evaluate both scale-up and scale-out: sample performance data graphs for both scale-up and scale-out in all three environments
• Standard test framework for VNF performance in all three environments
Collecting KPIs during the test

• We sample KPIs at multiple intervals to allow for investigation into anomalies during runtime (a sampling-loop sketch follows below)
• Some KPI collection intervals are adjustable
• We collect KPIs from the traffic generators during the test
• Yardstick collects NFVi KPIs
• There is some reporting in the framework, but primarily we collect all the KPIs for analytics to process
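As a rough illustration of the sampling loop described above, the sketch below polls a collect_kpi()-style callable at a fixed interval and stores timestamped samples for later analysis. The names KpiSampler and sample_interval are hypothetical; only the collect_kpi() method name comes from the Yardstick VNF interface shown later in this deck.

import time


class KpiSampler(object):
    """Minimal sketch: poll a VNF-like object's collect_kpi() on an interval.

    'vnf' is assumed to expose collect_kpi() -> dict, as in the
    GenericVNF interface shown later; everything else is illustrative.
    """

    def __init__(self, vnf, sample_interval=1.0):
        self.vnf = vnf
        self.sample_interval = sample_interval  # adjustable, like NSB intervals
        self.samples = []

    def run(self, duration):
        end = time.time() + duration
        while time.time() < end:
            # Timestamp each sample so anomalies can be located in time
            self.samples.append((time.time(), self.vnf.collect_kpi()))
            time.sleep(self.sample_interval)
        return self.samples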

Architecture of NSB extensions to Yardstick

Note that in some cases the mgmt control-plane and the data-plane share bridges.

https://wiki.opnfv.org/display/yardstick/Yardstick+Framework+Architecture+Evolution#YardstickFrameworkArchitectureEvolution-NetworkServiceTestcase
NFVI KPI collection

• Yardstick collects NFVi KPIs
  – Current implementation for Euphrates:
    • Requires manual config in pod.yaml
    • Requires SSH access to the compute node(s)
    • Compiles collectd directly on the compute node(s)
    • The plan is to create a collectd Docker image for standardized NFVi KPI collection
    • http://docs.opnfv.org/en/stable-euphrates/submodules/yardstick/docs/testing/user/userguide/13-nsb_operation.html#collectd-kpis
  – The future plan is to use Barometer-enabled installers or collectd in a Docker container
• We want to be able to correlate NFVi KPIs with individual VNFs
  – If we had core pinning we could match VNF vCPUs to NFVi cores (a sketch of that mapping follows below)
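To illustrate the vCPU-to-core correlation idea, here is a hedged sketch that parses the output of virsh vcpupin <domain> on a compute node into a vCPU-to-host-CPU map, which could then be joined with per-core collectd metrics. The exact output format varies across libvirt versions, so the parsing below is an assumption, not NSB code.

import subprocess


def vcpu_pinning_map(domain):
    """Sketch: map guest vCPU -> host CPU affinity via 'virsh vcpupin'.

    Assumes lines like ' 0: 0-3' or ' 0      0-3'; libvirt output
    formats differ between versions, so treat this as illustrative.
    """
    out = subprocess.check_output(["virsh", "vcpupin", domain],
                                  universal_newlines=True)
    pinning = {}
    for line in out.splitlines():
        parts = line.replace(":", " ").split()
        if len(parts) == 2 and parts[0].isdigit():
            pinning[int(parts[0])] = parts[1]  # e.g. {0: '0-3'}
    return pinning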

KPIs

Using collectd plugins to collect KPIs; this is covered by the Barometer project. A sketch of querying one of these values follows below.

• Plugins
  – Libvirt
  – DPDK Stats
  – DPDK-Events
  – OvS Events
  – Vswitch-stats
  – Huge pages
  – RAM Memory
  – Intel PMU plugin
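As an illustration of pulling one of these KPIs programmatically: if the collectd unixsock plugin is enabled (socket path assumed below), a GETVAL query over collectd's text protocol returns the current value of a metric. This is the generic collectd protocol, not NSB's own collection path, and the identifier format shown is illustrative.

import socket


def collectd_getval(identifier, sock_path="/var/run/collectd-unixsock"):
    """Sketch: query one value via collectd's unixsock text protocol.

    identifier looks like 'host/plugin-instance/type-instance', e.g.
    'compute_0/cpu-15/percent-idle' (exact naming depends on the plugin).
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    s.sendall(('GETVAL "%s"\n' % identifier).encode())
    f = s.makefile()
    status = f.readline()            # e.g. '1 Value found'
    nvalues = int(status.split()[0])
    values = [f.readline().strip() for _ in range(nvalues)]
    s.close()
    return values                    # e.g. ['value=9.768908e+01']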

PMU stats

"data": {
  "compute_0": {
    "core": {
      "cpu": {
        "15": {
          "ipc": "0.104876976287796",
          "counter-instructions": "203146046",
          "counter-cpu-cycles": "450481129",
          "counter-bus-cycles": "25634247",
          "counter-cpu-clock": "4917450931",
          "counter-task-clock": "4916543925",
          "counter-branches": "39651833",
          "counter-branch-loads": "40227471",
          "counter-branch-load-misses": "2467117",
          "counter-branch-misses": "2415651",
          "counter-cache-references": "5927861",
          "counter-cache-misses": "199009",
          "counter-LLC-loads": "1097597",
          "counter-LLC-load-misses": "40730",
          "counter-LLC-stores": "489707",
          "counter-LLC-store-misses": "36495",
          "counter-L1-dcache-loads": "56572765",
          "counter-L1-dcache-load-misses": "4536261",
          "counter-L1-dcache-stores": "46462967",
          "counter-L1-icache-load-misses": "2939731",
          "counter-dTLB-loads": "45401019",
          "counter-dTLB-load-misses": "306755",
          "counter-dTLB-stores": "43919141",
          "counter-dTLB-store-misses": "58636",
          "counter-iTLB-loads": "82729",
          "counter-iTLB-load-misses": "215057",
          "counter-context-switches": "678",
          "counter-cpu-migrations": "0",
          "counter-page-faults": "866",
          "counter-minor-faults": "866",
          "counter-major-faults": "0",
          "counter-alignment-faults": "0",
          "counter-emulation-faults": "0",
          "memory_bandwidth-local": "11182080",
          "memory_bandwidth-remote": "2179072",
          "bytes-llc": "3211264",
          "percent-user": "1.68067226890756",
          "percent-system": "0.420168067226891",
          "percent-softirq": "0.210084033613445",
          "percent-interrupt": "0",
          "percent-nice": "0",
          "percent-steal": "0",
          "percent-wait": "0",
          "percent-idle": "97.6890756302521"
        }
      }
    }
  }
}
Different contexts for testing VNFs

[Stack diagrams comparing the three contexts:]
• Baremetal Linux: the network application runs as a user-space application on the host (DPDK stack with pmd and socket i/f, igb_uio, host kernel Eth driver, NIC); a multiprocess network application is also possible.
• Stand-alone virtualized: the VNF application plus control-plane/management applications run in a VM under qemu/KVM, with virtIO or igb_uio in the guest and OVS (br-int, br-eth) or IOMMU passthrough on the host.
• Heat: a network application with multiple VMs on an OpenStack compute host, with the same guest-side DPDK/virtIO stack and host-side KVM, OVS and IOMMU.

All contexts also include a 1GbE internal management NIC (Int Mgmt).
Heat context changes required for VNF characterization

• Multiple ports per network
  – Match the Neutron topology to the test topology
  – Specify ports per network; otherwise the default is a full mesh
• Overriding the Heat IP address
  – Some traffic profiles are static and cannot adjust their IP addresses to match the Heat subnet
• Disabling Neutron port security
  – We want raw L2 access for testing; we don't want Neutron to filter L2 or L3
• Disabling gateway_ip
  – Otherwise we can't SSH into the VNF due to multiple default routes
• Disabling DHCP
  – Sometimes we override the Neutron IP addresses with static traffic-profile IP addresses, so we don't need or want DHCP on the port
• Provider network support
  – SR-IOV
3-Node setup - Correlated Traffic

+----------+              +----------+              +------------+
|          |              |          |              |            |
|          |              |          |              |            |
|          | (0)----->(0) |          |              |    UDP     |
|   TG1    |              |   DUT    |              |   Replay   |
|          |              |          |              |            |
|          |              |          | (1)<---->(0) |            |
+----------+              +----------+              +------------+
trafficgen_0                  vnf                    trafficgen_1
Limiting ports per network (3-node correlated traffic)

The VNF is connected to both uplink_0 and downlink_0; the traffic generators are connected to a single data network each.

servers:
  vnf_0:
    floating_ip: true
    placement: "pgrp1"
    network_ports:
      mgmt:
        - mgmt
      uplink_0:
        - xe0
      downlink_0:
        - xe1
  tg_0:
    floating_ip: true
    placement: "pgrp1"
    network_ports:
      mgmt:
        - mgmt
      uplink_0:
        - xe0
      # Trex always needs two ports
      uplink_1:
        - xe1
  tg_1:
    floating_ip: true
    placement: "pgrp1"
    network_ports:
      mgmt:
        - mgmt
      downlink_0:
        - xe0

networks:
  mgmt:
    cidr: '10.0.1.0/24'
  uplink_0:
    cidr: '10.0.2.0/24'
    gateway_ip: 'null'
    port_security_enabled: False
    enable_dhcp: 'false'
  downlink_0:
    cidr: '10.0.3.0/24'
    gateway_ip: 'null'
    port_security_enabled: False
    enable_dhcp: 'false'
  uplink_1:
    cidr: '10.0.4.0/24'
    gateway_ip: 'null'
    port_security_enabled: False
    enable_dhcp: 'false'
Heat correlated scale-4

tg_0 and tg_1 are connected to all VNFs. The VNFs are only connected to tg_0 and tg_1.
Overriding Heat allocated IP address

In this case we have a traffic profile with a fixed source and destination IP, so we need to override the IP address provided by Heat. We can't just adjust the Heat subnet, because we need a specific IP address from that subnet. Ultimately the Heat subnet is not really necessary: with raw L2 ports and no port security, we can spoof any IP address for testing purposes.

vnf_0:
  floating_ip: true
  placement: "pgrp1"
  network_ports:
    mgmt:
      - mgmt
    uplink_0:
      - xe0:
          local_ip: "10.44.0.20"
          netmask: "255.255.255.0"
    downlink_0:
      - xe1:
          local_ip: "10.44.0.30"
          netmask: "255.255.255.0"

networks:
  mgmt:
    cidr: '10.0.1.0/24'
  uplink_0:
    cidr: '10.0.2.0/24'
    gateway_ip: 'null'
    port_security_enabled: False
    enable_dhcp: 'false'
  downlink_0:
    cidr: '10.0.3.0/24'
    gateway_ip: 'null'
    port_security_enabled: False
    enable_dhcp: 'false'
  uplink_1:
    cidr: '10.0.4.0/24'
    gateway_ip: 'null'
    port_security_enabled: False
    enable_dhcp: 'false'

https://gerrit.opnfv.org/gerrit/#/c/45393/
SR-IOV Provider networks

Yardstick should support this type of network setup, but we haven't been able to test it yet due to the complexity of deployment. Yardstick needs help testing this network topology.

You can specify SR-IOV provider networks with the following:

networks:
  test-net:
    cidr: '192.168.1.0/24'
    provider: "sriov"
    physical_network: 'physnet1'

https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/network_functions_virtualization_planning_guide/ch_sriov-planning
Scale-out

Scale-out with multiple adapters and VNFs.
NSB CG-NAT Test (SR-IOV) Configuration

Multi-VM: 2x 10GbE per VM, 10 VMs. The UDP Replay system swaps the IPv4 and MAC addresses.

[Diagram: an IXIA traffic generator sends traffic with private IPv4 addresses on one side and public IPv4 addresses on the other, through Intel Ethernet X710-DA4 NICs into the DUT; SR-IOV VFs feed VM1 through VM10 running under Qemu/KVM.]

Note: each NIC port is configured with 2 Rx queues and 3 Tx queues for load balancing.
Standardized framework

• Yardstick is designed to support multiple contexts
  – We added a k8s context
    • Mixed-context environment, e.g. k8s-to-VM ping test
• Auto-detection
• Traffic generators work in all contexts
Traffic generator tests

• RFC 2544
  – Collecting KPIs during the test (a binary-search sketch follows below)
• Correlated traffic
  – Using UDP_replay
• RFC 3511
  – Using IxLoad
    • Concurrent connections (1B, 4K, 64K, 256K & 1024K transactions)
    • Connections per second (1B, 4K, 64K, 256K & 1024K transactions)
    • HTTP throughput (1B, 4K, 64K, 256K & 1024K transactions)
    • HTTP transactions per second (1B, 4K, 64K, 256K & 1024K transactions)
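RFC 2544 throughput is typically found with a binary search for the highest offered load that stays within the allowed drop rate. The sketch below shows that idea in isolation; send_at_rate() is a hypothetical hook standing in for the traffic generator integration, and the convergence precision is arbitrary.

def rfc2544_throughput(send_at_rate, line_rate, allowed_drop_rate=0.0001,
                       precision=0.01):
    """Sketch of an RFC 2544 binary search for low-drop throughput.

    send_at_rate(rate) is a hypothetical callable returning the measured
    drop ratio at the offered rate; it stands in for the real traffic
    generator integration.
    """
    low, high = 0.0, line_rate
    best = 0.0
    while high - low > precision * line_rate:
        rate = (low + high) / 2.0
        drop_ratio = send_at_rate(rate)
        if drop_ratio <= allowed_drop_rate:
            best = rate      # passed: try a higher rate
            low = rate
        else:
            high = rate      # failed: back off
    return best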

Traffic generator OpenStack Network Topology

This is one possible network topology; it illustrates the complexity imposed by OpenStack. The topology cannot be defined by Yardstick; Yardstick can only use the existing topology created by the installer. Yardstick doesn't have auto-discovery of network topologies because we support multiple contexts.

In this case, for the traffic generator to reach both VNFs (on VLAN 101 and VLAN 102), its port needs to be trunked and the generator must insert its own VLAN tags. We use provider networks to bypass the Neutron router bottleneck.
Testcase → Topology → VNF descriptor

The YAML files are linked by relative pathnames.

Testcase:

schema: yardstick:task:0.1
scenarios:
- type: NSPerf
  topology: vpe_vnf_topology.yaml

Topology (vpe_vnf_topology.yaml):

nsd:nsd-catalog:
  nsd:
  - id: VPE
    name: VPE
    short-name: VPE
    description: scenario with VPE,L3fwd and VNF
    constituent-vnfd:
    - member-vnf-index: '1'
      vnfd-id-ref: tg__0
      VNF model: ../../vnf_descriptors/tg_rfc2544_tpl.yaml
    - member-vnf-index: '2'
      vnfd-id-ref: vnf__0
      VNF model: ../../vnf_descriptors/vpe_vnf.yaml

VNF descriptor (vpe_vnf.yaml):

vnfd:vnfd-catalog:
  vnfd:
  - id: VpeApproxVnf
    name: VpeVnfSshIntel
    short-name: VpeVnf
    description: vPe approximation using DPDK
    vdu:
    - id: vpevnf-baremetal
      name: vpevnf-baremetal
      description: vpe approximation using DPDK
      external-interface:
      - name: xe0
        virtual-interface:
          type: PCI-PASSTHROUGH
        vnfd-connection-point-ref: xe0
      routing_table: {{ routing_table }}
Sample VNF Descriptor

vnfd:vnfd-catalog:
  vnfd:
  - id: AclApproxVnf            # Python class name
    name: AclVnfSshIntel
    short-name: AclVnf
    description: ACL approximation using DPDK
    mgmt-interface:
      vdu-id: aclvnf-baremetal
      user: '{{user}}'
      password: '{{password}}'
      ip: '{{ip}}'
      key_filename: '{{key_filename}}'
    connection-point:
    - name: xe0
      type: VPORT
    - name: xe1
      type: VPORT
    vdu:
    - id: aclvnf-baremetal
      name: aclvnf-baremetal
      description: ACL approximation using DPDK
      external-interface:
      - name: xe0
        virtual-interface:
          type: PCI-PASSTHROUGH
          vpci: '{{ interfaces.xe0.vpci }}'
          local_ip: '{{ interfaces.xe0.local_ip }}'
          dst_ip: '{{ interfaces.xe0.dst_ip }}'
          local_mac: '{{ interfaces.xe0.local_mac }}'
          netmask: '{{ interfaces.xe0.netmask }}'
          dst_mac: '{{ interfaces.xe0.dst_mac }}'
        vnfd-connection-point-ref: xe0
      routing_table: {{ routing_table }}
    benchmark:
      kpi:
      - packets_in
      - packets_fwd
      - packets_dropped

The external-interface list is the important area: it defines the network interface information presented to the VNF application and the traffic generator. We do not have a defined schema, but the minimum requirements are local/remote MAC addresses and IP addresses. VPCI is needed for DPDK binding/unbinding, although VPCI can be auto-detected on Linux nodes.

The mgmt-interface is required for the SSH connection to the VNF. password and key_filename must be Python None values in order for the Paramiko SSH library to use its default SSH auth scheme: Paramiko will check for SSH private keys in the regular locations, then fall back to password auth if provided.
Topology Example

nsd:nsd-catalog:
  nsd:
  - id: 3tg-topology
    name: 3tg-topology
    short-name: 3tg-topology
    description: 3tg-topology
    constituent-vnfd:
    - member-vnf-index: '1'
      vnfd-id-ref: tg__0
      VNF model: ../../vnf_descriptors/tg_rfc2544_tpl.yaml
    - member-vnf-index: '2'
      vnfd-id-ref: vnf__0
      VNF model: ../../vnf_descriptors/acl_vnf.yaml
    vld:
    - id: uplink_0
      name: tg__0 to vnf__0 link 1
      type: ELAN
      vnfd-connection-point-ref:
      - member-vnf-index-ref: '1'
        vnfd-connection-point-ref: xe0
        vnfd-id-ref: tg__0
      - member-vnf-index-ref: '2'
        vnfd-connection-point-ref: xe0
        vnfd-id-ref: vnf__0
    - id: downlink_0
      name: vnf__0 to tg__0 link 2
      type: ELAN
      vnfd-connection-point-ref:
      - member-vnf-index-ref: '2'
        vnfd-connection-point-ref: xe1
        vnfd-id-ref: vnf__0
      - member-vnf-index-ref: '1'
        vnfd-connection-point-ref: xe1
        vnfd-id-ref: tg__0

Notes:
• vld:id maps to the Heat Neutron network name; xe0 maps to the Heat Neutron port name.
• Port pairs are defined as vnfd-connection-point-ref entries.
• VNF model links to the VNF descriptor YAML template.
Sample test case

scenarios:
- type: NSPerf
  traffic_profile: ../../traffic_profiles/ipv4_throughput.yaml
  topology: acl-tg-topology.yaml
  nodes:
    tg__0: tg_0.yardstick
    vnf__0: vnf_0.yardstick
  options:
    framesize:
      uplink: {64B: 100}
      downlink: {64B: 100}
    flow:
      src_ip: [{'tg__0': 'xe0'}]
      dst_ip: [{'tg__0': 'xe1'}]
      count: 1
    traffic_type: 4
    rfc2544:
      allowed_drop_rate: 0.0001 - 0.0001
    vnf__0:
      rules: acl_1rule.yaml
  runner:
    type: Iteration
    iterations: 10
    interval: 35

context:
  servers:
    vnf_0:
      floating_ip: true
      placement: "pgrp1"
      network_ports:
        mgmt:
          - mgmt
        uplink_0:
          - xe0
        downlink_0:
          - xe1
    tg_0:
      floating_ip: true
      placement: "pgrp1"
      network_ports:
        mgmt:
          - mgmt
        uplink_0:
          - xe0
        downlink_0:
          - xe1
  networks:
    mgmt:
      cidr: '10.0.1.0/24'
    uplink_0:
      cidr: '10.1.0.0/24'
      gateway_ip: 'null'
      port_security_enabled: False
      enable_dhcp: 'false'
    downlink_0:
      cidr: '10.1.1.0/24'
      gateway_ip: 'null'
      port_security_enabled: False
      enable_dhcp: 'false'

The uplink_0 network name and the xe0 port name match the values in the topology file.
Accessing the VNF descriptor from a Python class

The VNF descriptor is accessed as a Python object using the self.vnfd_helper instance variable. From vnfd_helper we can access the external-interface list and all of its fields: vpci, local_mac, dst_mac, etc.

mgmt_interface = self.vnfd_helper.mgmt_interface
self.connection = ssh.SSH.from_node(mgmt_interface)

for port in self.vnfd_helper.port_pairs.all_ports:
    vintf = self.vnfd_helper.find_interface(name=port)["virtual-interface"]
    dst_mac = vintf["dst_mac"]
    dst_ip = vintf["dst_ip"]
    local_mac = vintf["local_mac"]
    local_ip = vintf["local_ip"]

The corresponding descriptor fragment:

- name: xe0
  virtual-interface:
    type: PCI-PASSTHROUGH
    vpci: '{{ interfaces.xe0.vpci }}'
    local_ip: '{{ interfaces.xe0.local_ip }}'
    dst_ip: '{{ interfaces.xe0.dst_ip }}'
    local_mac: '{{ interfaces.xe0.local_mac }}'
    netmask: '{{ interfaces.xe0.netmask }}'
    dst_mac: '{{ interfaces.xe0.dst_mac }}'
NetworkServiceTestCase Setup

def setup(self):
    # 1. Verify if infrastructure mapping can meet topology
    self.map_topology_to_infrastructure(self.context_cfg, self.topology)
    # 1a. Load VNF models
    self.vnfs = self.load_vnf_models(self.scenario_cfg, self.context_cfg)
    # 1b. Fill traffic profile with information from topology
    self.traffic_profile = self._fill_traffic_profile(self.scenario_cfg,
                                                      self.context_cfg)
    # 2. Provision VNFs
    for vnf in self.vnfs:
        LOG.info("Instantiating %s", vnf.name)
        vnf.instantiate(self.scenario_cfg, self.context_cfg)
    # 3. Run experiment
    # Start listeners first to avoid losing packets
    traffic_runners = [vnf for vnf in self.vnfs if vnf.runs_traffic]
    for traffic_gen in traffic_runners:
        traffic_gen.listen_traffic(self.traffic_profile)
    # register collector with yardstick for KPI collection.
    self.collector = Collector(self.vnfs, self.traffic_profile)
    self.collector.start()
    # Start the actual traffic
    for traffic_gen in traffic_runners:
        LOG.info("Starting traffic on %s", traffic_gen.name)
        traffic_gen.run_traffic(self.traffic_profile)
SampleVNFs

https://wiki.opnfv.org/display/SAM/Technical+Briefs+of+VNFs

• vACL
• vFW
• CGNAPT
How to characterize a new VNF

• Need an image
• Need a flavor
• Create a VNF descriptor
• Heat deploy or pre-deploy (do you want to use Heat?)
  – Heat deploy
    • How do we start the VNF after the instance is created?
  – Pre-deploy
    • How do we control the VNF once it is deployed?
• Run the test
GenericVNF abstract class

# Excerpt: GenericVNF derives from a Yardstick base class that accepts
# (name, vnfd); method bodies are elided on the slide and stubbed here.

""" Class providing file-like API for generic VNF implementation """

def __init__(self, name, vnfd):
    super(GenericVNF, self).__init__(name, vnfd)
    self.runs_traffic = False

def instantiate(self, scenario_cfg, context_cfg):
    raise NotImplementedError()  # implemented by concrete VNF classes

def wait_for_instantiate(self):
    raise NotImplementedError()

def terminate(self):
    raise NotImplementedError()

def scale(self, flavor=""):
    raise NotImplementedError()

def collect_kpi(self):
    raise NotImplementedError()
Controlling the VNF

• You can do anything Python can do
  – REST API (see the sketch below)
  – SSH
  – Raw socket
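For a VNF that exposes a management REST API, control from the test framework could look like the sketch below. The endpoint path, port and JSON payload are hypothetical; only the general pattern (HTTP POST against the VNF's mgmt IP) is the point.

import requests


def start_vnf(mgmt_ip, port=8080):
    """Sketch: start a VNF via a hypothetical management REST API."""
    url = "http://%s:%d/api/v1/start" % (mgmt_ip, port)
    resp = requests.post(url, json={"mode": "run"}, timeout=10)
    resp.raise_for_status()  # surface HTTP errors to the test
    return resp.json()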

Matching topology to VNF interfaces

• NSB will provide you with a dict listing all the ports created by Heat
• The ports will have a vld_id that matches the topology: uplink_0, downlink_0
• So you may have xe0 on vld_id downlink_0, and xe1 on vld_id uplink_0
• You need to match the MAC address of xe0 to the interface in the system with that MAC address (see the sketch below)
• We usually disable DHCP and gateway_ip on the data-plane interfaces with Heat
  – Otherwise we can get multiple gateways or default routes and can't connect to the mgmt interface on the VNF
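A minimal sketch of the MAC-matching step on a Linux guest: read each interface's address from /sys/class/net and return the name whose MAC matches the local_mac from the VNF descriptor. This is generic Linux, not NSB's own implementation.

import os


def find_iface_by_mac(local_mac):
    """Sketch: map a descriptor local_mac to a Linux interface name."""
    for name in os.listdir("/sys/class/net"):
        try:
            with open("/sys/class/net/%s/address" % name) as f:
                if f.read().strip().lower() == local_mac.lower():
                    return name
        except IOError:
            continue  # interface disappeared or has no address file
    return None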
Backup

NetworkServiceTestCase sequence diagram part 1

https://wiki.opnfv.org/display/yardstick/NSB+sequence+diagram
NetworkServiceTestCase sequence diagram part 2

https://wiki.opnfv.org/display/yardstick/NSB+sequence+diagram
SR-IOV 3-Node setup - Correlated Traffic
+--------------------+
| |
| |
| DUT |
| (VNF) |
| |
+--------------------+
| VF NIC | | VF NIC |
+--------+ +--------+
^ ^
| |
| |
+----------+ +-------------------------+ +--------------+
| | | ^ ^ | | |
| | | | | | | |
| | (0)<----->(0) | ------ | | | TG2 |
| TG1 | | SUT | | | (UDP Replay) |
| | | | | | |
| | (n)<----->(n) | ------ | (n)<-->(n) | |
+----------+ +-------------------------+ +--------------+
trafficgen_1 host trafficgen_2

SR-IOV 2-Node setup
+--------------------+
| |
| |
| DUT |
| (VNF) |
| |
+--------------------+
| VF NIC | | VF NIC |
+--------+ +--------+
^ ^
| |
| |
+----------+ +-------------------------+
| | | ^ ^ |
| | | | | |
| | (0)<----->(0) | ------ | |
| TG1 | | SUT | |
| | | | |
| | (n)<----->(n) |------------------ |
+----------+ +-------------------------+
trafficgen_1 host

The target requirement is to test Network Services consisting of multiple VNFs.

ETSI GS NFV-TST 001 V1.1.1 (2016-04)
