
OpenStack Networking

Overview of the networking challenges and solutions in OpenStack



Yves Fauser
Network Virtualization Platform System Engineer @ VMware

OSDC 2014, Berlin, 08-10.04

The perfect storm

http://en.wikipedia.org/wiki/File:Hurricane_Isabel_from_ISS.jpg

The perfect storm

- Very feature-rich vSwitch (tunneling, QoS, monitoring & management, automated control through OpenFlow and OVSDB)
- Part of the Linux kernel since 3.3
- OpenFlow and OVSDB (RFC 7047) are used between Open vSwitch and external controllers
- Numerous open-source and commercial controllers emerged in the last years. Examples: NOX, Beacon, Floodlight, OpenDaylight, VMware NSX, Big Controller, NEC, etc.
- OpenStack drives the need for flexible and fast network deployment models
- The OpenStack Neutron project offers a network abstraction that enables open-source projects and commercial implementations to innovate with and for OpenStack

Open vSwitch

Open vSwitch Features vs. Linux Bridge

| Feature                                                      | Open vSwitch      | Linux Bridge                            |
|--------------------------------------------------------------|-------------------|-----------------------------------------|
| MAC learning bridge                                          | X                 | X                                       |
| VLAN support (802.1Q)                                        | X (native in OVS) | using vlan                              |
| Static link aggregation (LAG)                                | X (native in OVS) | using ifenslave                         |
| Dynamic link aggregation (LACP)                              | X (native in OVS) | using ifenslave                         |
| MAC-in-IP encapsulation (GRE, VXLAN, ...)                    | X (native in OVS) | VXLAN support in 3.7+ kernel + iproute2 |
| Traffic capturing / SPAN (RSPAN with encap. into GRE)        | X (native in OVS) | using advanced traffic control          |
| Flow monitoring (NetFlow, sFlow, IPFIX, ...)                 | X (native in OVS) | e.g. using ipt_netflow                  |
| External management interfaces (OpenFlow & OVSDB)            | X (native in OVS) | -                                       |
| Multiple-table forwarding pipeline with flow-caching engine  | X (native in OVS) | -                                       |
| Performance improvements (e.g. RSS support)                  | X (native in OVS) | -                                       |

http://openvswitch.org/features/
https://github.com/homework/openvswitch/blob/master/WHY-OVS
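A minimal sketch of exercising two of the table's rows from the CLI (bridge, port and collector addresses are illustrative, not from the talk):

# Create a bridge, add an uplink, and add a VLAN access port (802.1Q, native in OVS)
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth1
ovs-vsctl add-port br0 vnet0 tag=10

# Enable NetFlow export on the bridge (flow monitoring, native in OVS)
ovs-vsctl -- set Bridge br0 netflow=@nf \
          -- --id=@nf create NetFlow targets=\"10.0.0.5:2055\"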

Open vSwitch (OVS)

[Diagram: OVS architecture on a hypervisor. In user space, ovsdb-server holds the config/state DB and exposes the configuration data interface (OVSDB, CLI, ...), while ovs-vswitchd exposes the flow data interface (OpenFlow, CLI, ...) and pushes flows into the kernel datapath. In the kernel, br-int (flow tables) connects the WEB/APP VMs, and br-tun (flow tables) holds tunnel ports that hand encapsulated traffic to the Linux IP stack + routing table (192.168.10.1). eth0 carries the MGMT traffic, eth1 the transport network.]
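A minimal sketch of how such a tunnel port is created (the local IP matches the diagram; the peer address is illustrative):

ovs-vsctl add-br br-tun
ovs-vsctl add-port br-tun gre0 -- set interface gre0 type=gre \
          options:local_ip=192.168.10.1 options:remote_ip=192.168.10.2

The GRE port has no physical NIC behind it; the encapsulated packets are handed to the Linux IP stack and routed out like any other IP traffic.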

Open vSwitch with a controller cluster

[Diagram: the same hypervisor, now managed by an external controller cluster over the MGMT network (eth0). The controllers program ovs-vswitchd via OpenFlow (TCP 6633) and ovsdb-server via OVSDB (TCP 6632). Flows and tunnel ports (to the Linux IP stack + routing table, 192.168.10.1) are pushed into br-0 (flow tables) and br-int (flow table) in the kernel; eth1 connects to the transport network; the WEB/APP VMs attach to br-int.]
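A minimal sketch of pointing OVS at such a cluster, using the ports from the diagram (the controller address is a placeholder):

ovs-vsctl set-manager tcp:192.0.2.10:6632             # OVSDB management connection
ovs-vsctl set-controller br-int tcp:192.0.2.10:6633   # OpenFlow connection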

Common misconceptions with regards to controllers

Misconception 1)
"Traffic will flow through the controller cluster until a specific flow is installed in the switch through OpenFlow"
It depends!
- Most architectures don't send any traffic to the controller (e.g. VMware NSX doesn't do it)
- In some architectures, where address space is limited (e.g. CAM/TCAM in low-end ToR switches), the controller gets the first few data packets and then installs a flow in the hardware. This is usually not the case when controlling OVS, as OVS holds the tables in the hypervisor's memory (and there is plenty!)

Misconception 2)
"The controller is a single point of failure"
- Controllers are usually deployed as scale-out clusters
- Depending on the chosen architecture, even a complete controller cluster outage doesn't affect traffic forwarding
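How OVS behaves during a controller outage is configurable per bridge; a minimal sketch (bridge name illustrative):

ovs-vsctl set-fail-mode br-int secure      # keep forwarding only on already-installed flows
ovs-vsctl set-fail-mode br-int standalone  # fall back to normal MAC-learning switching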

OpenFlow and Controller-based Networks

Multiple incarnations of SDN

http://upload.wikimedia.org/wikipedia/commons/f/f8/Blind_men_and_elephant3.jpg

So what is SDN? It depends on where you stand!

SDN defined - Control / Data plane separation

- The core concept of OpenFlow is control and data plane separation
- The purist point of view is: without a clear separation of control and data plane, one should not call one's solution an SDN solution
- There are heated debates about whether hybrid approaches qualify as being "real" SDN

[Diagram: a classic network device. The control plane (distributed protocols such as OSPF, STP, etc.) populates the data plane with forwarding entries over an internal API. The data plane is hardware-specific and bound by ASIC/TCAM limits in physical devices.]

SDN defined - Control / Data plane separation (with controller)

- The core concept of OpenFlow is control and data plane separation
- The purist point of view is: without a clear separation of control and data plane, one should not call one's solution an SDN solution
- There are heated debates about whether hybrid approaches qualify as being "real" SDN

[Diagram: an OpenFlow device. The control plane now lives in a central controller that manages the forwarding tables and populates the data plane with forwarding entries, using OpenFlow as an external southbound interface instead of an internal API. The data plane stays hardware-specific and bound by ASIC/TCAM limits in physical devices.]
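A minimal sketch of what populating the data plane from the outside looks like against OVS (bridge, port numbers and MAC are illustrative):

# Install a forwarding entry from outside the switch, then inspect the table
ovs-ofctl add-flow br0 "priority=100,in_port=1,dl_dst=fa:16:3e:00:00:01,actions=output:2"
ovs-ofctl dump-flows br0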

SDN Controllers Landscape (incomplete list)

OpenSource controllers:
- NOX (http://www.noxrepo.org): C++ and Python controllers open-sourced by Nicira; NOX was the first controller in the market
- Beacon (https://openflow.stanford.edu/display/Beacon/Home): the first Java-based controller; basis of Floodlight
- Floodlight (http://www.projectfloodlight.org): Java-based controller, focused on enabling apps to evolve independently of the control plane function; backed by Big Switch Networks engineers
- OpenDaylight (http://www.opendaylight.org): Java-based controller; a community-led, open, industry-supported framework

Commercial controllers:
- VMware NSX: commercial continuation of NOX with a focus on network virtualization using overlays
- Big Switch Networks: commercial version of the Floodlight controller with a focus on OpenFlow-controlled switch fabrics
- etc.

And a lot more @: http://yuba.stanford.edu/~casado/of-sw.html

Network Virtualization, an SDN Application

What are the key components of network virtualization?

Network Virtualization - A technical definition

Network virtualization is:
- A reproduction of physical networks:
  - Q: Do you have L2 broadcast / multicast, so apps do not need to be modified?
  - Q: Do you have the same visibility and control over network behavior?
- A fully isolated environment:
  - Q: Could two tenants decide to use the same RFC 1918 private IP space?
  - Q: Could you clone a network (IPs, MACs, and all) and deploy a second copy?
- Physical network location independent:
  - Q: Can two VMs be on the same logical L2 network while in different physical L2 networks?
  - Q: Can a VM migrate without disrupting its security policies, packet counters, or flow state?
- Physical network state independent:
  - Q: Do physical devices need to be updated when a new network/workload is provisioned?
  - Q: Does the application depend on a vendor-specific feature in the physical switch?
  - Q: If a physical device died and was replaced, would application details need to be known?

Network virtualization is NOT:
- Running network functionality in a VM (e.g., a router or load-balancer VM)

OpenStack Projects & Networking

Some of the integrated (aka core) projects:
- Dashboard (Horizon): provides the UI for the other projects
- Identity (Keystone): provides authentication and the service catalog for the other projects
- Network (Neutron): provides network connectivity
- Compute (Nova)
- Block Storage (Cinder): provides volumes
- Image repo (Glance): provides images
- Object Storage (Swift): stores images as objects

OpenStack Networking before Neutron

- Nova has its own networking service, nova-network, which was used before Neutron
- Nova-network is still present today and can be used instead of Neutron
- Nova-network does:
  - basic L2 network provisioning through Linux bridge (brctl)
  - IP address management for tenants (in a SQL DB)
  - DHCP and DNS entries configured in dnsmasq
  - firewall policies and NAT configured in iptables (nova-compute)
- Nova-network only knows 3 basic network models (a config sketch follows below):
  - Flat & FlatDHCP: direct bridging of instances to an external ethernet interface, with and without DHCP
  - VLAN-based: every tenant gets a VLAN, DHCP enabled

[Diagram: nova-api (OS, EC2, Admin), nova-console (vnc/vmrc), nova-cert, nova-consoleauth, nova-metadata, nova-scheduler, nova-volume and nova-compute around the Nova DB and message queue; nova-compute drives the hypervisor (KVM, Xen, etc.) via libvirt, XenAPI, etc.; nova-network drives the network providers (Linux bridge or OVS with brcompat, dnsmasq, iptables); nova-volume drives the volume providers (iSCSI, LVM, etc.)]
Nova-network drawbacks that led to the development of Neutron

- Nova-network is missing a well-defined API for consuming networking services (a tenant API for defining topologies and addresses)
- Nova-network only allows the 3 simple models Flat, FlatDHCP and VLAN/DHCP, all of which are limited in scale and flexibility (e.g. the max. 4094 VLAN ID limit)
- Closed solution: no ability to use network services from 3rd parties and/or to integrate with network vendors or overcome the limitations of nova-network
- No support for:
  - advanced Open vSwitch features like network virtualization (IP tunnels instead of VLANs)
  - multiple user-configurable networks per project
  - user-configurable routers (L3 devices)

OpenStack Neutron Plugin Concept

Neutron Service (Server) - Neutron Core API + API extensions:
- L2 network abstraction definition and management, IP address management
- Device and service attachment framework
- Does NOT do any actual implementation of the abstraction
- Extension API implementation is optional

Plugin API -> Vendor/User Plugin:
- Maps the abstraction to an implementation on the network (overlay, e.g. NSX, or the physical network)
- Makes all decisions about *how* a network is to be implemented
- Can provide additional features through API extensions; extensions can either be generic (e.g. L3 router / NAT) or vendor-specific
"

Core and service plugins

- Core plugins implement the core Neutron API functions (L2 networking, IPAM, ...)
- Service plugins implement additional network services (L3 routing, load balancing, firewall, VPN)
- Implementations might choose to implement relevant extensions in the core plugin itself

[Diagram: three ways of mapping the Neutron core API functions (Core, L3, FW) to plugins - (1) a single core plugin implementing Core, L3 and FW; (2) a core plugin implementing Core and L3, plus a separate FW plugin; (3) a core plugin implementing only Core, plus separate L3 and FW plugins]

OpenStack Neutron Plugin locations

# cat /etc/neutron/neutron.conf | grep "core_plugin"
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

# cat /etc/neutron/neutron.conf | grep "service_plugins"
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin

# ls /usr/share/pyshared/neutron/plugins/
bigswitch  brocade  cisco  common  embrane  hyperv  __init__.py  linuxbridge
metaplugin  midonet  ml2  mlnx  nec  nicira  openvswitch  plumgrid  ryu

# ls /usr/share/pyshared/neutron/services/
firewall  __init__.py  l3_router  loadbalancer  metering
provider_configuration.py  service_base.py  vpn

OpenStack Neutron Modular Plugins

- Before the modular plugin (ML2), every team or vendor had to implement a complete plugin, including IPAM, DB access, etc.
- The ML2 plugin separates core functions like IPAM, virtual network ID management, etc. from vendor/implementation-specific functions, and therefore saves vendors from reinventing the wheel with regard to ID management and DB access
- Existing and future non-modular plugins are called "monolithic" plugins
- ML2 calls the management of network types "type drivers", and the implementation-specific part "mechanism drivers"

[Diagram: the ML2 plugin & API extensions sit on top of a type manager loading type drivers (GRE, VXLAN, VLAN, etc.) and a mechanism manager loading mechanism drivers (Arista, Cisco, OVS, Linux bridge, etc.)]

OpenStack Neutron ML2 locations

# cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep type_drivers
# the neutron.ml2.type_drivers namespace.
# Example: type_drivers = flat,vlan,gre,vxlan
type_drivers = gre

# cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep mechanism_drivers
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = cisco,logger
mechanism_drivers = openvswitch,linuxbridge

# ls /usr/share/pyshared/neutron/plugins/ml2/drivers/
cisco  __init__.py  l2pop  mech_agent.py  mech_arista  mech_hyperv.py
mech_linuxbridge.py  mechanism_ncs.py  mech_openvswitch.py  type_flat.py
type_gre.py  type_local.py  type_tunnel.py  type_vlan.py  type_vxlan.py

Some of the Plugins available in the market (1/2)

- ML2 modular plugin
  - with support for the type drivers: local, flat, VLAN, GRE, VXLAN
  - and the following mechanism drivers: Arista, Cisco Nexus, Hyper-V Agent, L2 Population, Linux bridge, Open vSwitch Agent, Tail-f NCS
- Open vSwitch plugin - the most used (open-source) plugin today
  - supports GRE-based overlays, NAT/security groups, etc.
  - deprecation planned for the Icehouse release in favor of ML2
- Linux bridge plugin
  - limited to L2 functionality, L3, floating IPs and provider networks; no support for overlays
  - deprecation planned for the Icehouse release in favor of ML2

Some of the Plugins available in the market (2/2)

- VMware NSX (aka Nicira NVP) plugin
  - network virtualization solution with a centralized controller + Open vSwitch
- Cisco UCS / Nexus 5000 plugin
  - provisions VLANs on Nexus 5000 switches and on the UCS Fabric Interconnect, as well as on the UCS B-Series servers' network card ("Palo" adapter)
  - can use GRE and only configure OVS, but then there's no VLAN provisioning
- NEC and Ryu plugins
  - OpenFlow hop-by-hop implementations with the NEC or Ryu controller
- Other plugins include Midokura, Juniper (Contrail), Big Switch, Brocade, PLUMgrid, Embrane, Mellanox
- LBaaS service plugins from A10 and Citrix

This list can only be incomplete; please check the latest information on:
https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
http://www.sdncentral.com/openstack-neutron-quantum-plug-ins-comprehensive-list/

New Plugins / ML2 Drivers in the Icehouse Release

- New ML2 mechanism drivers:
  - mechanism driver for the OpenDaylight controller
  - Brocade ML2 mechanism driver for VDX switch clusters
- New Neutron plugins:
  - IBM SDN-VE controller plugin
  - Nuage Networks controller plugin
- Service plugins:
  - Embrane and Radware LBaaS drivers
  - Cisco VPNaaS driver
- Various:
  - VMware NSX - DHCP and metadata service

This list is incomplete; please see here for more details:
https://blueprints.launchpad.net/neutron/icehouse

Neutron OVS Agent Architecture

The following components play a role in the OVS agent architecture:
- Neutron-OVS-Agent: receives tunnel & flow setup information from the OVS plugin and programs OVS to build tunnels and to steer traffic into those tunnels
- Neutron-DHCP-Agent: sets up dnsmasq in a namespace per configured network/subnet, and enters the MAC/IP combinations in the dnsmasq DHCP lease file
- Neutron-L3-Agent: sets up iptables/routing/NAT tables (routers) as directed by the OVS plugin or the ML2 OVS mechanism driver

In most cases GRE or VXLAN overlay tunnels are used, but flat and VLAN modes are also possible.

[Diagram: a Neutron network node running the Neutron server + OVS plugin, the L3 agent (NAT & floating IPs via iptables/routing), the DHCP agent (dnsmasq instances) and the OVS agent, with br-int, br-tun and br-ex (br-ex uplinks to the WAN/Internet over an external network or VLAN); two compute nodes each running nova-compute, the OVS agent, ovsdb-server/ovs-vswitchd, br-int and br-tun, and the hypervisor with its VMs; all nodes join L2-in-L3 (GRE) tunnels across the layer 3 transport network via their IP stacks]
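A minimal sketch of the per-node agent configuration that produces this setup (option names as commonly found in Havana-era installs; the IP is illustrative):

# /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini (excerpt)
[OVS]
enable_tunneling = True
tenant_network_type = gre
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 172.16.0.11    # this node's address on the layer 3 transport network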

Using SDN controllers - VMware NSX Plugin example

- A centralized scale-out controller cluster controls all Open vSwitches in all compute and network nodes. It configures the tunnel interfaces and programs the flow tables of OVS
- The NSX L3 gateway service (scale-out) takes over the L3 routing and NAT functions
- The NSX service node relieves the compute nodes from the task of replicating broadcast, unknown unicast and multicast traffic sourced by VMs
- Security groups are implemented natively in OVS, instead of iptables/ebtables

[Diagram: the NSX controller cluster manages ovsdb-server/ovs-vswitchd over the management network on the network node (Neutron server + NVP plugin, DHCP agent with dnsmasq), on the compute nodes (nova-compute, hypervisor with VMs on br-int) and on the NSX L3 gateway + NAT and NSX service node appliances; the nodes connect through L2-in-L3 (STT) tunnels across the layer 3 transport network via br-0 and their IP stacks; the L3 gateway uplinks to the WAN/Internet]
https://www.flickr.com/photos/17258892@N05/2588347668/lightbox/

Thank You!
And have a great Conference

OSDC 2014, Berlin, 08-10.04

Backup Slides

Neutron Agent Status

This output shows the Neutron agents' status after a base installation:

# neutron agent-list
+--------------------------------------+--------------------+---------------+-------+----------------+
| id                                   | agent_type         | host          | alive | admin_state_up |
+--------------------------------------+--------------------+---------------+-------+----------------+
| 1a58601c-ff41-4dc5-914f-d37ec5761b06 | L3 agent           | os-controller | :-)   | True           |
| 416c854b-611b-42f9-b7b1-3bbe0bd840f2 | DHCP agent         | os-controller | :-)   | True           |
| 57bed0b7-55da-455a-8351-fd28e05cf1dc | Open vSwitch agent | os-controller | :-)   | True           |
| 7b1ae4e8-7bc2-480e-82a7-0eb6a02b119f | Open vSwitch agent | os-compute-1  | :-)   | True           |
| d5d27e99-ba76-4e5f-bdfe-ef7d0638a52e | Open vSwitch agent | os-compute-2  | :-)   | True           |
+--------------------------------------+--------------------+---------------+-------+----------------+

Neutron OVS Tunnel Structure

This output shows the OVS config on the OpenStack network node before any logical network has been configured:

# ovs-vsctl show
09d5b89a-600d-4da3-b761-11206456385a
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int                     # patch from the br-tun table to the br-int table
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-172.16.0.11"             # tunnel interface to the first compute node
            Interface "gre-172.16.0.11"
                type: gre
                options: {in_key=flow, local_ip="172.16.0.10", out_key=flow, remote_ip="172.16.0.11"}
        Port "gre-172.16.0.12"             # tunnel interface to the second compute node
            Interface "gre-172.16.0.12"
                type: gre
                options: {in_key=flow, local_ip="172.16.0.10", out_key=flow, remote_ip="172.16.0.12"}
    Bridge br-int
        Port patch-tun                     # patch from the br-int table to the br-tun table
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.10.2"
Neutron Internal Network Creation

Now we will create a logical L2 network, without any subnet assigned to it:

# neutron net-create Internal-Network

Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 56a76117-8910-4d85-b91d-8e6842e0a510 |
| name                      | Internal-Network                     |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1                                    |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | b1178a03969b4f638937f5a632fb547a     |
+---------------------------+--------------------------------------+

# neutron net-list
+--------------------------------------+------------------+---------+
| id                                   | name             | subnets |
+--------------------------------------+------------------+---------+
| 56a76117-8910-4d85-b91d-8e6842e0a510 | Internal-Network |         |
+--------------------------------------+------------------+---------+

Neutron Internal Subnet Creation

Now we will create a new subnet and attach it to the L2 network:

# neutron subnet-create Internal-Network --name Internal-Subnet 10.12.13.0/24
Created a new subnet:
+------------------+------------------------------------------------+
| Field            | Value                                          |
+------------------+------------------------------------------------+
| allocation_pools | {"start": "10.12.13.2", "end": "10.12.13.254"} |
| cidr             | 10.12.13.0/24                                  |
| dns_nameservers  |                                                |
| enable_dhcp      | True                                           |
| gateway_ip       | 10.12.13.1                                     |
| host_routes      |                                                |
| id               | b4c95b8b-65a4-402e-8359-69b55d6c9bf1           |
| ip_version       | 4                                              |
| name             | Internal-Subnet                                |
| network_id       | 56a76117-8910-4d85-b91d-8e6842e0a510           |
| tenant_id        | b1178a03969b4f638937f5a632fb547a               |
+------------------+------------------------------------------------+

# neutron subnet-list -c id -c cidr -c name
+--------------------------------------+---------------+-----------------+
| id                                   | cidr          | name            |
+--------------------------------------+---------------+-----------------+
| b4c95b8b-65a4-402e-8359-69b55d6c9bf1 | 10.12.13.0/24 | Internal-Subnet |
+--------------------------------------+---------------+-----------------+

# ip netns show
#

Note: The DHCP namespace will be created when the first instance boots.

Neutron External Network Creation 1/2

Now we will create an external network definition, and add an IP subnet and pool to it:

# neutron net-create External-Net --router:external=True

Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 8998c547-ff7c-45f8-884a-a6d4bcaa5de7 |
| name                      | External-Net                         |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 2                                    |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | b1178a03969b4f638937f5a632fb547a     |
+---------------------------+--------------------------------------+
!

Neutron External Network Creation 2/2

Now we will add the IP subnet and allocation pool to the external network:

# neutron subnet-create External-Net 172.16.65.0/24 \
    --allocation-pool start=172.16.65.100,end=172.16.65.150

Created a new subnet:
+------------------+----------------------------------------------------+
| Field            | Value                                              |
+------------------+----------------------------------------------------+
| allocation_pools | {"start": "172.16.65.100", "end": "172.16.65.150"} |
| cidr             | 172.16.65.0/24                                     |
| dns_nameservers  |                                                    |
| enable_dhcp      | True                                               |
| gateway_ip       | 172.16.65.1                                        |
| host_routes      |                                                    |
| id               | 16eb9d34-819f-4525-99ab-ec9358ea132f               |
| ip_version       | 4                                                  |
| name             |                                                    |
| network_id       | 8998c547-ff7c-45f8-884a-a6d4bcaa5de7               |
| tenant_id        | b1178a03969b4f638937f5a632fb547a                   |
+------------------+----------------------------------------------------+

Neutron Router Creation 1/4

Now we will create a router and connect it to the uplink (external network) we created earlier:

# neutron router-create MyRouter

Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | bda86e19-4831-4bfb-b3f4-bb79113ceab1 |
| name                  | MyRouter                             |
| status                | ACTIVE                               |
| tenant_id             | b1178a03969b4f638937f5a632fb547a     |
+-----------------------+--------------------------------------+

# neutron router-gateway-set MyRouter External-Net
Set gateway for router MyRouter

# neutron router-interface-add MyRouter Internal-Subnet
Added interface a86dfa2b-9ceb-43ba-90ea-fb67ef5c5d17 to router MyRouter.

Neutron Router Creation 2/4

The router now has a gateway on the external network and an interface in the internal subnet:

# neutron router-show MyRouter
+-----------------------+------------------------------------------------------------------------------+
| Field                 | Value                                                                        |
+-----------------------+------------------------------------------------------------------------------+
| admin_state_up        | True                                                                         |
| external_gateway_info | {"network_id": "8998c547-ff7c-45f8-884a-a6d4bcaa5de7", "enable_snat": true}  |
| id                    | bda86e19-4831-4bfb-b3f4-bb79113ceab1                                         |
| name                  | MyRouter                                                                     |
| routes                |                                                                              |
| status                | ACTIVE                                                                       |
| tenant_id             | b1178a03969b4f638937f5a632fb547a                                             |
+-----------------------+------------------------------------------------------------------------------+

# neutron router-port-list MyRouter -c fixed_ips
+---------------------------------------------------------------------------------------+
| fixed_ips                                                                             |
+---------------------------------------------------------------------------------------+
| {"subnet_id": "b4c95b8b-65a4-402e-8359-69b55d6c9bf1", "ip_address": "10.12.13.1"}    |
| {"subnet_id": "16eb9d34-819f-4525-99ab-ec9358ea132f", "ip_address": "172.16.65.100"} |
+---------------------------------------------------------------------------------------+

Neutron Router Creation 3/4

Now that the router is created and interfaces are assigned to it, we will see a new namespace:

# ip netns show
qrouter-bda86e19-4831-4bfb-b3f4-bb79113ceab1

# ip netns exec qrouter-bda86e19-4831-4bfb-b3f4-bb79113ceab1 /bin/bash

# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
10: qg-f9d1f494-7f: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:02:9a:1c brd ff:ff:ff:ff:ff:ff
    inet 172.16.65.100/24 brd 172.16.65.255 scope global qg-f9d1f494-7f
    inet6 fe80::f816:3eff:fe02:9a1c/64 scope link
       valid_lft forever preferred_lft forever
11: qr-a86dfa2b-9c: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:7b:1a:92 brd ff:ff:ff:ff:ff:ff
    inet 10.12.13.1/24 brd 10.12.13.255 scope global qr-a86dfa2b-9c
    inet6 fe80::f816:3eff:fe7b:1a92/64 scope link
       valid_lft forever preferred_lft forever

# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         172.16.65.1     0.0.0.0         UG        0 0          0 qg-f9d1f494-7f
10.12.13.0      0.0.0.0         255.255.255.0   U         0 0          0 qr-a86dfa2b-9c
172.16.65.0     0.0.0.0         255.255.255.0   U         0 0          0 qg-f9d1f494-7f

Neutron Router Creation 4/4 - OVS View

ovs-vsctl show will now show the tap interfaces to the router namespace and to the external interface:

root@os-controller:/home/localadmin# ovs-vsctl show
09d5b89a-600d-4da3-b761-11206456385a
    Bridge br-ex
        Port "qg-f9d1f494-7f"            # external router interface is patched to br-ex,
            Interface "qg-f9d1f494-7f"   # and therefore bridged out to interface eth2
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"

    .... SNIP ....

    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qr-a86dfa2b-9c"            # internal router interface is patched to br-int,
            tag: 1                       # and therefore connected to the br-int flow table
            Interface "qr-a86dfa2b-9c"
                type: internal
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.10.2"

Neutron Horizon Dashboard View

Nova Boot two Instances

Now we will boot two CirrOS instances and connect them to the virtual network we created earlier:

# nova boot --flavor 1 --image 'CirrOS 0.3.1' \
    --nic net-id=56a76117-8910-4d85-b91d-8e6842e0a510 Instance1

+--------------------------------------+--------------------------------------+
| Property                             | Value                                |
+--------------------------------------+--------------------------------------+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | CirrOS 0.3.1                         |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000b                    |

... SNIP ...

# nova boot --flavor 1 --image 'CirrOS 0.3.1' \
    --nic net-id=56a76117-8910-4d85-b91d-8e6842e0a510 Instance2

+--------------------------------------+--------------------------------------+
| Property                             | Value                                |
+--------------------------------------+--------------------------------------+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | CirrOS 0.3.1                         |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000c                    |

... SNIP ...

Neutron Horizon Dashboard View

Neutron DHCP Namespace / dnsmasq process

After the first instance was started, Neutron created the DHCP namespace and started a dnsmasq process in it:

# ip netns show
qdhcp-56a76117-8910-4d85-b91d-8e6842e0a510
qrouter-bda86e19-4831-4bfb-b3f4-bb79113ceab1

# ip netns exec qdhcp-56a76117-8910-4d85-b91d-8e6842e0a510 /bin/bash

# ip addr
... SNIP ...
12: tap383cd579-5e: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:de:5f:bf brd ff:ff:ff:ff:ff:ff
    inet 10.12.13.3/24 brd 10.12.13.255 scope global tap383cd579-5e
    inet 169.254.169.254/16 brd 169.254.255.255 scope global tap383cd579-5e
    inet6 fe80::f816:3eff:fede:5fbf/64 scope link
       valid_lft forever preferred_lft forever

# ps -ef | grep dnsmasq
nobody   16209     1  0 22:29 ?     00:00:00 dnsmasq --no-hosts --no-resolv --strict-order
  --bind-interfaces --interface=tap383cd579-5e --except-interface=lo
  --pid-file=/var/lib/neutron/dhcp/56a76117-8910-4d85-b91d-8e6842e0a510/pid
  --dhcp-hostsfile=/var/lib/neutron/dhcp/56a76117-8910-4d85-b91d-8e6842e0a510/host
  --dhcp-optsfile=/var/lib/neutron/dhcp/56a76117-8910-4d85-b91d-8e6842e0a510/opts
  --leasefile-ro --dhcp-range=set:tag0,10.12.13.0,static,86400s --dhcp-lease-max=256
  --conf-file= --domain=openstacklocal
root     22102 15608  0 22:58 pts/0 00:00:00 grep --color=auto dnsmasq

# cat /var/lib/neutron/dhcp/56a76117-8910-4d85-b91d-8e6842e0a510/host
fa:16:3e:ee:1e:2f,host-10-12-13-2.openstacklocal,10.12.13.2
fa:16:3e:7b:1a:92,host-10-12-13-1.openstacklocal,10.12.13.1
fa:16:3e:17:75:f6,host-10-12-13-4.openstacklocal,10.12.13.4

Neutron Instance config file

Here's what the network part of the instance configuration for KVM looks like:

--- COMPUTE NODE 1 ---

# virsh list
 Id    Name                           State
----------------------------------------------------
 6     instance-0000000b              running

# virsh dumpxml 6
<domain type='kvm' id='6'>
  <name>instance-0000000b</name>

  ... SNIP ...

  <interface type='bridge'>
    <mac address='fa:16:3e:64:20:31'/>
    <source bridge='br-int'/>
    <virtualport type='openvswitch'>
      <parameters interfaceid='32141443-073a-4be9-993b-51f3e131b037'/>
    </virtualport>
    <target dev='tap32141443-07'/>   <!-- instance port id tap32141443-07 -->
    <model type='virtio'/>
    <alias name='net0'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
  </interface>

  ... SNIP ...

Neutron OVS view after Instances are connected

Now let's examine what the patches and flow tables look like on OVS after the instances were started:

--- COMPUTE NODE 1 ---

root@os-compute-1:/home/localadmin# ovs-vsctl show
    Bridge br-int
        ... SNIP ...
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap32141443-07"          # instance port id tap32141443-07,
            tag: 6                     # mapped into the br-int flow table
            Interface "tap32141443-07"
    Bridge br-tun
        ... SNIP ...
        Port "gre-172.16.0.12"
            Interface "gre-172.16.0.12"
                type: gre
                options: {in_key=flow, local_ip="172.16.0.11", out_key=flow, remote_ip="172.16.0.12"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-172.16.0.10"
            Interface "gre-172.16.0.10"
                type: gre
                options: {in_key=flow, local_ip="172.16.0.11", out_key=flow, remote_ip="172.16.0.10"}
    ovs_version: "1.10.2"

Neutron OVS flows created through rootwrap by the OVS Agent

OVS flows and interfaces get created through rootwrap by the OVS agent:

--- COMPUTE NODE 1 ---

# tail -f /var/log/syslog

Apr  6 23:51:34 os-compute-1 ovs-vsctl: 00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --
 --may-exist add-port br-int tap60b3782b-80 -- set Interface tap60b3782b-80
 "external-ids:attached-mac=\"fa:16:3e:64:20:31\"" -- set Interface tap60b3782b-80
 "external-ids:iface-id=\"60b3782b-8096-497d-96a4-f3a8dc187eb6\"" -- set Interface tap60b3782b-80
 "external-ids:vm-id=\"17f0fdee-3ecd-440f-8e77-c43d2fcda9de\"" -- set Interface tap60b3782b-80
 external-ids:iface-status=active

Apr  6 23:51:37 os-compute-1 neutron-rootwrap: (root > root) Executing ['/usr/bin/ovs-ofctl',
 'mod-flows', 'br-tun', 'hard_timeout=0,idle_timeout=0,priority=1,table=21,dl_vlan=6,
 actions=strip_vlan,set_tunnel:1,output:3,2'] (filter match = ovs-ofctl)

Apr  6 23:51:37 os-compute-1 neutron-rootwrap: (root > root) Executing ['/usr/bin/ovs-ofctl',
 'add-flow', 'br-tun', 'hard_timeout=0,idle_timeout=0,priority=1,table=2,tun_id=1,
 actions=mod_vlan_vid:6,resubmit(,10)'] (filter match = ovs-ofctl)

Apr  6 23:51:37 os-compute-1 neutron-rootwrap: (root > root) Executing ['/usr/bin/ovs-vsctl',
 '--timeout=2', 'set', 'Port', 'tap60b3782b-80', 'tag=6'] (filter match = ovs-vsctl)

Apr  6 23:51:37 os-compute-1 ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=2
 set Port tap60b3782b-80 tag=6

Apr  6 23:51:37 os-compute-1 neutron-rootwrap: (root > root) Executing ['/usr/bin/ovs-ofctl',
 'del-flows', 'br-int', 'in_port=7'] (filter match = ovs-ofctl)

Neutron OVS MAC learning

OVS with the OVS agent still uses classic MAC learning to track where each MAC address is in the network:

--- COMPUTE NODE 1 ---

# ovs-appctl fdb/show br-int
 port  VLAN  MAC                Age
    4     6  fa:16:3e:64:20:31    4
   -1     6  fa:16:3e:de:5f:bf    4

# ovs-appctl dpif/show br-int
br-int (system@ovs-system):
  lookups: hit:1461 missed:343
  flows: cur: 0, avg: 8.634, max: 39, life span: 7746(ms)
  hourly avg: add rate: 0.654/min, del rate: 0.658/min
  overall avg: add rate: 0.775/min, del rate: 0.775/min
  br-int 65534/1: (internal)
  patch-tun 1/none: (patch: peer=patch-int)
  tap60b3782b-80 7/4:

# ovs-appctl dpif/show br-tun
br-tun (system@ovs-system):
  lookups: hit:568 missed:364
  flows: cur: 0, avg: 9.707, max: 39, life span: 5976(ms)
  hourly avg: add rate: 0.730/min, del rate: 0.731/min
  overall avg: add rate: 0.817/min, del rate: 0.817/min
  br-tun 65534/2: (internal)
  gre-172.16.0.10 2/3: (gre: key=flow, local_ip=172.16.0.11, remote_ip=172.16.0.10)
  gre-172.16.0.12 3/3: (gre: key=flow, local_ip=172.16.0.11, remote_ip=172.16.0.12)
  patch-int 1/none: (patch: peer=patch-tun)

Neutron OVS Table Structure

The OVS agent programs a complex table structure into OVS:

https://wiki.openstack.org/wiki/Ovs-flow-logic
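A sketch of inspecting that table structure on a compute node; the two entries shown correspond to the flows installed in the rootwrap log above, and the rest of the output (shape illustrative) is elided:

# ovs-ofctl dump-flows br-tun
 ...
 cookie=0x0, table=2, priority=1,tun_id=0x1 actions=mod_vlan_vid:6,resubmit(,10)
 cookie=0x0, table=21, priority=1,dl_vlan=6 actions=strip_vlan,set_tunnel:0x1,output:3,output:2
 ...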

Neutron IPTables Rules - Compute Node Security Groups

The following output shows what Neutron configures into iptables on the compute node to implement security groups:

--- COMPUTE NODE 1 ---

# iptables -L
... SNIP ...

Chain neutron-openvswi-i7fff0812-9 (1 references)        # inbound rules to the instance
target     prot opt source       destination
DROP       all  --  anywhere     anywhere     state INVALID
RETURN     all  --  anywhere     anywhere     state RELATED,ESTABLISHED
RETURN     tcp  --  anywhere     anywhere     tcp multiport dports tcpmux:65535
RETURN     icmp --  anywhere     anywhere
RETURN     udp  --  anywhere     anywhere     udp multiport dports 1:65535
RETURN     udp  --  10.12.13.3   anywhere     udp spt:bootps dpt:bootpc

... SNIP ...

Chain neutron-openvswi-o7fff0812-9 (2 references)        # outbound rules from the instance
target     prot opt source       destination
RETURN     udp  --  anywhere     anywhere     udp spt:bootpc dpt:bootps    # default outbound: allow dhcp
neutron-openvswi-s7fff0812-9  all  --  anywhere  anywhere
DROP       udp  --  anywhere     anywhere     udp spt:bootps dpt:bootpc
DROP       all  --  anywhere     anywhere     state INVALID
RETURN     all  --  anywhere     anywhere     state RELATED,ESTABLISHED
RETURN     all  --  anywhere     anywhere

Chain neutron-openvswi-s7fff0812-9 (1 references)        # port security: only allow the
target     prot opt source       destination              # instance's MAC/IP outbound
RETURN     all  --  10.12.13.2   anywhere     MAC FA:16:3E:43:C6:20
DROP       all  --  anywhere     anywhere
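These chains are rebuilt whenever a security group changes; a minimal sketch of adding a rule that would appear as another RETURN entry in the inbound chain (group name and CLI form as in the Havana-era python-neutronclient):

# neutron security-group-rule-create --direction ingress --protocol icmp default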

Neutron add floating-ip to instance

We will now add a floating IP to an instance:

# neutron floatingip-create External-Net
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 172.16.65.101                        |
| floating_network_id | 8998c547-ff7c-45f8-884a-a6d4bcaa5de7 |
| id                  | 5d3a71e6-f94e-4c9f-9389-474abc559900 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 94fa9a0f01f24ba2983d06575add8764     |
+---------------------+--------------------------------------+

# nova list
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| af2d9b9f-3e25-4242-82f9-b059778cf217 | Instance1 | ACTIVE | None       | Running     | Internal-Network=10.12.13.2 |
| 2206f513-9313-4c87-be09-3cfacbc6d2a2 | Instance2 | ACTIVE | None       | Running     | Internal-Network=10.12.13.4 |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+

# nova add-floating-ip Instance1 172.16.65.101
#

Neutron add floating-ip to instance

The floating IP is now associated with the instance:

# nova show Instance1
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| status                               | ACTIVE                                                   |
| updated                              | 2014-04-08T00:08:23Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | os-compute-1                                             |
| key_name                             | None                                                     |
| image                                | CirrOS 0.3.1 (55438187-bc0e-4245-b4a7-edb338cf47bd)      |

... SNIP ...

| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| Internal-Network network             | 10.12.13.2, 172.16.65.101                                |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+--------------------------------------+----------------------------------------------------------+

Neutron floating-ip, router namespace

This is what a floating IP looks like in the router namespace:

# ip netns exec qrouter-c6687e7c-ab1c-4336-ab1e-8021f9c59925 /bin/bash

# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
13: qg-92d91e4c-2d: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:58:f6:2c brd ff:ff:ff:ff:ff:ff
    inet 172.16.65.100/24 brd 172.16.65.255 scope global qg-92d91e4c-2d    # router IP
    inet 172.16.65.101/32 brd 172.16.65.101 scope global qg-92d91e4c-2d    # configured floating-ip
    inet6 fe80::f816:3eff:fe58:f62c/64 scope link
       valid_lft forever preferred_lft forever
14: qr-8abeb2b0-a6: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:18:6e:93 brd ff:ff:ff:ff:ff:ff
    inet 10.12.13.1/24 brd 10.12.13.255 scope global qr-8abeb2b0-a6
    inet6 fe80::f816:3eff:fe18:6e93/64 scope link
       valid_lft forever preferred_lft forever

Neutron floating-ip, IPTables NAT

This is what a floating IP looks like in iptables inside the router namespace:

# iptables -t nat -L

... SNIP ...

Chain neutron-l3-agent-OUTPUT (1 references)
target     prot opt source         destination
DNAT       all  --  anywhere       172.16.65.101    to:10.12.13.2        # floating-ip DNAT

Chain neutron-l3-agent-POSTROUTING (1 references)
target     prot opt source         destination
ACCEPT     all  --  anywhere       anywhere         ! ctstate DNAT

Chain neutron-l3-agent-PREROUTING (1 references)
target     prot opt source         destination
REDIRECT   tcp  --  anywhere       169.254.169.254  tcp dpt:http redir ports 9697
DNAT       all  --  anywhere       172.16.65.101    to:10.12.13.2        # floating-ip DNAT

Chain neutron-l3-agent-float-snat (1 references)
target     prot opt source         destination
SNAT       all  --  10.12.13.2     anywhere         to:172.16.65.101     # floating-ip SNAT

Chain neutron-l3-agent-snat (1 references)
target     prot opt source         destination
neutron-l3-agent-float-snat  all  --  anywhere  anywhere
SNAT       all  --  10.12.13.0/24  anywhere         to:172.16.65.100     # default SNAT for all instances

Chain neutron-postrouting-bottom (1 references)
target     prot opt source         destination
neutron-l3-agent-snat  all  --  anywhere  anywhere
