CCP offers Load-Balancer-as-a-Service (LBaaS) to its customers, which can be used to provide load balancing for tenant VMs. CCP uses F5 load balancers, which support
L4-L7 functionality among virtual networks of different types (L3 gateway). In CCP, LBaaS is configured with the F5
plug-in. The f5 LBaaS plugin driver, as well as the f5 LBaaS agent with the iControl driver, supports the OpenStack Neutron Havana release. No
alterations are made to the community LBaaS database schema, and no additional database tables are generated.
By default, the f5 LBaaS plugin uses an agent scheduler that keeps all LBaaS pools associated with the same tenant on the same TMOS
Device Service Group. This aids scaling by keeping all network objects associated with an OpenStack tenant on one TMOS Device
Service Group, instead of requiring them to be provisioned on every TMOS Device Service Group in your cloud deployment.
Enable the load-balancing plug-in by using the service_plugins option in the /etc/neutron/neutron.conf file:
Add lbaas to the list, using a comma as separator. For example:
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.loadbalancer.plugin.LoadBalancerPlugin
service_provider = LOADBALANCER:F5:neutron.services.loadbalancer.drivers.f5.plugin_driver.F5PluginDriver:default
Make sure the following is configured in the /etc/neutron/f5-bigip-lbaas-agent.ini file. Details are given at the bottom of this page, taken from the README shipped with the plugin.
f5_device_type = external
f5_ha_type = pair
# each device is configured separately
sync_mode = replication
# Device VLAN to interface and tag mapping
f5_external_physical_mappings = default:1.3:True, floating:Portchannel_To_Access_Rack_TOR:True
# Device tunneling (VTEP) selfips
f5_vtep_folder = 'Common'
f5_vtep_selfip_name = 'vtep'
# L2 population service to update fdb tunnel entries
l2_population = True
f5_global_routed_mode = False
use_namespaces = True
f5_route_domain_strictness = False
f5_snat_mode = True
f5_snat_addresses_per_subnet = 1
# This setting will cause all networks with
# the router:external attribute set to True
# to be created in the Common partition and
# placed in route domain 0.
f5_common_external_networks = True
###############################################################################
# Device Driver - iControl Driver Settings
###############################################################################
icontrol_hostname = <f5-1 MGMT IP>,<f5-2 MGMT IP>
# icontrol_username must be a valid Administrator username
# on all devices in a device sync failover group.
# If there are multiple LBaaS agents in the environment, each agent
# host should use a different iControl admin user.
icontrol_username = <username>
icontrol_password = <password>
icontrol_connection_retry_interval = 10
# Device Driver Setting
f5_bigip_lbaas_device_driver = neutron.services.loadbalancer.drivers.f5.bigip.icontrol_driver.iControlDriver
Enable load balancing in the Dashboard. In the /etc/openstack-dashboard/local_settings file, set the enable_lb option to True in the OPENSTACK_NEUTRON_NETWORK dictionary:
OPENSTACK_NEUTRON_NETWORK = {
'enable_lb': True,
...
}
Apply the settings by restarting the httpd service. You can now view the Load Balancer management options in the Project view in the
dashboard.
The F5 device also needs some configuration for the plugin to work. Most importantly, it needs an SDN Services license for GRE and overlay networking. Check the SDN
license on the device:
[S1-green-P:Active:Changes Pending] config # tmsh show sys license | grep SDN
    SDN Services (KYEPIDK-JUSAAZB)
| Setting | Allowed Values | Default Value | Description |
| --- | --- | --- | --- |
| debug | True or False | False | |
| periodic_interval | | 10 | |
| use_namespace | True or False | True | |
| f5_static_agent_configuration_data | | None | |
| f5_device_type | external | external | |
| f5_ha_type | | pair | |
| sync_mode | auto_sync or replication | replication | |
| f5_external_physical_mappings | | default:1.1:True | |
| f5_vtep_folder | | Common | |
| f5_vtep_selfip_name | | vtep | |
| f5_global_routed_mode | True or False | False | |
| f5_snat_mode | True or False | True | |
| f5_snat_addresses_per_subnet | | | |
| f5_route_domain_strictness | True or False | False | |
| f5_common_external_networks | True or False | False | |
| f5_bigip_lbaas_device_driver | | neutron.services.loadbalancer.drivers.f5.bigip.icontrol_driver.iControlDriver | In EA only neutron.services.loadbalancer.drivers.f5.bigip.icontrol_driver.iControlDriver is allowed |
| icontrol_hostname | | 192.168.1.245 | |
| icontrol_username | | admin | |
| icontrol_password | | admin | |
| icontrol_connection_retry_interval | | 10 | |
VLANs
For VLAN connectivity, the f5 TMOS devices use a mapping between the Neutron network provider:physical_network attribute and TMM interface
names. This is analogous to the Open vSwitch agents' mapping between the Neutron network provider:physical_network and their interface bridge
name. The mapping is created in the f5-bigip-lbaas-agent.ini configuration file with the f5_external_physical_mappings setting. The
provider:physical_network entries can be added to a comma-separated list, with each entry mapping to a TMM interface or LAG trunk name and a
boolean attribute specifying whether 802.1q tagging is applied. For example, a mapping of the provider:physical_network containing
'ph-eth3' to TMM interface 1.1 with 802.1q tagging would look like this:
f5_external_physical_mappings = ph-eth3:1.1:True
A default mapping should be included for cases where the provider:physical_network does not match any configuration settings. A default
mapping simply uses the word default instead of a known provider:physical_network attribute.
An example that would include the previously illustrated mapping, a default mapping, and LAG trunk mapping, might look like this:
f5_external_physical_mappings = default:1.1:True, ph-eth3:1.1:True, ph-eth4:lag-trunk-1:True
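The triplet format can be sanity-checked with plain shell text processing. This is just an illustrative sketch; the mapping string below is the example from above, not a required value:

```shell
# Split an f5_external_physical_mappings value into its three fields:
# <provider:physical_network>:<TMM interface or LAG trunk>:<802.1q tagged>
mappings="default:1.1:True, ph-eth3:1.1:True, ph-eth4:lag-trunk-1:True"
echo "$mappings" | tr ',' '\n' | while IFS=':' read -r net iface tagged; do
    net="${net# }"   # strip the space left behind after each comma
    echo "network=${net} interface=${iface} tagged=${tagged}"
done
```

Run against the example value, this prints one `network=... interface=... tagged=...` line per mapping, which makes a malformed entry (missing field, stray separator) easy to spot before restarting the agent.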
Tunnels
For GRE and VXLAN tunnels, the f5 TMOS devices expect to communicate with Open vSwitch VTEPs. The VTEP addresses for Open vSwitch
VTEPs are learned from the tunneling_ip attribute of their registered Neutron agent configurations.
The ML2 L2 Population service is supported by the f5 LBaaS iControl driver, such that only Open vSwitch agents hosting Members will have
overlay tunnels built to them for Member IP access. When the ML2 L2 Population service is used, static ARP entries will be created on the TMOS
devices to remove the need for them to send ARP broadcast (flooding) across the tunnels to learn the location of Members. In order to support
this, the ML2 port binding extensions and segmentation models must be present. The port binding extensions and segmentation model are
defined by default with the community ML2 core plugin and Open vSwitch agents on the compute nodes.
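For reference, enabling the ML2 L2 Population service in a community Havana deployment is typically done in the ML2 plugin configuration; the path and section below are the community defaults, not CCP-specific values, and the Open vSwitch agents additionally need l2_population = True in their own agent configuration:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
mechanism_drivers = openvswitch,l2population
```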
When VIPs are placed on tenant overlay networks, the f5 LBaaS agent will send tunnel update RPC messages to the Open vSwitch agents
informing them of TMOS device VTEPs. This enables tenant guest virtual machines or network node services to interact with the TMOS
provisioned VIPs across overlay networks. An f5 LBaaS agent's connected TMOS VTEP addresses are placed in the agent's configurations and
reported to Neutron. The VTEP addresses are listed as tunneling_ips.
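For illustration only, the configurations blob an f5 LBaaS agent reports to Neutron might carry fields along these lines; the values are placeholders in the same style as the agent ini above, and the exact set of fields depends on the agent version:

```json
"configurations": {
    "tunnel_types": ["gre", "vxlan"],
    "tunneling_ips": ["<f5-1 VTEP selfip>", "<f5-2 VTEP selfip>"]
}
```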
Each tenant will also have a TMOS route domain created, providing segmentation for IP address spaces between tenants.
If an associated Neutron network for a VIP or Member is marked as shared=True, and the f5 LBaaS agent is not in global routed mode, all
associated L2 and L3 objects will be created in the /Common administrative partition and associated with route domain 0 (zero) on all TMOS
devices.