Due to requirements outside of my control, I needed to run multiple "provider" networks, each providing its own floating address pool, from a single network node. I wanted to do this as simply as possible using a single l3 agent, rather than figuring out how to get systemd to start multiple agents with different configuration files.
Currently I've installed and configured an OpenStack instance that looks like this:
```
+-------------+       +-------------+
|  Compute 01 |       |  Compute 02 |
+------+------+       +------+------+
       |                     |
+------+------+       +------+------+
|  Controller |       |   Network   +----- Tenant Networks (vlan tagged, vlan IDs 350-400)
+--+-------+--+       +--+---+---+--+
   |       |             |   |   |
   |       |             |   |   +-------- Floating Networks (vlan tagged, vlan IDs 340-349)
   |       +-------------+---+------------ Management Network (10.5.2.0/25)
   +---------------------+---------------- External API Network (10.5.2.128/25)
```
There are two compute nodes, a controller node that runs all of the API services, and a network node that is strictly used for providing network functions (routers, load balancers, firewalls, all that fun stuff!).
There are two flat networks that provide the following:
- External API access
- A management network that OpenStack uses internally to communicate between nodes and to manage instances, which is not accessible from the other three networks.
The other two networks are both vlan tagged:
- Tenant networks, with a pool of 50 possible vlan IDs
- Floating networks, using the vlan IDs of existing external networks
Since the OpenStack Icehouse release, the l3 agent has supported using the Open vSwitch configuration to specify how traffic should be routed, rather than statically defining that a single l3 agent routes certain traffic to a single Linux bridge. Setting this up is fairly simple if you follow the documentation, with one caveat: variables you would expect to default to no value actually have defaults, and thus need to be explicitly zeroed out.
On the network node
First, we need to configure the l3 agent, so let's set some extra variables in `/etc/neutron/l3_agent.ini`:

```
gateway_external_network_id =
external_network_bridge =
```
It is important that these two are set, not left commented out. Unfortunately, when commented out they have defaults set and things will fail to work, so explicitly setting them to blank fixes that issue.
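Taken together, the relevant slice of the l3 agent configuration ends up looking something like this (a sketch only; the `interface_driver` shown assumes the stock OVS driver, and your file will contain other options):

```ini
# /etc/neutron/l3_agent.ini (sketch; unrelated options omitted)
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# Explicitly blank -- leaving these commented out leaves non-empty defaults:
gateway_external_network_id =
external_network_bridge =
```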
Next, we need to set up our Open vSwitch configuration. In `/etc/neutron/plugin.ini`, the following needs to be configured. Note that these may already be configured, in which case there is nothing left to do. Mine currently looks like this:

```
bridge_mappings = tenant1:br-tnt,provider1:br-ex
```
This basically specifies that any networks created under the "provider name" `tenant1` are going to be mapped to the Open vSwitch bridge `br-tnt`, and any networks with "provider name" `provider1` will be mapped to `br-ex`. `br-tnt` is mapped to my tenant network and has vlan IDs 350-400 assigned on the switch, while `br-ex` has vlan IDs 340-349 assigned.
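To make the mapping concrete, here is a simplified Python illustration of how a `bridge_mappings` string breaks down into provider-name-to-bridge pairs. This is not Neutron's actual parser, just a sketch of the format:

```python
# Sketch of the bridge_mappings format: 'physnet:bridge,physnet:bridge'
# (illustrative only -- not Neutron's internal parsing code).
def parse_bridge_mappings(value):
    """Parse a bridge_mappings string into a {provider_name: bridge} dict."""
    mappings = {}
    for entry in value.split(","):
        physnet, bridge = entry.strip().split(":")
        mappings[physnet] = bridge
    return mappings

print(parse_bridge_mappings("tenant1:br-tnt,provider1:br-ex"))
# → {'tenant1': 'br-tnt', 'provider1': 'br-ex'}
```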
Following the above knowledge, my `network_vlan_ranges` is configured as such:

```
network_vlan_ranges = tenant1:350:400,provider1:340:349
```
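To illustrate the format: each entry pins a provider name to an inclusive range of VLAN IDs, which constrains what segmentation IDs networks on that provider may use. A quick Python sketch (again, not Neutron's internal code):

```python
# Sketch of the network_vlan_ranges format: 'physnet:min:max,...'
# (illustrative only -- not Neutron's internal parsing code).
def parse_vlan_ranges(value):
    """Parse a network_vlan_ranges string into {provider_name: (min, max)}."""
    ranges = {}
    for entry in value.split(","):
        physnet, lo, hi = entry.strip().split(":")
        ranges[physnet] = (int(lo), int(hi))
    return ranges

def vlan_allowed(ranges, physnet, vlan_id):
    """Return True if vlan_id falls inside the inclusive range for physnet."""
    lo, hi = ranges[physnet]
    return lo <= vlan_id <= hi

ranges = parse_vlan_ranges("tenant1:350:400,provider1:340:349")
print(vlan_allowed(ranges, "provider1", 340))  # True
print(vlan_allowed(ranges, "provider1", 350))  # False -- 350 belongs to tenant1
```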
Make sure to restart all neutron services:

```
openstack-service restart neutron
```
On the controller

On the controller we just need to make sure that our `network_vlan_ranges` matches what is on the network node, with one exception: we do not list the `provider1` vlan ranges, since we don't want those to be accidentally assigned when a regular tenant creates a new network.
So our configuration should list:

```
network_vlan_ranges = tenant1:350:400
```
Make sure that all neutron services are restarted:

```
openstack-service restart neutron
```
Create the Neutron networks
Now, as an administrative user, we need to create the provider networks:

```
source ~/keystonerc_admin
neutron net-create "192.168.1.0/24-floating" \
  --router:external True \
  --provider:network_type vlan \
  --provider:physical_network provider1 \
  --provider:segmentation_id 340
neutron net-create "192.168.2.0/24-floating" \
  --router:external True \
  --provider:network_type vlan \
  --provider:physical_network provider1 \
  --provider:segmentation_id 341
```
Notice how we've created two networks, given each an individual name (I like to use the name of the network they are going to be used for), and attached them to `provider1`. Note that `provider1` is completely administratively defined, and could just as well have been `physnet1`, so long as it is consistent across all of the configuration files.
Now let's create subnets on these networks:

```
neutron subnet-create "192.168.1.0/24-floating" 192.168.1.0/24 \
  --allocation-pool start=192.168.1.4,end=192.168.1.254 \
  --disable-dhcp --gateway 192.168.1.1
neutron subnet-create "192.168.2.0/24-floating" 192.168.2.0/24 \
  --allocation-pool start=192.168.2.4,end=192.168.2.254 \
  --disable-dhcp --gateway 192.168.2.1
```
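As a quick sanity check on the values above, the standard library's `ipaddress` module can confirm that each gateway and allocation pool actually sits inside its subnet, and show how many floating IPs each pool can hand out (illustrative only; it doesn't talk to Neutron):

```python
# Verify the subnet-create arguments above: gateway and pool bounds must
# fall inside each CIDR, and the pool size is the usable floating IP count.
import ipaddress

subnets = [
    ("192.168.1.0/24", "192.168.1.1", "192.168.1.4", "192.168.1.254"),
    ("192.168.2.0/24", "192.168.2.1", "192.168.2.4", "192.168.2.254"),
]
for cidr, gateway, start, end in subnets:
    net = ipaddress.ip_network(cidr)
    assert ipaddress.ip_address(gateway) in net
    assert ipaddress.ip_address(start) in net
    assert ipaddress.ip_address(end) in net
    pool_size = int(ipaddress.ip_address(end)) - int(ipaddress.ip_address(start)) + 1
    print(cidr, pool_size)  # 251 addresses available per pool
```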
Now that these networks are defined, tenants should be able to create routers and set their gateways to either of these new networks, by selecting from the drop-down in Horizon or by calling `neutron router-gateway-set <router id> <network id>` on the command line.
The l3 agent will automatically configure and set up the router as required on the network node, and traffic will flow to either vlan 340 or vlan 341 as defined above, depending on which floating network the user chose as a gateway.
This drastically simplifies the configuration of multiple floating IP networks, since there is no longer a requirement to start and configure multiple l3 agents, each with its own network ID. This makes the configuration less brittle and easier to maintain over time.