typedef int (*funcptr)();

An engineer's technical notebook

Change one password

It is that time of year when security professionals the world over end up talking with friends and family about security. It is inevitable, almost as inevitable as someone wearing a stupid Christmas sweater they are a little too proud of.

The standard advice we've been giving for years is pretty simple:

  1. Don't re-use your passwords across sites
  2. Use a password manager

Anyone who has done technical support for someone who isn't as familiar with IT knows well that as soon as you complicate something, you end up getting twice the calls, even for things that are not your fault: "Well, since you set up that password thing my printer won't print" ...

It is fantastic advice, and it is where we should all strive to be: we should all have password managers, and we should never re-use passwords.

However, let's change one single password. Start small.

There is likely to be a single account that is the root of trust for all other accounts. An email address, either at an ISP somewhere (and maybe this is the year you get them to switch from that old Earthlink email address?) or more likely a free email provider.

That's the account we want to target.

If we can secure the root of trust, the email address that can be used for password reset emails and for phishing, we've already won a large battle. Individual accounts may still be "vulnerable", but now we've closed one giant hole.

After all, we all learn to walk before we learn how to run. This small step can set the tone for even more and better security later.

Should we go further? Absolutely: identify the primary accounts that are high risk, for example:

  1. Facebook
  2. Apple iCloud
  3. Microsoft account
  4. Twitter

Facebook Login and Twitter are used for sign-in across many different websites, Apple's iCloud allows remote wipe of devices, and a Microsoft account is used for access to local machines and likely to OneDrive and other online accounts storing personal documents and files.

There are many more that I am missing; those can be next. But even the above tend to roll back up to a single email address.

There is nothing new under the sun, and password re-use is well known and ridiculed; even Randall Munroe of XKCD fame published a comic about password re-use a long time ago. However, there is one comic that comes to mind to help create better passwords:

correct horse battery staple

Pick four random words from the English language, create a funny sentence, and you are off to the races. Don't use "correct horse battery staple" as a password; it's a terrible password now, but the idea behind generating such a password is fantastic.
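If you want to generate words like that for someone, one quick sketch is to sample the system word list. This assumes a Unix-like system with a dictionary at /usr/share/dict/words and GNU coreutils' shuf installed (on a stock Mac you'd need another source of randomness):

# pick four random words from the system dictionary
shuf -n 4 /usr/share/dict/words | tr '\n' ' '; echo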

Just changing one password can increase someone's security posture just a little bit, and who knows, next year you may receive less spam email that can be traced back to their address book being siphoned off and abused.

For bonus points, have them sign up for ';--have i been pwned?. Now each time a new service is breached, your friends or relatives will get a little bit of notice and an idea of why different passwords are a necessity these days, and maybe next year they will ask you to show them how to set up that password manager so they can be even more secure!

Happy Holidays, and good luck with your IT help desk duties this year, especially getting that printer driver installed, because let's be honest, we'll get blamed for the broken printer in two months whether we touched it or not.

Python Packaging and Distribution

There have been many discussions online about the variety of tools for Python packaging and distribution; pip, pipenv, and poetry are the tools under discussion.

As an open source maintainer within the Pylons Project, while I would love to be writing code, I end up spending a lot of time answering user questions about packaging/distributing their source code using the software I've helped build. As we move forward, the other maintainers and I were wondering whether we were actually helping users move forward in the best way possible, using best-of-breed tools.1

As the Python community has moved from easy_install to pip, we too have kept the documentation up to date. We went from python setup.py develop to pip install -e . to create editable installs of local projects, and try to let people know the pitfalls of using both easy_install and pip in the same project (mostly with an answer that falls in line with: remove your virtual environment and start over, just use pip).

As part of Pyramid we develop and maintain various cookiecutter templates. Our goal is to provide templates that are useful but also follow best practices being adopted in the community at large, so that newcomers can use their existing skills/knowledge, and so that those who start with us walk away with knowledge and experience that applies not just to development of Pyramid applications, but to the broader community as a whole.

pip

pip is a great tool that has simplified the installation of packages: it supports binary distributions named wheels and makes it easy to install software from the Python Package Index. It has a rather naive dependency resolution process, but for the most part it works and works well. It replaced easy_install as the tool to use for installing packages.

While you can use a requirements.txt file with pip to install a "blessed" list of software, there is no good way to "lock" the dependencies of dependencies without manually adding them to the list of requirements. This ends up being very difficult to manage, and it is very difficult to know that what has been tested is what the user is actually going to get, because packages may be updated at any time and re-creating the exact same environment is difficult and fraught with errors.
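As a rough sketch of the problem and the usual workaround (the package pin below is just an example): list your top-level packages by hand, then snapshot the fully resolved environment with pip freeze after testing:

# requirements.txt pins only what you list; sub-dependencies still float
requests==2.18.4

# workaround: capture everything actually installed, sub-dependencies included
pip freeze > requirements-lock.txt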

This is where Pipfile is supposed to help. It is a project to add a new, more descriptive requirements file, as well as a lockfile that locks not just the primary packages you have listed, but also all dependencies of dependencies, all the way down the tree. This helps with reproducibility and allows the same installation on two different systems to have the exact same software/dependencies installed.
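For illustration, a minimal Pipfile might look something like this (the package names and Python version are just examples):

[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
requests = "*"

[dev-packages]
pytest = "*"

[requires]
python_version = "3.6"

The generated Pipfile.lock then records exact versions and hashes for the entire tree, so the same environment can be re-created elsewhere.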

pipenv

While pip is a great tool, and with the Pipfile changes it would allow for locking of dependencies, there is one more puzzle piece missing. While you can install packages into the global namespace, the recommended way is to install all packages for a particular tool/project into a virtual environment.

Normally you'd invoke virtualenv to create this environment, and then make sure to install all packages within it, thereby isolating it from the rest of the system.
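The manual dance looks roughly like this (the package name is again just an example):

virtualenv env              # create an isolated environment
source env/bin/activate     # make it the active Python
pip install requests        # installs into ./env, not the system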

pipenv automates this for you. As well as using Pipfile, it supports locking via Pipfile.lock, and it provides a bunch of tooling for adding/removing dependencies in a local project.
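With pipenv, the equivalent workflow collapses to a few commands, roughly:

pipenv install requests       # creates the virtualenv and Pipfile if needed
pipenv install --dev pytest   # development-only dependency
pipenv lock                   # write exact pins to Pipfile.lock
pipenv shell                  # spawn a shell inside the managed environment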

pipenv allows you to easily create an environment and manage dependencies, but it makes no effort to solve the problem of distributing and building a package that may be installed by third parties.

poetry

Poetry is a similar project to pipenv, with a major difference being that it was built to help with distributing/developing applications and building a distributable package that may then be installed using pip.

Instead of a Pipfile it uses the recently standardised pyproject.toml file. Like pipenv it supports locking, and it provides tooling for adding/removing dependencies as well as managing what versions are required.

Ultimately those dependencies are going to end up as metadata in a distributable package.

Poetry makes it easier to manage a software development project, whether that is for an application using various libraries for internal use, or for libraries that are going to be distributed to other developers.
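A sketch of the poetry workflow (the project and package names are illustrative):

poetry new mypackage   # scaffold a project with a pyproject.toml
poetry add requests    # record the dependency and update the lock file
poetry build           # produce an sdist and a wheel in dist/

That final step is the piece pipenv deliberately leaves out: the dependencies you declared travel with the package as metadata.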

The divide

This is where the divide really starts. While you can use pipenv with a standard setuptools project, any dependencies you add to the Pipfile using pipenv's tooling will not be listed as dependencies of your project when you distribute it. This means you either need to duplicate the list in both setup.py and the Pipfile, or you have to add your current project as an editable install within your Pipfile, which means your Pipfile is now not as easily distributable.

There are work-arounds that people have used, such as having setup.py read a requirements.txt so that all requirements can be listed in a text file rather than in setup.py, but asking to do the same with a Pipfile in pipenv was met with a "Do not do this."

poetry explicitly allows you to add dependencies in one place, and those dependency listings are then automatically inserted into the package metadata that is created when you build your distributable package.

The two use cases

There are two competing use cases. One is the deployment of software packages and being able to run them, but not as a developer; the other is the developer of software packages who needs to define dependencies for the project to run.

pipenv solves the deployment case. If I were a user, I could very simply grab a known-good Pipfile.lock and use pipenv to install a known-good set of software; this is great when I am deploying a project. It is the use case that many in the Python Packaging Authority also seem to be optimizing for.

The other use case is developers that are building new software, either by using a list of existing packages and deploying privately, or by developing software for other developers to be published on the Python Package Index.

This latter group is underrepresented, likely because it is much smaller and because existing tools like setuptools and setup.py already provide a "good enough" experience. This is an area that badly needs innovation to make it easier to create new libraries/packages that follow best practices. The list of setup.py snippets people have copied and pasted into their projects just to make something work is long. It's all a little bit of black magic, and a great many things have been carried over because of cargo cult programming.

Explicit mentions by the Python Packaging Authority

Reading the packaging guide on managing dependencies, one finds that pipenv is the recommended tool:

This tutorial walks you through the use of Pipenv to manage dependencies for an application. It will show you how to install and use the necessary tools and make strong recommendations on best practices.

This language, along with the authority that the packaging.python.org URL implies, makes it difficult as a project maintainer to recommend alternate tools, because even if those tools are superior for the use case we are recommending them for, it is always going to lead to questions from users, such as:

Why are you not using pipenv, the official tool recommended by Python.org?

We get similar questions about easy_install vs pip all of the time, as well as why people should switch, and we can point to various bits of documentation that explain why pip is a better choice.

If we were to recommend an alternative, the appeal to authority that python.org implies is going to make it much more difficult, and the question will become "why is the Pylons Project not using the recommended tooling?"

poetry is listed as a footnote on that page, alongside pip-tools and hatch, and is mentioned only for doing library development, with no mention of other requirements that may make it a much better tool for developing locally.

Deployment is not development

If I am using pipenv with a non-installable project (no setup.py), I end up having to figure out how to get the code and the Pipfile/Pipfile.lock into the environment I am deploying to. pipenv's install provides a way to make sure to install only if the Pipfile.lock is up to date, and to otherwise fail to continue. If you are using a local project that uses setup.py, though, the only way the Pipfile.lock will contain any sub-dependencies of your setup.py project is if you install it as editable; otherwise sub-dependencies are not locked.2
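If I recall correctly, that up-to-date check is behind the --deploy flag; a minimal deployment sketch:

# fails instead of re-locking if Pipfile.lock is out of date with the Pipfile
pipenv install --deploy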

If I am using poetry I get a pip-installable project, but it doesn't contain any hard pins or lock files. I'd have to distribute pyproject.lock as well as my wheel. This gets me a little closer, but there is still no lock file that includes my newly produced wheel with all of its dependencies locked.

The Python Packaging Authority, based on Twitter conversations with its members and the documentation on packaging.python.org, suggests using pipenv for development. pipenv is particularly ill-suited for development if the goal is to create a package to be deployed to production. With two locations to define dependencies, it leaves people scratching their heads as to which is canonical, and if a dependency is added to the Pipfile but not setup.py, a developer may think their package is ready for distribution when in reality it is missing a dependency that is required to run/use said distribution.

At this point using both projects seems like a win-win. Use poetry to build/develop a package, then use pipenv in the integration phase to create a Pipfile.lock that is used to deploy in production. This way you get the best of both worlds. A great tool that can help you register entry points and another that can help you with deploying a known good set of dependencies.
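A rough sketch of that combined workflow, something along these lines (the wheel name is hypothetical):

poetry build                                  # produces dist/mypackage-1.0.0-py3-none-any.whl
pipenv install dist/mypackage-1.0.0-py3-none-any.whl
pipenv lock                                   # Pipfile.lock now pins the wheel and its whole tree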

Interestingly, even the pipenv docs seem to agree that it is a deployment tool:

Specify your target Python version in your Pipfile’s [requires] section. Ideally, you should only have one target Python version, as this is a deployment tool.

-- Pipenv - General Recommendations & Version Control

Use pipenv if you have a script that requires a couple of dependencies and doesn't need all of the extra overhead of packaging metadata/packaging. Use poetry if you want to build a distributable project that can easily be deployed by others. Use both if you develop a project and need a known-good environment to deploy.

In summary

There will likely never be a time when one single tool is considered good enough, and competition between tools is a way to keep advancing forward. Packaging in the Python community has been difficult for a long time. Wheels have made things a little better. pip has made installing new packages easier and improved upon easy_install. Here's to the next evolution.


Now, can we talk about standardising on pyproject.toml? Since that is already where project metadata needs to go, we might as well re-use the name instead of having two different names/files. Oh, and PEP 517 can't come soon enough, so that alternate tools like flit can be used instead of setuptools/setup.py.


  1. We created an issue named Support poetry, flit, pipenv, or ...? that attempts to go over the pros and cons of the various tools and how we currently support our users in our documentation on building projects using Pyramid, including how to create a project that is distributable. Pyramid heavily uses pkg_resources and entry points, and the way to register entry points is to have an installable package.

    The framework is flexible enough that there is no requirement for entry points, but at that point you are in territory where the default tooling provided by the project will not work, and some of the convenience tools/functionality that Pyramid provides its users/developers is not available.

  2. See the documentation for Editable Dependencies (e.g. -e .), which as of this writing states:

    Sub-dependencies are not added to the Pipfile.lock if you leave the -e option out.

Mac OS X El Capitan Installer Removes Custom Group ID and Membership

As always, after Apple releases their new operating system, my systems get upgraded. This time the upgrade was less of a surprise in terms of what it brings, because I'd been beta testing the new release for the past couple of weeks; however, I was still caught off guard.

On OS X, by default all user accounts start at ID 501 and count up, so if you have two accounts, you will have user IDs 501 and 502 in use. Most people will likely never change this, and all is well. The default group for all new user accounts is staff, which has a group ID of 20. So a single account named, for example, janedoe would have a user ID of 501 and a group ID of 20 (staff).

Coming from a FreeBSD world and running a lot of FreeBSD systems, user accounts start at 1001, and count up. When you create a new user account on FreeBSD, by default that user is also added to a group with the same name as the username, with the same ID. So you end up with an account with ID 1001 and default group ID 1001. Using the same example, a user named janedoe would have a user ID of 1001, and a group ID of 1001 (janedoe).

When I first installed OS X, and almost every single new installation since, I have followed these steps to change my user ID and group ID to match those on my FreeBSD systems:

  1. Assumption is that you have a separate account, other than the one you are about to modify, with administrator privileges on the local Mac that you can temporarily use; I create an "Administrator" account for exactly that reason.
  2. System Preferences
  3. Users and Groups
  4. Click the + (You may need to click the lock in the bottom left first)
  5. Change the dropdown to group
  6. Enter Full Name: janedoe
  7. Create group
  8. Right click on group (janedoe)
  9. Advanced Options...
  10. Change the Group ID to 1001
  11. Okay
  12. Right click on user (janedoe)
  13. Advanced Options...
  14. Change User ID from 501 to 1001
  15. Change Group from staff to janedoe
  16. Okay
  17. Close System Preferences
  18. Open Terminal, become root user (sudo su)
  19. cd /Users/janedoe
  20. find . -uid 501 -print0 | xargs -0 chown 1001:1001

This allows me to have the same user ID and group ID on both my Mac OS X and FreeBSD systems, thereby making it easier to use tools like rsync that keep ownership and permissions, as well as to use NFS. Another way to do something similar is LDAP/Kerberos with a shared directory service, but that is a little heavy-handed for a home network.
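With the IDs matched up, an archive-mode rsync between the two systems (hostname and paths hypothetical, run with sufficient privileges to set ownership) preserves owner and group without any remapping:

rsync -av /Users/janedoe/ freebsd-host:/home/janedoe/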

This has worked for me without issues since OS X 10.8; even upgrading from 10.8 to 10.9 and then 10.10 did not change anything. However, as soon as I did the upgrade to El Capitan (10.11) I noticed that all of my ls -lah output looked like this:

drwxr-xr-x+  13 xistence  1001   442B Oct  1 16:58 Desktop
drwx------+  28 xistence  1001   952B Aug 31 12:17 Documents
drwx------+  89 xistence  1001   3.0K Oct  1 15:56 Downloads
drwx------@  72 xistence  1001   2.4K Oct  2 00:16 Library

and id provided this valuable output:

uid=1001(xistence) gid=20(xistence) groups=20(xistence),12(everyone),61(localaccounts),399(com.apple.access_ssh),402(com.apple.sharepoint.group.2),401(com.apple.sharepoint.group.1),100(_lpoperator)

Wait, what happened to the staff group that I am supposed to be a member of, and why is my xistence group ID now stating it is 20 and not 1001 as I was expecting?

I wondered if the upgrade had messed up my group somehow, and that was confirmed when I checked with dscl.

$ dscl . -read /Groups/xistence
[...]
Password: *
PrimaryGroupID: 20
RealName: xistence
RecordName: xistence
RecordType: dsRecTypeStandard:Groups

Do note that the group xistence does not show up in System Preferences -> Users and Groups, so we'll have to do some command line magic.

Well, that's worrisome. Why is my group's ID matching a built-in group's? Let's specifically check the staff group and make sure it still has the appropriate group ID.

$ dscl . -read /Groups/staff
[...]
GroupMembership: root
Password: *
PrimaryGroupID: 20
RealName: Staff
RecordName: staff BUILTIN\Users
RecordType: dsRecTypeStandard:Groups

Next I had to check what my user account had set as its primary group ID:

$ dscl . -read /Users/xistence
[...]
NFSHomeDirectory: /Users/xistence
Password: ********
PrimaryGroupID: 20
RealName:
 Bert JW Regeer
RecordName: xistence bertjw@regeer.org com.apple.idms.appleid.prd.53696d524c62372b48344a53755864634e4f374b32513d3d
RecordType: dsRecTypeStandard:Users
UniqueID: 1001
UserShell: /bin/bash

Well, that is not entirely what I was expecting either, but at least it didn't touch my user ID. Time to fix things.

First let's change the xistence group's group ID to 1001, and then change the Primary Group ID for the user xistence to group ID 1001.

# dscl . -change /Groups/xistence PrimaryGroupID 20 1001
# dscl . -change /Users/xistence PrimaryGroupID 20 1001

After that id looked a little bit more sane:

uid=1001(xistence) gid=1001(xistence) groups=1001(xistence),12(everyone),61(localaccounts),399(com.apple.access_ssh),402(com.apple.sharepoint.group.2),401(com.apple.sharepoint.group.1),100(_lpoperator)

However, now the group staff is missing from the list of groups that the user xistence is a member of. I don't think that will hurt anything, but we still want to be able to read/write any folders designated as staff elsewhere in the OS, along with any other privileges that entails. So let's add the user xistence to the staff group:

# dscl . -append /Groups/staff GroupMembership xistence

Let's verify, and check id again:

uid=1001(xistence) gid=1001(xistence) groups=1001(xistence),12(everyone),20(staff),61(localaccounts),399(com.apple.access_ssh),402(com.apple.sharepoint.group.2),401(com.apple.sharepoint.group.1),100(_lpoperator)

For this to fully take effect, log out and log back in. This will make sure that all new files have the correct user ID/group ID set.

After the change to the Group ID, the group still doesn't show up in System Preferences -> Users and Groups, which I find weird since it is not a built-in group.

Luckily everything is back to the way it was before the upgrade, and my backup scripts and NFS shares work again without issues.

Cobbler with CentOS 7 failure to boot/kickstart

Over the past week I've been working on building out an instance of Cobbler and testing some of the provisioning that it is able to do. One of the operating systems that I wanted to deploy is CentOS 7.

After I imported the system into Cobbler, it correctly showed up in the pxelinux boot menu, and it would happily load the kernel and the initrd; however, after the initial bootup it would throw the following error message:

dracut-initqueue[867]: Warning: Could not boot.
dracut-initqueue[867]: Warning: /dev/root does not exist

         Starting Dracut Emergency Shell...
Warning: /dev/root does not exist

Generating "/run/initramfs/rdsosreport.txt"


Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view the system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report

After that it gives you a root shell.

Some Google searching led me to a mailing list post for Cobbler where someone mentioned that adding ksdevice=link to the Cobbler profile allowed the system to boot without issues.

However, before I just implement a change I want to know why it fixes the issue, so I searched Google for "kickstart ksdevice" and found Red Hat's documentation on starting a kickstart installation. Searching that page for "ksdevice" led me to this section:

ksdevice=<device>

The installation program uses this network device to connect to the network. You can specify the device in one of five ways:

  • the device name of the interface, for example, eth0
  • the MAC address of the interface, for example, 00:12:34:56:78:9a
  • the keyword link, which specifies the first interface with its link in the up state
  • the keyword bootif, which uses the MAC address that pxelinux set in the BOOTIF variable. Set IPAPPEND 2 in your pxelinux.cfg file to have pxelinux set the BOOTIF variable.
  • the keyword ibft, which uses the MAC address of the interface specified by iBFT

For example, consider a system connected to an NFS server through the eth1 device. To perform a kickstart installation on this system using a kickstart file from the NFS server, you would use the command ks=nfs:<server>:/<path> ksdevice=eth1 at the boot: prompt.

While ksdevice=link would work for some of the machines I am deploying, it wouldn't work for most, since they have multiple interfaces and each one of those interfaces would have link; what I really wanted was ksdevice=bootif, which is the most sensible default.

So I modified the profile with ksdevice=link just to test, and that worked without issues; then I modified the profile to use ksdevice=bootif instead, and this failed.
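For reference, these profile edits are done with Cobbler's kernel options flag, something along these lines:

cobbler profile edit --name=CentOS-7.1-x86_64 --kopts="ksdevice=bootif"
cobbler sync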

I figured I should check the pxelinux.cfg/default file that Cobbler generates upon issuing a cobbler sync and verify that ksdevice=bootif is actually listed correctly.

What I found was this:

LABEL CentOS-7.1-x86_64
        kernel /images/CentOS-7.1-x86_64/vmlinuz
        MENU LABEL CentOS-7.1-x86_64
        append initrd=/images/CentOS-7.1-x86_64/initrd.img ksdevice=${net0/mac} lang=  kssendmac text  ks=http://10.10.10.1/cblr/svc/op/ks/profile/CentOS-7.1-x86_64
        ipappend 2

This has ksdevice=${net0/mac}, which is not what I had put in the profile. Overriding ksdevice in the profile with ksdevice=link did correctly put that into the pxelinux.cfg/default file, so Cobbler was overwriting my ksdevice=bootif somehow.

A quick search for ${net0/mac} led me to a page about gPXE command line items that contained the same variable. At that point I remembered that in Cobbler you set up your profile to be gPXE enabled or not, and the default when you import an image is to enable gPXE support.

cobbler profile report  --name=CentOS-7.1-x86_64

Name                           : CentOS-7.1-x86_64
TFTP Boot Files                : {}
Comment                        : 
DHCP Tag                       : default
Distribution                   : CentOS-7.1-x86_64
Enable gPXE?                   : True
Enable PXE Menu?               : 1
[...]

So let's modify the profile to disable gPXE support:

cobbler profile edit --name=CentOS-7.1-x86_64 --enable-gpxe=False
cobbler sync

Verify that the change was made:

cobbler profile report  --name=CentOS-7.1-x86_64

[...]
Enable gPXE?                   : False
[...]

Then let's take a look at our pxelinux.cfg/default file and make sure that it looks correct:

LABEL CentOS-7.1-x86_64
        kernel /images/CentOS-7.1-x86_64/vmlinuz
        MENU LABEL CentOS-7.1-x86_64
        append initrd=/images/CentOS-7.1-x86_64/initrd.img ksdevice=bootif lang=  kssendmac text  ks=http://10.10.10.1/cblr/svc/op/ks/profile/CentOS-7.1-x86_64
        ipappend 2

This time our ksdevice is correctly set. Upon rebooting, my PXE-booted server picked up the correct interface, made a DHCP request, and kickstarted using the provided kickstart file; installation completed successfully.

So unless you chain-boot gPXE from pxelinux, make sure that your profiles are not set to be gPXE enabled if you want to use them directly from the pxelinux menu.

While researching more for this article, I found a blog post by Vlad Ionescu about PXE installing RHEL 7 from Cobbler where he suggests removing ksdevice entirely and adding an extra inst.repo variable to the kernel command line. However, on older versions of CentOS 7 and Red Hat Enterprise Linux 7 there is a bug report that shows an empty ksdevice could cause anaconda to crash, and setting a manual inst.repo for every profile seems like overkill when simply disabling gPXE for the profile also solves the problem.

Neutron L3 agent with multiple provider networks

Due to requirements outside of my control, I needed to run multiple "provider" networks, each providing its own floating address pool, from a single network node. I wanted to do this as simply as possible using a single l3 agent, rather than having to figure out how to get systemd to start multiple agents with different configuration files.

Currently I've installed and configured an OpenStack instance that looks like this:

+---------------------+
|                     |
|                  +--+----+
|                  |       |
|      +-----------+-+  +--+----------+
|      | Compute     |  | Compute     |
|      |     01      |  |     02      |
|      +------+------+  +-----+-------+
|             |               |
|             |               +----------+
|             +------------+--+          |
|                          |             |
| +-------------+    +-----+-------+     |
| | Controller  |    |   Network   |     |
| |             |    |             |     +---+  Tenant Networks (vlan tagged) (vlan ID's 350 - 400)
| +-----+----+--+    +------+----+-+
|       |    |              |    |
|       |    |              |    +-----------+  Floating Networks (vlan tagged) (vlan ID's 340 - 349)
|       |    |              |
|       |    |              |
+------------+--------------+----------------+  Management Network (10.5.2.0/25)
        |
        |
        +------------------------------------+  External API Network (10.5.2.128/25)

There are two compute nodes, a controller node that runs all of the API services, and a network node that is strictly used for providing network functions (routers, load balancers, firewalls, all that fun stuff!).

There are two flat networks that provide the following:

  1. External API access
  2. A management network that OpenStack uses internally to communicate between instances and to manage it, which is not accessible from the other three networks.

The other two networks are both vlan tagged:

  1. Tenant networks, with the possibility of 50 vlan ID's
  2. Floating networks, with existing vlan ID's for existing networks

Since the OpenStack Icehouse release, the l3 agent has supported using the Open vSwitch configuration to specify how traffic should be routed, rather than statically defining that a single l3 agent routes certain traffic to a single Linux bridge. Setting this up is fairly simple if you follow the documentation, with one caveat: variables you would expect to default to no value actually have defaults set, and thus need to be explicitly zeroed out.

On the network node

First, we need to configure the l3 agent, so let's set some extra variables in /etc/neutron/l3-agent.ini:

gateway_external_network_id =
external_network_bridge =

It is important that these two are set and not left commented out; unfortunately, when commented out they have defaults set and things will fail to work, so explicitly setting them to blank fixes the issue.

Next, we need to set up our Open vSwitch configuration. In /etc/neutron/plugin.ini the following needs to be configured:

  • bridge_mappings
  • network_vlan_ranges

Note that these may already be configured, in which case there is nothing left to do. Mine currently looks like this:

bridge_mappings = tenant1:br-tnt,provider1:br-ex

This basically specifies that any networks created under "provider name" tenant1 are going to be mapped to the Open vSwitch bridge br-tnt and any networks with "provider name" provider1 will be mapped to br-ex.

br-tnt is mapped to my tenant network and on the switch has vlan ID's 350 - 400 assigned, and br-ex has vlan ID's 340 - 349 assigned.
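For completeness, those bridges are ordinary Open vSwitch bridges with the trunked physical interfaces attached; assuming hypothetical interface names, the setup would look like:

ovs-vsctl add-br br-tnt
ovs-vsctl add-port br-tnt eth2   # trunk carrying vlan IDs 350 - 400
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth3    # trunk carrying vlan IDs 340 - 349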

Given the above, my network_vlan_ranges is configured as follows:

network_vlan_ranges = tenant1:350:400,provider1:340:349

Make sure to restart all neutron services:

openstack-service restart neutron

On the controller (where neutron-server lives)

On the controller we just need to make sure that our network_vlan_ranges matches what is on the network node, with one exception: we do not list our provider1 vlan ranges, since we don't want those to be accidentally assigned when a regular tenant creates a new network.

So our configuration should list:

network_vlan_ranges = tenant1:350:400

Make sure that all neutron services are restarted:

openstack-service restart neutron

Create the Neutron networks

Now, as an administrative user we need to create the provider networks.

source ~/keystonerc_admin

neutron net-create "192.168.1.0/24-floating" \
--router:external True \
--provider:network_type vlan \
--provider:physical_network provider1 \
--provider:segmentation_id 340

neutron net-create "192.168.2.0/24-floating" \
--router:external True \
--provider:network_type vlan \
--provider:physical_network provider1 \
--provider:segmentation_id 341

Notice how we've created two networks, given them each individual names (I like to use the name of the network they are going to be used for), and attached them to provider1. Note that provider1 is completely administratively defined, and could just as well have been physnet1, so long as it is consistent across all of the configuration files.

Now let's create subnets on this network:

neutron subnet-create "192.168.1.0/24-floating" 192.168.1.0/24 \
--allocation-pool start=192.168.1.4,end=192.168.1.254 \
--disable-dhcp --gateway 192.168.1.1

neutron subnet-create "192.168.2.0/24-floating" 192.168.2.0/24 \
--allocation-pool start=192.168.2.4,end=192.168.2.254 \
--disable-dhcp --gateway 192.168.2.1

Now that these networks are defined, you should be able to have tenants create routers and set their gateways to either of these new networks by selecting from the drop-down in Horizon or by calling neutron router-gateway-set <router id> <network id> on the command line.
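For example, a tenant could wire a new router to the first floating network like this (the router name is illustrative):

neutron router-create my-router
neutron router-gateway-set my-router "192.168.1.0/24-floating"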

The l3 agent will automatically configure and set up the router as required on the network node, and traffic will flow to either vlan 340 or vlan 341 as defined above depending on what floating network the user uses as a gateway.

This drastically simplifies the configuration of multiple floating IP networks, since there is no longer a requirement to start up and configure multiple l3 agents, each with its own network ID configured. This makes the configuration less brittle and easier to maintain over time.