The standard advice we've been giving for years is pretty simple: use a password manager, and never re-use a password.
Anyone who has done technical support for someone less familiar with IT knows that as soon as you complicate something, you end up getting twice the calls, even for things that are not your fault: "Well, since you set up that password thing my printer won't print"...
It is fantastic advice, and it is where we should all strive to be: we should all have password managers and should never re-use passwords.
However let's change one single password. Start small.
There is likely to be a single account that is the root of trust for all other accounts. An email address, either at an ISP somewhere (and maybe this is the year you get them to switch from that old Earthlink email address?) or more likely a free email provider.
That's the account we want to target.
If we can secure the root of trust, the email address that can be used for password reset emails and for phishing, we've already won a large battle. Individual accounts may still be "vulnerable", but now we've closed one giant hole.
After all, we all learn to walk before we learn how to run. This small step can set the tone for even more and better security later.
Should we go further? Absolutely. Identify the primary accounts that are high risk, for example:
Facebook Login/Twitter is used across many different websites, Apple's iCloud allows remote wipe of devices, and Microsoft Account is used for access to local machines and likely to OneDrive and other online accounts storing personal documents and files.
There are many more that I am missing; those can be next, but even the above tend to roll back up to a single email address.
There is nothing new under the sun, and password re-use is well known and ridiculed; even Randall Munroe of XKCD fame published a comic about password re-use a long time ago. However, there is one comic that comes to mind to help create better passwords:
Pick four random words from the English language, create a funny sentence, and you are off to the races. Don't use "correct horse battery staple" as the password itself; it's a terrible password now, but the idea behind generating such a password is fantastic.
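To make the idea concrete, here is a minimal sketch of generating such a passphrase; the word-list path is an assumption (many Unix-like systems ship one at /usr/share/dict/words), and any real tool should use a curated list.

# Sketch: pick four random words to form an XKCD-style passphrase.
# Assumes a word list exists at /usr/share/dict/words.
import secrets

def passphrase(num_words=4, wordlist="/usr/share/dict/words"):
    with open(wordlist) as f:
        words = [w.strip().lower() for w in f if w.strip().isalpha()]
    return " ".join(secrets.choice(words) for _ in range(num_words))

if __name__ == "__main__":
    print(passphrase())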
Just changing one password can increase someone's security posture a little bit, and who knows, next year you'll have received fewer spam emails that can be traced back to their address book being siphoned off and then abused.
For bonus points, have them sign up for ';--have i been pwned?. Now each time a new service is breached your friends or relatives will get a little bit of notice and can get an idea of why different passwords are a necessity these days, and maybe next year they will ask you to show them how to set up that password manager so they can be even more secure!
Happy Holidays, and good luck with your IT help desk duties this year, especially getting that printer driver installed, because let's be honest, we'll get blamed for the broken printer in two months whether we touched it or not.
As an open source maintainer as part of the Pylons Project, while I would love to be writing code I end up spending a lot of time dealing with user questions around packaging and distributing their source code using the software I've helped build. As we move forward, other maintainers and I were wondering if we were actually helping users move forward in the best way possible using best-of-breed tools. [1]
As the Python community has moved from easy_install to pip, we too have
kept the documentation up to date. We went from python setup.py develop
to
pip install -e .
to create editable installs of local projects, and try to
let people know the pitfalls of using both easy_install and pip in the same
project (mostly with an answer that falls in line with: remove your virtual
environment and start over, just use pip).
As part of Pyramid we have developed and maintain various cookiecutter templates. Our goal is to provide templates that are useful, but that also follow best practices being adopted in the community at large, so that newcomers can use their existing skills/knowledge, and those that start with us walk away with knowledge and experience that applies not just to development of Pyramid applications, but to the broader community as a whole.
Pip is a great tool that has simplified installation of packages; it supports binary distributions named wheels and has a way to easily install software from the Python Package Index. It has a rather naive dependency resolution process, but for the most part it works and works well. It replaced easy_install as the tool to use for installing packages.
While you can use a requirements.txt file with pip to install a "blessed" list of software, there is no good way to "lock" the dependencies of dependencies without manually adding them to the list of requirements. This ends up making it very difficult to manage, and it is very difficult to know that what has been tested is what the user is actually going to get, because packages may be updated at any time, and re-creating the same exact environment is difficult and fraught with errors.
This is where Pipfile is supposed to help. This is a project to add a new, more descriptive requirements file, as well as allowing for a lockfile that would lock not just your primary packages you have listed, but also all dependencies of dependencies all the way down the tree. This helps with reproducibility and allows for the same installation on two different systems to have the exact same software/dependencies installed.
While pip is a great tool, and with the Pipfile changes it would allow for locking of dependencies, there is one more puzzle piece missing. While you can install packages into the global namespace, the recommended way is to install all packages for a particular tool/project into a virtual environment.
Normally you'd invoke virtualenv to create this environment, and then you'd make sure to install all packages within it, thereby isolating it from the rest of the system.
pipenv automates this for you. As well as using Pipfile, it supports locking via Pipfile.lock and provides a bunch of tooling around adding/removing dependencies from a local project.
pipenv allows you to easily create an environment and manage dependencies, but it makes no effort to solve the problem of distributing and building a package that may be installed by third parties.
Poetry is a similar project to pipenv, with a major difference being that it was built to help with distributing/developing applications and building a distributable package that may then be installed using pip.
Instead of a Pipfile it uses the recently standardised pyproject.toml file. Like pipenv it also supports locking, and it provides tooling around adding/removing dependencies as well as managing what versions are required.
Ultimately those dependencies are going to end up as metadata in a distributable package.
Poetry makes it easier to manage a software development project, whether that is for an application using various libraries for internal use, or for libraries that are going to be distributed to other developers.
This is where the divide really starts. While you can use pipenv with a standard setuptools project, any dependencies you add to the Pipfile using pipenv's tooling will not be listed as dependencies for your project when you distribute it. This means you either need to duplicate the list in both setup.py and the Pipfile, or you have to add your current project as an editable install within your Pipfile, which means your Pipfile is now not as easily distributable.
There are work-arounds that people have used, such as having setup.py read a requirements.txt, so that you could have all your requirements listed in a text file and not in setup.py, but asking to do the same with a Pipfile in pipenv was met with a "Do not do this."
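For illustration, a sketch of that work-around might look like the following; the file names and project metadata here are placeholders, not code from any particular project.

# setup.py reading its install_requires from requirements.txt (illustrative only)
from pathlib import Path
from setuptools import setup, find_packages

requirements = [
    line.strip()
    for line in Path("requirements.txt").read_text().splitlines()
    if line.strip() and not line.startswith("#")
]

setup(
    name="myproject",
    version="0.1.0",
    packages=find_packages(),
    install_requires=requirements,
)

It works, but it conflates abstract dependencies (what the package needs) with concrete pins (what a deployment installs), which is one reason it tends to be discouraged.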
poetry explicitly allows you to add dependencies in one place, and those dependency listings are then automatically inserted into the package metadata that is created when you build your distributable package.
There are two competing use cases, one is the deployment of software packages and being able to run them, but not as a developer, the other is a developer of software packages that needs to define dependencies for the project to run.
pipenv solves the deployment case. If I were a user I could very simply grab a known good Pipfile.lock and use pipenv to install a known good set of software, which is great when I am deploying a project. It is the use case that many in the Python Packaging Authority also seem to be optimizing for.
The other use case is developers who are building new software, either by using a list of existing packages and deploying privately, or by developing software for other developers, to be published on the Python Package Index.
This latter group of people is underrepresented, likely because it is much smaller and because existing tools like setuptools and setup.py already provide a "good enough" experience. This is an area that badly needs innovation, to make it easier to create new libraries/packages that follow best practices. The list of things people have copied and pasted into a setup.py just to make something work is long. It's all a little bit of black magic, and a great many things have been carried over because of cargo cult programming.
Reading the packaging guide on managing dependencies, pipenv is the recommended tool:
This tutorial walks you through the use of Pipenv to manage dependencies for an application. It will show you how to install and use the necessary tools and make strong recommendations on best practices.
This language, along with what the packaging.python.org URL implies, makes it difficult as a project maintainer to recommend alternate tools, because even if those tools are superior for the use case we are recommending them for, it is always going to lead to questions from users, such as:
Why are you not using pipenv, the official tool recommended by Python.org?
We get similar questions about easy_install vs pip all of the time, as well as why people should switch, and we can point to various bits of documentation that explain why pip is a better choice.
If we were to recommend an alternative, the appeal to authority that python.org implies is going to make it much more difficult, and the question will become "why is the Pylons Project not using recommended tooling?"
poetry is listed as a footnote on that page, alongside pip-tools and hatch, and is mentioned only for doing library development, with no mention of other requirements that may make it a much better tool for developing locally.
If I am using pipenv with a non-installable project (no setup.py) I end up
having to figure out how to get the code, and the Pipfile
/Pipfile.lock
to
my environment I am deploying into. pipenv's install provides a way to make
sure to only install if the Pipfile.lock
is up to date or otherwise will fail
to continue. If you are using a local project though, and it uses setup.py
the only way that the Pipfile.lock
will contain any sub-dependencies of your
setup.py
project is if you install it as editable. Otherwise sub-dependencies are not locked. [2]
If I am using poetry I get a pip-installable project, but it doesn't contain any hard pins or lock files. I'd have to distribute pyproject.lock as well as my wheel. This gets me a little closer, but there is still no lock file that includes my newly produced wheel and has all of its dependencies locked.
The Python Packaging Authority, based on Twitter conversations with its members and the documentation on packaging.python.org, suggests using pipenv for development. pipenv is particularly ill-suited for development if the goal is to
create a package to be deployed to production. With two locations to define
dependencies it leaves people scratching their heads as to which is canonical,
and if a dependency is added to Pipfile
but not setup.py
it may leave a
developer thinking their package is ready for distribution when in reality it
is missing a dependency that is required to run/use said distribution.
At this point using both projects seems like a win-win. Use poetry to
build/develop a package, then use pipenv in the integration phase to create a
Pipfile.lock
that is used to deploy in production. This way you get the best
of both worlds. A great tool that can help you register entry points and
another that can help you with deploying a known good set of dependencies.
Interestingly, even the pipenv docs seem to agree that it is a deployment tool:
Specify your target Python version in your Pipfile’s [requires] section. Ideally, you should only have one target Python version, as this is a deployment tool.
Use pipenv if you have a script that requires a couple of dependencies and doesn't need all of the extra overhead of packaging metadata/packaging. Use poetry if you want to build a distributable project that can easily be deployed by others, and use both if you develop a project and need a known good environment to deploy.
There will likely never be a time that one single tool is considered good enough, and competition between tools is a way to keep advancing forward. Packaging in the Python community has been difficult for a long time. Wheels have made things a little better. pip has made management of installing new packages easier and improved upon easy_install. Here's to the next evolution.
Now, can we talk about standardising on pyproject.toml, since that is already where "project" metadata needs to go? We might as well re-use that name instead of having two different names/files. Oh, and PEP 517 can't come soon enough, so that alternate tools like flit can be used instead of setuptools/setup.py.
We created an issue named Support poetry,
flit, pipenv, or ...? that attempts to go over the pros and cons of the
various tools and how we currently support our users in our documentation on
building projects using pyramid, including how to create a project that is
distributable. Pyramid heavily uses pkg_resources
and entry points.
The way to register the entry points is to have an installable package.
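As a rough illustration of what that registration looks like, a minimal setup.py for a Pyramid-style application might declare an entry point like the one below; the project name and module path are placeholders.

# Illustrative setup.py registering a paste.app_factory entry point
from setuptools import setup, find_packages

setup(
    name="myapp",
    version="0.1.0",
    packages=find_packages(),
    install_requires=["pyramid"],
    entry_points={
        "paste.app_factory": [
            "main = myapp:main",
        ],
    },
)

Once the package is installed, tools that look up entry points via pkg_resources can discover and load myapp:main without knowing anything else about the project.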
[1] The framework is flexible enough that there is no requirement for entry points, but at that point you are in territory where the default tooling provided by the project will not work, and some of the convenience tools/functionality that Pyramid provides its users/developers is not available. ↩
[2] See the documentation for Editable Dependencies (e.g. -e .), which as of this writing states:
Sub-dependencies are not added to the Pipfile.lock if you leave the -e option out.
On OS X, by default all user accounts start at ID 501 and count up, so if you
have two accounts, you will have user ID 501 and 502 in use. For most people
they will most likely never change this, and all is well. The default group ID
for all new user accounts is staff
which has a group ID of 20. So if you have
a single account named for example janedoe
her user ID would be 501 and her
group ID would be 20 (staff
).
Coming from a FreeBSD world and running a lot of FreeBSD systems, user accounts
start at 1001, and count up. When you create a new user account on FreeBSD, by
default that user is also added to a group with the same name as the username,
with the same ID. So you end up with an account with ID 1001 and default group
ID 1001. Using the same example, a user named janedoe
would have a user ID of
1001, and a group ID of 1001 (janedoe
).
When I first installed OS X, and almost every single new installation since, I have followed these steps to change my user ID and group ID to match those on my FreeBSD systems:
In System Preferences -> Users and Groups, use the + button (you may need to click the lock in the bottom left first) to create a new group named janedoe, then change the user janedoe's user ID from 501 to 1001 and her primary group from staff to janedoe, and finally update the ownership of everything under /Users/janedoe (cd /Users/janedoe) so the existing files match the new user and group IDs.
This allows me to have the same user ID and group ID on both Mac OS X and FreeBSD, thereby making it easier to use tools like rsync that keep ownership and permissions, as well as using NFS. Another way to do something similar is using LDAP/Kerberos with a shared directory service, but that is a little heavy handed for a home network.
This has worked for me without issues since OS X 10.8, even upgrading from 10.8 to
10.9 and then 10.10 did not change anything. However as soon as I did the
upgrade to El Capitan (10.11) I noticed that all of my ls -lah
output looked
like this:
drwxr-xr-x+ 13 xistence 1001 442B Oct 1 16:58 Desktop
drwx------+ 28 xistence 1001 952B Aug 31 12:17 Documents
drwx------+ 89 xistence 1001 3.0K Oct 1 15:56 Downloads
drwx------@ 72 xistence 1001 2.4K Oct 2 00:16 Library
and id
provided this valuable output:
uid=1001(xistence) gid=20(xistence) groups=20(xistence),12(everyone),61(localaccounts),399(com.apple.access_ssh),402(com.apple.sharepoint.group.2),401(com.apple.sharepoint.group.1),100(_lpoperator)
Wait, what happened to the staff group that I am supposed to be a member of, and why is my xistence group ID now stating it is 20 and not 1001 as I was expecting?
I wondered if the upgrade had messed up my group somehow, and it was
confirmed when I checked with dscl
.
$ dscl . -read /Groups/xistence
[...]
Password: *
PrimaryGroupID: 20
RealName: xistence
RecordName: xistence
RecordType: dsRecTypeStandard:Groups
Do note that the group xistence
does not show up in System Preferences ->
Users and Groups, so we'll have to do some command line magic.
Well, that's worrisome, why is this matching a built-in group's ID? Specifically
let's check the staff
group and make sure it still has the appropriate group
ID.
$ dscl . -read /Groups/staff
[...]
GroupMembership: root
Password: *
PrimaryGroupID: 20
RealName: Staff
RecordName: staff BUILTIN\Users
RecordType: dsRecTypeStandard:Groups
Next I had to check to see what my user account was set to as the default group ID:
$ dscl . -read /Users/xistence
[...]
NFSHomeDirectory: /Users/xistence
Password: ********
PrimaryGroupID: 20
RealName: Bert JW Regeer
RecordName: xistence bertjw@regeer.org com.apple.idms.appleid.prd.53696d524c62372b48344a53755864634e4f374b32513d3d
RecordType: dsRecTypeStandard:Users
UniqueID: 1001
UserShell: /bin/bash
Well, that is not entirely what I was expecting either, but at least it didn't touch my user ID. Time to fix things.
First let's change the xistence
group's group ID to 1001, and then change the
Primary Group ID for the user xistence
to group ID 1001.
# dscl . -change /Groups/xistence PrimaryGroupID 20 1001
# dscl . -change /Users/xistence PrimaryGroupID 20 1001
After that id
looked a little bit more sane:
uid=1001(xistence) gid=1001(xistence) groups=1001(xistence),12(everyone),61(localaccounts),399(com.apple.access_ssh),402(com.apple.sharepoint.group.2),401(com.apple.sharepoint.group.1),100(_lpoperator)
However now the group staff
is missing from the list of groups that the user
xistence
is a member of, which I don't think will hurt anything, but we still
want to be able to read/write any folders that are designated as staff
elsewhere in the OS, and any other privileges that entails. So let's add the
user xistence
to the staff
group:
# dscl . -append /Groups/staff GroupMembership xistence
Let's verify, and check id
again:
uid=1001(xistence) gid=1001(xistence) groups=1001(xistence),12(everyone),20(staff),61(localaccounts),399(com.apple.access_ssh),402(com.apple.sharepoint.group.2),401(com.apple.sharepoint.group.1),100(_lpoperator)
For this to fully take effect, log out and log back in. This will make sure that all new files have the correct user ID/group ID set.
After the change to the Group ID, the group still doesn't show up in System Preferences -> Users and Groups, which I find weird since it is not a built-in group.
Luckily everything is back to the way it was before the upgrade, and my backup scripts and NFS shares work again without issues.
After I imported the system into Cobbler, it correctly showed up in the pxelinux boot menu and it would happily load the kernel and the initrd; however, after initial bootup it would throw the following error message:
dracut-initqueue[867]: Warning: Could not boot.
dracut-initqueue[867]: Warning: /dev/root does not exist
Starting Dracut Emergency Shell...
Warning: /dev/root does not exist
Generating "/run/initramfs/rdsosreport.txt"
Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view the system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report
After that it gives you a root shell.
Some Google searching led me to a mailing list post for Cobbler where someone mentioned that adding ksdevice=link to the Cobbler profile allowed the system to boot without issues.
However before I just implement a change I want to know why that fixes the issue, so I searched Google for "kickstart ksdevice" and found Red Hat's documentation on starting a kickstart. Searching that page for "ksdevice" led me to this section:
ksdevice=<device>
The installation program uses this network device to connect to the network. You can specify the device in one of five ways:
- the device name of the interface, for example, eth0
- the MAC address of the interface, for example, 00:12:34:56:78:9a
- the keyword link, which specifies the first interface with its link in the up state
- the keyword bootif, which uses the MAC address that pxelinux set in the BOOTIF variable. Set IPAPPEND 2 in your pxelinux.cfg file to have pxelinux set the BOOTIF variable.
- the keyword ibft, which uses the MAC address of the interface specified by iBFT
For example, consider a system connected to an NFS server through the eth1 device. To perform a kickstart installation on this system using a kickstart file from the NFS server, you would use the command ks=nfs:<server>:/<path> ksdevice=eth1 at the boot: prompt.
While ksdevice=link would work for some of the machines I am deploying, it wouldn't work for most, since they have multiple interfaces and each one of those interfaces would have link. What I really wanted was ksdevice=bootif, which is the most sensible default.
So I modified the profile with ksdevice=link just to test, and that worked without issues. Then I modified the profile and changed it to ksdevice=bootif, and this failed.
I figured I should check the pxelinux.cfg/default
file that Cobbler generates
upon issuing a cobbler sync
and verify that ksdevice=bootif
is actually
listed correctly.
What I found was this:
LABEL CentOS-7.1-x86_64
    kernel /images/CentOS-7.1-x86_64/vmlinuz
    MENU LABEL CentOS-7.1-x86_64
    append initrd=/images/CentOS-7.1-x86_64/initrd.img ksdevice=${net0/mac} lang= kssendmac text ks=http://10.10.10.1/cblr/svc/op/ks/profile/CentOS-7.1-x86_64
    ipappend 2
This has ksdevice=${net0/mac}, which is not what I had put in the profile. Overwriting ksdevice in the profile with ksdevice=link did correctly put that into the pxelinux.cfg/default file, so Cobbler was overwriting my change somehow.
A quick search of ${net0/mac}
led me to a page about gPXE commandline
items that contained the same variable. At which point I remembered that in
Cobbler you set up your profile to be gPXE enabled or not. The default when you
import an image is to enable gPXE support.
cobbler profile report --name=CentOS-7.1-x86_64
Name : CentOS-7.1-x86_64
TFTP Boot Files : {}
Comment :
DHCP Tag : default
Distribution : CentOS-7.1-x86_64
Enable gPXE? : True
Enable PXE Menu? : 1
[...]
So let's modify the profile to disable gPXE support:
cobbler profile edit --name=CentOS-7.1-x86_64 --enable-gpxe=False
cobbler sync
Verify that the change was made:
cobbler profile report --name=CentOS-7.1-x86_64
[...]
Enable gPXE? : False
[...]
Then let's take a look at our pxelinux.cfg/default
file and make sure that it
looks correct:
LABEL CentOS-7.1-x86_64
    kernel /images/CentOS-7.1-x86_64/vmlinuz
    MENU LABEL CentOS-7.1-x86_64
    append initrd=/images/CentOS-7.1-x86_64/initrd.img ksdevice=bootif lang= kssendmac text ks=http://10.10.10.1/cblr/svc/op/ks/profile/CentOS-7.1-x86_64
    ipappend 2
This time our ksdevice is correctly set. Upon rebooting my PXE booted server it picked up the correct interface, made a DHCP request and kickstarted the server using the provided kickstart file, and installation completed successfully.
So unless you chain-boot gPXE from pxelinux by default, make sure that your profiles are not set to be gPXE enabled if you want to use them directly from the pxelinux menu.
While researching more about this article, I found a blog post by Vlad Ionescu
about PXE installing RHEL 7 from Cobbler where he suggests disabling
ksdevice
entirely and adding an extra inst.repo
variable to the kernel
command line, however on older versions of CentOS 7 and Red Hat Enterprise
Linux 7 there is a bug report that shows that an empty ksdevice
could
cause anaconda to crash, and setting a manual inst.repo
for every profile
seems like overkill when just disabling gPXE for the profile also solves the
problem.
Currently I've installed and configured an OpenStack instance that looks like this:
[Diagram: two compute nodes, a controller, and a network node. The nodes share a management network (10.5.2.0/25) and an external API network (10.5.2.128/25); the tenant networks are vlan tagged (vlan IDs 350 - 400) and the floating networks are vlan tagged (vlan IDs 340 - 349).]
There are two compute nodes, a controller node that runs all of the API services, and a network node that is strictly used for providing network functions (routers, load balancers, firewalls, all that fun stuff!).
There are two flat networks: the management network (10.5.2.0/25) and the external API network (10.5.2.128/25). The other two networks are both vlan tagged: the tenant networks (vlan IDs 350 - 400) and the floating networks (vlan IDs 340 - 349).
Since the OpenStack Icehouse release, the l3 agent has supported using the Open vSwitch configuration to specify how traffic should be routed, rather than statically defining that a single l3 agent routes certain traffic to a single Linux bridge. Setting this up is fairly simple if you follow the documentation, with one caveat: variables you would think default to no value actually have a value, and thus need to be explicitly zeroed out.
First, we need to configure the l3 agent, so let's set some extra variables in
/etc/neutron/l3-agent.ini
:
gateway_external_network_id =
external_network_bridge =
It is important that these two are set, not left commented out. Unfortunately, when commented out they have defaults set and it will fail to work, so explicitly setting them to blank fixes that issue.
Next, we need to set up our Open vSwitch configuration. In
/etc/neutron/plugin.ini
the following needs to be configured:
bridge_mappings
network_vlan_ranges
Note that these may already be configured, in which case there is nothing left to do. Mine currently looks like this:
bridge_mappings = tenant1:br-tnt,provider1:br-ex
This basically specifies that any networks created under "provider name"
tenant1
are going to be mapped to the Open vSwitch bridge br-tnt
and any
networks with "provider name" provider1
will be mapped to br-ex
.
br-tnt
is mapped to my tenant network and on the switch has vlan ID's 350 -
400 assigned, and br-ex
has vlan ID's 340 - 349 assigned.
Following the above knowledge, my network_vlan_ranges
is configured as such:
network_vlan_ranges = tenant1:350:400,provider1:340:349
Make sure to restart all neutron services:
openstack-service restart neutron
(on the network node and on the controller, where neutron-server lives)
On the controller we just need to make sure that our network_vlan_ranges
matches what is on the network node, with one exception, we do not list our
provider1
vlan ranges since we don't want to make those available to
accidentally be assigned when a regular tenant creates a new network.
So our configuration should list:
network_vlan_ranges = tenant1:350:400
Make sure that all neutron services are restarted:
openstack-service restart neutron
Now, as an administrative user we need to create the provider networks.
source ~/keystonerc_admin

neutron net-create "192.168.1.0/24-floating" \
  --router:external True \
  --provider:network_type vlan \
  --provider:physical_network provider1 \
  --provider:segmentation_id 340

neutron net-create "192.168.2.0/24-floating" \
  --router:external True \
  --provider:network_type vlan \
  --provider:physical_network provider1 \
  --provider:segmentation_id 341
Notice how we've created two networks, given them each individual names (I like to use the name of the network they are going to be used for), and attached them to provider1. Note that provider1 is completely administratively defined, and could just as well have been physnet1, so long as it is consistent across all of the configuration files.
Now let's create subnets on this network:
neutron subnet-create "192.168.1.0/24-floating" 192.168.1.0/24 \
  --allocation-pool start=192.168.1.4,end=192.168.1.254 \
  --disable-dhcp --gateway 192.168.1.1

neutron subnet-create "192.168.2.0/24-floating" 192.168.2.0/24 \
  --allocation-pool start=192.168.2.4,end=192.168.2.254 \
  --disable-dhcp --gateway 192.168.2.1
Now that these networks are defined, you should be able to have tenants create
routers and set their gateways to either of these new networks by selecting
from the drop-down in Horizon or by calling neutron router-gateway-set <router
id> <network id>
on the command line.
The l3 agent will automatically configure and set up the router as required on the network node, and traffic will flow to either vlan 340 or vlan 341 as defined above depending on what floating network the user uses as a gateway.
This drastically simplifies the configuration of multiple floating IP networks since no longer is there a requirement to start up and configure multiple l3 agents each with their own network ID configured. This makes configuration less brittle and easier to maintain over time.
You might find something similar to the following in your logs, and no good documentation on how to fix it.
ERROR nova.compute.manager [req-7cb1c029-beb4-4905-a9d9-62d488540eda f542d1b5afeb4908b8b132c4486f9fa8 c2bfab5ad24642359f43cdff9bb00047] [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b] Setting instance vm_state to ERROR
TRACE nova.compute.manager [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b] Traceback (most recent call last):
TRACE nova.compute.manager [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5596, in _error_out_instance_on_exception
TRACE nova.compute.manager [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b] yield
TRACE nova.compute.manager [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3459, in resize_instance
TRACE nova.compute.manager [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b] block_device_info)
TRACE nova.compute.manager [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4980, in migrate_disk_and_power_off
TRACE nova.compute.manager [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b] utils.execute('ssh', dest, 'mkdir', '-p', inst_base)
TRACE nova.compute.manager [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b] File "/usr/lib/python2.7/site-packages/nova/utils.py", line 165, in execute
TRACE nova.compute.manager [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b] return processutils.execute(*cmd, **kwargs)
TRACE nova.compute.manager [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b] File "/usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py", line 193, in execute
TRACE nova.compute.manager [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b] cmd=' '.join(cmd))
TRACE nova.compute.manager [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b] ProcessExecutionError: Unexpected error while running command.
TRACE nova.compute.manager [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b] Command: ssh 10.5.2.20 mkdir -p /var/lib/nova/instances/99736f90-db0f-4cba-8f44-a73a603eee0b
TRACE nova.compute.manager [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b] Exit code: 255
TRACE nova.compute.manager [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b] Stdout: ''
TRACE nova.compute.manager [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b] Stderr: 'Host key verification failed.\r\n'
TRACE nova.compute.manager [instance: 99736f90-db0f-4cba-8f44-a73a603eee0b]
ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: Unexpected error while running command.
Command: ssh 10.5.2.20 mkdir -p /var/lib/nova/instances/99736f90-db0f-4cba-8f44-a73a603eee0b
Exit code: 255
Stdout: ''
Stderr: 'Host key verification failed.\r\n'
When OpenStack's nova is instructed to resize an instance it will also change the host it is running on; it will almost never schedule the resized instance on the host where it already exists. There is a configuration flag to change this, however in my case I would rather the scheduler be run again, especially if the instance size is changing drastically. During the resize process, the node where the instance is currently running will use SSH to connect to the node where the resized instance will live, and copy over the instance and associated files.
There are a couple of assumptions I will be making, chief among them that the nova and qemu users both have the same UID on all compute nodes.

First things first, let's make sure our nova user has an appropriate shell set:
cat /etc/passwd | grep nova
Verify that the last entry is /bin/bash
.
If not, let's modify the user and make it so:
usermod -s /bin/bash nova
After doing this the next steps are all run as the nova
user.
su - nova
We need to generate an SSH key:
ssh-keygen -t rsa
Follow the directions, and save the key WITHOUT a passphrase.
Next up we need to configure SSH to not do host key verification, unless you want to manually SSH to all compute nodes that exist and accept the key (and continue to do so for each new compute node you add).
cat << EOF > ~/.ssh/config
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
EOF
Next we need to make sure we copy the contents of id_rsa.pub to authorized_keys and set the mode on it correctly.
cat ~/.ssh/id_rsa.pub > .ssh/authorized_keys
chmod 600 .ssh/authorized_keys
This should be all the configuration for SSH you need to do. Now comes the important part: you will need to tar up and copy the ~nova/.ssh directory to every single compute node you have provisioned. This way all compute nodes will be able to SSH to the remote host to run the commands required to copy an instance over and resize it.
If you have any instances that are currently in the ERROR
state due to a
failed resize, you will be able to issue the following command to reset the
state back to running and try again:
nova reset-state --active <ID of instance>
This will start the instance, and you will be able to once again issue the resize command to resize the instance.
It is handy when running ./binary --version, or even just ./binary, to have it print the version it was built from. This can make it much simpler to debug any potential issues, especially if fixes may have already been made but a bad binary was deployed.
Make sure that your wscript somewhere near the top contains the following:
APPNAME = 'myapp'
VERSION = '0.0.0'
Then in your configure(cfg)
add the following:
cfg.env.VERSION = VERSION
cfg.env.APPNAME = APPNAME

git_version = try_git_version()

if git_version:
    cfg.env.VERSION += '-' + git_version
The try_git_version()
function is fairly simple and looks like this:
def try_git_version():
    import os
    import sys

    version = None
    try:
        version = os.popen('git describe --always --dirty --long').read().strip()
    except Exception as e:
        print e
    return version
It runs git describe --always --dirty --long, which will return something along these lines: 401b85f-dirty. If you have any annotated tags, it will return the tag name as well.
If git
is not installed, or it is not a valid git
directory, then it will
simply return None
. At that point all we have to go on is the VERSION
variable set at the top of the wscript.
Now that we have our configuration environment set up with the VERSION
we
want to get that into a file that we can then include in our C++ source code.
The build_version.h.in file:

#ifndef BUILD_VERSION_H_IN_941AD1F24D0A9D
#define BUILD_VERSION_H_IN_941AD1F24D0A9D

char VERSION[] = "@VERSION@";

#endif /* BUILD_VERSION_H_IN_941AD1F24D0A9D */
In build(ctx), add:

ctx(features='subst',
    source='build_version.h.in',
    target='build_version.h',
    VERSION = ctx.env['VERSION'],
    )
This uses the substitution feature to transform build_version.h.in
into
build_version.h
, while inserting the version into the file.
Include build_version.h in your source code:

#include "build_version.h"
And add something along these lines to your main()
:
std::cerr << "Version: " << VERSION << std::endl;
This will print out the VERSION
that has been stored in build_version.h
.
Check out my mdns-announce project on Github for an example of how this is implemented.
Ultimately we want to be able to have some number of pieces of data that are tied to a particular user. Unfortunately, due to the fact that HTTP is a stateless protocol, we have to use cookies. Cookies are small pieces of data that are transmitted from the server to the client (generally done once); upon the user coming back to the website they are transmitted from the client to the server. This allows us to uniquely track a single user across connections to our website.
If the website allows a user to authenticate and the fact that they are authenticated is stored in the session, we also want to make sure that we can aggressively expire a session; whether this is possible depends on our session storage.
There are a multitude of ways to store the session data, but it ultimately boils down to server-side or client-side. Server-side can be done in Cassandra, Memcache, Redis or even in a SQL database.
The main one that has been used for years is server-side storage: storing a small file on the server's hard drive that contains the data, while the client is sent a cookie that contains a unique identifier that is linked to the on-disk storage.
For example:
1 => /tmp/session_1
2 => /tmp/session_2
...
N => /tmp/session_N
The upside to server-side storage is that it is possible for us to very easily expire a session: simply remove the associated file/data that is stored, and the user's session has now become invalid.
The other method that has recently started being used more to make it easier to scale the server side is to store session data encoded in base64 in the cookie itself. In this case there is no unique session ID, and no data is stored server side.
The downside to using client-side storage is that there is no way, short of the expiration on the cookie itself, for the website to expire a session. There are work-arounds, but they all require storing state server-side. A hybrid approach for example is possible: store a unique ID along with the session data, and store that unique ID server side, but none of the extra data. Remove the unique ID server side, and if we receive a session that contains a unique ID we don't recognise, we simply clear the session.
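A rough sketch of that hybrid approach might look like the following; the names and the in-memory set are purely illustrative (a real deployment would keep the known IDs in Redis, Memcache, or a database).

# Hybrid sessions: the cookie carries the data, the server only tracks the IDs it trusts.
import secrets

valid_session_ids = set()  # illustrative; use shared storage in practice

def new_session(data):
    session_id = secrets.token_hex(16)
    valid_session_ids.add(session_id)
    return {"id": session_id, "data": data}

def load_session(cookie_session):
    # If the server no longer recognises the ID, treat the session as expired.
    if cookie_session.get("id") not in valid_session_ids:
        return {"id": None, "data": {}}
    return cookie_session

def expire_session(session_id):
    # Dropping just the ID server-side is enough to invalidate the client's copy.
    valid_session_ids.discard(session_id)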
Being able to easily expire a user's sessions allows for extra security measures. For example, in Google Mail it is possible to sign out all other locations, which forces those other locations to re-authenticate before gaining access to your account.
This is a good security measure to have, so that if a user's cookie is stolen, or their credentials are compromised, upon changing their password all their sessions are invalidated and an attacker using an old cookie/session ID can't continue to wreak havoc on the user's account.
If we are just storing a session ID, or the full session the cookie should be hardened so that it can not be tampered with by a client. Even if you are protecting the cookie using SSL, we still don't want to allow a malicious user to modify the cookie to change the session ID or the session itself.
The single best way to make sure your cookie has not been tampered with is to cryptographically sign your cookie, and upon receiving the cookie from the client verifying that the signature matches what you are expecting. This is especially important if you are using client-side storage, because you don't want someone to be able to change the user ID from 950 to 1 and suddenly impersonate a different user.
HMAC (hash-based message authentication code) is a cryptographic construct that uses a hashing algorithm (SHA-1, SHA-256, SHA-3) to create a MAC (message authentication code) with a secret key. It is very easy, given the secret key and the original data, to create the MAC, but it is very difficult if not impossible to take the original data and MAC and recover the secret key.
This allows us to do the following:
data = "Hello World" mac = HMAC(data, sha256, "SEEKRIT")
Our mac
would now be equal to:
e655f98cb9b3c02f45576f7906d64b0b7f8731f25a5319c42ca666917aca45a4
If we now create our cookie as follows:
cookie = mac + " " + data
It would look as follows:
cookie = e655f98cb9b3c02f45576f7906d64b0b7f8731f25a5319c42ca666917aca45a4 Hello World
We can then send that to the client that requested the page. Once the client visits the next page, their browser will send that same cookie back to us. If we split the mac from the data, we can then do the following operation:
cookie = e655f98cb9b3c02f45576f7906d64b0b7f8731f25a5319c42ca666917aca45a4 Hello World

data = "Hello World"
mac = e655f98cb9b3c02f45576f7906d64b0b7f8731f25a5319c42ca666917aca45a4
mac_verify = HMAC(data, sha256, "SEEKRIT")

mac_verify == mac
If and only if mac_verify
and mac
are the same can we be sure that the
cookie has not been tampered with.
This requires that the client is NEVER aware of what we are using as our secret key; in the above examples that is "SEEKRIT". In your web application you will be required to make this a configuration variable, and you will have to take care not to commit that configuration variable to a git repository and upload it to GitHub (for example).
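In Python, for example, the same construct can be sketched with the standard library's hmac module (the hard-coded key here is for illustration only; load it from configuration in a real application):

import hmac
import hashlib

SECRET_KEY = b"SEEKRIT"  # illustration only; never hard-code a real secret

def sign_cookie(data):
    mac = hmac.new(SECRET_KEY, data.encode(), hashlib.sha256).hexdigest()
    return mac + " " + data

def verify_cookie(cookie):
    mac, _, data = cookie.partition(" ")
    expected = hmac.new(SECRET_KEY, data.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    if hmac.compare_digest(mac, expected):
        return data
    return None

cookie = sign_cookie("Hello World")
assert verify_cookie(cookie) == "Hello World"
assert verify_cookie("0" * 64 + " Hello World") is None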
Using a bare hash algorithm incorrectly allows for length extension attacks, which would let an attacker concatenate extra data to the end of our existing data, modify the "MAC", and have the server accept it.
This construct is thus very dangerous:
data = "Hello World" key = "SEEKRIT" mac = SHA1(key + data)
The following construct is still not recommended, but is not nearly as dangerous:
mac = SHA1(data + key)
Due to the key being last, this is not vulnerable to a length extension attack; however, please don't do this either, and stick to using an HMAC instead.
When using client-side storage, it may be beneficial to encrypt the data to add an extra layer of security. Even if encrypting the data you need to continue using a MAC.
Using just encryption will not protect you against decrypting bad data because an attacker decided to provide invalid data. Signing the cookie data with a MAC makes sure that the attacker is not able to mess with the ciphertext.
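One way to get both without hand-rolling the crypto is sketched below using Fernet from the third-party cryptography package (an assumption; install it separately), which combines AES encryption with an HMAC internally:

# Encrypt-and-authenticate cookie payloads with Fernet (from the `cryptography` package).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store alongside your other secrets in configuration
f = Fernet(key)

token = f.encrypt(b"user_id=950")    # opaque, authenticated value to place in the cookie
assert f.decrypt(token) == b"user_id=950"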
I am most familiar with the Pylons Project's Pyramid web framework. The default session implementation that is provided by the project is named SignedCookieSessionFactory; as the name implies, this uses a client-side cookie to store the session data, which is signed using a secret key that is provided upon instantiation of the factory.
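Setting it up looks roughly like this (a minimal sketch; the secret would come from your application's configuration):

# Minimal Pyramid configuration using the signed-cookie session factory
from pyramid.config import Configurator
from pyramid.session import SignedCookieSessionFactory

session_factory = SignedCookieSessionFactory('sekrit-from-config')  # placeholder secret
config = Configurator()
config.set_session_factory(session_factory)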
Flask sessions also use a signed cookie for client-side session storage.
Ruby on Rails uses a signed/encrypted cookie for client-side session storage by default.
PHP does not sign the session cookie by default; it does, however, use server-side storage for session data by default. Extra security can be added by installing the Suhosin PHP extension, which adds session cookie encryption/signing.
]]>Maintaining custom ports and integrating them into your build process doesn't need to be difficult. The documentation surrounding this process however is either non-existent, or lacking in its clarity. At the end of the day, it really is as simple as maintaining a repository whose structure matches the ports tree layout, then managing that repository and the standard ports tree with portshaker, and finally handing the end result off to poudriere.
For this example, we'll assume a git repo is used and that you're already
familiar with how to build FreeBSD ports. We'll also assume that we
have but a single port that we're maintaining and that it is called myport
.
The hierarchy of your repo should simply be category/myport
. We'll refer to
this repo simply as myrepo
.
Portshaker is the tool responsible for taking multiple ports sources and then
merging them down into a single target. In our case, we have two sources: our
git repo (myrepo
) containing myport
, and the standard FreeBSD ports tree.
We aim to merge this down into a single ports tree that poudriere will then use
for its builds.
To configure portshaker, add the following to the
/usr/local/etc/portshaker.conf
file:
# vim:set syntax=sh:
# $Id: portshaker.conf.sample 116 2008-09-30 16:15:02Z romain.tartiere $

#---[ Base directory for mirrored Ports Trees ]---
mirror_base_dir="/var/cache/portshaker"

#---[ Directories where to merge ports ]---
ports_trees="default"

use_zfs="no"

poudriere_ports_mountpoint="/usr/local/poudriere/ports"
default_poudriere_tree="default"
default_merge_from="freebsd myrepo"
Some key points here: the two items listed for the default_merge_from argument need to have scripts present in the /usr/local/etc/portshaker.d directory. Furthermore, the combination of poudriere_ports_mountpoint and default_poudriere_tree needs to be a ports tree that is then registered with poudriere.
Next, we need to tell portshaker how to go off and fetch our two types of ports
trees, freebsd
and myrepo
. For the freebsd
ports tree, create
/usr/local/etc/portshaker.d/freebsd
with the following contents and make it
executable:
#!/bin/sh
. /usr/local/share/portshaker/portshaker.subr
method="portsnap"
run_portshaker_command $*
Next, create a similar script to handle our repository containing our custom
port. /usr/local/etc/portshaker.d/myrepo
should contain the following and
similarly be executable:
#!/bin/sh
. /usr/local/share/portshaker/portshaker.subr
method="git"
git_clone_uri="http://github.com/scott.sturdivant/packaging.git"
git_branch="master"
run_portshaker_command $*
Obviously replace the git_clone_uri
and git_branch
variables to reflect
your actual configuration. For more information about the values and what they
can contain, consult man portshaker.d
Now, portshaker should be all set. Execute portshaker -U
to update your
merge_from
ports trees (freebsd
and myrepo
). You'll see the standard
portsnap fetch and extract process as well as a git clone. After a good
bit of time, these will both be present in the /var/cache/portshaker
directory. Go ahead and merge them together by executing portshaker -M
.
Hooray! You now have /usr/local/poudriere/ports/default/ports
that is a
combination of the normal ports tree and your custom one.
We're effectively complete with configuring portshaker. Whenever your port is
updated, just re-run portshaker -U
and portshaker -M
to grab the latest
changes and perform the merge.
Poudriere is a good tool for building ports. We will use it to handle our
merged directory. Begin by configuring poudriere
(/usr/local/etc/poudriere.conf
):
NO_ZFS=yes
FREEBSD_HOST=ftp://ftp.freebsd.org
RESOLV_CONF=/etc/resolv.conf
BASEFS=/usr/local/poudriere
USE_PORTLINT=no
USE_TMPFS=yes
DISTFILES_CACHE=/usr/ports/distfiles
CHECK_CHANGED_OPTIONS=yes
Really there's nothing here that is specific to the problem at hand, so feel free to consult the provided configuration file to tune it to your needs.
Now, the step that is specific is to set poudriere up with a ports tree that
it does not manage, specifically our resultant merged directory. If you
consult man poudriere
, it specifies that for the ports
subcommand, there is
a -m method
switch which controls the methodology used to create the ports
tree. By default, it is portsnap. This is confusing as in our case, we do not
want poudriere to actually do anything. We want it to just use an existing
path. Fortunately, there is a way!
The poudriere wiki has an entry for using the system ports tree, so we adopt it for our needs by executing:
poudriere ports -c -F -f none -M /usr/local/poudriere/ports/default \ -p default
If you've consulted the poudriere manpage, you'll see that the -F
and -f
switches both reference ZFS in their help. As we're not using ZFS, it's not
clear how they will behave. However, in conjunction with the custom mountpoint
(-M /usr/local/poudriere/ports/default
), we ultimately wind up with what we
want, a ports tree that poudriere can use, but does not manage:
# poudriere ports -l
PORTSTREE    METHOD    PATH
default      -         /usr/local/poudriere/ports/default
Note that this resulting PATH is the combination of the
poudriere_ports_mountpoint
and default_poudriere_tree
variables present in
our /usr/local/etc/portshaker.conf
configuration file.
Go ahead and create your jail(s) like you normally would (i.e.
poudriere -c -j 92amd64 -V 9.2-RELEASE -a amd64
) and any other configuration
you would like, and then go ahead and build myport
with
poudriere bulk -j 92amd64 -p default category/myport
. Success!
A couple of days ago on reddit.com's /r/netsec, a poster by the name of Dan Weber posted what he believed to be an attack on PHP sessions: Hacking PHP sessions by running out of memory (reddit link).
The "attack" works by running the PHP script out of memory at the point where a value has already been written to the session but has not yet been verified. Since anything set on the session is immediately stored, even if the user is not supposed to be logged in, they are now logged in because their session says they are.
I wouldn't necessarily call this a PHP hack; it is really just bad practice in terms of programming, and the logic should be reversed: only mark the session as logged in after the credentials have been verified.
That reversal solves the problem at hand, and now there is no way for the user to trick the PHP script into believing she is logged in when that is not the case.
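To illustrate the reversed ordering (sketched here in Python rather than PHP, with a stand-in session dict that, like $_SESSION, is persisted as soon as it is written):

# Bad: the flag is persisted before verification, so dying mid-check leaves it set.
def login_bad(session, username, password):
    session["logged_in"] = True
    if not check_credentials(username, password):   # if this runs out of memory...
        session["logged_in"] = False                 # ...this line never runs

# Good: only touch the session once the fallible work has succeeded.
def login_good(session, username, password):
    if check_credentials(username, password):
        session["logged_in"] = True

def check_credentials(username, password):
    # Placeholder; a real application verifies a password hash here.
    return False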
However as the discussion went on on Reddit, it became even more clear that there are no good resources on what you should store in the user session, and what you shouldn't store in the user session. Some of these things may seem like common knowledge, but sadly this is something every single new person to programming has to learn on their own.
Let's get this out of the way, this is in no way limited to PHP, but it is the one I will be using as an example. This can all apply to Ruby (Ruby on Rails), Python (Pyramid) or many other frameworks.
The basic problem is that writing to the session is generally not an atomic transaction based on the page accessed, so the assumption made in this article is that when you write to the session it is instantly committed, and there is no way to roll it back upon failure. If there were, the example above wouldn't be possible, since upon running out of memory the session would have been rolled back and cleaned up.
You should only store things in the session that would do very little harm if they were made public.
Really, the list of items to store in a session boils down to a unique identifier for the user, plus small bits of unimportant state such as flash messages (more on those below).
More importantly, don't store permission bits, or group memberships, or anything that is used to allow/deny access to particular resources. You want to store just enough information that upon a user accessing your site you are able to retrieve the user's information from storage, and based upon that information you then make decisions such as permissions/group memberships.
One of the things that Dan Weber brought up in the Reddit post was storing the user's permission level and group membership in the session. If your code relies on the session to always contain the right permission level, then there is no way to expire someone's access to the data.
If instead on every page visit we simply pull out the user's unique ID and verify the permissions upon access, then as soon as the permissions are revoked by the administrator the user no longer has access to the various resources.
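In sketch form (illustrative names only; the lookup would hit your real user storage):

# Look the user and their permissions up on every request instead of trusting the cookie.
def current_user(session, user_store):
    user_id = session.get("user_id")
    if user_id is None:
        return None
    # Permissions and group memberships come from storage, never from the session itself.
    return user_store.load_user(user_id)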
There has to be an easy way to remember something from page visit to page visit that isn't considered detrimental if the information gets lost. One of those things is flash messages. Flash messages are generally used to provide the user indication that something has changed, they are shown once and then never again.
Storing these as session data makes sense. If the flash message gets set, great, if it doesn't get set, it doesn't matter. Flash messages are simply a notification tool, if the user misses them it isn't important.
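A flash helper needs nothing more than the session itself; losing these messages is harmless, which is exactly why they are safe to keep there. A tiny sketch:

# Flash messages: append on one request, show once on the next, then discard.
def flash(session, message):
    session.setdefault("flash", []).append(message)

def pop_flashes(session):
    return session.pop("flash", [])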
Definitely don't store any kind of permission bits, groups a user is a part of or anything that would allow the user access that they normally would not be able to access.
On each page access check what permissions the user has. While it may mean a little more heavy lifting server side it provides extra security, and the means to enforce changes in permissions instantly.
Keep secure programming practices in mind at all times, always consider how the information you are storing/processing is accessed/viewed/administered. More importantly think about the access controls that are in place, and how one could expire access to a particular resource without requiring a co-operative client.
The ordering of how variables are set, and when they are set, is very important. Setting $_SESSION['isadmin'] = True at the top of a PHP script, and only removing it later in the script after checking whether the user actually is an administrator, is a bad idea.
Always order your code so that if a failure does occur there is no chance that a critical section of your code is executed by accident, or that information is stored in a half-verified state. This is especially important for access control.