Ansible and AWS EC2

Ansible is not lacking in awesome.  I’ve used Puppet and Chef and others to manage Linux, but Ansible meets my criteria for host management for one specific reason: it manages hosts over SSH rather than through an agent.  Ansible is also simple to get up and running quickly.

In just a few hours, I was managing hosts and doing real work to keep DHCP configs straight.  Adding more functionality to playbooks is easy.  As I use Ansible, I’m expanding both my understanding of the tool and of the infrastructure that I manage.

I’m currently using Ansible to deploy to EC2 Linux hosts.  My plan is to be able to deploy an EC2 host through the AWS API.  I already have a bootstrap playbook in place to add various users, distribute ssh keys, and add those users to /etc/sudoers.  Ansible includes modules to add authorized_keys and install software, so doing so never feels like a hack or that I’m stretching the tool.
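
A minimal sketch of such a bootstrap play might look like the following.  The user name, key file, and sudoers line here are hypothetical placeholders, not my actual playbook:

```yaml
# Bootstrap sketch: create a user, install an SSH key, grant sudo.
# All names and paths are hypothetical.
- hosts: all
  sudo: yes
  tasks:
    - name: create admin user
      user: name=admin shell=/bin/bash state=present

    - name: distribute ssh public key
      authorized_key: user=admin key="{{ lookup('file', 'keys/admin.pub') }}"

    - name: add admin to sudoers
      lineinfile:
        dest: /etc/sudoers
        line: "admin ALL=(ALL) NOPASSWD: ALL"
        validate: "visudo -cf %s"
```

The validate option means a bad sudoers line is rejected by visudo before it can lock you out of the host.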

I’m also using Ansible to manage an Asterisk server, several MySQL servers, various DNS servers, and soon several Raspberry Pi computers.  I have a combination of physical servers, virtual servers through Xen, and AWS hosts.  I manage those via a custom variable called hosttype, and I can do things like this to add an apt repository to the sources list on physical or virtual servers that use the Debian jessie release:

- name: add apt source repo when physical or virtual
  apt_repository: repo="deb-src <mirror-url> jessie main contrib non-free"
  when: ansible_distribution_release == "jessie" and
        (hosttype == "phy" or hosttype == "vir")

I don’t like to share passwords among MySQL servers, and Ansible enables trivial customization on a per group or per-host basis using group_vars or host_vars.
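
For example, a per-group password can live in a group_vars file named after the inventory group.  The group and variable names here are hypothetical:

```yaml
# group_vars/dbservers.yml -- applies to every host in the [dbservers]
# inventory group; host_vars/<hostname>.yml works the same way per host.
mysql_root_password: "a-password-unique-to-this-group"
```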

I’ll be deploying Raspberry Pi and EC2 hosts en masse later this year, and Ansible will make doing so terribly easy and repeatable.

Hands-on with Amazon Echo

I was in the market for a Bluetooth speaker that had decent sound.  I got that and then some with the new Amazon Echo.  I received the Echo yesterday through a Prime pre-order.  Amazon has put a lot of thought and effort into the packaging of the Echo.  The unboxing was reminiscent of unboxing an Apple device.

Amazon Echo Unboxing

The first thing that struck me was the size and weight of the Echo.  Here’s a pic showing the speaker on my desk next to a CD (Classic Quadrophenia) that should give an example of the scale.

Amazon Echo

Note the blue ring in the pic too.  When the Echo is thinking or responding to something, the blue light is on.  The ring can turn bright red if, for example, the Echo loses its wifi connection.  It did that once yesterday when I had the device in the kitchen (fairly far away from the wireless AP to which it was connected).

On the top are two buttons, one is a mute button for the mic, if you don’t want the Echo listening to your every word.  The other is a button that can be pressed to wake up the Echo to receive a command, if you don’t want to speak the “wake word”.  At this time, the only “wake words” are “Alexa” (the default) and “Amazon”.  I’m guessing they’ll change that eventually so that you can customize or choose a “wake word”.

Amazon Echo Setup

Setting up the Echo is rather straightforward.  Plug it in.  The Echo takes about 40 to 45 seconds to boot, during which time the ring shows a rotating white light.  When the Echo boots for the first time, the ring will turn orange and the Echo will say “Hello”.  The next step is to download the Echo app from your device’s app store.

The Echo sets up its own temporary wifi network and will audibly tell you to connect to that wifi network using your device.  When you do so, you’ll then be able to choose the wifi network to which the Echo should connect.  The Echo will then connect to your home wifi network and you’re ready to roll.

The Echo also comes with a remote and a fairly powerful magnetic holder for said remote.  I haven’t yet experimented with the remote.  I suppose the use case for the remote is when you don’t want to shout “Alexa, stop” across the room.

The Bluetooth pairing process is easy too.  Simply say “Alexa, connect” and the Echo will go into a mode where it can be seen and paired to nearby devices.  There is no additional security code to enter, so anyone nearby who is watching could theoretically connect to the device while in pairing mode.  Connecting and disconnecting devices from Bluetooth is as easy as saying “Alexa, connect” and “Alexa, disconnect”.

The “Alarm” feature might be useful, though there doesn’t seem to be a way to set the alarm tone, just the time.  You can speak the alarm time, like “Alexa, set an alarm for 2pm” and watch as it gets set within the Echo web site.  That’s a novelty though and I haven’t explored the alarm function beyond merely setting it and then jumping out of my chair when the alarm went off and I had the volume too high.

Speaking of volume, there are multiple ways to adjust it.  Saying things like “Alexa, increase volume” will make the playback louder, and you can also say “Alexa, volume 3” to set the volume to a specific level.  Sadly, the range is 0 to 10 and not 11, as I was hoping.  There is also a dial on the top of the device that can be turned left and right to adjust the volume manually.  The dial gives finer control over the output: saying “Alexa, volume one” snaps to the 0-through-10 scale, but with the dial you can still take the volume below one.

There is a news function and you can customize from among several audio feeds such as NPR, BBC, Economist, and others.  Saying “Alexa, news” begins these feeds and “Alexa, next” skips to the next “Flash” news feed, as they are called.  For feeds that aren’t audio, the Echo reads the news aloud.

Weather is available for the current location, and you can ask for current or future conditions for both local and remote locations.

How’s the Sound?

I bought a Bluetooth speaker for good sound, and the Echo delivers.  The sound is rich and full range, with sufficient bass and midrange.  Any small speaker simply can’t move as much air as a full-size speaker, so I don’t believe a Bluetooth speaker will ever provide the drive of a nice JBL monitor.

I set up a playlist through Amazon Prime Music and can now do things like “Alexa, shuffle playlist Classic Rock” and random songs will be chosen from that playlist.


I suspect Amazon will be working hard to enhance the Echo.  For instance, the calendaring function currently only works with Google Calendar.  I’d love to see that integrated with other calendaring options, maybe through the Alexa AppKit or natively.

I can see the need to order additional accessories like power supplies so that I can move the Echo to different rooms… though I suppose I could just order more Echoes.

I haven’t yet explored the home control aspects of the Echo that work with Belkin WeMo devices, though I’m hoping to.  I’m also hopeful that Amazon won’t release an “Echo 2” right away, instantly making this one obsolete!

Deploying and Debugging PHP with AWS Elastic Beanstalk

AWS Elastic Beanstalk provides a compelling platform for deployment of applications. My web site, the one you’re viewing this page on now, has historically been deployed on a Linux computer hosted somewhere.  Well, ok, the software from which you’re reading this is WordPress but it’s hosted on the same Apache server as the main web site.

I recently redesigned the main site and in the process purposefully made the site more portable.  This essentially means that I can create a zip file with the site, including relevant Apache configuration bits contained in an .htaccess file, and then deploy it onto any equivalent server, regardless of that server’s underlying Apache or filesystem configuration.
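
“Portable” here mostly means no absolute paths: the Apache bits travel inside the zip as an .htaccess file rather than living in the server config.  A hypothetical fragment (these rules are illustrative, not my site’s actual rules):

```apache
# Hypothetical .htaccess fragment: rules are relative to wherever the
# zip is unpacked, so no server-specific paths are required.
RewriteEngine On
RewriteRule ^blog/?$ blog.php [L]
Options -Indexes
```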

That got me thinking:  Can I deploy the site onto Elastic Beanstalk in Amazon Web Services?  The answer:  Yes I can.  The path that I followed was essentially to clone a clean copy of the repository from git and then create a zip file with the contents but not the .git-related bits.  Here’s the command, executed from within the directory with the code:

zip -r site.zip . --exclude=\*.git\*

The next step is to then deploy this into AWS Elastic Beanstalk.  That’s relatively straightforward using the wizard in AWS. In a few minutes, AWS had deployed a micro instance with the code on it.  I needed to undo some hard-coded path information and also redo the bit within the templating system that relied on a local wordpress file for gathering recent blog posts.  It wasn’t immediately clear what the issue was though, and the biggest challenge I encountered was debugging.

Debugging PHP in Elastic Beanstalk

My workflow would normally call for ssh’ing into the server and looking at error logs.  However, I found that enabling the display of errors was helpful.  That setting is found within the Configuration area of Elastic Beanstalk:

Display Errors in Elastic Beanstalk

However, that’s not a setting I would use on a production site.  There are two other ways to troubleshoot Elastic Beanstalk.  First, you can view and download Apache logs and other related logs from Elastic Beanstalk.  For example, adding error_log() statements to  the code would result in log entries in these easily-viewable logs.

The other debug option is to enable an EC2 key pair for the EC2 instance associated with the application.  This is done at application creation or later through the Configuration.  Therefore, I simply deployed another application with the same Application Version and chose an EC2 key pair this time.

EC2 Key Pair

Note that AWS replaces the instance entirely if you change this key at a later date, so if you have a Single Instance-hosted version of the application, the site will be unavailable while a new instance is spun up.

AWS Red Status

Once the key pair is enabled on the server, it’s simply a matter of ssh’ing into the EC2 instance using the key and the ec2-user account, like so:

ssh -i mykey.pem ec2-user@<ec2-instance-public-dns>

Doing so, it’s easy to navigate around to see everything on the server, including log files.

SSH to Elastic Beanstalk

Note that the current version of the application can be found within the /var/app/current directory on Elastic Beanstalk-deployed PHP applications.  You can even edit files directly there, but I wouldn’t recommend it since it breaks the zip-based deployment paradigm and architecture.

In summary, Elastic Beanstalk deployment and debugging were much easier and much more powerful than I envisioned.


Monitoring SIP Peer in Asterisk

I’ve been experimenting with an external SIP provider for outbound and inbound calling.  Nothing groundbreaking about that; plenty of people use SIP providers rather than traditional landlines.  I recently had an issue with the SIP peer going into unreachable status in Asterisk.  After debugging with the provider, I found it to be a weird ARP issue local to the Asterisk server.  The server thought that some of the provider’s IPs were local traffic, so the traffic wasn’t being passed to the default gateway.  Clearing the ARP cache and the IP route cache fixed that issue.

The issue got me thinking about how to monitor the status of the provider, so I set up a simple script that opens an ssh session to the asterisk server and looks for the status of that peer.  When the status is not “OK”, the output is printed and, through the magic of cron, is sent to me.

Here’s the script:


ssh -i /root/.ssh/mykey root@<asterisk-server> 'asterisk -x "sip show peers"' | grep <providername> | grep -v OK

The script initiates an ssh session using a private key.  The matching public key has already been placed in authorized_keys on the Asterisk server...  and yes, slap my hand for ssh’ing as root here; I need to fix that.  The command asterisk -x "sip show peers" is executed on the server.  That output is piped to grep for the <providername>, which is then piped through grep -v to exclude “OK” lines, since I assume things are OK and only want to know when they’re not.
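
The pipeline itself can be seen in action without an Asterisk server.  Here the peer names, addresses, and statuses are hypothetical stand-ins for what ssh would return:

```shell
# Simulated 'sip show peers' output; peer names and addresses here are
# hypothetical stand-ins for what ssh would return from Asterisk.
sample='myprovider/12345      203.0.113.10     D      5060     UNREACHABLE
phone-100/100         192.168.1.20     D      5060     OK (42 ms)'

# Keep only the provider's line, then drop it if the status is OK.
# A healthy peer produces no output, so cron sends no mail.
echo "$sample" | grep myprovider | grep -v OK
```

Because cron only mails when a job produces output, an all-OK check is silent.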

Admittedly, nothing groundbreaking about this simple one line script either!  But here it is nonetheless, in case anyone finds it useful for monitoring when a sip peer goes unreachable or lagged.

Installing nftables on Debian 7.5

[Last Update: 8/11/2014 – Clean up some bits around the options to select.]

This article discusses installation of nftables, the new Linux firewall software, on a Debian 7.5 system.  Nftables is under very active development and therefore the installation steps may change by the time you view this article.  Specifically, the various prerequisites needed in order to build nftables will likely no longer be needed as the software matures, and more importantly, as packages for it become available.

Note: This article begins with a base of Debian 7.5.0 netinst with the SSH Server and Standard System Utilities installed.

There are two primary components involved in an nftables system:  The first component is the Linux kernel, which provides the underlying nftables core modules.  The second component is the administration program called nft.

Compiling a kernel

The Linux kernel that comes with Debian 7.5.0 is based on version 3.2, which predates the nftables core (merged in Linux 3.13), so you’ll need to build a newer kernel.

Before you can compile a kernel, you need to get a kernel.  As of this writing, the latest stable kernel is 3.15.  Retrieving that from kernel.org with the wget command looks like this:

wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.15.tar.xz

Then unpack the kernel source:

tar -xvf linux-3.15.tar.xz

You’ll now have a pristine kernel ready to be built.

Several packages are essential and some are helpful for compiling a kernel on Debian.  The package named kernel-package provides useful utilities for creating a Debian packaged kernel.  Kernel-package has several prerequisites but those are all installed when you select kernel-package for installation on the system.

The method shown in this article uses the ‘menuconfig’ option to configure the kernel.  Other methods, such as the plain text-based config option, are also available.  The menuconfig option requires the ncurses development libraries; on Debian, these are found in the libncurses5-dev package, which can be installed with this command (run as root):

apt-get install libncurses5-dev kernel-package

Note:  You may need to update the package list by running apt-get update prior to the packages becoming available for installation.

From within the linux-3.15 (or whatever version) directory, run:

make menuconfig

The options necessary within the kernel for nftables are found in the Networking support hierarchy.

Drill-down to the Networking support -> Networking options -> Network packet filtering framework (Netfilter).

Inside of the IP: Netfilter Configuration select IPv4 NAT.  Back up at the Network packet filtering framework menu, select IPv6 Netfilter Configuration and enable IPv6 NAT along with its sub-options of MASQUERADE target support and NPT target support.

Back at the Network packet filtering framework level, enter the Core Netfilter Configuration menu and enable Netfilter nf_tables support.  Doing so opens up several additional options.

Netfilter nf_tables mixed IPv4/IPv6 tables support
Netfilter nf_tables IPv6 exthdr module
Netfilter nf_tables meta module
Netfilter nf_tables conntrack module
Netfilter nf_tables rbtree set module
Netfilter nf_tables hash set module
Netfilter nf_tables counter module
Netfilter nf_tables log module
Netfilter nf_tables limit module
Netfilter nf_tables nat module
Netfilter nf_tables queue module
Netfilter nf_tables reject support
Netfilter x_tables over nf_tables module

Back in the Network packet filtering framework (Netfilter) level, select IP: Netfilter Configuration and find the IPv4 nf_tables support section and enable IPv4 nf_tables route chain support, IPv4 nf_tables nat chain support, and ARP nf_tables support.  Back at the Network packet filtering framework (Netfilter) level, select IPv6: Netfilter Configuration again and enable IPv6 nf_tables route chain support, and IPv6 nf_tables nat chain support.

Note: For the purposes of this article, all of the options will be selected as modules.

Finally, within the Network packet filtering framework (Netfilter) section, enable the Ethernet Bridge nf_tables support feature if you need this functionality.

Once your kernel configuration is complete, you can clean the source tree with the command:

 make-kpkg clean

Now it’s time to compile the kernel.  Depending on the speed of your system it may take several minutes to several hours.  If you have multiple processors, you can likely speed up the process by having make-kpkg use them.  This is accomplished by setting the CONCURRENCY_LEVEL environment variable.  For instance, on a system with two processors, the variable is set as such:

export CONCURRENCY_LEVEL=2

Alternately, specify all of it on the command line:

CONCURRENCY_LEVEL=2 INSTALL_MOD_STRIP=1 make-kpkg --initrd --revision=1 kernel_image

Note: On a dual processor quad core system the compile took about 30 minutes.
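
If you don’t know the CPU count offhand, nproc (from GNU coreutils) reports it, so the level can be set dynamically.  Using the CPU count itself is one common convention; some people prefer CPUs+1:

```shell
# Derive the concurrency level from the number of online CPUs
# (nproc is part of GNU coreutils).
CONCURRENCY_LEVEL=$(nproc)
echo "CONCURRENCY_LEVEL=$CONCURRENCY_LEVEL"
```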

Once the kernel has been compiled, installation is accomplished (as root) with the command:

 dpkg -i linux-image-<your_version_here>.deb

Rebooting the server brings up the shiny new kernel but the server isn’t quite ready to run nf_tables yet.  Prior to compiling the nft administration program, you can verify that the nf_tables module can load.  First, see if the module is already loaded:

 lsmod | grep nf_tables

If there’s output then the module has already been loaded.  If not, then you can load the module with modprobe, as such:

 modprobe nf_tables

Rerunning the lsmod command (lsmod | grep nf_tables) should give output now, similar to this:

 nf_tables              37955  0
nfnetlink              12989  1 nf_tables

Compiling the nft Administration Program

The nft administration program enables control over the firewall, in much the same way that the iptables command controlled an iptables-based firewall.  The nft program depends on the libmnl and libnftnl libraries.  With the large amount of active development underway on nf_tables and related libraries, this tutorial shows how to get the latest copy using Git rather than attempting to install from a package or another method.

apt-get install git autoconf2.13 libtool pkg-config flex bison libgmp-dev libreadline6-dev dblatex

Note that dblatex is only needed if you want PDF documentation, which I sometimes do.  You can save some space and security footprint by not adding dblatex to the previous apt-get command line.

The three repositories can be cloned into your current directory with the commands:

git clone git://git.netfilter.org/libmnl
git clone git://git.netfilter.org/libnftnl
git clone git://git.netfilter.org/nftables

Once the repositories have been cloned, the next step is to compile the software.  Both libmnl and libnftnl are prerequisites for compiling nftables, so those will be compiled first with the commands (all run as superuser/root):

cd libmnl
sh autogen.sh
./configure
make install

Now cd back up a directory, into the libnftnl directory, and compile it:

cd ../libnftnl
sh autogen.sh
./configure
make install

Finally, compile nftables:

cd ../nftables
sh autogen.sh
./configure
make install

With the nftables administration program compiled and installed you can now run nft commands!  Depending on your installation, you may need to reboot and/or run ldconfig.  I did both; a reboot didn’t fix it so running ldconfig as root was the next logical step.  Actually, that might have been the first logical step before rebooting, but that’s how it goes sometimes.

In any event, running the following command should do nothing (and that’s what we want right now):

 nft list tables

If the command returns nothing at all, then nft is working fine.  You can create a table with the command:

 nft add table filter

Now create a chain with the command:

nft add chain filter input { type filter hook input priority 0 \; }

Note that the space and backslash before the semi-colon are necessary when entering the command from the command line.
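
The escaping is a shell matter, not an nft one: an unescaped semicolon would end the shell command before the closing brace.  Substituting echo for nft shows what the program actually receives:

```shell
# echo stands in for nft here: the backslash keeps ';' as a literal
# argument instead of a shell command separator.
echo add chain filter input { type filter hook input priority 0 \; }
# prints: add chain filter input { type filter hook input priority 0 ; }
```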

You can now run nft list tables and it will show:

 table filter

Running the following command shows the contents of the table:

 nft list table filter -a

The output will be:

table ip filter {
        chain input {
                type filter hook input priority 0;
        }
}

That’s it!  You now have nftables running. There are several good tutorials out there that deal with creating an nftables firewall once you’re at this point and I’m also updating my Linux Firewalls book to include coverage of nftables!  It’ll be out in the fall of 2014.


Update: Asterisk on Raspberry Pi

I had been successfully running Asterisk on a Raspberry Pi with an Obi110 interface to the PSTN for about a year.  However, I recently switched back to a standard 1U rack mount server for the phone system.  The Raspberry Pi server was just fast enough to support Asterisk with SIP and PSTN outbound trunks and several internal SIP clients, but the SD card just wasn’t reliable enough.

Something, and I never found out what, was quite wonky with the SD card, the image, or the Raspberry Pi itself for this particular server.  At various times it would stop working and fail to boot properly after a power cycle.  Swapping out the SD card for a new one with the same image sometimes worked, but sometimes I had to swap out the entire Pi for another one.

I was already sending (just about) all logs towards a centralized log server to prevent writes on the box itself.  In order to increase reliability my next step was to add a USB hub and an external hard drive for the root filesystem, relying on the SD card for boot only.  However, at that point I figured I was only going to create a mess of wires without being fully assured of increasing reliability all that much.  Now there would be two more points of failure (the USB hub and the external drive) thereby making recovery all the more difficult.

I was quite happy with the performance of the Pi for this purpose.  I wonder aloud if something like the Intel Galileo would fare better, if one could get asterisk running on the primary flash.  Regardless, it was a successful experiment.

nft: error while loading shared libraries: cannot open shared object file: No such file or directory

After compiling nftables and attempting to run nft list tables I received the error:

nft: error while loading shared libraries: cannot open shared object file: No such file or directory

Turns out I needed to run ldconfig in order to fix the error.  I also rebooted prior to running ldconfig but probably didn’t need to.
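
If ldconfig doesn’t resolve it, ldd shows which shared libraries a binary wants and which are missing; an unresolved library appears as “not found”.  Using /bin/ls as a stand-in for nft here:

```shell
# ldd lists the shared libraries a binary needs; anything unresolved
# shows as "=> not found" in the output.
ldd /bin/ls
```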

svn to git without history

#assumes existence of gituser which has to be added manually.
mkdir /opt/git/newrepo.git
cd /opt/git/newrepo.git
git --bare init
cd /opt/git
chown -R gituser:gituser newrepo.git
cd ~
mkdir svnrepo-export
cd svnrepo-export
svn export <path-to-svn-repo>
git init
git add .
git commit -m "initial commit"
git remote add origin gituser@localhost:/opt/git/newrepo.git
git push origin master
<move old real svn repo out of the way>
git clone gituser@localhost:/opt/git/newrepo.git <directory>

Perl to Python RSS Conversion

For quite some time, I’ve had my own personal homepage containing commonly used links, server status, subject lines of e-mails, and RSS news feeds.  Nothing exciting there.  The RSS feeds are retrieved by a program that runs every N minutes through cron and places the entries into a MySQL table.  Again, nothing exciting.  However, recently the Perl program that I’ve been using to retrieve the RSS has been consuming a bigger percentage of the available resources on the server.  More accurately, the server on which the RSS retriever is hosted is more heavily utilized now, so when the RSS parser runs it’s noticeable in the server’s load average.

Of course, one way to solve it is to throw more hardware at it, like more CPU and RAM.  However, that would be too easy.  Instead I threw together a python program using feedparser just to see the difference in performance between the two for this purpose.  The results were surprising.  Python took about 2.8 seconds in real time and used significantly less system resources to do so.  Perl took ~11 seconds for the same feeds at roughly the same time.

I’m not writing this to be a knock against Perl; more likely the methods that I used to parse the RSS in Perl (and my general Perl programming skills?) are the issue.

Timings below.

Python:

real 0m2.868s
user 0m1.808s
sys 0m0.072s

Perl:

real 0m11.016s
user 0m4.108s
sys 0m0.144s