Preparing for Automation with Ansible

Using Automation for Server Administration

Managing multiple servers is cumbersome. The time spent managing service configurations, keeping software up to date, and maintaining local files and user accounts grows seemingly exponentially as servers are added. Therefore, we need a tool to help automate these tasks.

There are several tools available for maintaining servers and equipment. Among the popular tools are Ansible, Chef, Puppet, and Salt. We will use Ansible for this course. Ansible has a key advantage over other server automation tools: it does not require additional software, such as a background agent, to be installed on each managed server. Rather, Ansible uses ssh for management, and ssh is already available on virtually every Linux server you will manage. Avoiding additional software means fewer chances for software bugs and one less vector for attackers to exploit.

Preparing for Automation

Best practice dictates that we use root privileges as little as possible and that we avoid using root credentials across the network whenever we can.

Therefore, we will create a normal, non-privileged user on each device that we intend to bring under Ansible control. We will grant the user sudo privileges with no password required. We could further limit the IPs from which this account can log in, and we could also limit the commands that the user is allowed to run. However, for this tutorial our user configuration will be sufficient and is typical of many business environments. We gain some security through obscurity by giving the user a unique name, one that would not typically be included in a script run by an attacker.

Prerequisites:  You should have two computers available running Linux; virtual machines are fine, as are Raspberry Pis. The two computers should be able to ping each other, and you should be able to ssh between them. This tutorial uses Debian 9.3, but all commands, aside from those that use apt-get, will run on any of the popular Linux distributions.

Terminology:  There will be one Ansible server which will be referred to as the “server” for the purposes of automation.  It is from this server that we will manage one or more “client” devices also known as hosts in Ansible terminology.  When you see server in this tutorial, it will refer to the device on which Ansible is installed.  When you see client in this tutorial, it will refer to the host or hosts that will be managed by Ansible.

Preparing for Ansible

We will use a custom user for our automation efforts. The user will need to be created on the Ansible server and any client hosts that we will manage with Ansible. This user will not have a password and therefore will not be able to login directly.

Task 1:

On both the Ansible server and the client:

useradd -m automat

On the Ansible server, change the shell of the automat user to /bin/bash using the chsh command:

chsh -s /bin/bash automat

Task 2:

We need to generate an ssh key on the Ansible server and distribute that key to any other hosts that we intend to bring under Ansible automation.  On the Ansible server, you will need to “become” the automat user. To do so, first go root and then use su - to become the automat user.

su -

#Enter the root password

From the root prompt:

su - automat

You have now “become” the automat user, just as if you had logged in as that user with ssh.  Run the following command as automat in order to generate an ssh key, accepting the default file location and leaving the passphrase empty:

ssh-keygen -t rsa


Task 3:

We now need to distribute the public key to any hosts where we want to log in.  Recall that the automat user does not have a password and therefore cannot log in with one over ssh, so the ssh-copy-id command will not work.  Instead, on the client you will need to become the automat user.  As automat, create a directory called .ssh, then copy the contents of the public key file /home/automat/.ssh/id_rsa.pub from the Ansible server and paste the contents into a file called authorized_keys in the .ssh directory on the client, as follows:

On the Ansible server, copy the contents of /home/automat/.ssh/id_rsa.pub to the clipboard.

On the client host, as automat:

mkdir .ssh

Edit .ssh/authorized_keys using vim or nano and paste the contents of the clipboard into the file, saving and exiting.
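The whole sequence on the client can be sketched as follows (the key material shown is a placeholder; paste your real public key instead). The chmod steps are not described above, but sshd commonly refuses keys stored with loose permissions, so they are worth including:

```shell
# Run as the automat user on the client.
mkdir -p ~/.ssh
chmod 700 ~/.ssh
# Paste the single-line public key copied from the Ansible server:
cat >> ~/.ssh/authorized_keys <<'EOF'
ssh-rsa AAAAB3NzaC1yc2E...placeholder... automat@ansible-server
EOF
chmod 600 ~/.ssh/authorized_keys
```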

Checkpoint 1

Once the public key has been distributed to the client, you will be able to ssh directly from the Ansible server to the client as the automat user without being prompted for a password.  Note that the first time you ssh to the client you will be prompted to accept the host key of the client machine; type “yes” at that prompt.

If you are prompted for a password when using the automat user then look back at the previous steps to ensure that they have been completed correctly.

Task 4:

Ansible needs elevated privileges in order to manage the client hosts, and we will use sudo to grant them.  Therefore, we need to install sudo on both server and client.  Run the following, as root:

apt-get install sudo

Task 5:

With sudo installed, we need to add an entry to the sudoers file so that our new user can run commands as root without needing a password.  Use visudo on both the server and the client to add the following entry:

automat ALL=NOPASSWD:ALL

Checkpoint 2

You should be able to view the contents of the shadow file as the automat user when using sudo and should not be prompted for a password.

On both server and client, as the “automat” user, run the following, which should result in a “Permission Denied” error or similar:

less /etc/shadow

Continuing as automat, now run the command with sudo; you should not be prompted for a password.  If you are prompted for a password, the sudoers entry must not be correct:

sudo less /etc/shadow

Task 6:

One final item in the initial configuration: ssh as root from server to client in order to cache the host key in root’s known_hosts file.  On the Ansible server, as root, ssh to the client computer. You will be prompted to accept the host key of the client; enter yes to cache the key.  Note that you do not need to complete the connection: merely type “yes” to cache the key and then CTRL-C to terminate the connection.

Note: If you’re using DHCP and the IP address of the client changes, you will need to cache the ssh host key again using this method.

Checklist

At this point, you should have the following in place:


  • User named automat added to both servers; User should not have a password (Task 1).
  • An SSH keypair generated on the primary server (Task 2).
  • The public ssh key placed in ~automat/.ssh/authorized_keys on the secondary server (Task 3).
  • Able to ssh from primary server to secondary server as the automat user without being prompted for a password (Checkpoint 1).
  • The sudo package installed (Task 4).
  • A sudoers entry on each server for the automat user:  automat ALL=NOPASSWD:ALL  (Task 5).
  • Able to run commands as automat on each server using sudo without being prompted for a password (Checkpoint 2).
  • Have executed ssh from server to client in order to cache the client’s host key in root’s known_hosts file (Task 6).

Installing Ansible

One of the best features of Ansible is that you don’t need to install any agent software on each client to be managed.  In this section, we will install Ansible on the server.

Task 7:

Ansible only needs to be installed on the Ansible server itself. As root, run the following command:

apt-get install ansible

Configure Ansible for First Use

Ansible configuration is stored in /etc/ansible. Within the /etc/ansible directory you will find two files, ansible.cfg and hosts. The ansible.cfg file contains basic configuration for how Ansible will behave while the hosts file contains information on hosts and host groups that are under Ansible control.

Most of the defaults defined in the ansible.cfg configuration file will work for our use. We can also change most of the options at run-time through command line options. However, we know that we will always use a user called ‘automat’ for Ansible and we know that the key pair that we want to use is stored in the home directory for that automat user. Therefore, we can set these two items as our custom defaults rather than needing to specify them on the command line every time.

Task 8:

Edit /etc/ansible/ansible.cfg, find the following configuration lines, uncomment them, and set them as follows:

remote_user = automat
private_key_file = /home/automat/.ssh/id_rsa

With those two customizations in place, Ansible will always ssh to the client devices using the automat user and will use the private key that we generated and distributed.

Adding a Host to Ansible

Adding a host to Ansible is as simple as adding its IP address or hostname to the /etc/ansible/hosts file. Assuming that we don’t have a DNS entry for our client computer and we don’t have an entry for it in /etc/hosts, we will add it by IP address.

Note: If you’re using DHCP and the IP address of the client changes, you will need to add the new IP into /etc/ansible/hosts and you will need to accept the ssh host key again as well.

Ansible can use host groups to maintain a logical collection of hosts. For instance, by region, by role, and so on. For now, we will simply add the client to ungrouped hosts.

Task 9:

Edit /etc/ansible/hosts and add the IP of the client machine to it.
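As a sketch, the relevant part of /etc/ansible/hosts might look like this after Task 9 (the IP addresses are placeholders, and the host group is shown for illustration only; this tutorial keeps the client ungrouped):

```
# Ungrouped hosts
192.0.2.10

# An example host group (not used in this tutorial)
[webservers]
192.0.2.21
192.0.2.22
```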

Testing Ansible

Ansible is executed with the ansible command, typically run as root or with sudo privileges, from the Ansible server.  Ansible uses modules to execute commands on remote client machines. One such module is “ping”. The ping module does not use ICMP; rather, it attempts to log in to the specified host(s) and examines the Python environment after login. On success, the ping module returns the word “pong”.

We can test Ansible and its ability to communicate with the client using the ping module.  The ansible command also requires a list of hosts on which the given command or playbook (more on playbooks soon) will be executed. There is a convenient alias called “all” which is an alias for all hosts. This saves us the time of needing to enter the IP address or host names individually. Like you might expect, the all alias will attempt to execute the given Ansible command on all hosts that are defined in /etc/ansible/hosts.

Checkpoint 3

Testing the ability to communicate between Ansible server and client host is the final step. The command to execute is as follows. Run this command either as root or as the automat user. If you run it as automat, preface it with sudo.

ansible -m ping all

The command in Checkpoint 3 calls the ansible command, passing -m ping to specify the ping module, followed by the host specification, in this case all.  Example output from a successful Ansible ping shows the client host’s IP address followed by the result:

<client-ip> | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
If you do not receive this success message, refer back to the Checklist provided earlier to ensure that you can complete each of those steps successfully and work through the final tasks related to Ansible again to ensure that they are complete.

With a successful Ansible ping, you have configured Ansible on the server and on one client. Another helpful module is the setup module. You can use the setup module to gather facts about client hosts. Some of these facts will be used for making decisions in playbooks and can be used as variables.
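A minimal sketch of using such a fact as a variable (the debug module and the ansible_distribution fact are standard Ansible; the playbook below is illustrative only):

```
- hosts: all
  tasks:
    - name: show a fact gathered by the setup module
      debug:
        msg: "This host is running {{ ansible_distribution }}"
```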

cPanel Backups

cPanel, the popular web hosting platform, has been through some updates over the years.  The latest update changed how the landing page is rendered: AngularJS is now used to build the reseller list.  The effect of this change is that any parsing done by scripts (Perl, Python, or otherwise) now needs to look elsewhere for the list of accounts to back up.

The goal is to have the main reseller account login with its credentials, gather a list of its accounts, and then run a backup of each individual account, sending the backup to a third-party off-site server via FTP(s).  I developed a script for a client several years ago and it has been working successfully with very little care and feeding.  The change to Angular meant some updating was needed to that script.

The ultimate fix was to login as normal and then call the list_accounts file which returns a JSON-encoded list of accounts underneath the reseller.  The fix itself was rather easy to implement because the return is JSON.  Finding the fix is, as always, an adventure.
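As an illustration of why a JSON return makes the fix easy, here is a minimal sketch of pulling usernames out of such a list. The JSON shape below is invented for the example; the real list_accounts output may be structured differently:

```shell
# Hypothetical sample of a JSON-encoded account list (structure invented
# for illustration only).
accounts_json='{"accounts":[{"user":"site1"},{"user":"site2"}]}'

# Extract the usernames with a small inline Python helper.
users=$(printf '%s' "$accounts_json" | python3 -c '
import json, sys
for acct in json.load(sys.stdin)["accounts"]:
    print(acct["user"])
')
printf '%s\n' "$users"
```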

Parsing a CSV with JavaScript

I had a question from a student as to parsing a CSV file with JavaScript – not jQuery, not anything else, just JavaScript.  Easy, right?  Should be if you’ve worked with files and JavaScript before.  I hadn’t done so at the time, so it served as a bit of a challenge, and in a good way.

One caveat on the code in this post:  It’s ugly.  I’m using an inline “onsubmit” event handler for the form, and I hate myself for doing so.  It’s also not optimized in any way but is more Proof of Concept than anything.  If you’re going to use this in a production environment, first fix that event handler and then clean the code up and include error checking/handling.  I also don’t know how well this would perform with a large CSV file.

Speaking of CSV, the code assumes a CSV file that contains no other commas other than those separating the actual values.  Here’s the sample that I used:

Stevens Point,41,Sunny

As a side note, I want to make it back to Halifax once when it’s not raining.

Build an HTML Page

Let’s build an HTML page to grab the file.  The HTML is simple, just a form with an input type of “file” and a submit button.  The HTML also features a <table> element so that I can dump the resulting contents of the CSV out to the screen.

<!doctype html>
<form onsubmit="return processFile();" action="#" name="myForm" id="aForm" method="POST">
<input type="file" id="myFile" name="myFile"><br>
<input type="submit" name="submitMe" value="Process File">
</form>
<table id="myTable"></table>

JavaScript CSV

Next up is the JavaScript.  The file input on the form exposes an array-like list of the selected files.  So:

var theFile = document.getElementById("myFile").files[0];

Now “theFile” contains the actual file as uploaded.  Next, some minimal error checking to see if theFile is actually something.  If it is, then a couple variables are initialized and set for later use:

var table = document.getElementById("myTable");
 var headerLine = "";

And then the key bit:  A FileReader() object is instantiated:

var myReader = new FileReader();

A function is attached to the onload event of the myReader FileReader.  This function is where the magic happens:

 myReader.onload = function(e) {
   var content = myReader.result;
   var lines = content.split("\r");
   for (var count = 0; count < lines.length; count++) {
     var row = document.createElement("tr");
     var rowContent = lines[count].split(",");
     for (var i = 0; i < rowContent.length; i++) {
       if (count == 0) {
         var cellElement = document.createElement("th");
       } else {
         var cellElement = document.createElement("td");
       }
       var cellContent = document.createTextNode(rowContent[i]);
       cellElement.appendChild(cellContent);
       row.appendChild(cellElement);
     }  //end rowContent for loop
     table.appendChild(row);
   }  //end main for loop
 };  //end onload function

Actually, the magic begins outside of the onload function with this line:

myReader.readAsText(theFile);

When this line executes, the file is read and the onload function fires for the FileReader object.  The first line within the onload function then gathers the contents of the file into a variable called ‘content’.  The content is then split along Return characters (\r).  So now we have a variable that contains the CSV line-by-line:

   var content = myReader.result;
   var lines = content.split("\r");

Next, a for loop is entered.  This for loop creates a new table row (tr) for each line in the CSV:

     var row = document.createElement("tr");

The contents of the row are then split at commas:

 var rowContent = lines[count].split(",");

The contents of each row (in the rowContent variable) are then looped in the next for loop.  If it’s the first line of the CSV then we assume it contains heading values and therefore make a “th” element.  Otherwise simple “td” elements are created for each cell in the table:

         if (count == 0) {
           var cellElement = document.createElement("th");
         } else {
           var cellElement = document.createElement("td");
         }

Next, the code creates text nodes for each bit of content, appends those text nodes to the row and then appends the table row to the HTML table.

         var cellContent = document.createTextNode(rowContent[i]);
         cellElement.appendChild(cellContent);
         row.appendChild(cellElement);
       }  //end rowContent for loop
       table.appendChild(row);
     } //end main for loop

Finally, the code does a return false so that the form isn’t actually submitted.

Here’s the full code, with in-page JavaScript:

<!doctype html>
<script type="text/javascript">
function processFile() {
  var theFile = document.getElementById("myFile").files[0];
  if (theFile) {
    var table = document.getElementById("myTable");
    var headerLine = "";
    var myReader = new FileReader();
    myReader.onload = function(e) {
      var content = myReader.result;
      var lines = content.split("\r");
      for (var count = 0; count < lines.length; count++) {
        var row = document.createElement("tr");
        var rowContent = lines[count].split(",");
        for (var i = 0; i < rowContent.length; i++) {
          if (count == 0) {
            var cellElement = document.createElement("th");
          } else {
            var cellElement = document.createElement("td");
          }
          var cellContent = document.createTextNode(rowContent[i]);
          cellElement.appendChild(cellContent);
          row.appendChild(cellElement);
        }
        table.appendChild(row);
      }
    };
    myReader.readAsText(theFile);
  }
  return false;
}
</script>
<form onsubmit="return processFile();" action="#" name="myForm" id="aForm" method="POST">
<input type="file" id="myFile" name="myFile"><br>
<input type="submit" name="submitMe" value="Process File">
</form>
<table id="myTable"></table>

Ansible and AWS EC2

Ansible is not lacking in awesome.  I’ve used Puppet and Chef and others to manage Linux but Ansible meets my criteria for host management for the specific reason that it uses SSH to manage hosts rather than an agent.  Ansible is also simple to get up and running quickly.

In just a few hours, I was managing hosts and doing real work to keep DHCP configs straight.  Adding more and more functionality to playbooks can be done easily.  As I’ve been using Ansible, I’m expanding in both my understanding of the tool and of the infrastructure that I manage.

I’m currently using Ansible to deploy to EC2 Linux hosts.  My plan is to be able to deploy an EC2 host through the AWS API.  I already have a bootstrap playbook in place to add various users, distribute ssh keys, and add those users to /etc/sudoers.  Ansible includes modules to add authorized_keys and install software, so doing so never feels like a hack or that I’m stretching the tool.

I’m also using Ansible to manage an Asterisk server, several MySQL servers, various DNS servers, and soon several Raspberry Pi computers.  I have a combination of physical servers, virtual servers through Xen, and AWS hosts.  I manage those via a custom variable called hosttype and I can do things like this to add an apt repository to the sources list on physical or virtual servers that using the Debian Jessie release:

- name: add apt source repo when physical or virtual
  apt_repository: repo="deb-src jessie main contrib non-free"
  when: ansible_distribution_release == "jessie" and
        (hosttype == "phy" or hosttype == "vir")

I don’t like to share passwords among MySQL servers, and Ansible enables trivial customization on a per group or per-host basis using group_vars or host_vars.
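As a sketch, per-group credentials might live in a group_vars file; the group name, file path, and variable below are all hypothetical:

```
# /etc/ansible/group_vars/mysqlservers
mysql_root_password: "a-secret-unique-to-this-group"
```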

I’ll be deploying Raspberry Pi computers and EC2 instances en masse later this year, and Ansible will make doing so easy and repeatable.

Hands-on with Amazon Echo

I was in the market for a Bluetooth speaker that had decent sound.  I got that and then some with the new Amazon Echo.  I received the Echo yesterday through a Prime pre-order.  Amazon has put a lot of thought and effort into the packaging of the Echo.  The unboxing was reminiscent of unboxing an Apple device.

The first thing that struck me was the size and weight of the Echo.  Here’s a pic showing the speaker on my desk next to a CD (Classic Quadrophenia) that should give an example of the scale.

Amazon Echo

Note the blue ring in the pic too.  When the Echo is thinking or responding to something, the blue light is on.  The ring can turn bright red if, for example, the Echo loses its wifi connection.  It did that once yesterday when I had the device in the kitchen (fairly far away from the wireless AP to which it was connected).

On the top are two buttons.  One is a mute button for the mic, for when you don’t want the Echo listening to your every word.  The other can be pressed to wake the Echo to receive a command, if you don’t want to speak the “wake word”.  At this time, the only wake words are “Alexa” (the default) and “Amazon”.  I’m guessing they’ll eventually let you customize or choose a wake word.

Amazon Echo Setup

Setting up the Echo is rather straightforward.  Plug it in.  The Echo takes about 40 to 45 seconds to boot, during which time the ring shows a rotating white effect.  When the Echo is booted the first time, the ring turns orange and the Echo says “Hello”.  The next step is to download the Echo app from your respective app store.

The Echo sets up its own temporary wifi network and will audibly tell you to connect to that network from your device.  When you do so, you’ll be able to choose the wifi network to which the Echo should connect.  The Echo then connects to your home wifi network and you’re ready to roll.

The Echo also comes with a remote and a fairly powerful magnetic holder for said remote.  I haven’t yet experimented with the remote.  I suppose the use case for the remote is when you don’t want to shout “Alexa, stop” across the room.

The Bluetooth pairing process is easy too.  Simply say “Alexa, connect” and the Echo will go into a mode where it can be seen and paired to nearby devices.  There is no additional security code to enter, so anyone nearby who is watching could theoretically connect to the device while in pairing mode.  Connecting and disconnecting devices from Bluetooth is as easy as saying “Alexa, connect” and “Alexa, disconnect”.

The “Alarm” feature might be useful, though there doesn’t seem to be a way to set the alarm tone, just the time.  You can speak the alarm time, like “Alexa, set an alarm for 2pm” and watch as it gets set within the Echo web site.  That’s a novelty though and I haven’t explored the alarm function beyond merely setting it and then jumping out of my chair when the alarm went off and I had the volume too high.

Speaking of volume, there are multiple ways to adjust it.  Saying “Alexa, increase volume” makes playback louder, and you can say “Alexa, volume 3” to set the volume to a specific level.  Sadly, the range is 0 to 10 and not 11, as I was hoping.  There is also a dial on top of the device that can be turned left and right to adjust the volume manually.  The dial gives finer control than the 0-through-10 voice scale; for example, after saying “Alexa, volume one” you can still nudge the volume below one with the dial.

There is a news function and you can customize from among several audio feeds such as NPR, BBC, Economist, and others.  Saying “Alexa, news” begins these feeds and “Alexa, next” skips to the next “Flash” news feed, as they are called.  For non-audio based feeds, the Echo reads the news aloud.

Weather is available for the current location, and you can ask for current or future conditions for both local and remote locations.

How’s the Sound?

I wanted a Bluetooth speaker with good sound, and the Echo delivers.  The sound is rich and full range, with sufficient bass and midrange.  Any small speaker is simply unable to move as much air as a full-size speaker, so I don’t believe a Bluetooth speaker will ever provide the drive of a nice JBL monitor.

I set up a playlist through Amazon Prime Music and can now do things like “Alexa, shuffle playlist Classic Rock” and random songs will be chosen from that playlist.


I suspect Amazon will be working hard to enhance the Echo.  For instance, the calendaring function currently only works with Google Calendar.  I’d love to see that integrated with other calendaring options, maybe through the Alexa AppKit or natively.

I can see the need to order additional accessories like power supplies so that I can move the Echo to different rooms… though I suppose I could just order more Echoes.

I haven’t yet explored the home control aspects of the Echo that work with Belkin WeMo devices, though I’m hoping to.  I’m also hopeful that Amazon won’t release an “Echo 2” right away, instantly making this one obsolete!

Deploying and Debugging PHP with AWS Elastic Beanstalk

AWS Elastic Beanstalk provides a compelling platform for deployment of applications. My web site, the one you’re viewing this page on now, has historically been deployed on a Linux computer hosted somewhere.  Well, ok, the software from which you’re reading this is WordPress but it’s hosted on the same Apache server as the main web site.

I recently redesigned the main site and in the process purposefully made the site more portable.  This essentially means that I can create a zip file of the site, including the relevant Apache configuration in an htaccess file, and then deploy it onto any equivalent server, regardless of that server’s underlying Apache or filesystem configuration.
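As a sketch of what that portability looks like, an htaccess file can keep rewrite rules relative to the application rather than to the server’s filesystem layout. The rules below are a hypothetical example, not the site’s actual configuration:

```
# .htaccess (hypothetical example)
RewriteEngine On
# Send requests for non-existent files to a front controller:
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [QSA,L]
```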

That got me thinking:  Can I deploy the site onto Elastic Beanstalk in Amazon Web Services?  The answer:  Yes I can.  The path that I followed was essentially to clone a clean copy of the repository from git and then create a zip file with the contents but not the .git-related bits.  Here’s the command (using site.zip as an example archive name), executed from within the directory with the code:

zip -r site.zip . --exclude=\*.git\*

The next step is to then deploy this into AWS Elastic Beanstalk.  That’s relatively straightforward using the wizard in AWS. In a few minutes, AWS had deployed a micro instance with the code on it.  I needed to undo some hard-coded path information and also redo the bit within the templating system that relied on a local wordpress file for gathering recent blog posts.  It wasn’t immediately clear what the issue was though, and the biggest challenge I encountered was debugging.

Debugging PHP in Elastic Beanstalk

My workflow would normally call for ssh’ing into the server and looking at error logs.  However, I found that enabling the display of errors was helpful.  That setting is found within the Configuration area of Elastic Beanstalk.

However, that’s not a setting I would use on a production site.  There are two other ways to troubleshoot Elastic Beanstalk.  First, you can view and download Apache logs and other related logs from Elastic Beanstalk.  For example, adding error_log() statements to the code results in entries in these easily viewable logs.

The other debug option is to enable an EC2 key pair for the EC2 instance associated with the application.  This is done at application creation or later through the Configuration.  Therefore, I simply deployed another application with the same Application Version and chose an EC2 key pair this time.


Note that AWS replaces the instance entirely if you change this key at a later date, so if you have a Single Instance-hosted version of the application, the site will be unavailable while a new instance spins up.


Once the key pair is enabled on the server, it’s simply a matter of ssh’ing into the EC2 instance using the key and the ec2-user account, like so:

ssh -i mykey.pem ec2-user@<instance-address>

From there, it’s easy to navigate around and see everything on the server, including log files.


Note that the current version of the application can be found within the /var/app/current directory on Elastic Beanstalk-deployed PHP applications.  You can even edit files directly there, but I wouldn’t recommend it since it breaks the zip-based deployment paradigm and architecture.

In summary, Elastic Beanstalk deployment and debugging were much easier and much more powerful than I envisioned.


Monitoring SIP Peer in Asterisk

I’ve been experimenting with an external SIP provider for outbound and inbound calling.  Nothing groundbreaking about that; plenty of people use SIP providers rather than traditional landlines.  I recently had an issue with the SIP peer going into unreachable status in Asterisk.  After debugging with the provider, I found it to be a weird ARP issue local to the Asterisk server.  The server thought that some of the provider’s IPs were local traffic, so the traffic wasn’t being passed to the default gateway.  Clearing the ARP cache and the IP route cache fixed the issue.

The issue got me thinking about how to monitor the status of the provider, so I set up a simple script that opens an ssh session to the asterisk server and looks for the status of that peer.  When the status is not “OK”, the output is printed and, through the magic of cron, is sent to me.

Here’s the script:


ssh -i /root/.ssh/mykey root@<asterisk-server> 'asterisk -x "sip show peers"' | grep <providername> | grep -v OK

The script initiates an ssh session using a private key.  The matching public key has already been placed in authorized_keys on the Asterisk server…  and yes, slap my hand for ssh’ing as root here; I need to fix that.  The command asterisk -x "sip show peers" is executed on the remote server.  That output is piped to grep for the <providername>, which is then piped through grep -v to exclude the “OK” lines, since I assume things are OK and only want to know when they’re not.
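Since cron mails any non-empty output, scheduling the check is a one-line crontab entry; the schedule, script path, and address below are hypothetical:

```
MAILTO=admin@example.com
*/5 * * * * /root/bin/check-sip-peer.sh
```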

Admittedly, nothing groundbreaking about this simple one line script either!  But here it is nonetheless, in case anyone finds it useful for monitoring when a sip peer goes unreachable or lagged.

Installing nftables on Debian 7.5

[Last Update: 8/11/2014 – Clean up some bits around the options to select.]

This article discusses installation of nftables, the new Linux firewall software, on a Debian 7.5 system.  Nftables is under very active development and therefore the installation steps may change by the time you view this article.  Specifically, the various prerequisites needed in order to build nftables will likely no longer be needed as the software matures, and more importantly, as packages for it become available.

Note: This article begins with a base of Debian 7.5.0 netinst with the SSH Server and Standard System Utilities installed.

There are two primary components involved in an nftables system:  The first component is the Linux kernel, which provides the underlying nftables core modules.  The second component is the administration program called nft.

Compiling a kernel

The Linux kernel that comes with Debian 7.5.0 is based on version 3.2, which predates nftables (nftables entered the mainline kernel in 3.13), so a newer kernel is required.

Before you can compile a kernel, you need to get a kernel.  As of this writing, the latest stable kernel is 3.15.  Retrieving that from kernel.org with the wget command looks like this:

wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.15.tar.xz

Then unpack the kernel source:

tar -xvf linux-3.15.tar.xz

You’ll now have a pristine kernel ready to be built.

Several packages are essential, and others merely helpful, for compiling a kernel on Debian.  The package named kernel-package provides useful utilities for creating a Debian-packaged kernel.  Kernel-package has several prerequisites, but those are installed automatically when you select kernel-package for installation.

The method shown in this article uses the ‘menuconfig’ option to configure the kernel.  Other methods, such as the plain text-based config option, are also available.  The menuconfig option requires the ncurses development libraries.  On Debian, these are found in the libncurses5-dev package and can be installed with this command (run as root):

apt-get install libncurses5-dev kernel-package

Note:  You may need to update the package list by running apt-get update prior to the packages becoming available for installation.

From within the linux-3.15 (or whatever version) directory, run:

make menuconfig

The options necessary within the kernel for nftables are found in the Networking support hierarchy.

Drill-down to the Networking support -> Networking options -> Network packet filtering framework (Netfilter).

Inside of the IP: Netfilter Configuration menu, select IPv4 NAT.  Back up at the Network packet filtering framework menu, select IPv6: Netfilter Configuration and enable IPv6 NAT along with its sub-options of MASQUERADE target support and NPT target support.

Back at the Network packet filtering framework level, enter the Core Netfilter Configuration menu and enable Netfilter nf_tables support.  Doing so opens up several additional options.

Netfilter nf_tables mixed IPv4/IPv6 tables support
Netfilter nf_tables IPv6 exthdr module
Netfilter nf_tables meta module
Netfilter nf_tables conntrack module
Netfilter nf_tables rbtree set module
Netfilter nf_tables hash set module
Netfilter nf_tables counter module
Netfilter nf_tables log module
Netfilter nf_tables limit module
Netfilter nf_tables nat module
Netfilter nf_tables queue module
Netfilter nf_tables reject support
Netfilter x_tables over nf_tables module

Back in the Network packet filtering framework (Netfilter) level, select IP: Netfilter Configuration and find the IPv4 nf_tables support section and enable IPv4 nf_tables route chain support, IPv4 nf_tables nat chain support, and ARP nf_tables support.  Back at the Network packet filtering framework (Netfilter) level, select IPv6: Netfilter Configuration again and enable IPv6 nf_tables route chain support, and IPv6 nf_tables nat chain support.

Note: For the purposes of this article, all of the options will be selected as modules.

Finally, within the Network packet filtering framework (Netfilter) section, enable the Ethernet Bridge nf_tables support feature if you need this functionality.
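Once you have saved the configuration and exited menuconfig, you can sanity-check that the selections actually landed in the generated .config file.  A minimal sketch: the check_opt helper name is my own, and the CONFIG_NF_TABLES and CONFIG_NFT_NAT symbol names are taken from the 3.15 Kconfig.

```shell
# check_opt <config-file> <option-name>
# Reports whether the named kernel option was recorded as a module (=m).
check_opt() {
    if grep -q "^$2=m" "$1" 2>/dev/null; then
        echo "$2=m"
    else
        echo "$2 missing"
    fi
}

# Typical use, from the top of the kernel source tree:
#   check_opt .config CONFIG_NF_TABLES
#   check_opt .config CONFIG_NFT_NAT
```

If an option comes back missing, re-run make menuconfig before building.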

Once your kernel configuration is complete, you can clean the source tree with the command:

make-kpkg clean

Now it’s time to compile the kernel.  Depending on the speed of your system it may take several minutes to several hours.  If you have multiple processors, you can likely speed up the process by having make-kpkg use them.  This is accomplished by setting the CONCURRENCY_LEVEL environment variable.  For instance, on a system with two processors, the variable is set as such:

export CONCURRENCY_LEVEL=2

Alternately, specify all of it on the command line:

CONCURRENCY_LEVEL=2 INSTALL_MOD_STRIP=1 make-kpkg --initrd --revision=1 kernel_image

Note: On a dual processor quad core system the compile took about 30 minutes.
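Rather than hard-coding the processor count, you can derive it from the machine itself; nproc is part of GNU coreutils (on very old systems, getconf _NPROCESSORS_ONLN does the same job).

```shell
# Set the make-kpkg concurrency level from the number of online CPUs.
CONCURRENCY_LEVEL=$(nproc)
export CONCURRENCY_LEVEL
echo "CONCURRENCY_LEVEL=$CONCURRENCY_LEVEL"
```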

Once the kernel has been compiled, installation is accomplished (as root) with the command:

dpkg -i linux-image-<your_version_here>.deb
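After the package is installed and the machine rebooted, a quick check confirms that the new kernel is actually the one running; this sketch assumes the 3.15 version built above.

```shell
# Compare the running kernel release against the version just built.
running=$(uname -r)
case "$running" in
    3.15*) echo "running the new kernel: $running" ;;
    *)     echo "still on $running; check the boot loader configuration" ;;
esac
```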

Rebooting the server brings up the shiny new kernel but the server isn’t quite ready to run nf_tables yet.  Prior to compiling the nft administration program, you can verify that the nf_tables module can load.  First, see if the module is already loaded:

lsmod | grep nf_tables

If there’s output then the module has already been loaded.  If not, then you can load the module with modprobe, as such:

modprobe nf_tables

Rerunning the lsmod command (lsmod | grep nf_tables) should give output now, similar to this:

nf_tables              37955  0
nfnetlink              12989  1 nf_tables
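If you script this check-then-load sequence, it can be wrapped in a small helper.  The module_in_list name is my own invention; it simply inspects lsmod-style output read from standard input.

```shell
# module_in_list <name>: succeeds when the named module appears at the
# start of a line of lsmod-style output read from standard input.
module_in_list() {
    grep -q "^$1[[:space:]]"
}

# lsmod itself needs no special privileges; modprobe does.
if lsmod 2>/dev/null | module_in_list nf_tables; then
    echo "nf_tables is loaded"
else
    echo "nf_tables is not loaded; run 'modprobe nf_tables' as root"
fi
```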

Compiling the nft Administration Program

The nft administration program enables control over the firewall, in much the same way that the iptables command controlled an iptables-based firewall.  The nft program depends on the libmnl and libnftnl libraries.  With the large amount of active development underway on nf_tables and related libraries, this tutorial shows how to get the latest copy using Git rather than attempting to install from a package or another method.

apt-get install autoconf2.13 libtool pkg-config flex bison libgmp-dev libreadline6-dev dblatex

Note that dblatex is only needed if you want PDF documentation, which I sometimes do.  You can save some space and reduce the attack surface by leaving dblatex off the previous apt-get command line.

The three repositories can be cloned into your current directory with the commands:

git clone git://
git clone git://
git clone git://

Once a copy has been downloaded, the next step is to compile the software.  Both libmnl and libnftnl are prerequisites for compiling nftables, so those will be compiled first with the commands (all run as superuser/root):

cd libmnl
./autogen.sh
./configure
make install

Now cd back up a directory and into the libnftnl directory and compile it:

cd ../libnftnl
./autogen.sh
./configure
make install

Finally, compile nftables:

cd ../nftables
./autogen.sh
./configure
make install

With the nftables administration program compiled and installed you can now run nft commands!  Depending on your installation, you may need to reboot and/or run ldconfig.  I did both; a reboot didn’t fix it so running ldconfig as root was the next logical step.  Actually, that might have been the first logical step before rebooting, but that’s how it goes sometimes.

In any event, running the following command should do nothing (and that’s what we want right now):

nft list tables

If the command returns nothing at all, then nft is working fine.  You can create a table with the command:

nft add table filter

Now create a chain with the command:

nft add chain filter input { type filter hook input priority 0 \; }

Note that the space and backslash before the semicolon are necessary when entering the command from the command line; alternatively, you can quote the entire { ... } block so the shell does not interpret the semicolon.

You can now run nft list tables and it will show:

table filter

Running the following command shows the contents of the table:

nft list table filter -a

The output will be:

table ip filter {
        chain input {
                type filter hook input priority 0;
        }
}
That’s it!  You now have nftables running. There are several good tutorials out there that deal with creating an nftables firewall once you’re at this point and I’m also updating my Linux Firewalls book to include coverage of nftables!  It’ll be out in the fall of 2014.


Update: Asterisk on Raspberry Pi

I had been successfully running Asterisk on a Raspberry Pi with an Obi110 interface to the PSTN for about a year.  However, I recently switched back to a standard 1U rack-mount server for the phone system.  The Raspberry Pi was just fast enough to support Asterisk with SIP and PSTN outbound trunks and several internal SIP clients, but the SD card just wasn’t reliable enough.

Something, and I never found out what, was quite wonky with the SD card, the image, or the Raspberry Pi itself on this particular server.  At various times it would stop working and fail to boot properly after a power cycle.  Swapping out the SD card for a new one with the same image worked sometimes, but sometimes I had to swap out the entire Pi for another one.

I was already sending (just about) all logs to a centralized log server to prevent writes on the box itself.  To increase reliability, my next step was to add a USB hub and an external hard drive for the root filesystem, relying on the SD card for boot only.  However, at that point I figured I was only going to create a mess of wires without being fully assured of increasing reliability all that much.  There would now be two more points of failure (the USB hub and the external drive), making recovery all the more difficult.

I was quite happy with the performance of the Pi for this purpose.  I wonder aloud if something like the Intel Galileo would fare better, if one could get asterisk running on the primary flash.  Regardless, it was a successful experiment.

nft: error while loading shared libraries: cannot open shared object file: No such file or directory

After compiling nftables and attempting to run nft list tables I received the error:

nft: error while loading shared libraries: cannot open shared object file: No such file or directory

Turns out I needed to run ldconfig in order to fix the error.  I also rebooted prior to running ldconfig but probably didn’t need to.
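To confirm that ldconfig actually registered the freshly installed library, you can query the linker cache directly; this assumes libnftnl was installed to a directory the loader searches (such as /usr/local/lib).

```shell
# ldconfig -p prints the current linker cache; grep shows whether the
# loader can now see libnftnl.
if ldconfig -p 2>/dev/null | grep -q libnftnl; then
    echo "libnftnl is in the linker cache"
else
    echo "libnftnl not found; run ldconfig as root and retry"
fi
```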