Category Archives: Linux & Open Source

Preparing for Automation with Ansible

Using Automation for Server Administration

Managing multiple servers is cumbersome. The time consumed managing service configurations across the servers, ensuring software is up to date, and managing local files and user accounts grows seemingly exponentially. Therefore, we need a tool to help automate these tasks.

There are several tools available for maintaining servers and equipment. Among the popular tools are Ansible, Chef, Puppet, and Salt. We will use Ansible for this course. Ansible has a key advantage over other server automation tools: it does not require additional software, such as a background agent, to be installed on each server. Rather, Ansible uses ssh for management, and ssh is already available on virtually every Linux server that you will manage. Avoiding additional software means less chance of software bugs and one less vector for attackers to exploit.

Preparing for Automation

Best practice dictates that we use root privileges as little as possible and, wherever we can avoid it, that we do not use root across the network at all.

Therefore, we will create a normal, non-privileged user, on each device that we intend to bring under Ansible control.  We will grant the user sudo privileges with no password required.  We could further limit the IPs from which this account can login and we could also limit the commands that the user is allowed to run. However, for this tutorial our user configuration will be sufficient and is typical for many business environments. We will gain some security through obscurity by creating the user with a unique name, one that would not typically be included in a script run by an attacker.

Prerequisites:  You should have two computers available running Linux; virtual machines are fine, as are Raspberry Pis. The two computers should be able to ping each other and you should be able to ssh between them.  This tutorial uses Debian 9.3, but all commands, aside from those that use apt-get, will run on any of the popular Linux distributions.

Terminology:  There will be one Ansible server which will be referred to as the “server” for the purposes of automation.  It is from this server that we will manage one or more “client” devices also known as hosts in Ansible terminology.  When you see server in this tutorial, it will refer to the device on which Ansible is installed.  When you see client in this tutorial, it will refer to the host or hosts that will be managed by Ansible.

Preparing for Ansible

We will use a custom user for our automation efforts. The user will need to be created on the Ansible server and any client hosts that we will manage with Ansible. This user will not have a password and therefore will not be able to login directly.

Task 1:

On both the Ansible server and the client:

useradd -m automat

On the Ansible server, change the shell of the automat user to /bin/bash using the chsh command:

chsh -s /bin/bash automat

Task 2:

We need to generate an ssh key on the primary server and distribute that to any other hosts that we intend to bring under Ansible automation.  On the Ansible server, you will need to “become” the automat user. To do so, first become root and then use su - to switch to the automat user.

su -

#Enter the root password

From the root prompt:

su - automat

You have now “become” the automat user, just as if you had logged in as that user with ssh.  Run the following command as automat in order to generate an ssh key, accepting the default file location (~/.ssh/id_rsa) and leaving the passphrase empty:

ssh-keygen
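If you prefer to script it, the key can also be generated non-interactively. This sketch writes to a throwaway path in /tmp purely for illustration; for the tutorial itself, run ssh-keygen as automat and accept the default ~/.ssh/id_rsa location with an empty passphrase:

```shell
# Remove any leftover demo key so ssh-keygen does not prompt to overwrite.
rm -f /tmp/automat_demo_key /tmp/automat_demo_key.pub

# -t rsa: RSA keypair, -N "": empty passphrase, -q: quiet, -f: output path.
# /tmp/automat_demo_key is a placeholder path for this illustration only.
ssh-keygen -q -t rsa -b 2048 -N "" -f /tmp/automat_demo_key

# The .pub half is the line that gets distributed to client hosts.
cat /tmp/automat_demo_key.pub
```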


Task 3:

We now need to distribute the public key to any hosts where we want to login.  Recall that the automat user does not have a password and therefore cannot login using ssh, so the ssh-copy-id command will not work.  Therefore, on the client you will need to become the automat user.  As automat, create a directory called .ssh, then copy the contents of the public key from the Ansible server and paste them into a file called authorized_keys in the .ssh directory on the client, as follows:

On the Ansible server, copy the contents of /home/automat/.ssh/id_rsa.pub to the clipboard.

On the client host, as automat:

mkdir .ssh

Edit .ssh/authorized_keys using vim or nano and paste the contents of the clipboard into the file, saving and exiting.
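The clipboard steps above amount to the following shell sequence, run on the client as automat. This is a sketch; the PUBKEY value is a placeholder standing in for the single line copied from the server’s id_rsa.pub:

```shell
# Placeholder for the one-line public key copied from the Ansible server.
PUBKEY='ssh-rsa AAAAB3NzaC1yc2EexampleKey automat@server'

mkdir -p "$HOME/.ssh"                    # create .ssh if it does not exist
chmod 700 "$HOME/.ssh"                   # sshd ignores keys in overly open dirs
printf '%s\n' "$PUBKEY" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"   # keep the key file private
```

The tight permissions matter: with StrictModes enabled (the sshd default), a group- or world-writable .ssh directory or authorized_keys file causes the key to be silently ignored.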

Checkpoint 1

Once the public key has been distributed onto the client, you will be able to ssh directly from the Ansible server to the client as the automat user and you will not be prompted for a password.  Note that the first time you ssh to the client you will be prompted to “accept” the host key of the client machine, type “yes” for that prompt.

If you are prompted for a password when using the automat user then look back at the previous steps to ensure that they have been completed correctly.

Task 4:

Ansible will need elevated privileges in order to manage the client hosts, and we will use sudo to grant those privileges.  That means sudo must be installed on both server and client.  Run the following, as root:

apt-get install sudo

Task 5:

With sudo installed, we need to add an entry to the sudoers file so that our new user can run commands as root without needing a password.  Use visudo on both the server and the client to add the following entry:

automat ALL=NOPASSWD:ALL

Checkpoint 2

You should be able to view the contents of the shadow file as the automat user when using sudo and should not be prompted for a password.

On both server and client, as the “automat” user, run the following, which should result in a “Permission Denied” error or similar:

less /etc/shadow

Continuing as automat, now run the command with sudo; you should not be prompted for a password.  If you are prompted for a password, then the sudoers entry must not be correct:

sudo less /etc/shadow

Task 6:

One final item remains in the initial configuration: ssh as root from server to client in order to cache the host key in root’s “known_hosts” file.  On the Ansible server, as root, ssh to the client computer. You will be prompted to accept the host key of the client; enter yes to cache the key.  Note that you do not need to complete the connection; merely type “yes” to cache the key and then CTRL-C to terminate the connection.

Note: If you’re using DHCP and the IP address of the client changes, you will need to cache the ssh host key again using this method.


Checklist

  • User named automat added to both servers; user should not have a password (Task 1).
  • An SSH keypair generated on the primary server (Task 2).
  • The public ssh key placed in ~automat/.ssh/authorized_keys on the secondary server (Task 3).
  • Able to ssh from primary server to secondary server as the automat user without being prompted for a password (Checkpoint 1).
  • The sudo package installed (Task 4).
  • A sudoers entry on each server for the automat user: automat ALL=NOPASSWD:ALL (Task 5).
  • Able to run commands as automat on each server using sudo without being prompted for a password (Checkpoint 2).
  • Have executed ssh from server to client in order to cache the client’s host key in root’s known_hosts file (Task 6).

Installing Ansible

One of the best features of Ansible is that you don’t need to install any agent software on each client to be managed.  In this section, we will install Ansible on the server.

Task 7:

Ansible only needs to be installed on the Ansible server itself. Install it, as root, with the following command:

apt-get install ansible

Configure Ansible for First Use

Ansible configuration is stored in /etc/ansible. Within the /etc/ansible directory you will find two files, ansible.cfg and hosts. The ansible.cfg file contains basic configuration for how Ansible will behave while the hosts file contains information on hosts and host groups that are under Ansible control.

Most of the defaults defined in the ansible.cfg configuration file will work for our use. We can also change most of the options at run-time through command line options. However, we know that we will always use a user called ‘automat’ for Ansible and we know that the key pair that we want to use is stored in the home directory for that automat user. Therefore, we can set these two items as our custom defaults rather than needing to specify them on the command line every time.

Task 8:

Edit /etc/ansible/ansible.cfg, find the following configuration lines, uncomment them, and set them as follows:

remote_user = automat
private_key_file = /home/automat/.ssh/id_rsa

With those two customizations in place, Ansible will always ssh to the client devices using the automat user and will use the private key that we generated and distributed.

Adding a Host to Ansible

Adding a host to Ansible is as simple as adding its IP address or hostname to the /etc/ansible/hosts file. Assuming that we don’t have a DNS entry for our client computer and we don’t have an entry for it in /etc/hosts, we will add it by IP address.

Note: If you’re using DHCP and the IP address of the client changes, you will need to add the new IP into /etc/ansible/hosts and you will need to accept the ssh host key again as well.

Ansible can use host groups to maintain a logical collection of hosts. For instance, by region, by role, and so on. For now, we will simply add the client to ungrouped hosts.

Task 9:

Edit /etc/ansible/hosts and add the IP of the client machine to it.
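As a sketch, /etc/ansible/hosts might then look like the following, assuming the client’s address is 192.168.1.50 (a placeholder; substitute your client’s actual IP). The commented lines show the group syntax for later use:

```ini
# /etc/ansible/hosts -- ungrouped hosts are simply listed at the top
192.168.1.50

# Grouped hosts would be added under a [groupname] header, e.g.:
# [webservers]
# 192.168.1.60
```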

Testing Ansible

Ansible is executed with the ansible command, typically run as root or with sudo privileges. The command is run from the Ansible server.  Ansible uses modules in order to execute commands on remote client machines. One such module is “ping”. The ping module does not use ICMP but rather attempts to login to the specified host(s) and examines the python environment after login to the client. On success, the ping module returns the word “pong”.

We can test Ansible and its ability to communicate with the client using the ping module.  The ansible command also requires a list of hosts on which the given command or playbook (more on playbooks soon) will be executed. There is a convenient alias called “all” for all hosts, which saves us the time of needing to enter the IP addresses or host names individually. As you might expect, the all alias will attempt to execute the given Ansible command on all hosts that are defined in /etc/ansible/hosts.

Checkpoint 3

Testing the ability to communicate between Ansible server and client host is the final step. The command to execute is as follows. Run this command either as root or as the automat user. If you run it as automat, preface it with sudo.

ansible -m ping all

The command in Checkpoint 3 calls the ansible command, passing -m ping to specify the ping module, followed by the host specification, in this case all.  Example output from a successful Ansible ping shows the client host responding by its IP address:

<client IP> | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

If you do not receive this success message, refer back to the Checklist provided earlier to ensure that you can complete each of those steps successfully and work through the final tasks related to Ansible again to ensure that they are complete.

With a successful Ansible ping, you have configured Ansible on the server and on one client. Another helpful module is the setup module. You can use the setup module to gather facts about client hosts. Some of these facts will be used for making decisions in playbooks and can be used as variables.

Ansible and AWS EC2

Ansible is not lacking in awesome.  I’ve used Puppet and Chef and others to manage Linux but Ansible meets my criteria for host management for the specific reason that it uses SSH to manage hosts rather than an agent.  Ansible is also simple to get up and running quickly.

In just a few hours, I was managing hosts and doing real work to keep DHCP configs straight.  Adding more and more functionality to playbooks can be done easily.  As I’ve been using Ansible, I’ve been expanding both my understanding of the tool and of the infrastructure that I manage.

I’m currently using Ansible to deploy to EC2 Linux hosts.  My plan is to be able to deploy an EC2 host through the AWS API.  I already have a bootstrap playbook in place to add various users, distribute ssh keys, and add those users to /etc/sudoers.  Ansible includes modules to add authorized_keys and install software, so none of it ever feels like a hack or like I’m stretching the tool.

I’m also using Ansible to manage an Asterisk server, several MySQL servers, various DNS servers, and soon several Raspberry Pi computers.  I have a combination of physical servers, virtual servers through Xen, and AWS hosts.  I manage those via a custom variable called hosttype, and I can do things like this to add an apt repository to the sources list on physical or virtual servers that use the Debian Jessie release:

- name: add apt source repo when physical or virtual
  apt_repository:
    repo: "deb-src jessie main contrib non-free"
  when: ansible_distribution_release == "jessie" and
        (hosttype == "phy" or hosttype == "vir")

I don’t like to share passwords among MySQL servers, and Ansible enables trivial customization on a per group or per-host basis using group_vars or host_vars.
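As a sketch of how that looks on disk — the group name dbservers and the variable name mysql_root_password here are hypothetical, but any host in that inventory group would pick up the value:

```yaml
# /etc/ansible/group_vars/dbservers.yml -- hypothetical group file;
# host_vars/<hostname>.yml works the same way for a single host.
mysql_root_password: "a-distinct-password-for-this-group"
```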

I’ll be deploying Raspberry Pi and EC2 hosts en masse later this year and Ansible will make doing so terribly easy and repeatable.

Installing nftables on Debian 7.5

[Last Update: 8/11/2014 – Clean up some bits around the options to select.]

This article discusses installation of nftables, the new Linux firewall software, on a Debian 7.5 system.  Nftables is under very active development and therefore the installation steps may change by the time you view this article.  Specifically, the various prerequisites needed in order to build nftables will likely no longer be needed as the software matures, and more importantly, as packages for it become available.

Note: This article begins with a base of Debian 7.5.0 netinst with the SSH Server and Standard System Utilities installed.

There are two primary components involved in an nftables system:  The first component is the Linux kernel, which provides the underlying nftables core modules.  The second component is the administration program called nft.

Compiling a kernel

The Linux kernel that comes with Debian 7.5.0 is based on version 3.2, which predates nf_tables support, so a newer kernel must be compiled.

Before you can compile a kernel, you need to get a kernel.  As of this writing, the latest stable kernel is 3.15.  Retrieving that from kernel.org with the wget command looks like this:

wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.15.tar.xz

Then unpack the kernel source:

tar -xvf linux-3.15.tar.xz

You’ll now have a pristine kernel ready to be built.

Several packages are essential and some are helpful for compiling a kernel on Debian.  The package named kernel-package provides useful utilities for creating a Debian packaged kernel.  Kernel-package has several prerequisites but those are all installed when you select kernel-package for installation on the system.

The method shown in this article uses the ‘menuconfig’ option to configure the kernel.  Other methods, such as the text-based config option, are also available.  The menuconfig option requires the ncurses development libraries; on Debian, these are provided by the libncurses5-dev package, which can be installed with this command (run as root):

apt-get install libncurses5-dev kernel-package

Note:  You may need to update the package list by running apt-get update prior to the packages becoming available for installation.

From within the linux-3.15 (or whatever version) directory, run:

make menuconfig

The options necessary within the kernel for nftables are found in the Networking support hierarchy.

Drill-down to the Networking support -> Networking options -> Network packet filtering framework (Netfilter).

Inside of the IP: Netfilter Configuration select IPv4 NAT.  Back up at the Network packet filtering framework menu, select IPv6 Netfilter Configuration and enable IPv6 NAT along with its sub-options of MASQUERADE target support and NPT target support.

Back at the Network packet filtering framework level, enter the Core Netfilter Configuration menu and enable Netfilter nf_tables support.  Doing so opens up several additional options.

Netfilter nf_tables mixed IPv4/IPv6 tables support
Netfilter nf_tables IPv6 exthdr module
Netfilter nf_tables meta module
Netfilter nf_tables conntrack module
Netfilter nf_tables rbtree set module
Netfilter nf_tables hash set module
Netfilter nf_tables counter module
Netfilter nf_tables log module
Netfilter nf_tables limit module
Netfilter nf_tables nat module
Netfilter nf_tables queue module
Netfilter nf_tables reject support
Netfilter x_tables over nf_tables module

Back in the Network packet filtering framework (Netfilter) level, select IP: Netfilter Configuration and find the IPv4 nf_tables support section and enable IPv4 nf_tables route chain support, IPv4 nf_tables nat chain support, and ARP nf_tables support.  Back at the Network packet filtering framework (Netfilter) level, select IPv6: Netfilter Configuration again and enable IPv6 nf_tables route chain support, and IPv6 nf_tables nat chain support.

Note: For the purposes of this article, all of the options will be selected as modules.

Finally, within the Network packet filtering framework (Netfilter) section, enable the Ethernet Bridge nf_tables support feature if you need this functionality.

Once your kernel configuration is complete, you can clean the source tree with the command:

make-kpkg clean

Now it’s time to compile the kernel.  Depending on the speed of your system, it may take several minutes to several hours.  If you have multiple processors, you can likely speed up the process by having make-kpkg use them.  This is accomplished by setting the CONCURRENCY_LEVEL environment variable.  For instance, on a system with two processors, the variable is set as such:

export CONCURRENCY_LEVEL=2

Alternately, specify all of it on the command line:

CONCURRENCY_LEVEL=2 INSTALL_MOD_STRIP=1 make-kpkg --initrd --revision=1 kernel_image

Note: On a dual processor quad core system the compile took about 30 minutes.
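Rather than hard-coding the processor count, it can be read at run time; this assumes the nproc utility from GNU coreutils, which ships with any recent Debian:

```shell
# One make job per available processor; nproc prints the CPU count.
CONCURRENCY_LEVEL=$(nproc)
export CONCURRENCY_LEVEL
echo "$CONCURRENCY_LEVEL"
```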

Once the kernel has been compiled, installation is accomplished (as root) with the command:

dpkg -i linux-image-<your_version_here>.deb

Rebooting the server brings up the shiny new kernel but the server isn’t quite ready to run nf_tables yet.  Prior to compiling the nft administration program, you can verify that the nf_tables module can load.  First, see if the module is already loaded:

lsmod | grep nf_tables

If there’s output then the module has already been loaded.  If not, then you can load the module with modprobe, as such:

modprobe nf_tables

Rerunning the lsmod command (lsmod | grep nf_tables) should give output now, similar to this:

nf_tables              37955  0
nfnetlink              12989  1 nf_tables

Compiling the nft Administration Program

The nft administration program enables control over the firewall, in much the same way that the iptables command controlled an iptables-based firewall.  The nft program depends on the libmnl and libnftnl libraries.  With the large amount of active development underway on nf_tables and related libraries, this tutorial shows how to get the latest copy using Git rather than attempting to install from a package or another method.

apt-get install autoconf2.13 libtool pkg-config flex bison libgmp-dev libreadline6-dev dblatex

Note that dblatex is only needed if you want PDF documentation, which I sometimes do.  You can save some space and reduce your security footprint by not adding dblatex to the apt-get command line above.

The three repositories can be cloned into your current directory with the commands:

git clone git://git.netfilter.org/libmnl
git clone git://git.netfilter.org/libnftnl
git clone git://git.netfilter.org/nftables

Once a copy has been downloaded, the next step is to compile the software.  Both libmnl and libnftnl are prerequisites for compiling nftables, so those will be compiled first with the commands (all run as superuser/root):

cd libmnl
sh autogen.sh
./configure
make
make install

Now cd back up a directory and into the libnftnl directory and compile it:

cd ../libnftnl
sh autogen.sh
./configure
make
make install

Finally, compile nftables:

cd ../nftables
sh autogen.sh
./configure
make
make install

With the nftables administration program compiled and installed you can now run nft commands!  Depending on your installation, you may need to reboot and/or run ldconfig.  I did both; a reboot didn’t fix it so running ldconfig as root was the next logical step.  Actually, that might have been the first logical step before rebooting, but that’s how it goes sometimes.

In any event, running the following command should do nothing (and that’s what we want right now):

nft list tables

If the command returns nothing at all, then nft is working fine.  You can create a table with the command:

nft add table filter

Now create a chain with the command:

nft add chain filter input { type filter hook input priority 0 \; }

Note that the space and backslash before the semi-colon are necessary when entering the command from the command line.

You can now run nft list tables and it will show:

table filter

Running the following command shows the contents of the table:

nft list table filter -a

The output will be:

table ip filter {
        chain input {
                type filter hook input priority 0;
        }
}

That’s it!  You now have nftables running. There are several good tutorials out there that deal with creating an nftables firewall once you’re at this point and I’m also updating my Linux Firewalls book to include coverage of nftables!  It’ll be out in the fall of 2014.


nft: error while loading shared libraries: cannot open shared object file: No such file or directory

After compiling nftables and attempting to run nft list tables I received the error:

nft: error while loading shared libraries: cannot open shared object file: No such file or directory

Turns out I needed to run ldconfig in order to fix the error.  I also rebooted prior to running ldconfig but probably didn’t need to.

Perl to Python RSS Conversion

For quite some time, I’ve had my own personal homepage containing commonly used links, server status, subject lines of e-mails, and RSS news feeds.  Nothing exciting there.  The RSS feeds are retrieved by a program that runs every N minutes through cron and places the entries into a MySQL table.  Again, nothing exciting.  However, recently the Perl program that I’ve been using to retrieve the RSS has been consuming a bigger percentage of the available resources on the server.  More accurately, the server on which the RSS retriever is hosted is more heavily utilized now, so when the RSS parser runs it becomes noticeable in the server’s load average.

Of course, one way to solve it is to throw more hardware at it, like more CPU and RAM.  However, that would be too easy.  Instead I threw together a Python program using feedparser just to see the difference in performance between the two for this purpose.  The results were surprising: Python took about 2.8 seconds in real time and used significantly fewer system resources to do so.  Perl took ~11 seconds for the same feeds at roughly the same time.

I’m not writing this to be a knock against Perl; more likely the methods that I used to parse the RSS in Perl (and my general Perl programming skills?) are the issue.

Timings below.

Python:

real 0m2.868s
user 0m1.808s
sys 0m0.072s

Perl:

real 0m11.016s
user 0m4.108s
sys 0m0.144s



Debian Upgrade to Wheezy: MySQL & Dovecot Problems

Upgraded to Debian Wheezy last night.  Followed the official upgrade instructions.  Things went generally well and I’m amazed by how well major upgrades go with Debian.  Wheezy is the second major release upgrade for this particular server, which had an uptime of 476 days before today’s upgrade.

A couple of problems were noted, specifically with the upgrades of the MySQL server and Dovecot.  Both have breaking changes.  For MySQL, the breaking change is that in MySQL server 5.5 the master-host and other master-* options are no longer supported.  See the MySQL manual for more details.  I commented out the various replication-related options in /etc/mysql/my.cnf for now and will need to fix that quickly.
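For reference, the offending section of /etc/mysql/my.cnf looked roughly like the sketch below (the option values here are illustrative, not the real ones); under MySQL 5.5 the replacement is to configure replication with the CHANGE MASTER TO statement instead:

```ini
# /etc/mysql/my.cnf -- these master-* options are rejected by MySQL 5.5,
# so they were commented out during the upgrade (values illustrative).
# master-host     = db-master.example.com
# master-user     = repl
# master-password = secret
```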

The other breaking change on this computer was with Dovecot.  It looks like all of the Dovecot options are now split into multiple files in /etc/dovecot/conf.d, with the traditional dovecot.conf now being a shell that refers to other files.  For this particular server I needed to change the path to the SSL certificates (Dovecot now wants them in the /etc/dovecot hierarchy), change the mail_location to Maildir rather than mbox (not sure why that is the new default), and add a mail_privileged_group of mail.  Dovecot’s working now.
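The resulting Dovecot settings looked roughly like this sketch; the conf.d file names match the Debian layout, and the certificate paths are illustrative:

```ini
# /etc/dovecot/conf.d/10-mail.conf
mail_location = maildir:~/Maildir
mail_privileged_group = mail

# /etc/dovecot/conf.d/10-ssl.conf -- the '<' prefix tells Dovecot to
# read the value from the named file rather than inline.
ssl_cert = </etc/dovecot/dovecot.pem
ssl_key = </etc/dovecot/private/dovecot.pem
```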

Among the fun things that I’ve already discovered are that I can mount a Synology SMB share without “file exists” problems and that AirPrint finally works for me (though we’ll see for how long).

Once I get comfortable with the stability of the new system I’ll begin migrating other, more mission-critical, servers.

Apache2/PHP Crash – Yikes

I was working on a patched Debian system recently, using the PHP functions feof and fread.  I went to run my test script and managed to auger in Apache while dumping over 1GB worth of errors into the Apache error log in a matter of minutes.  Over and over (more than 5,000,000 entries, actually), with these errors:

[Wed Dec 07 11:17:00 2011] [error] [client xx.xx.xx.xx] PHP Warning:  feof() expects parameter 1 to be resource, boolean given in /web/public_html/newsite/testfeed.php on line 5
[Wed Dec 07 11:17:00 2011] [error] [client xx.xx.xx.xx] PHP Warning:  fread() expects parameter 1 to be resource, boolean given in /web/public_html/newsite/testfeed.php on line 6

I ended up having to stop Apache and restart it, but it’s a scary denial of service from a few lines of PHP code.  It took about 2 minutes and 23 seconds to produce over 5,000,000 errors in the error log for this script.
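To gauge a flood like this after the fact, a quick grep over the error log gives the count. This sketch builds a tiny synthetic log in /tmp so the command can be illustrated without touching a real Apache log:

```shell
# Build a two-line sample in the style of Apache's error log (synthetic data).
LOG=/tmp/apache_error_sample.log
printf '%s\n' \
  '[Wed Dec 07 11:17:00 2011] [error] [client 10.0.0.1] PHP Warning:  feof() expects parameter 1 to be resource, boolean given' \
  '[Wed Dec 07 11:17:00 2011] [error] [client 10.0.0.1] PHP Warning:  fread() expects parameter 1 to be resource, boolean given' \
  > "$LOG"

# Count the PHP warnings; against the real log this reported millions.
grep -c 'PHP Warning' "$LOG"
```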



GrSecurity-related Firefox crash

I’m seeing a weird crash of Firefox (1.0PR) on a Debian testing box apparently due to Firefox trying something that GrSec doesn’t like. Specifically,

kernel: grsec: attempted resource overstep by requesting 4096 for RLIMIT_CORE against limit 0 by (firefox-bin:3092) UID(1000) EUID(1000), parent (firefox-bin:30648) UID(1000) EUID(1000)

This is still on 2.4.25 though, so I should probably update that.

Curiously, it only happens when visiting one particular site, and only sometimes, though pretty regularly on that site. I hate to submit any type of bug report to either Firefox or GrSecurity for this since I haven’t had time to look into it more. But if anyone out there has a quick fix0r for this, please let me know.

Xandros v3 now available as Open Circulation Edition!

Xandros, the popular Debian-based Linux desktop package that I wrote about for LinuxWorld Magazine, has made its newest version, 3.0, available for free download. Called the “Open Circulation Edition”, this version combines Skype Internet calling, Firefox, Thunderbird, and more into the already great Xandros package.

Other bits from their press release:

  • Four-click install with automatic disk partitioning
  • Dual-boot installation with Windows XP
  • Industry-leading hardware detection and configuration
  • Drag-and-drop CD burning in Xandros File Manager
  • Seamless file and print sharing on Windows networks
  • Resistance to spyware, adware, and pop-ups

Go to the Open Circulation Edition page.