Preparing for Automation with Ansible

Using Automation for Server Administration

Managing multiple servers is cumbersome. The time consumed managing service configurations across the servers, ensuring software is up to date, and managing local files and user accounts grows seemingly exponentially. Therefore, we need a tool to help automate these tasks.

There are several tools available for maintaining servers and equipment. Among the popular tools are Ansible, Chef, Puppet, and Salt. We will use Ansible for this course. Ansible has a notable advantage over other server automation tools: it does not require additional software, such as a background agent, to be installed on each server. Rather, Ansible manages hosts over ssh, which is already available on virtually every Linux server you will manage. Avoiding additional software means less chance of software bugs and one less vector for attackers to exploit.

Preparing for Automation

Best practice dictates that we use root privileges as little as possible and that, wherever we can avoid it, we do not use root credentials across the network.

Therefore, we will create a normal, non-privileged user on each device that we intend to bring under Ansible control. We will grant the user sudo privileges with no password required. We could further limit the IPs from which this account can log in, and we could also limit the commands that the user is allowed to run. However, for this tutorial our user configuration will be sufficient and is typical of many business environments. We gain some security through obscurity by creating the user with a unique name, one that would not typically be included in a script run by an attacker.

Prerequisites:  You should have two computers available running Linux; virtual machines are fine, as are Raspberry Pis. The two computers should be able to ping each other, and you should be able to ssh between them. This tutorial uses Debian 9.3, but all commands, aside from those that use apt-get, will run on any of the popular Linux distributions.

Terminology:  There will be one Ansible server which will be referred to as the “server” for the purposes of automation.  It is from this server that we will manage one or more “client” devices also known as hosts in Ansible terminology.  When you see server in this tutorial, it will refer to the device on which Ansible is installed.  When you see client in this tutorial, it will refer to the host or hosts that will be managed by Ansible.

Preparing for Ansible

We will use a custom user for our automation efforts. The user will need to be created on the Ansible server and on any client hosts that we will manage with Ansible. This user will not have a password and therefore will not be able to log in directly.

Task 1:

On both the Ansible server and the client:

useradd -m automat

On the Ansible server, change the shell of the automat user to /bin/bash using the chsh command.
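The chsh invocation looks like this (run as root; /bin/bash is the standard bash path on Debian):

```
chsh -s /bin/bash automat
```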

Task 2:

We need to generate an ssh key on the primary server and distribute that to any other hosts that we intend to bring under Ansible automation. On the Ansible server, you will need to “become” the automat user. To do so, first go root and then use su - to become the automat user.

su -

#Enter the root password

From the root prompt:

su - automat

You have now “become” the automat user, just as if you had logged in as that user with ssh.  Run the following command as automat in order to generate an ssh key:
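A typical invocation, assuming an RSA key stored in the default location with an empty passphrase so that Ansible can use it non-interactively:

```shell
# Generate an RSA keypair for the automat user. -N "" sets an empty
# passphrase; the private key is written to ~/.ssh/id_rsa and the
# public key to ~/.ssh/id_rsa.pub.
mkdir -p ~/.ssh
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
```

The public half, id_rsa.pub, is what we will distribute to the client in Task 3.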


Task 3:

We now need to distribute the public key to any hosts where we want to log in. Recall that the automat user does not have a password and therefore cannot log in using ssh, so the ssh-copy-id command will not work. Instead, on the client you will need to become the automat user. As automat, create a directory called .ssh, then copy the contents of the public key from the Ansible server into a file called authorized_keys in that .ssh directory, as follows:

On the Ansible server, copy the contents of the public key file in /home/automat/.ssh/ to the clipboard.

On the client host, as automat:

mkdir .ssh

Edit .ssh/authorized_keys using vim or nano and paste the contents of the clipboard into the file, saving and exiting.
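OpenSSH is picky about permissions: with StrictModes enabled (the default), sshd will ignore an authorized_keys file that is group- or world-writable. It is worth tightening the permissions while you are here:

```shell
# Ensure the directory and file exist, then restrict access to the
# owning user only; sshd rejects keys in loosely-permissioned files.
mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```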

Checkpoint 1

Once the public key has been distributed to the client, you will be able to ssh directly from the Ansible server to the client as the automat user without being prompted for a password. Note that the first time you ssh to the client you will be prompted to accept the host key of the client machine; type “yes” at that prompt.
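For example, from the Ansible server as the automat user (192.0.2.10 is a placeholder; substitute your client's IP address):

```
ssh automat@192.0.2.10
```

You should land directly at a shell prompt on the client with no password prompt.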

If you are prompted for a password when using the automat user then look back at the previous steps to ensure that they have been completed correctly.

Task 4:

Ansible needs elevated privileges in order to manage the client hosts, and we will use sudo to grant them. We therefore need to install sudo on both server and client. Run the following, as root:

apt-get install sudo

Task 5:

With sudo installed, we need to add an entry to the sudoers file so that our new user can run commands as root without needing a password. Use visudo on both the server and the client to add the following entry:

automat ALL=NOPASSWD:ALL

Checkpoint 2

You should be able to view the contents of the shadow file as the automat user when using sudo and should not be prompted for a password.

On both server and client, as the “automat” user, run the following, which should result in a “Permission Denied” error or similar:

less /etc/shadow

Continuing as automat, now run the command with sudo; you should not be prompted for a password. If you are prompted for a password, then the sudoers entry must not be correct:

sudo less /etc/shadow

Task 6:

One final item remains in the initial configuration: ssh as root from server to client in order to cache the host key in root’s “known_hosts” file. On the Ansible server, as root, ssh to the client computer. You will be prompted to accept the host key of the client; enter yes to cache the key. Note that you do not need to complete the connection; merely type “yes” to cache the key and then press CTRL-C to terminate the connection.
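For example, as root on the Ansible server (again substituting your client's IP for the placeholder):

```
ssh root@192.0.2.10
# Answer "yes" to the host key prompt, then press CTRL-C.
```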

Note: If you’re using DHCP and the IP address of the client changes, you will need to cache the ssh host key again using this method.

Checklist


  • A user named automat added to both server and client; the user should not have a password (Task 1).
  • An ssh keypair generated on the Ansible server (Task 2).
  • The public ssh key placed in ~automat/.ssh/authorized_keys on the client (Task 3).
  • Able to ssh from the server to the client as the automat user without being prompted for a password (Checkpoint 1).
  • The sudo package installed on both machines (Task 4).
  • A sudoers entry on each machine for the automat user:  automat ALL=NOPASSWD:ALL  (Task 5).
  • Able to run commands as automat on each machine using sudo without being prompted for a password (Checkpoint 2).
  • Executed ssh from server to client in order to cache the client’s host key in root’s known_hosts file (Task 6).

Installing Ansible

One of the best features of Ansible is that you don’t need to install any agent software on each client to be managed.  In this section, we will install Ansible on the server.

Task 7:

Ansible needs to be installed only on the Ansible server itself. As root, run the following command:

apt-get install ansible

Configure Ansible for First Use

Ansible configuration is stored in /etc/ansible. Within the /etc/ansible directory you will find two files, ansible.cfg and hosts. The ansible.cfg file contains basic configuration for how Ansible will behave while the hosts file contains information on hosts and host groups that are under Ansible control.

Most of the defaults defined in the ansible.cfg configuration file will work for our use. We can also change most of the options at run-time through command line options. However, we know that we will always use a user called ‘automat’ for Ansible and we know that the key pair that we want to use is stored in the home directory for that automat user. Therefore, we can set these two items as our custom defaults rather than needing to specify them on the command line every time.

Task 8:

Edit /etc/ansible/ansible.cfg, find the following configuration lines, uncomment them, and set them as follows:

remote_user = automat
private_key_file = /home/automat/.ssh/id_rsa

With those two customizations in place, Ansible will always ssh to the client devices using the automat user and will use the private key that we generated and distributed.

Adding a Host to Ansible

Adding a host to Ansible is as simple as adding its IP address or hostname to the /etc/ansible/hosts file. Assuming that we don’t have a DNS entry for our client computer and we don’t have an entry for it in /etc/hosts, we will add it by IP address.

Note: If you’re using DHCP and the IP address of the client changes, you will need to add the new IP into /etc/ansible/hosts and you will need to accept the ssh host key again as well.

Ansible can use host groups to maintain logical collections of hosts, for instance by region or by role. For now, we will simply add the client as an ungrouped host.

Task 9:

Edit /etc/ansible/hosts and add the IP of the client machine to it.
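A minimal /etc/ansible/hosts might then look like the following (the IP is a placeholder; the commented lines illustrate the group syntax mentioned above):

```
# Ungrouped hosts, one per line
192.0.2.10

# Hosts can also be collected into named groups:
# [webservers]
# 192.0.2.20
```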

Testing Ansible

Ansible is executed with the ansible command, typically run as root or with sudo privileges. The command is run from the Ansible server. Ansible uses modules in order to execute commands on remote client machines. One such module is “ping”. The ping module does not use ICMP; rather, it attempts to log in to the specified host(s) and verifies that a usable Python environment exists on the client. On success, the ping module returns the word “pong”.

We can test Ansible and its ability to communicate with the client using the ping module. The ansible command also requires a list of hosts on which the given command or playbook (more on playbooks soon) will be executed. There is a convenient alias called “all”, which matches every host defined in /etc/ansible/hosts and saves us from entering IP addresses or host names individually.

Checkpoint 3

Testing the ability to communicate between Ansible server and client host is the final step. The command to execute is as follows. Run this command either as root or as the automat user. If you run it as automat, preface it with sudo.

ansible -m ping all

The command in Checkpoint 3 calls the ansible command, passing -m ping to specify the ping module, followed by the host specification, in this case all. Example output from a successful Ansible ping shows the client host’s IP followed by | SUCCESS:

<client-ip> | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
If you do not receive this success message, refer back to the Checklist provided earlier to ensure that you can complete each of those steps successfully and work through the final tasks related to Ansible again to ensure that they are complete.

With a successful Ansible ping, you have configured Ansible on the server and on one client. Another helpful module is the setup module. You can use the setup module to gather facts about client hosts. Some of these facts will be used for making decisions in playbooks and can be used as variables.
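For example, to dump the facts Ansible gathers about every host:

```
ansible -m setup all
```

The output is a large JSON document of facts (OS family, IP addresses, memory, and so on) that playbooks can reference as variables.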