Ansible is not lacking in awesome. I’ve used Puppet, Chef, and others to manage Linux hosts, but Ansible meets my criteria for host management for one specific reason: it uses SSH to manage hosts rather than an agent. Ansible is also simple to get up and running quickly.
In just a few hours, I was managing hosts and doing real work to keep DHCP configs straight. Adding functionality to playbooks is easy, and as I use Ansible I’m expanding both my understanding of the tool and my understanding of the infrastructure I manage.
I’m currently using Ansible to deploy to EC2 Linux hosts. My plan is to be able to deploy an EC2 host through the AWS API. I already have a bootstrap playbook in place to add various users, distribute ssh keys, and add those users to /etc/sudoers. Ansible includes modules to add authorized_keys and install software, so none of it ever feels like a hack or like I’m stretching the tool.
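A bootstrap play along those lines might look roughly like this (a sketch only — the user name, key file path, and sudoers line here are hypothetical placeholders, not my actual config):

```yaml
- hosts: all
  become: yes
  tasks:
    - name: create an admin user (example name)
      user: name=jdoe shell=/bin/bash state=present

    - name: distribute the user's ssh public key
      authorized_key: user=jdoe key="{{ lookup('file', 'keys/jdoe.pub') }}"

    - name: add the user to /etc/sudoers, validating with visudo first
      lineinfile: dest=/etc/sudoers line="jdoe ALL=(ALL) NOPASSWD:ALL" validate="visudo -cf %s"
```

The validate argument to lineinfile is worth the extra typing: a syntax error in /etc/sudoers can lock you out, and visudo -cf refuses the edit before it lands.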
I’m also using Ansible to manage an Asterisk server, several MySQL servers, various DNS servers, and soon several Raspberry Pi computers. I have a combination of physical servers, virtual servers through Xen, and AWS hosts. I manage those via a custom variable called hosttype, and I can do things like this to add an apt repository to the sources list on physical or virtual servers that use the Debian Jessie release:
- name: add apt source repo when physical or virtual
  apt_repository: repo="deb-src http://ftp.us.debian.org/debian/ jessie main contrib non-free" state=present update_cache=yes
  when: ansible_distribution_release == "jessie" and (hosttype == "phy" or hosttype == "vir")
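The hosttype variable itself can be set right in the inventory. A sketch of what that might look like (the group and host names here are made up for illustration):

```ini
[physical]
db1.example.com hosttype=phy

[xen_guests]
web1.example.com hosttype=vir

[aws]
app1.example.com hosttype=aws
```

Setting it per host like this keeps the conditional in the playbook simple, and a new host picks up the right behavior the moment it’s added to the inventory.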
I don’t like to share passwords among MySQL servers, and Ansible enables trivial customization on a per-group or per-host basis using group_vars or host_vars.
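For example, a shared default can live in a group_vars file while each server overrides it in host_vars; Ansible’s precedence rules mean the more specific host_vars value wins. The file names and variable below are hypothetical:

```yaml
# group_vars/dbservers.yml -- applies to every host in the dbservers group
mysql_root_password: "change-me-default"

# host_vars/db1.example.com.yml -- overrides the group value for this one host
mysql_root_password: "a-unique-password-for-db1"
```

In practice these values belong in an ansible-vault encrypted file rather than plain text, but the lookup and override behavior is the same either way.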
I’ll be deploying Raspberry Pi computers and EC2 hosts en masse later this year, and Ansible will make doing so terribly easy and repeatable.
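For the EC2 side, launching instances through the AWS API from a playbook might be sketched with Ansible’s classic ec2 module, roughly like this (the key name, AMI id, and count are placeholders, not real values):

```yaml
- name: launch EC2 instances via the AWS API (sketch)
  ec2:
    key_name: my-keypair
    instance_type: t2.micro
    image: ami-00000000
    region: us-east-1
    count: 3
    wait: yes
```

Once the instances are up, the same bootstrap playbook used on the other hosts can bring them in line — which is the repeatability part.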