This is Part 2 of 2 in the Ansible primer series.

Having covered the basics in Part 1, we can now move on to more advanced aspects of Ansible.

We were able to run an Ansible playbook on our localhost, but there is only so much you can do with localhost. It’s now time to connect to and manage external hosts and perform some configuration management there.

Terminology

Let’s get familiar with more Ansible terms.

Control Machine

This is the machine running Ansible and connecting to remote/external hosts to perform configuration management.

Remote Host

This is the machine Ansible accesses via SSH to configure. It does not need to have Ansible installed; the only requirements are SSH access and Python.

For the rest of the post, the remote host is an Ubuntu 14.04 EC2 host launched on AWS.

Let’s go ahead and update our inventory.

[webservers]

52.36.50.107


The updated inventory file tells Ansible we now have a group of remote hosts called webservers with one host.
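Once password-less SSH access is in place (covered in the next section), a quick way to confirm Ansible can reach the group is the ping module, which connects to each host and reports pong on success:

ansible webservers -m ping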

Accessing Remote Hosts

As mentioned, Ansible connects to remote hosts via SSH. To access the external remote hosts, we need to set up password-less SSH access, i.e. the control machine has to be able to access the remote host without being asked for a password.

There are plenty of tutorials online on how to do this; the sketch below shows the usual steps.
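For a typical server, a minimal sketch looks like this (the ubuntu user here matches Ubuntu EC2 AMIs; substitute your own). On EC2 you would normally skip the copy step and use the key pair you selected when launching the instance:

# generate a key pair on the control machine (skip if you already have one)
ssh-keygen -t rsa -b 4096
# copy the public key to the remote host
ssh-copy-id ubuntu@52.36.50.107
# verify you can log in without a password prompt
ssh ubuntu@52.36.50.107 echo ok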

SSH Keys

By default, SSH will look for a private key in ~/.ssh/id_rsa. But we want to be able to tell Ansible where to look for the private key of each host, so we have flexibility later when different hosts use different keys.

Route 1: One key for all hosts

Add an option to the ansible.cfg file, under the [defaults] section:

[defaults]
private_key_file=/etc/ansible/keys/access.pem

Route 2: One key per host

In your inventory file, add a variable to the host:

[webservers]

52.36.50.107 ansible_ssh_private_key_file=/etc/ansible/keys/web.pem
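This scales naturally when each host needs its own key (the second host and key file here are made up, purely for illustration):

[webservers]
52.36.50.107 ansible_ssh_private_key_file=/etc/ansible/keys/web.pem
52.36.50.108 ansible_ssh_private_key_file=/etc/ansible/keys/web2.pem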

Route 3: One key per group of hosts

Also in your inventory file, but using group variables:

[webservers:vars]
ansible_ssh_private_key_file=/etc/ansible/keys/web.pem

Route 4: Use SSH config

Since Ansible uses SSH, you can leave it entirely up to SSH to decide which key to use for each host, via your ~/.ssh/config. This is the least ideal route.

Host 55.44.33.22 *.awesomecompany.ly
    IdentityFile /etc/ansible/keys/access.pem


We will go with Route 3. Our inventory file now looks like this:

[webservers]
52.36.50.107

[webservers:vars]
ansible_ssh_private_key_file=/etc/ansible/keys/web.pem
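While we are in the group variables, one more is worth knowing about: Ubuntu EC2 hosts expect you to log in as ubuntu rather than root, so if your control machine runs as a different user, set the SSH user alongside the key (ansible_ssh_user is the variable name used by Ansible versions of this era):

[webservers:vars]
ansible_ssh_private_key_file=/etc/ansible/keys/web.pem
ansible_ssh_user=ubuntu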


Configure Remote Hosts

Let’s update our playbook to target the webservers group and install Apache on the remote hosts. Save the updated file as webservers.yml:

---
- name: ansible playground
  hosts: webservers
  sudo: yes
  tasks:
    - apt: name=apache2 state=present

• hosts: webservers : We’re targeting the webservers group
• sudo: yes : Ansible will perform the tasks as root
• apt : This is the Ansible module that manages Debian/Ubuntu packages; it takes the name and state parameters

We can now run the playbook:

ansible-playbook webservers.yml

[Screenshot: output of the first playbook run against the webservers group]

Let’s run our playbook again.

[Screenshot: output of the second, unchanged playbook run]

As you can see from the second run, the output is different and nothing actually happens: Apache is already installed, so there is nothing for Ansible to do.

Tasks should always be idempotent, which means that once our desired state has been reached, nothing should change when we repeat the same task.
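Because the tasks are idempotent, you can also preview what a run would change without applying anything, using Ansible’s built-in check mode:

ansible-playbook webservers.yml --check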

Looking at the language we used, we say state=present instead of install. This way the Ansible apt module makes sure the package is present, and installs it only if it has to.
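The same declarative style covers the other directions too. A quick sketch of the apt module’s other common states:

- apt: name=apache2 state=latest    # upgrade to the newest available version
- apt: name=apache2 state=absent    # make sure the package is removed

Either of these is just as idempotent: run it twice and the second run changes nothing.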

Conclusion

Now that you can create and run playbooks, they need to do something useful. Ansible ships with hundreds of modules for performing actions on the operating system.
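You can browse them without leaving the terminal: ansible-doc -l lists every module installed, and ansible-doc followed by a module name shows its documentation:

ansible-doc -l

ansible-doc apt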

And that’s it for this primer series.

At AltoStack, our experts can maintain your DevOps platform and be responsible for day-to-day operational issues, allowing you to develop and ship your product without the need for internal DevOps hires.