We used Fabric to automate deploying new versions of the source code to our servers. But provisioning a fresh server, and updating the Nginx and Gunicorn config files, was all left as a manual process.
This is the kind of job that’s increasingly given to “Configuration Management” or “Continuous Deployment” tools. Chef and Puppet were the first popular ones, and in the Python world there’s Salt and Ansible.
Of all of these, Ansible is the easiest to get started with. We can get it working with just two files:
pip2 install --user ansible # Python 2 sadly
An “inventory file” at deploy_tools/inventory.ansible defines what servers we can run against:
deploy_tools/inventory.ansible
[live]
superlists.ottg.eu ansible_become=yes ansible_ssh_user=elspeth

[staging]
superlists-staging.ottg.eu ansible_become=yes ansible_ssh_user=elspeth

[local]
localhost ansible_ssh_user=root ansible_ssh_port=6666 ansible_host=127.0.0.1
(The local entry is just an example; in my case it’s a VirtualBox VM, with port forwarding set up for ports 22 and 80.)
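Before running a playbook, it’s worth a quick sanity check that Ansible can actually reach the machines in the inventory. This isn’t part of the setup above, just a suggestion: an ad-hoc run of Ansible’s ping module from the deploy_tools directory should confirm that SSH and sudo are both working:

ansible -i inventory.ansible staging -m ping --ask-become-pass

If that fails, sort out SSH keys, usernames, or ports before blaming the playbook.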
Next the Ansible “playbook”, which defines what to do on the server. This uses a syntax called YAML:
deploy_tools/provision.ansible.yaml
---

- hosts: all

  vars:
      host: "{{ inventory_hostname }}"

  tasks:
    - name: Deadsnakes PPA to get Python 3.6
      apt_repository: repo='ppa:fkrull/deadsnakes'

    - name: make sure required packages are installed
      apt: pkg=nginx,git,python3.6,python3.6-venv state=present

    - name: allow long hostnames in nginx
      lineinfile:
        dest=/etc/nginx/nginx.conf
        regexp='(\s+)#? ?server_names_hash_bucket_size'
        backrefs=yes
        line='\1server_names_hash_bucket_size 64;'

    - name: add nginx config to sites-available
      template: src=./nginx.conf.j2 dest=/etc/nginx/sites-available/{{ host }}
      notify:
          - restart nginx

    - name: add symlink in nginx sites-enabled
      file: src=/etc/nginx/sites-available/{{ host }}
            dest=/etc/nginx/sites-enabled/{{ host }}
            state=link
      notify:
          - restart nginx
The inventory_hostname variable is the domain name of the server we’re running against. I’m using the vars section to rename it to “host”, just for convenience.

In this section, we install our required software using apt, tweak the Nginx config to allow long hostnames using a regular expression replacer, and then write the Nginx config file using a template. This is a modified version of the template file we saved into deploy_tools/nginx.template.conf in Chapter 9, but it now uses a specific templating syntax, Jinja2, which is actually a lot like the Django template syntax:
deploy_tools/nginx.conf.j2
server {
    listen 80;
    server_name {{ host }};

    location /static {
        alias /home/{{ ansible_ssh_user }}/sites/{{ host }}/static;
    }

    location / {
        proxy_set_header Host {{ host }};
        proxy_pass http://unix:/tmp/{{ host }}.socket;
    }
}
Here’s the second half of our playbook:
deploy_tools/provision.ansible.yaml
    - name: write gunicorn service script
      template:
          src=./gunicorn.service.j2
          dest=/etc/systemd/system/gunicorn-{{ host }}.service
      notify:
          - restart gunicorn

  handlers:
    - name: restart nginx
      service: name=nginx state=restarted

    - name: restart gunicorn
      systemd:
          name=gunicorn-{{ host }}
          daemon_reload=yes
          enabled=yes
          state=restarted
Once again we use a template for our Gunicorn config:
deploy_tools/gunicorn.service.j2
[Unit]
Description=Gunicorn server for {{ host }}

[Service]
User={{ ansible_ssh_user }}
WorkingDirectory=/home/{{ ansible_ssh_user }}/sites/{{ host }}/source
Restart=on-failure
ExecStart=/home/{{ ansible_ssh_user }}/sites/{{ host }}/virtualenv/bin/gunicorn \
    --bind unix:/tmp/{{ host }}.socket \
    --access-logfile ../access.log \
    --error-logfile ../error.log \
    superlists.wsgi:application

[Install]
WantedBy=multi-user.target
Then we have two “handlers”, to restart Nginx and Gunicorn. Ansible is clever about these: if several tasks notify the same handler, it waits until the last of them has run and then calls the handler just once.
And that’s it! The command to kick all these off is:
ansible-playbook -i inventory.ansible provision.ansible.yaml --limit=staging --ask-become-pass
Lots more info in the Ansible docs.
I’ve just given a little taster of what’s possible with Ansible. But the more you automate about your deployments, the more confidence you will have in them. Here are a few more things to look into.
We’ve seen that Ansible can help with some aspects of provisioning, but it can also do pretty much all of our deployment for us. See if you can extend the playbook to do everything that we currently do in our Fabric deploy script, including notifying the restart handlers as required.
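As a starting point, here’s a rough sketch of what some of those extra tasks might look like, to slot in under the tasks: section of the playbook. It assumes the same directory layout as the Fabric script (source and virtualenv folders under ~/sites/{{ host }}), the repo URL is a placeholder for your own, and the Fabric script’s settings tweaks (DEBUG, ALLOWED_HOSTS, the secret key) plus collectstatic are left for you to fill in:

    # NB: the repo URL is a placeholder; point it at your own repository
    - name: get latest source
      git: repo=https://github.com/yourname/superlists.git
           dest=/home/{{ ansible_ssh_user }}/sites/{{ host }}/source
           version=master
      notify:
          - restart gunicorn

    - name: update virtualenv
      pip: requirements=/home/{{ ansible_ssh_user }}/sites/{{ host }}/source/requirements.txt
           virtualenv=/home/{{ ansible_ssh_user }}/sites/{{ host }}/virtualenv
           virtualenv_python=python3.6
      notify:
          - restart gunicorn

    - name: run migrations
      command: chdir=/home/{{ ansible_ssh_user }}/sites/{{ host }}/source
               /home/{{ ansible_ssh_user }}/sites/{{ host }}/virtualenv/bin/python manage.py migrate --noinput
      notify:
          - restart gunicorn

Because all three tasks notify restart gunicorn, the handler still fires only once, after the last change.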
Running tests against the staging site gives us the ultimate confidence that things are going to work when we go live, but we can also use a VM on our local machine.
Download Vagrant and VirtualBox, and see if you can get Vagrant to build a dev server on your own PC, using our Ansible playbook to deploy code to it. Rewire the FT runner to be able to test against the local VM.
Having a Vagrant config file is particularly helpful when working in a team—it helps new developers to spin up servers that look exactly like yours.