In the last chapter we saw how to use provisioners to customize our images during the build process. In this chapter, we're going to continue to explore building and provisioning images with Packer. And we're going to look at one of Packer's most interesting and complex use cases: building Docker images.
To build Docker images, Packer uses the Docker daemon to run containers, runs provisioners on those containers, and then commits Docker images locally or pushes them up to the Docker Hub. Interestingly, though, Packer doesn't use Dockerfiles to build images. Instead, Packer uses the same provisioners we saw in Chapter 3 to build images. This allows us to maintain a consistent mental model for all our images, across all platforms.
We're going to learn how to build and push Docker images with Packer.
When building Docker images, Packer and the Docker builder need to run on a host that has Docker installed. Installing Docker is relatively simple, and we're not going to show you how to do it in this book. There are, however, a lot of resources available online to help you.
You can confirm you have Docker available on your host by running the docker
binary.
$ docker --version
Docker version 17.06.0-ce-rc1, build 7f8486a
The Docker builder is just like any other Packer builder: it uses resources, in this case a local Docker daemon, to build an image. Let's create a template for a basic Docker build:
$ touch docker_basic.json
And now populate that template:
{
  "builders": [{
    "type": "docker",
    "image": "ubuntu",
    "export_path": "docker_basic.tar"
  }]
}
The template is much like our template from Chapter 3—simple and not very practical. Currently it just creates an image from the ubuntu
stock image and exports a tarball of it. Let's explore each key.
The type of builder we've specified is docker. We've also specified, via the image key, a base image for the builder to work from; this is much like using the FROM instruction in a Dockerfile.
The type
, as always, and the image
are required keys for the Docker builder. You must also specify what to do with the container that the Docker builder builds.
The Docker builder has three possible output actions. You must specify one:

- The export_path key - exports the container as a tarball to the specified path.
- The discard key - throws the container away without saving an artifact.
- The commit key - commits the container as an image to the local Docker daemon.

Let's build our template now and see what happens.
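As an aside, before we run the build: if you only wanted to verify that a build (and, later, its provisioners) succeeds, without keeping any artifact, a minimal sketch using the discard action instead might look like this:

```json
{
  "builders": [{
    "type": "docker",
    "image": "ubuntu",
    "discard": true
  }]
}
```

With discard set, Packer simply deletes the container when the build completes and produces no artifact.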
$ packer build docker_basic.json
docker output will be in this color.
==> docker: Creating a temporary directory for sharing data...
==> docker: Pulling Docker image: ubuntu
docker: Using default tag: latest
docker: latest: Pulling from library/ubuntu
docker: Digest: sha256:ea1d854d38be82f54d39efe2c67000bed1b0334...
docker: Status: Image is up to date for ubuntu:latest
==> docker: Starting docker container...
docker: Run command: docker run -v /Users/james/.packer.d/tmp/packer-docker307764002:/packer-files -d -i -t ubuntu /bin/bash
docker: Container ID: 6a872b49ce499f62c37c5a1a1e609c557dc36879...
==> docker: Exporting the container
==> docker: Killing the container: 6a872b49ce499f62c37c5a1a1e609c557dc36879...
Build 'docker' finished.
==> Builds finished. The artifacts of successful builds are:
--> docker: Exported Docker file: docker_basic.tar
We can see that Packer has pulled down a base Docker image, ubuntu, run a new container from it, and then exported the container as docker_basic.tar. You could now use the docker import command to import that image from the tarball.
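For example, a roughly equivalent manual import might look like this (the repository name and tag here are hypothetical):

```shell
$ docker import docker_basic.tar jamtur01/docker_basic:0.1
```

This loads the tarball's filesystem as a new local image named jamtur01/docker_basic with the tag 0.1.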
Let's do something a bit more complex in our next build.
Let's create a new template, docker_prov.json
, that will combine a Docker build with provisioning of a new image. Rather than export the new image we're going to create, we're going to commit the image to our local Docker daemon. Let's take a look at our template.
{
  "builders": [{
    "type": "docker",
    "image": "ubuntu",
    "commit": true
  }],
  "provisioners": [{
    "type": "shell",
    "script": "install.sh"
  }]
}
In our new template, we've replaced the export_path
key with the commit
key, which is set to true
. We've also added a provisioners
block and specified a single script called install.sh
. Let's look at that script now.
#!/bin/sh -x
# Update apt
apt-get -yqq update
# Install Apache
apt-get -yqq install apache2
Our script updates APT and then installs the apache2
package.
When we run packer
on this template it'll create a new container from the ubuntu
image, run the install.sh
script using the shell
provisioner, and then commit a new image.
Let's see that now.
$ packer build docker_prov.json
. . .
==> docker: Provisioning with shell script: install.sh
docker: + apt-get -yqq update
docker: + apt-get -yqq install apache2
. . .
==> docker: Committing the container
docker: Image ID: sha256:dd465405e2f4880b50ef6468d9e18a1f...
==> docker: Killing the container: 13786af85560b9169b71126024aacb6e...
Build 'docker' finished.
==> Builds finished. The artifacts of successful builds are:
--> docker: Imported Docker image: sha256:dd465405e2f4880b50ef6468d9e18a1f...
Here we can see our Docker image has been built from the ubuntu image and then provisioned using our install.sh script, which installed the Apache web server. We've then output an image, stored in our local Docker daemon.
Sometimes a provisioner isn't quite sufficient and you need to take some additional actions to make a container fully functional. The docker builder comes with a key called changes that allows you to specify some Dockerfile instructions. The changes key behaves in much the same way as the docker commit --change command line option.

We can use the changes key to supplement our existing template:
{
  "type": "docker",
  "image": "ubuntu",
  "commit": true,
  "changes": [
    "USER www-data",
    "WORKDIR /var/www",
    "EXPOSE 80"
  ]
}
Here we've added three instructions: USER
, which sets the default user; WORKDIR
, which sets the working directory; and EXPOSE
, which exposes a network port. These instructions will be applied to the image being built and committed to Docker.
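Since the changes key mirrors docker commit --change, a roughly equivalent manual invocation (with a hypothetical container ID and image name) might look like:

```shell
$ docker commit \
  --change "USER www-data" \
  --change "WORKDIR /var/www" \
  --change "EXPOSE 80" \
  6a872b49ce49 jamtur01/apache:0.1
```

Packer applies the same sort of changes for you when it commits the container at the end of the build.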
You can't change all Dockerfile
instructions, but you can change the CMD
, ENTRYPOINT
, ENV
, EXPOSE
, MAINTAINER
, USER
, VOLUME
, and WORKDIR
instructions.
This is still only a partial life cycle, and we most often want to do something with the artifact generated by our build. This is where post-processors come in.
Post-processors take actions on the artifacts, usually images, created by Packer. They allow us to store, distribute, or otherwise process those artifacts. The Docker workflow is ideal for demonstrating their capabilities. We're going to examine two Docker-centric post-processors:
- docker-tag - Tags Docker images.
- docker-push - Pushes Docker images to an image store, like the Docker Hub.

Post-processors are defined in another template block: post-processors. Let's add some post-processing to a new template, docker_postproc.json.
{
  "builders": [{
    "type": "docker",
    "image": "ubuntu",
    "commit": true
  }],
  "provisioners": [
    {
      "type": "shell",
      "script": "install.sh"
    }
  ],
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "jamtur01/docker_postproc",
        "tag": "0.1"
      },
      "docker-push"
    ]
  ]
}
Note that we've added a post-processors block with an array of post-processors defined. Packer takes the result of each builder action and sends it through the post-processors: with one builder defined, the post-processors will be executed once; with two builders, twice; and so on, by default. You can also control which post-processors run for which build; we'll see more of that in Chapter 7.
There are three ways to define post-processors: simple, detailed, and in sequence. A simple post-processor definition is just the name of a post-processor listed in an array.
{
  "post-processors": ["docker-push"]
}
A simple definition assumes you don't need to specify any configuration for the post-processor. A more detailed definition is much like a builder
definition and allows you to configure the post-processor.
{
  "post-processors": [
    {
      "type": "docker-save",
      "path": "container.tar"
    }
  ]
}
As with a builder or provisioner definition, we specify the type of post-processor and then any options. In our case we use the docker-save post-processor, which saves the Docker image to a file.
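The docker-save post-processor works like the docker save command, so (assuming that equivalence holds for your Packer version) you could load the resulting tarball back into a Docker daemon with:

```shell
$ docker load -i container.tar
```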
The last type of post-processor definition is a sequence. This is the most powerful use of post-processors, chained in sequence to perform multiple actions. It can contain simple and detailed post-processor definitions, listed in the order in which you wish to execute them.
"post-processors": [
  [
    {
      "type": "docker-tag",
      "repository": "jamtur01/docker_postproc",
      "tag": "0.1"
    },
    "docker-push"
  ]
]
You can see our post-processors are inside the post-processors
array and further nested within an array of their own. This links post-processors together, meaning their actions are chained and executed in sequence. Any artifacts a post-processor generates are fed into the next post-processor in the sequence.
Our first post-processor is docker-tag
. You can specify a repository and an optional tag for your image. This is the equivalent of running the docker tag
command.
$ docker tag image_id jamtur01/docker_postproc:0.1
This tags our image with a repository name and a tag, which makes it possible to use the second post-processor: docker-push
.
The docker-push
post-processor pushes Docker images to a Docker registry, like the Docker Hub, a local private registry, or even Amazon ECR. You can provide login credentials for the push, or the post-processor can make use of existing credentials such as your local Docker Hub or AWS credentials.
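For example, to push to Amazon ECR instead of the Docker Hub, the post-processor supports an ecr_login key (the registry URL below is hypothetical; check the docker-push documentation for your Packer version for the exact keys):

```json
{
  "type": "docker-push",
  "ecr_login": true,
  "login_server": "https://123456789012.dkr.ecr.us-east-1.amazonaws.com/"
}
```

When ecr_login is set, the post-processor uses your local AWS credentials to authenticate to the registry before pushing.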
To both tag and push an image, you need to chain the docker-tag and docker-push post-processors in a sequence, as we've done in our template.

Let's try to post-process our artifact now.
$ packer build docker_postproc.json
docker output will be in this color.
. . .
==> docker: Running post-processor: docker-tag
docker (docker-tag): Tagging image: sha256:b5cf683867f9f1d20149bd106db8b423...
docker (docker-tag): Repository: jamtur01/docker_postproc:0.1
==> docker: Running post-processor: docker-push
docker (docker-push): Pushing: jamtur01/docker_postproc:0.1
docker (docker-push): The push refers to a repository [docker.io/jamtur01/docker_postproc]
. . .
docker (docker-push): 1f833f3fe176: Pushed
docker (docker-push): 0.1: digest: sha256:2eba43302071a38bf6347f6c06dcc7113d7... size: 1569
Build 'docker' finished.
==> Builds finished. The artifacts of successful builds are:
--> docker: Imported Docker image: sha256:b5cf683867f9f1d20149bd106db8b...
You can tag and push an image more than once by specifying the docker-tag and docker-push post-processors multiple times.

We've cut out a lot of log entries, but you can see our Docker image being tagged and then pushed to my Docker Hub account, jamtur01
. The image has been pushed to the docker_postproc
repository with a tag of 0.1
. This assumes we've got local credentials for the Docker Hub. If you need to specify specific credentials you can add them to the template like so:
{
  "variables": {
    "hub_username": "",
    "hub_password": ""
  },
  . . .
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "jamtur01/docker_postproc",
        "tag": "0.1"
      },
      {
        "type": "docker-push",
        "login": true,
        "login_username": "{{user `hub_username`}}",
        "login_password": "{{user `hub_password`}}"
      }
    ]
  ]
}
Here we've specified some variables to hold our Docker Hub username and password. This is more secure than hard coding it into the template.
We've used the user function to reference them in the post-processor. We've also specified the login key and set it to true to ensure the docker-push post-processor logs in prior to pushing the image.
We can then run our template and specify the variables on the command line:
$ packer build \
-var 'hub_username=jamtur01' \
-var 'hub_password=bigsecret' \
docker_postproc.json
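Alternatively, Packer's -var-file flag lets you keep these values in a JSON variables file; for example, a (hypothetical) credentials.json:

```json
{
  "hub_username": "jamtur01",
  "hub_password": "bigsecret"
}
```

You'd then run packer build -var-file=credentials.json docker_postproc.json. Keep in mind that secrets stored in files on disk carry their own risks, so protect the file's permissions accordingly.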
In this chapter we've seen how to combine Packer and Docker to build Docker images. We've seen how we can combine multiple stages of the build process: building a container from a base image, provisioning it, applying Dockerfile instructions with the changes key, and tagging and pushing the resulting image with post-processors.

There are also other post-processors that might interest you. You can find a full list in the Packer documentation.
In the next chapter, we're going to see how we can add tests to our Packer build process to ensure that our provisioning and image are correct.