CLI tooling containers
What does the title mean?
It means running one or more CLI tools within a lightweight container on your local environment.
Usually CLI tools are installed on your machine, for instance from a tarball or with a brew install command. This article shows how these tools can be run from a container instead, without changing the user experience.
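As a minimal sketch of the idea (the image name some/tool-image and the tool name mytool are just placeholders, not real images), an alias like this makes a containerized tool feel like a locally installed one:
alias mytool='docker run --rm -it -v "$(pwd)":/work -w /work some/tool-image mytool'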
Why would you do this?!
It is a personal preference, but I always aim to run all my CLI tools inside containers locally. This keeps the actual work environment clean, since usually only Docker is required. Sometimes it can be challenging, especially if you need multiple tools inside one container, but it also brings some big advantages, like:
- Clean local work/devops environment (i.e. your laptop/desktop)
- Easy updating without the need for uninstalling etc
- Easily share scripts/code with others and tell them to run it inside your container (this might be one of the greatest advantages: you never have to hear that your code doesn’t work because someone else’s local environment is a mess/out-of-date/up-to-date)
- Be conscious about all your dependencies
Of course there are also disadvantages:
- Takes time to set up, get working, and test properly (often at moments when you need to focus on something else)
- Authentication challenges, for instance to a cloud provider
Now let’s have a look at three examples:
ansible
Container
The first CLI tool I applied this method to was Ansible, a few years ago, mainly because installing the right versions of Ansible and Python can be a hassle (especially when switching Python versions). Spinning up a container, running the Ansible playbook in it while the networking goes through the host, and then killing the container again therefore seemed like a very neat solution.
Here you can find my ansible container repository on GitLab.
The container is based on Alpine. Since these CLI tool containers are spun up every time you run the command, I try to keep the images as small as possible. Several directories and files are mounted into the container, like:
- /etc/hosts, often used together with the inventory files
- ~/.ssh/, the directory holding the SSH keys
- $(pwd), to give the container access to the current directory
If you now run the container in the directory with the playbooks and inventory files, Ansible has access to everything it needs. Otherwise you can simply mount the additional directories and files that you require in the docker run command, see the docs.
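As a rough sketch of such a docker run command: the image tag ansible-container, the mount point /work, and the playbook/inventory names are placeholders of my own, and mounting ~/.ssh into /root/.ssh assumes the container runs as root:
docker run --rm -it \
  --network host \
  -v /etc/hosts:/etc/hosts:ro \
  -v ~/.ssh:/root/.ssh:ro \
  -v "$(pwd)":/work \
  -w /work \
  ansible-container \
  ansible-playbook -i inventory.ini site.yml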
terraform
Container
To deploy infrastructure using Terraform, we need a container that includes the right version of Terraform. HashiCorp offers this nice Dockerfile on GitHub.
The nice part is that it uses a build.sh script for the more complex build operations. This script contains all the Golang commands and checks the environment to ensure a proper build.
The not-so-nice part is that the Dockerfile does not make use of the multi-stage build functionality, which in my opinion shows off one of the main advantages of containerizing applications written in Golang. In a multi-stage build, at least two images are defined in a single Dockerfile; each defined ‘container’ is called a stage. Within that single Dockerfile, files can easily be copied between stages. A common design pattern to create (very) small containers is therefore:
- First stage: Download all dependencies to create the needed artifact
- Second stage: Copy in only the artifact without all the dependencies needed for the creation of the artifact
With that in mind, I have rewritten the Dockerfile like this:
# First stage: build terraform from source
FROM golang:alpine as builder
ENV GOPATH /go
RUN apk add --update git bash openssh
ENV TERRAFORM_VERSION=0.11.13
ENV TF_DEV=true
ENV TF_RELEASE=1
WORKDIR $GOPATH/src/github.com/hashicorp/terraform
RUN git clone --branch v${TERRAFORM_VERSION} --depth 1 https://github.com/hashicorp/terraform.git ./ && \
    /bin/bash scripts/build.sh

# Second stage: copy in only the terraform binary
FROM alpine:3.9
RUN apk --no-cache add --update ca-certificates
COPY --from=builder /go/bin /usr/bin/
ENTRYPOINT ["terraform"]
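To actually use this image, a build plus an alias along the following lines should work; the tag terraform:0.11.13 and the /work mount point are my own choices for this sketch:
docker build -t terraform:0.11.13 .
alias terraform='docker run --rm -it -v "$(pwd)":/work -w /work terraform:0.11.13'
terraform init
terraform plan
Because the entrypoint is terraform, any arguments after the image name are passed straight to the terraform binary.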
gcloud
Container
Google offers a very nice container for their gcloud SDK. On their page they also describe a method of authentication. This works well for authenticating to GCP, and if you set up an alias for the docker run ... command, you can use it just as if it were installed locally.
For authentication, Google advises the following process on their Docker Hub page. Authenticate by running:
docker run -ti --name gcloud-config google/cloud-sdk gcloud auth login
Once you authenticate successfully, credentials are preserved in the volume of the gcloud-config container. To list compute instances using these credentials, run the container with --volumes-from:
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gcloud compute instances list --project your_project
Final tip: you can use the regular gcloud command if you add the following as an alias:
alias gcloud='docker run --rm -it -v "$(pwd)":/current-dir -w /current-dir --volumes-from gcloud-config google/cloud-sdk gcloud'
Note: this will not work if you reference paths above the current directory, since only the current directory is mounted into the container, so gcloud ../my-deployment-manager-file.yml won’t work.
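If you regularly need files from a parent directory, one possible workaround (my own, not from Google’s Docker Hub page) is to mount a higher-level directory such as your home directory and set the working directory to wherever you currently are; this only helps for paths under the mounted directory:
alias gcloud='docker run --rm -it -v "$HOME":"$HOME" -w "$(pwd)" --volumes-from gcloud-config google/cloud-sdk gcloud'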