9. DevOps

Let’s see how we can use Docker as part of a simple continuous integration/continuous delivery (CI/CD) pipeline. In this example, we have a demo Python API. The file and directory structure looks like the following. It is not important what this Python API does, since we are more interested in how to use Docker to build, test and publish the API to PyPI.

python/
├── Dockerfile
├── Makefile
├── publish.sh
├── pydemo
│   ├── __init__.py
│   └── poco.py
├── README.md
├── README.rst
├── requirements.txt
├── setup.cfg
├── setup.py
└── tests
    ├── __init__.py
    └── test_poco.py
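
The exact contents of these files are not important, but one detail matters later: the version string lives in setup.py, and publish.sh rewrites it during the build. A minimal sketch of what setup.py might look like is shown below (the metadata values are placeholders, not the real project’s).

from setuptools import setup, find_packages

# hypothetical, minimal setup.py; publish.sh rewrites version='0.0.1'
# with the value of ${API_VERSION} before building the distributions
setup(
    name='pydemo',
    version='0.0.1',
    packages=find_packages(exclude=['tests']),
    install_requires=[],
)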

Using Docker containerization to perform CI/CD is largely a matter of scripting. Here’s the script publish.sh, which builds, tests and publishes the API. The important caveat here is the acquisition of the ${API_VERSION} environment variable value: we pass this value into the image build as a build argument, and the Dockerfile exposes it to the script as an environment variable.

#!/bin/bash

# expected location of the source distribution produced by setup.py
SOURCE_DIST=/pydemo/dist/pydemo-${API_VERSION}.tar.gz

buildCode() {
  echo "start the build"
  cd /pydemo \
    && make clean \
    && make \
    && python setup.py sdist bdist bdist_wheel \
    && twine check dist/*
}

updateVersion() {
  echo "replace version of software to ${API_VERSION}"
  sed -i "s/version='0.0.1'/version='${API_VERSION}'/g" /pydemo/setup.py
}

copyCredentials() {
  if [[ -f /pydemo/.pypirc ]]; then
    echo "copying over .pypirc"
    cp /pydemo/.pypirc /root/.pypirc
  fi
}

publish() {
  echo "python publish"

  if [[ -f /root/.pypirc ]]; then
    if [[ -f ${SOURCE_DIST} ]]; then
      echo "uploading source"
      cd /pydemo \
        && make clean \
        && python setup.py sdist \
        && twine upload --repository ${PYPI_REPO} ${SOURCE_DIST}
    else
      echo "no ${SOURCE_DIST} found!"
    fi
  else
    echo "no .pypirc found!"
  fi
}

cleanUp() {
  if [[ -f /root/.pypirc ]]; then
    echo "cleaning up"
    rm -f /root/.pypirc
  fi
}

build() {
  echo "python build"
  buildCode
  publish
}

# main flow: set up conda, bump the version, stage credentials, build and publish, clean up
conda init bash
. /root/.bashrc
updateVersion
copyCredentials
build
cleanUp

echo "done!"

Here’s the Dockerfile. We use the ARG instruction to pass build arguments in while the image is being built, and re-export them with ENV (as API_VERSION and PYPI_REPO) so that publish.sh can read them.

FROM continuumio/anaconda3

ARG ARG_VERSION
ARG ARG_REPO

ENV API_VERSION=$ARG_VERSION
# exported under the name publish.sh expects for twine's --repository flag
ENV PYPI_REPO=$ARG_REPO
ENV PATH /opt/conda/bin:$PATH

RUN apt-get update \
    && apt-get upgrade -y \
    && apt-get install build-essential -y

WORKDIR /pydemo

COPY . .

RUN conda install --file requirements.txt -y

RUN /pydemo/publish.sh
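
Note that COPY . . is what brings .pypirc (if it is present in the build context) into the image, so that copyCredentials can stage it at /root/.pypirc for twine. A hedged example of what such a file might contain is shown below; the section name must match the value we pass for ARG_REPO, and the credentials are placeholders. Keep in mind that the COPY layer still contains the file even after cleanUp removes /root/.pypirc, so the resulting image should not be pushed anywhere public.

# hypothetical .pypirc; the [pydemo] section name matches ${PYPI_REPO}
[distutils]
index-servers =
    pydemo

[pydemo]
repository = https://upload.pypi.org/legacy/
username = __token__
password = pypi-XXXXXXXX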

As we iterate through our development, we perform CI/CD by executing the following command. Here, we specify that the API is at version 0.0.1.

docker build --no-cache \
    --build-arg ARG_VERSION=0.0.1 \
    --build-arg ARG_REPO=pydemo \
    -t pydemo:0.0.1 .

If we are at version 0.0.2, then the command changes to something as simple as the following.

docker build --no-cache \
    --build-arg ARG_VERSION=0.0.2 \
    --build-arg ARG_REPO=pydemo \
    -t pydemo:0.0.2 .
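
Since the version shows up in both the build argument and the image tag, it is convenient to keep the two in sync with a single shell variable, for example:

# keep the build argument and the image tag in sync
API_VERSION=0.0.2
docker build --no-cache \
    --build-arg ARG_VERSION=${API_VERSION} \
    --build-arg ARG_REPO=pydemo \
    -t pydemo:${API_VERSION} .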