Kubernetes, Local to Production with Django: 2 — Docker and Minikube

Mark Gituma
10 min read · Jan 5, 2018


This section focuses on implementing the Kubernetes hello-minikube tutorial adapted to a conventional Django application. The codebase for this tutorial can be cloned from my GitHub repo, and we will be working with the part_2-getting-started branch. The Kubernetes version for this tutorial is assumed to be v1.19.2.

Edits

  • The following updates were made in October 2020: the Kubernetes version was updated to v1.19.2, Python was updated to 3.8 and Django was updated to version 3.1.2.

1. Requirements

OS

This tutorial assumes a macOS system, but includes links on how to run it on a Linux/Ubuntu or Windows OS.

Minikube

Minikube is one of the easiest ways at the moment to run a single-node Kubernetes cluster locally. On macOS, the installation can be done by running:

$ brew install minikube

For a Linux or Windows OS, the installation instructions are specified in the minikube GitHub README page. The minikube version at the time of writing was 1.12.0.

Virtualbox

Minikube supports several VM drivers but uses VirtualBox by default, which can be downloaded and installed from the VirtualBox downloads page.

Docker

Docker is used for containerization, and the installation instructions can be found on the Docker documentation page.

Kubectl

The Kubernetes command line tool is called kubectl and is used to deploy and manage applications. This is done by creating, updating and deleting components as well as inspecting cluster resources. To install it, simply run:

$ brew install kubectl

For detailed Windows and Linux installations, please refer to the Kubernetes kubectl installation page. The kubectl version should ideally match the cluster version, which is v1.19.2 in this tutorial.

Project files

To get the most out of this tutorial, the project GitHub repo should be cloned:

$ git clone https://github.com/gitumarkk/kubernetes_django.git

The branch this tutorial is based on is part_2-getting-started.

2. Minikube

To start the Kubernetes cluster with a specific version using minikube, run the command:

$ minikube start --kubernetes-version=v1.19.2

Several processes occur, which include:

  • The creation and configuration of a VM which runs a single-node Kubernetes cluster.
  • Setting the default kubectl context to minikube, i.e. kubectl config use-context minikube, where a context is the configuration information used to communicate with each unique Kubernetes cluster.
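
Before moving on, it's worth knowing how to inspect and switch contexts yourself. The following standard kubectl commands list all configured contexts and select the minikube one:

$ kubectl config get-contexts
$ kubectl config use-context minikube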

The status of the minikube cluster can be determined by running:

$ minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100

And to confirm the kubectl context, the command is:

$ kubectl config current-context
minikube

The docker command line in the host machine can be configured to utilize the docker daemon within minikube by running:

$ eval $(minikube docker-env)
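
The command works by exporting docker environment variables into the current shell. The exact values vary per machine, but the output typically looks something like:

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/<user>/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="minikube"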

There are several reasons why it is useful to use the minikube docker daemon:

  • Docker images to be deployed into the local cluster don’t have to be pushed to a container registry and pulled in by Kubernetes; they can be built inside the same docker daemon as minikube and used directly.
  • As a result, it’s good for running local experiments, as the turnaround time is much faster than if an external registry were required.
  • It applies when you have a single-VM (node) docker cluster and want to use the docker daemon inside the VM.

To confirm that the docker cli is using the minikube docker daemon, run:

$ docker info | grep Name
Name: minikube

To revert to the host docker daemon, simply run:

$ eval $(minikube docker-env -u)

For the rest of the tutorial, the kubectl context should be set to minikube and the minikube docker daemon should be used.

kubectl has many management commands for viewing the state of the Kubernetes cluster. Fortunately, minikube provides a dashboard, so we don’t have to worry about all the explicit commands. To view the dashboard, run the command:

$ minikube dashboard

This opens the default browser and displays the current state of the Kubernetes cluster.
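
For those who prefer the terminal, the same information shown in the dashboard can be retrieved with a few explicit kubectl commands:

$ kubectl cluster-info
$ kubectl get nodes
$ kubectl get pods --all-namespaces
$ kubectl get deployments,services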

3. Docker

As Kubernetes expects a containerized application, we will be using docker to get started. It’s assumed docker has already been installed and we are using the minikube docker daemon.

The Dockerfile

The following Dockerfile is in the root directory of the project, i.e. ./kubernetes_django/Dockerfile:

FROM python:3.8-slim
LABEL maintainer="mark.gituma@gmail.com"
ENV PROJECT_ROOT /app
WORKDIR $PROJECT_ROOT
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD python manage.py runserver 0.0.0.0:8000

  • The Dockerfile first defines the base image to build from, which in this case is the python:3.8-slim image.
  • The LABEL instruction is then used to add metadata to the image. This is the recommended way to specify the package maintainer, as the MAINTAINER instruction has been deprecated.
  • The ENV <VARIABLE> <value> instruction sets the project root environment variable, which can be reused in several places as $<VARIABLE>. This allows for a single point of modification in case the variable needs to be changed.
  • The current working directory is then set using the WORKDIR instruction, which resolves the $PROJECT_ROOT environment variable previously set. The working directory will be the execution context of any subsequent RUN, COPY, ENTRYPOINT or CMD instructions, unless explicitly stated.
  • The COPY instruction is then used to copy the requirements.txt file from the current directory of the local file system into the file system of the container. Copying this individual file ensures that the RUN pip install instruction’s build cache is only invalidated (forcing the step to be re-run) if specifically the requirements.txt file changes, leading to an efficient build process. See the docker documentation for further details. It’s worth noting that COPY, as opposed to ADD, is the recommended instruction for copying files from the local file system to the container file system.
  • The required python packages are then installed using the RUN pip install instruction.
  • The rest of the project files are then copied into the container file system. This should be one of the last steps, as these files change most often, and each change invalidates the build cache from this instruction onward, resulting in slower image builds.
  • The final instruction executed is CMD, which provides the default command for an executing container. In this case the default is to start the python web server.
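
For reference, the requirements.txt file that the pip install step consumes needs to pin at least the Django version this tutorial targets; the actual file in the repo may list additional packages:

Django==3.1.2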

Building Docker

The command used to build the required docker image based on the Dockerfile is:

$ docker build -t <IMAGE_NAME>:<TAG> .

The :<TAG> parameter, though optional, is recommended in order to keep track of the version of the docker image to be run, e.g. docker build -t gitumarkk/k8_django_minikube:1.0.0 . (the trailing dot is the build context). The <IMAGE_NAME> can be any arbitrary string, but the recommended format is <REPO_NAME>/<APP_NAME>.
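
As a side note, because COPY . . copies the entire build context into the image, it’s good practice to add a .dockerignore file next to the Dockerfile so that files which don’t belong in the image (and which would needlessly invalidate the build cache) are excluded. A minimal sketch, assuming a typical Django project layout:

.git
__pycache__/
*.pyc
deploy/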

In order to see the built image within the minikube docker environment, run:

$ docker images
REPOSITORY                                            TAG      IMAGE ID       CREATED         SIZE
gitumarkk/k8_django_minikube                          1.0.0    c459907decbb   5 minutes ago   197MB
python                                                3-slim   dc41c0491c65   10 days ago     156MB
gcr.io/google_containers/kubernetes-dashboard-amd64   v1.8.0   55dbc28356f2   4 weeks ago     119MB
gcr.io/k8s-minikube/storage-provisioner               v1.8.0   4689081edb10   7 weeks ago     80.8MB
...
As we are in the minikube docker daemon, it will display the image that we built as well as images used by minikube within the cluster.

4. Deployments

Kubernetes uses the concept of pods (i.e. a grouping of co-located and co-scheduled containers running in a shared context) to run applications. There are different controllers used to manage the lifecycle of pods in a Kubernetes cluster; however, a Deployment controller is one of the easiest ways to create, update and delete pods in the cluster.

Kubernetes commands can be executed using either an imperative or a declarative approach. Imperative commands specify how an operation should be performed, while the declarative approach uses configuration files which can be stored in version control. The declarative approach is preferred, as the steps can be tracked and audited, but for argument’s sake we will look at both approaches.

Imperative

To create a deployment imperatively, run the following command:

$ kubectl create deployment <deployment-name> --image=<IMAGE-NAME>

In the above command, the <deployment-name> can be any string which will be used to identify the deployment, while the <IMAGE-NAME> is the docker image that was built, e.g. gitumarkk/k8_django_minikube:1.0.0. (In Kubernetes versions before v1.18 this was commonly done with kubectl run <deployment-name> --image=<IMAGE-NAME> --port=8000, but kubectl run now creates a bare pod rather than a deployment.) At its simplest, the command creates a Deployment controller, and the controller then creates pods consisting of containers based on the image defined by <IMAGE-NAME>. The pods are then deployed in the minikube Kubernetes cluster. The running deployments can be seen in the minikube dashboard under the Deployments and Pods side navigation; however, to view them in the terminal, execute:

$ kubectl get deployments
NAME                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-django   1         1         1            1           20h

To delete a deployment, the command is:

$ kubectl delete deployment/<DEPLOYMENT_NAME>
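
When a deployment doesn’t behave as expected, a few standard kubectl commands help drill down from the deployment to the individual pods and their logs (the pod name comes from the kubectl get pods output):

$ kubectl describe deployment <DEPLOYMENT_NAME>
$ kubectl get pods
$ kubectl logs <POD_NAME>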

Declarative

Using the declarative approach, the deployment is created by applying a configuration file with the command:

$ kubectl apply -f deployment.yaml
deployment "<deployment-name>" created

The command applies the following spec, which is found in the ./kubernetes_django/deploy/kubernetes/django/deployment.yaml file in the repository:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deployment_name>
  labels:
    <deployment_label_key>: <deployment_label_value>
spec:
  replicas: 1
  selector:
    matchLabels:
      <pod_label_key>: <pod_label_value>
  template:
    metadata:
      labels:
        <pod_label_key>: <pod_label_value>
    spec:
      containers:
      - name: <pod_name>
        image: <pod_image>
        ports:
        - containerPort: 8000

From the spec file:

  • The metadata: name field describes the deployment name, whereas metadata: labels describes the labels for the deployment, i.e. it can be thought of as a tagging mechanism.
  • The spec: replicas field defines the number of pods to run.
  • The spec: selector: matchLabels field describes which pods the deployment should manage.
  • The spec: template: metadata: labels field indicates what labels should be assigned to the running pods. This label is what the deployment’s matchLabels field matches against.
  • The spec: template: spec field contains the list of containers that belong to this pod; in this case the pod has a single container, as the list has only one name and image. The <pod_name> can be any string but should ideally be descriptive. The <pod_image> is an image name that must be discoverable within the local context or from an external container registry. Since the local docker daemon is being used, the image will be taken from the local context.
  • The deployment exposes port 8000 within the pod as defined in the spec: template: spec: containers: ports field.
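
For concreteness, here is the same spec with the placeholders filled in with illustrative values. The name django and the app: django label are assumptions for this sketch (the actual file in the repo may use different names); the image is the one built earlier:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: django
  labels:
    app: django
spec:
  replicas: 1
  selector:
    matchLabels:
      app: django
  template:
    metadata:
      labels:
        app: django
    spec:
      containers:
      - name: django
        image: gitumarkk/k8_django_minikube:1.0.0
        ports:
        - containerPort: 8000

Note that because the image tag is not :latest, the imagePullPolicy defaults to IfNotPresent, so Kubernetes uses the image built inside the minikube docker daemon rather than trying to pull it from a registry.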

Components created declaratively can be deleted by running the command:

$ kubectl delete -f <file_path>.yaml
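
One benefit of using a Deployment controller is rolling updates: after building a new image tag, the deployment can be pointed at it and the pods are replaced gradually. A minimal sketch, assuming the deployment and container names from the example above:

$ docker build -t gitumarkk/k8_django_minikube:1.0.1 .
$ kubectl set image deployment/django django=gitumarkk/k8_django_minikube:1.0.1
$ kubectl rollout status deployment/django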

5. Services

When a deployment is created, each pod in the deployment has a unique IP address within the cluster. However, we need some mechanism that allows pods to be reached from outside the cluster. This is done by Services. Formally:

A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service.

A Service routes traffic across pods while allowing the specific pod IP addresses to be dynamic. This means pods can die and be recreated, changing their IP addresses, and yet traffic will always route to the right pods. This is abstracted away by the Service object, allowing the user to focus on building the application.

As with Deployments, services can either be defined imperatively or declaratively.

Imperative

To create a service imperatively, run the following command:

$ kubectl expose deployment <deployment-name> --type=NodePort --port=8000

In order to view the existing services, run:

$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
django       NodePort    10.111.73.57   <none>        8000:30098/TCP   16s
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          4d

This shows that our service has the NodePort type and exposes port 8000 on the pod as port 30098 on the node (the minikube VM). The latter port is called a nodePort, and by default its range is 30000–32767. The deployed service can be viewed on the minikube dashboard; however, minikube also provides the useful CLI command:

$ minikube service <service-name>

Where the <service-name> in this case is django. If everything goes well, the default browser should open the django application running at the <minikube_ip>:<nodePort> URL, showing the default Django welcome page.
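
If opening a browser is not desired, for example when testing with curl, minikube can print the service URL instead:

$ minikube service django --url

This outputs the <minikube_ip>:<nodePort> URL, e.g. http://192.168.99.100:30098 based on the service output above.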

To delete the service imperatively, the command is:

$ kubectl delete svc/<service-name>

Declarative

The following declarative definition of the service can be found in the ./kubernetes_django/deploy/kubernetes/django/service.yaml file.

kind: Service
apiVersion: v1
metadata:
  name: kubernetes-django-service
spec:
  selector:
    <pod_key>: <pod_value>
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 8000
  type: NodePort

  • The metadata: name field describes the name of the Service object that will be created and can be seen by running kubectl get svc.
  • The spec: selector field specifies the <pod_key> and <pod_value> that the service applies to, meaning any pod matching the <pod_key>: <pod_value> label will be exposed by the service.
  • The spec: ports field contains a YAML array. In the first item of the array, the protocol is TCP and the port: 8000 field is the port on which the service is exposed to the rest of the Kubernetes cluster, i.e. the cluster interacts with the service on port 8000. The targetPort is the port within the pod that traffic is forwarded to; if targetPort is not defined, it defaults to the same value as port. The NodePort type instructs the service to expose the pod to the node/host machine on a random port in the default range 30000–32767; an explicit nodePort can be set in the ports array to choose which port in that range the host machine uses to communicate with the pod.

And it can be created declaratively by running the command:

$ kubectl apply -f service.yaml
service "<service-name>" created

The deployed service can be viewed on the minikube dashboard or by running the command:

$ kubectl get svc

And the service can be opened in the browser by running:

$ minikube service <service-name>
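
Putting the declarative pieces together, a filled-in version of the service spec matching the deployment sketch from section 4 might look as follows; the app: django selector is an assumption and must match the pod labels defined in the deployment template:

kind: Service
apiVersion: v1
metadata:
  name: django
spec:
  selector:
    app: django
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 8000
  type: NodePort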

6. Summary

So far we have covered how to get a basic Django application up and running in a Kubernetes cluster by:

  • Installing and running minikube which creates our local cluster.
  • Creating a Dockerfile that is used to build the image for the application.
  • Deploying the image as a pod within our Kubernetes cluster and seeing the result in the minikube dashboard.

This forms the foundation for the rest of the tutorial as it’s simply a matter of building on what we already have. The next tutorial will focus on how to deploy a Postgres backend with Celery that utilizes Redis as a message broker.

If you have any questions or anything needs clarification, you can book a time with me on https://mbele.io/mark

7. Terms and Definitions

  • Node: A node is a worker machine in Kubernetes and may consist of a virtual machine or a physical machine.
  • Cluster: A cluster is a collection of nodes.
  • Image: An image is a template that defines the packages and steps necessary to run an application.
  • Container: A container is the instance of an image and is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it i.e. code, runtime, system tools, system libraries, settings.
  • Pod: A pod is a group of one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers.

8. Tutorial Links

Part 1, Part 2, Part 3, Part 4, Part 5, Part 6
