Kubernetes, Local to Production with Django: Part 2 — Docker and Minikube
This section focuses on implementing the Kubernetes hello-minikube tutorial adapted to a conventional Django application. The codebase for this tutorial can be cloned from my GitHub repo, and we will be working with the part_2-getting-started branch. The Kubernetes version for this tutorial is assumed to be 1.15.0.
- The following updates were made in October 2020 — Kubernetes version was
updated to v1.19.2, python was updated to 3.8 and Django was updated to version 3.1.2
This tutorial assumes a macOS system, but includes links on how to run it on a Linux/Ubuntu or Windows OS.
Minikube is one of the easiest ways at the moment to run a single-node Kubernetes cluster locally. On macOS, the installation can be done by running:
$ brew install minikube
For a Linux or Windows OS, the installation instructions are specified in the minikube GitHub README page. The minikube version at the time of writing was 1.12.0.
Minikube supports several VM drivers but by default uses VirtualBox, which can be downloaded and installed from the VirtualBox downloads page.
Docker is used for containerization and the installation can be found in the docker documentation page.
The Kubernetes command line tool is called kubectl and is used to deploy and manage applications by creating, updating and deleting components as well as inspecting cluster resources. To install it, simply run:
$ brew install kubectl
For detailed Windows and Linux installations, please refer to the kubernetes kubectl installation page. The Kubectl version at the time of writing was 1.15.0.
In order to get the best out of this tutorial, the project GitHub repo should be cloned, checking out the part_2-getting-started branch that this tutorial is based on.
To start the Kubernetes cluster with a specific version using minikube, run the command:
$ minikube start --kubernetes-version=v1.19.2
Several processes occur, which include:
- The creation and configuration of a VM which runs a single-node Kubernetes cluster.
- Setting the default kubectl context to minikube (i.e. kubectl config use-context minikube), where a context is the configuration information used to communicate with each unique Kubernetes cluster.
The status of the minikube cluster can be determined by running:
$ minikube status
host: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
And to confirm the kubectl context, the command is:
$ kubectl config current-context
The docker command line in the host machine can be configured to utilize the docker daemon within minikube by running:
$ eval $(minikube docker-env)
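Running the command without the eval wrapper shows what is actually being configured: a handful of exported environment variables that point the docker CLI at the daemon inside the minikube VM. The values below are illustrative only, not taken from the original tutorial; the IP, port and certificate path will differ on a real machine:

```shell
# Illustrative output of `minikube docker-env`; the IP, port and cert
# path below are placeholders and will differ on your machine.
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="$HOME/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="minikube"
```

The eval wrapper simply applies these exports to the current shell session, so the redirection lasts only as long as that session.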
There are several reasons why it is useful to use the minikube docker daemon:
- Docker images to be deployed into the local cluster don't have to be pushed to a container registry and pulled in by Kubernetes; they can be built inside the same docker daemon as minikube and used directly.
- As a result, it's good for running local experiments, as it has a much faster turnaround time than if an external registry were required.
- It applies when you have a single-VM (node) docker cluster and want to use the docker daemon inside the VM.
To confirm that the docker cli is using the minikube docker daemon, run:
$ docker info | grep Name
In order to revert back to the host docker daemon, simply run:
$ eval $(minikube docker-env -u)
For the rest of the tutorial, the kubectl context should be set to minikube and the minikube docker daemon should be used.
There are many management commands that are used by kubectl to view the state of the Kubernetes cluster. Fortunately, minikube provides a dashboard so we don't have to worry about all the explicit commands. To view the dashboard, run the command:
$ minikube dashboard
This opens the default browser and displays the current state of the Kubernetes cluster.
As Kubernetes expects a containerized application, we will be using docker to get started. It's assumed docker has already been installed and we are using the minikube docker daemon.
The following Dockerfile is in the root directory of the project:
FROM python:3-slim
LABEL maintainer="firstname.lastname@example.org"
ENV PROJECT_ROOT /app
WORKDIR $PROJECT_ROOT
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD python manage.py runserver 0.0.0.0:8000
- The Dockerfile first defines the base image to build from, which in this case is python:3-slim.
- The LABEL instruction is then used to add metadata to the image. This is the recommended way to specify the package maintainer, as the MAINTAINER instruction has been deprecated.
- The ENV <VARIABLE> <value> directive sets the project root environment variable, which can then be reused in several places as $<VARIABLE>. This allows for one point of modification in case some arbitrary variable needs to be changed.
- The current working directory is then set using the WORKDIR instruction, which resolves the $PROJECT_ROOT environment variable previously set. The working directory will be the execution context of any subsequent RUN, COPY and CMD instructions, unless explicitly changed.
- The COPY instruction is then used to copy the requirements.txt file from the current directory of the local file system into the file system of the container. Copying this individual file ensures that the RUN pip install instruction's build cache is only invalidated (forcing the step to be re-run) if specifically the requirements.txt file changes, leading to an efficient build process. See the docker documentation for further details. It's worth noting that COPY, as opposed to ADD, is the recommended instruction for copying files from the local file system to the container file system.
- The required python packages are then installed using the RUN pip install instruction.
- The rest of the project files are then copied into the container file system. This should be one of the last steps as the files are constantly changing leading to more frequent cache invalidations resulting in more frequent image builds.
- The final instruction executed is CMD, which provides defaults for an executing container. In this case the default is to start the python web server.
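Because COPY . . copies the entire build context into the image, it can be worth adding a .dockerignore file next to the Dockerfile so that local artifacts neither bloat the image nor needlessly invalidate the build cache. This file is not part of the original repo; the entries below are only a suggested sketch for a typical Django project:

```
# .dockerignore — suggested sketch, not part of the original repo
.git
__pycache__/
*.pyc
*.sqlite3
.env
```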
The command used to build the required docker image based on the Dockerfile is:
$ docker build -t <IMAGE_NAME>:<TAG> .
The :<TAG> parameter, though optional, is recommended in order to keep track of the version of the docker image to be run, e.g. docker build -t gitumarkk/k8_django_minikube:1.0.0 .. The <IMAGE_NAME> can be any arbitrary string, but the recommended format is <docker_hub_username>/<project_name>, as in the example above.
In order to see the built image within the minikube docker environment, run:
$ docker images
REPOSITORY                                            TAG      IMAGE ID       CREATED         SIZE
gitumarkk/k8_django_minikube                          1.0.0    c459907decbb   5 minutes ago   197MB
python                                                3-slim   dc41c0491c65   10 days ago     156MB
gcr.io/google_containers/kubernetes-dashboard-amd64   v1.8.0   55dbc28356f2   4 weeks ago     119MB
gcr.io/k8s-minikube/storage-provisioner               v1.8.0   4689081edb10   7 weeks ago     80.8MB
As we are in the minikube docker daemon, it will display the image that we built as well as images used by minikube within the cluster.
Kubernetes uses the concept of pods (i.e. a grouping of co-located and co-scheduled containers running in a shared context) to run applications. There are different controllers used to manage the lifecycle of pods in a Kubernetes cluster; however, a Deployment controller forms one of the easiest ways to create, update and delete pods in the cluster.
Kubernetes commands can be executed using an imperative or a declarative approach. Imperative commands specify how an operation needs to be performed, while the declarative approach uses configuration files which can be stored in version control. The preferred method is the declarative approach, as the steps can be tracked and audited, but for argument's sake we will look at both approaches.
To create a deployment imperatively, run the following command (since Kubernetes v1.18, kubectl run creates a bare pod rather than a Deployment, so kubectl create deployment is used instead; the --port flag requires kubectl v1.19+):
$ kubectl create deployment <deployment-name> --image=<IMAGE-NAME> --port=8000
In the above command, the <deployment-name> can be any string which will be used to identify the deployment, while the <IMAGE-NAME> is the docker image that was built. At its simplest, the command creates a Deployment controller, and the controller then creates pods consisting of containers based on the image defined by <IMAGE-NAME>. The pods are then deployed in the minikube Kubernetes cluster. The running deployments can be seen in the minikube dashboard under the Pods side navigation bar; however, to view them in the terminal, execute:
$ kubectl get deployments
NAME                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-django   1         1         1            1           20h
To delete a deployment, the command is:
$ kubectl delete deployment/<DEPLOYMENT_NAME>
Using the declarative approach, the deployment can be created by applying the command:
$ kubectl apply -f deployment.yaml
deployment "<deployment-name>" created
To the following spec, which is found in the ./kubernetes_django/deploy/kubernetes/django/deployment.yaml file in the repository (placeholder values shown here):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deployment_name>
  labels:
    <deployment_key>: <deployment_value>
spec:
  replicas: 1
  selector:
    matchLabels:
      <pod_key>: <pod_value>
  template:
    metadata:
      labels:
        <pod_key>: <pod_value>
    spec:
      containers:
        - name: <pod_name>
          image: <pod_image>
          ports:
            - containerPort: 8000
From the spec file:
- The metadata: name field describes the deployment name, whereas metadata: labels describes the labels for the deployment, i.e. it can be thought of as a tagging mechanism.
- The spec: replicas field defines the number of pods to run.
- The spec: selector: matchLabels field describes which pods the deployment should apply to.
- The spec: template: metadata: labels field indicates what labels should be assigned to the running pod. This label is what is found by the matchLabels field in the deployment.
- The spec: template: spec field contains a list of containers that belong to this pod. In this case it indicates the pod has one container, as it only has one image and name in the list. The <pod_name> can be any string but should ideally be descriptive. The <pod_image> is an image name that should be discoverable within the local context or from an external container registry. Since the local docker daemon is being used, the image will be used from the local context.
- The deployment exposes port 8000 within the pod as defined in the spec: template: spec: containers: ports field.
Components created declaratively can be deleted by running the command:
$ kubectl delete -f <file_path>.yaml
When a deployment is created, each pod in the deployment has a unique IP address within the cluster. However, we need some kind of mechanism to allow the access of the pod IP address from outside the cluster. This is done by Services. Formally:
A Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service.
A Service routes traffic across pods while allowing the specific pod IP addresses to be dynamic. This means pods can die and be recreated, and thus their IP addresses can change, yet traffic will always route to the right pods. This is abstracted away by the Service object and allows the user to focus on building the application.
As with Deployments, services can either be defined imperatively or declaratively.
To create a service imperatively, the following shell command is to be executed:
$ kubectl expose deployment <deployment-name> --type=NodePort
In order to view the existing services, run:
$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
django       NodePort    10.111.73.57   <none>        8000:30098/TCP   16s
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          4d
This shows that our deployment has a NodePort type service and exposes port 8000 on the container to port 30098 on the host machine. The latter port is called a nodePort, and by default the range is 30000–32767. The deployed service can be viewed on the minikube dashboard; however, minikube also provides the useful cli command:
$ minikube service <service-name>
The <service-name> in this case is django. If everything goes well, the default browser should be opened with the django application running on the <minikube_ip>:<nodePort> url, showing the default Django welcome page (alternatively, minikube service django --url prints the url without opening a browser).
To delete the service imperatively, the command is:
$ kubectl delete svc/<service-name>
The following declarative definition of the service can be found in the repository alongside the deployment spec (placeholder values shown here):
apiVersion: v1
kind: Service
metadata:
  name: <service-name>
spec:
  type: NodePort
  selector:
    <pod_key>: <pod_value>
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8000
- The metadata: name field describes the name of the Service object that will be created and can be identified by running kubectl get svc.
- The spec: selector field specifies the pod label that the service applies to. This means that any pod matching the <pod_key>=<pod_value> label will be exposed by the service.
- spec: ports contains a yaml array. The protocol in the first item in the array is TCP, where the pod's port: 8000 field is exposed to the Kubernetes cluster, i.e. the cluster interacts with the pod on port 8000. The targetPort is the port within the pod that traffic is routed to; if targetPort is not defined, it defaults to the value of port.
- The NodePort type instructs the service to expose the pod to the node/host machine on a random port in the default range 30000–32767; however, an explicit nodePort can be set in the ports array to specify which port in the default range the host machine uses to communicate with the pod.
And it can be created declaratively by running the command:
$ kubectl apply -f service.yaml
service "<service-name>" created
The deployed service can be viewed on the minikube dashboard or by running the command:
$ kubectl get svc
And the service can be viewed in the browser by running:
$ minikube service <service-name>
So far we have covered how to get a basic Django application up and running in a Kubernetes cluster by:
- Installing and running minikube, which creates our local cluster.
- Creating a Dockerfile that is used to build the image for the application.
- Deploying the image as a Pod within our Kubernetes cluster via a Deployment, exposing it with a Service and seeing the result in the browser.
This forms the foundation for the rest of the tutorial as it’s simply a matter of building on what we already have. The next tutorial will focus on how to deploy a Postgres backend with Celery that utilizes Redis as a message broker.
If you have any questions or anything needs clarification, you can book a time with me on https://mbele.io/mark
8. Terms and Definitions
- Node: A node is a worker machine in Kubernetes and may be a virtual machine or a physical machine.
- Cluster: A cluster is a collection of nodes.
- Image: An image is a template that defines the packages and steps necessary to run an application.
- Container: A container is the instance of an image and is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it, i.e. code, runtime, system tools, system libraries and settings.
- Pod: A pod is a group of one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers.