Day 32 : Launching your Kubernetes Cluster with Deployment

Radheya Zunjur
Jul 20, 2023


Welcome to Day 32 of our comprehensive guide on mastering Kubernetes! By now, you’ve already covered a significant portion of this exciting journey, and we hope you’re just as thrilled as we are to continue exploring the vast possibilities that Kubernetes has to offer. In this installment, we will delve into an essential aspect of Kubernetes deployment — launching your very own Kubernetes cluster.

What is a Deployment in Kubernetes?

A Deployment provides declarative updates for Pods and ReplicaSets. In simple terms, a Deployment is an object that manages the deployment and scaling of a set of pods. It provides a declarative way to define and manage the desired state of a replicated application.

In Kubernetes, a “Deployment” is an essential resource that allows you to declaratively manage and control the deployment of containerized applications. It represents a desired state for the application and ensures that the specified number of replicas are running at all times, automatically handling updates and rollbacks as needed.

When you create a Deployment, you define the desired state of your application, including the container image, the number of replicas (instances) of the application to be running, and other configurations such as resource limits, environment variables, and volume mounts.
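The "desired state" idea can be illustrated with a short conceptual sketch (this is not actual Kubernetes source code, just an illustration of the reconcile loop a Deployment's controller runs: compare the declared replica count against the pods that actually exist, then create or delete pods to close the gap):

```python
# Conceptual sketch of a Deployment controller's reconcile loop.
# Kubernetes continuously compares desired state vs. actual state
# and takes actions to converge them.

def reconcile(desired_replicas, running_pods):
    """Return the actions a controller would take to reach the desired state."""
    actions = []
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Too few pods: create replacements (this is also how auto-healing works)
        actions += ["create pod"] * diff
    elif diff < 0:
        # Too many pods: scale down
        actions += ["delete pod"] * (-diff)
    return actions

# If we declared 3 replicas but only 1 pod is running,
# the controller schedules 2 new pods:
print(reconcile(3, ["pod-a"]))
```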

Deployments Use Cases

A Deployment is used:

  • To roll out a ReplicaSet.
  • To manage the lifecycle of pods and ensure the desired number of replicas is running.
  • To define the desired state of your application by specifying the number of replicas, the container image, and other configuration details.
  • To perform rolling updates, allowing you to update the application without downtime.
  • To scale your application horizontally.

Kubernetes uses the Horizontal Pod Autoscaler (HPA) to scale applications up and down.
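As a sketch of what that looks like, an HPA manifest could target the Deployment we create later in this post (the HPA name `django-todo-hpa` and the CPU threshold are illustrative choices, and the cluster needs a metrics source such as metrics-server for CPU-based scaling to work):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: django-todo-hpa        # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-django-application
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale up when average CPU exceeds 70%
```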

Today's task is a simple one.

Task 1)
Create one Deployment file to deploy a sample todo-app on K8s using the "Auto-healing" and "Auto-scaling" features.

I am using the following repository for this project:

Clone the repository to your local.

git clone https://github.com/radheyzunjur/django-todo-cicd.git

The Dockerfile in the cloned repository contains the following:

FROM python:3
WORKDIR /data
RUN pip install django==3.2
COPY . .
RUN python manage.py migrate
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

Go to the cloned directory and build the image (make sure Docker is running):

cd django-todo-cicd
docker build . -t radheyzunjur/django-todo:latest

Let's verify that the image was created:

docker images

Yes, it is created. Now push this image to Docker Hub.

Log in to Docker Hub:

docker login

Now push the image to the registry:

docker push radheyzunjur/django-todo:latest

Let's verify in Docker Hub that the image was pushed. We can see that radheyzunjur/django-todo has been updated with the image from our local repo.

Now let us start with creating the deployment. Create the deployment.yml file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-django-application
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django-todo-app
  template:
    metadata:
      labels:
        app: django-todo-app
    spec:
      containers:
      - name: todo-app
        image: radheyzunjur/django-todo:latest
        ports:
        - containerPort: 8000

Let’s create this deployment. Run this command as a root user:

kubectl apply -f deployment.yml

Let us verify that the pods were created:

kubectl get pods

Let us test whether the containers we created are working. This check is done on the worker node.

Let's connect to any one of the containers locally:

sudo docker exec -it <container_id> bash

Let's connect to the application using the container's IP:

curl -L http://<container_ip>:8000

So the deployment I created is working successfully.

Let’s check the auto-healing and autoscaling features.

What are the auto-healing and auto-scaling features in Kubernetes?

Auto-healing, also known as self-healing, is a feature that automatically detects and recovers from failures within the cluster.
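Auto-healing can be strengthened with probes. As a sketch (not part of the manifest above; the probe path and timings are assumed values for a Django app serving its root path on port 8000), a livenessProbe in the container spec tells Kubernetes to restart containers that stop responding:

```yaml
containers:
- name: todo-app
  image: radheyzunjur/django-todo:latest
  ports:
  - containerPort: 8000
  livenessProbe:
    httpGet:
      path: /                 # assumed: the todo app responds on its root path
      port: 8000
    initialDelaySeconds: 10   # give Django time to start before probing
    periodSeconds: 15         # probe every 15 seconds
```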

Auto-scaling is a feature that dynamically adjusts the number of running instances (pods) based on the current demand or resource utilization.

To test auto-healing, I will delete two pods.

Use the following commands on the Master node to delete pods:

kubectl get pods
kubectl delete pod <podname> <podname>

If we check the pods again, we can observe that the number of replicas we specified (in this case 3) is restored.

You can observe that two pods were created 77s ago, proving that they came up automatically after the deletion. This is Kubernetes auto-healing in action: the ReplicaSet behind the Deployment recreates pods to maintain the desired replica count.

To delete the deployment, we use:

kubectl delete -f deployment.yml

We can also observe that, along with the deployment, the pods it created are deleted.

Finally! You created a cluster and deployed a Django application on the worker node!


Written by Radheya Zunjur

Database Engineer At Harbinger | DevOps | Cloud Ops | Technical Writer
