Day 30: Kubernetes Architecture
In the last article, we finished Jenkins; now let's move forward on our learning journey. Welcome to Day 30 of our comprehensive journey into the world of Kubernetes! In this installment, we dive deep into the heart of Kubernetes: its architecture. Kubernetes is one of the most influential and widely adopted container orchestration systems, and understanding its underlying structure is paramount to harnessing its full potential.
What is Kubernetes? Describe it in your own words, and explain why we call it K8s.
Kubernetes is an open-source container orchestration platform that facilitates the deployment, management, and scaling of containerized applications.
It was originally developed by Google and later handed over to the Cloud Native Computing Foundation (CNCF) for further development and maintenance.
Kubernetes provides a powerful and flexible solution for automating the management of containerized workloads, ensuring that applications run consistently and reliably across various environments, be it on-premises or in the cloud.
Kubernetes is abbreviated as K8s because there are eight letters between the "K" and the "s".
What are the benefits of using k8s?
Using Kubernetes (K8s) comes with a wide range of benefits that make it an essential tool in modern software development and deployment. Here are some of the key advantages:
- Scalability: Kubernetes allows you to easily scale your applications up or down based on demand. Whether you have a few users or millions, K8s can handle the workload, automatically adding more instances of your app when needed and removing them when the demand decreases.
- High Availability: With Kubernetes, you can ensure that your applications are always available, even if some parts fail. It can automatically detect and replace failed containers or even move them to other healthy computers, making sure your users experience minimal downtime.
- Automated Rollouts and Rollbacks: Deploying new updates or changes to your application is made safe and straightforward with Kubernetes. It enables you to update your software without any interruptions to your users, and if something goes wrong, it can quickly roll back to the previous version.
- Resource Efficiency: Kubernetes optimizes the use of resources in your cluster, making sure your applications run efficiently. It can pack multiple applications on the same machine, utilizing resources effectively and reducing wastage.
- Declarative Configuration: With K8s, you describe the desired state of your application, and it takes care of making that state a reality. This declarative approach means you don’t have to worry about the nitty-gritty of how to get there; Kubernetes handles it for you (see the example manifest after this list).
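To make the declarative idea concrete, here is a minimal sketch of a Deployment manifest. The name `nginx-demo`, the label `app: nginx-demo`, and the replica count are arbitrary choices for illustration; the manifest simply states the desired state, and Kubernetes works to keep three replicas of the container running.

```yaml
# deployment.yaml - illustrative example; names, labels, and image tag are arbitrary
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 3                  # desired state: keep three Pods running
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx
          image: nginx:1.25    # assumed public image tag
          ports:
            - containerPort: 80
```

You would apply it with `kubectl apply -f deployment.yaml`. Scaling up or down is just a matter of editing `replicas` and re-applying, and `kubectl rollout undo deployment/nginx-demo` rolls back a bad update, which is how the scalability and rollback benefits above show up in practice.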
Explain the architecture of Kubernetes?
Refer to [this video](https://youtu.be/FqfoDUhzyDo).
A Kubernetes cluster mainly consists of worker machines, called nodes, and a control plane. Every cluster has at least one worker node. The kubectl CLI communicates with the control plane, and the control plane manages the worker nodes.
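You can see this relationship on any running cluster; assuming you already have a configured kubeconfig, these two commands show where the control plane is and which nodes it manages:

```bash
# Show the control plane endpoint kubectl is talking to
kubectl cluster-info

# List the nodes the control plane manages
kubectl get nodes -o wide
```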
Kubernetes Cluster Architecture
Kubernetes has a client-server architecture with a control plane (master) and worker nodes: in a basic setup, the control plane components are installed on a single Linux machine, while the worker nodes run on many separate Linux machines.
Kubernetes Components
Kubernetes is composed of a number of components, each of which plays a specific role in the overall system. These components can be divided into two categories:
- Worker nodes: the machines where our containers are actually deployed and run (inside Pods). Each Kubernetes cluster requires at least one worker node.
- Control plane: the set of components that manage the cluster; the worker nodes, and any Pods running on them, are under the control plane's management.
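On many clusters (for example, ones bootstrapped with kubeadm) the control plane components themselves run as Pods in the kube-system namespace, so you can inspect both categories with kubectl. This is a sketch; the exact component names vary by distribution:

```bash
# Control plane and add-on components (kube-apiserver, etcd, kube-scheduler,
# kube-controller-manager, etc. on kubeadm-style clusters)
kubectl get pods -n kube-system

# Worker nodes where application Pods are scheduled
kubectl get nodes
```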
What is the difference between kubectl and kubelet?
kubectl is a command-line tool for managing Kubernetes clusters, while kubelet is an agent running on each node that manages containers on that node based on instructions from the Kubernetes API server.
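The split is easy to see on a running cluster: kubectl is a client you run from your workstation against the API server, while the kubelet is a node-level agent, typically managed as a system service on each node. A rough sketch, assuming a systemd-based node such as one set up with kubeadm:

```bash
# kubectl: a client you run anywhere that has access to the API server
kubectl get pods -A

# kubelet: an agent on each node; on systemd-based nodes it usually runs
# as a service rather than being invoked directly
ssh <node> systemctl status kubelet
```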
Explain the role of the API server.
The API server in Kubernetes is the central control point that manages the cluster’s state, handles communication between components, and provides the API for users to interact with the cluster. It is responsible for authentication, authorization, validation, and managing the desired state of resources.
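Because every interaction goes through the API server's REST API, you can also talk to it directly. One low-friction way, assuming kubectl is already authenticated against the cluster, is to let kubectl forward raw requests or proxy the API locally:

```bash
# Send a raw REST request to the API server using kubectl's credentials
kubectl get --raw /api/v1/namespaces

# Or proxy the API server locally and use any HTTP client
kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/version
```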