The Kubernetes cluster architecture mainly consists of a Master node and a set of Worker nodes. The Master is the controlling node: it manages the cluster and distributes tasks to the worker nodes. The worker nodes execute their assigned tasks and send regular status updates back to the master, enabling it to monitor the state of the cluster.
Before we get into the components of the cluster in more detail, let us take a quick look at the kind of requirements they manage, and at the add-on components that K8s provides for interacting with the cluster and submitting those requirements.
K8s Dashboard and the Kubectl
The K8s dashboard and kubectl are two such add-on components. The dashboard is a web-based utility, mainly useful for monitoring the various runtime objects in the K8s cluster.
Kubectl is a command line utility. It's the key tool for interacting with the K8s master.
Both of these tools talk to the K8s master through its REST API service, shown as the API Server component in the diagram.
Kube API Objects – The requirement templates
K8s comes with a flexible design that manages our servers, environments, applications, deployments, services and other deployment-related concerns as separate entities.
It allows us to express our requirements by configuring these entities in .yaml templates called Kube API Objects. Some of these entities are simple, while others are complex and refer to several other entities.
Below are some examples of these API objects. The first is a sample for creating a project environment. The second deploys my-app. And the last exposes the deployed instances as a service.
The comments added there explain the kind of instructions each object provides to Kubernetes.
```yaml
apiVersion: v1
kind: Namespace           # Hi K8s, I need a new environment (Namespace object)
metadata:
  name: my-project-dev    # Name it as my-project-dev
```
```yaml
apiVersion: apps/v1
kind: Deployment               # Please create a Deployment object for my-app
metadata:
  name: my-app-deployment      # Use this as the name of the object
spec:
  replicas: 3                  # Create 3 instances of my application
  selector:                    # And, use this selector to keep track of these instances
    matchLabels:               # Track instances whose labels match 'app: my-app'
      app: my-app
  template:                    # This template provides you the specification of the instances I need
    metadata:
      labels:                  # Assign these labels to my instances
        app: my-app
    spec:
      containers:
        - name: echo-container
          image: k8s.gcr.io/echoserver:1.4   # This is the IMAGE you need to deploy for creating my application instance
          ports:
            - containerPort: 8080            # Open http port on 8080
# Please use these instructions for now, I will update this
# and send you for a re-deployment when I am ready with my next version!
```
```yaml
apiVersion: v1
kind: Service             # Please create a service
metadata:
  name: my-app-service    # Name it as my-app-service
spec:
  selector:
    app: my-app           # Select all application instances with label 'app: my-app'
  ports:
    - protocol: TCP       # The protocol must be TCP, UDP or SCTP; HTTP traffic rides on TCP
      port: 80            # Expose this service on port 80
      targetPort: 8080    # Point to my application instances on port 8080.
# These are running at separate virtual IPs as assigned to their containing Pods. I know you know it :)
```
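The three examples above can also be tied together: a Deployment (or Service) can be placed into the environment created by the first example by adding a `namespace` field to its metadata. A minimal sketch (the `namespace` line is our addition, not part of the original examples):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  namespace: my-project-dev   # Place this Deployment inside the my-project-dev environment
spec:
  # ... same spec as in the my-app-deployment example above
```

Without an explicit namespace, K8s places objects in the `default` namespace.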
Now that we have seen what the requirements look like, let us look at the components in the Master and Worker nodes, and how they work together to fulfill these user requirements.
K8s Master – Creates & Monitors our System
We also refer to this as the Control Plane.
As we send our requirements to the K8s cluster, the Master node acts as the main contractor responsible for fulfilling them.
It divides the contract and distributes it among several worker nodes. The worker nodes work on their assigned parts and keep updating their status to the master. These regular status updates enable the Master to monitor its contract. And, in case a worker node fails to run its assigned load, the corresponding controller initiates the steps to re-assign it to a different node.
The following diagram shows how the Master & Worker nodes work in a feedback loop to keep the desired and actual system in sync.
The key components inside the Master are – controllers, the scheduler, the ETCD database & the Kube API server.
Kube API Server
It’s the RESTful service interface through which the master handles all its communication.
This is the interface for the master node to interact with the worker nodes and the outside world.
For example, the kubectl command line tool, the Kubernetes dashboard, and the kubelet on the worker nodes all interact with the master through this interface.
Controllers
These are the components holding the automation logic to fulfil our requests.
K8s has several controllers. Each of them fulfils the requirements coming in a specific kind of API object. The list below shows some of the key controllers in K8s.
- Node Controller
- Deployment Controller
- Replication Controller
- Job Controller
- Ingress Controller
- Service Controller
- Endpoints Controller
- StatefulSet Controller
The controllers carry the automation logic. They understand how to monitor and fulfil the requirements corresponding to their API objects. For instance,
- A JobController fulfills the requirements coming in API objects of kind Job. It understands how to run the specified number of worker Pods, in sequence or in parallel, until completion.
- A DeploymentController handles API objects of kind Deployment. It knows how to monitor and ensure that the specified number of application instances are up and running.
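As an illustration of the first point, a Job object handled by the JobController might look like the following sketch (the name, image and counts are illustrative, not from the original examples):

```yaml
apiVersion: batch/v1
kind: Job                    # Handled by the JobController
metadata:
  name: sample-batch-job     # Illustrative name
spec:
  completions: 5             # Run 5 Pods to completion in total
  parallelism: 2             # Run at most 2 of them at a time
  template:
    spec:
      containers:
        - name: worker
          image: busybox     # Illustrative image; runs a short task and exits
          command: ["sh", "-c", "echo processing && sleep 5"]
      restartPolicy: Never   # Job Pods must use Never or OnFailure
```

The JobController keeps launching Pods from this template until five of them have exited successfully, never running more than two at once.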
Scheduler
It’s like a facility manager, responsible for optimizing resource utilization across the nodes.
As we try to instantiate a Pod, K8s hands it over to the scheduler to find a suitable node. The scheduler then starts an evaluation process based on a number of built-in policies.
These policies evaluate the needs of the Pod, such as its memory, CPU and priority requirements. The scheduler then checks these needs against things like resource availability and resource distribution across the nodes. The selection of an optimal node is basically a two-step process:
- Filtering: the process of figuring out the eligible nodes, to identify the options.
- Scoring: the process of ranking the eligible nodes to figure out the optimal one.
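The scheduler’s inputs for filtering and scoring come largely from the Pod spec itself. A minimal sketch of a Pod declaring its resource needs (the name, values and priority class are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-aware-pod            # Illustrative name
spec:
  priorityClassName: high-priority    # Assumes such a PriorityClass exists in the cluster
  containers:
    - name: app
      image: k8s.gcr.io/echoserver:1.4
      resources:
        requests:          # Filtering: only nodes with this much free capacity are eligible
          memory: "256Mi"
          cpu: "250m"
        limits:            # Upper bound enforced at runtime on the chosen node
          memory: "512Mi"
          cpu: "500m"
```

During filtering, nodes without 256Mi of free memory and a quarter of a CPU are dropped; scoring then ranks the remaining nodes, for example by how evenly the placement spreads the load.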
ETCD
It’s the reference for both the desired state and the actual state of the K8s cluster.
ETCD is a consistent and highly-available key-value store. It’s the central store for various configurations related to the cluster, the nodes and the deployed applications.
Apart from this, K8s uses it to store the incoming API objects, which describe the desired state. It also stores the status updates from the various worker nodes, which tell us about the actual state across the cluster. Hence, ETCD acts as the single source of truth for the scheduler and controllers for their monitoring purposes.
Moreover, this database is also useful for restoration in case of a cluster-wide crash. Hence, it is important to take scheduled backups of it if we are using K8s for our production servers.
Nodes – Build and provide status on the assigned tasks
A K8s cluster usually has multiple worker nodes to deploy its workloads on. The Node Controller on the Master is responsible for monitoring the health of these nodes.
As discussed above, the master is responsible for distributing the workload among the worker nodes. This distribution mainly happens as the scheduler places new Pod instances.
Each node creates and manages its assigned Pods. As part of managing the workloads, the node keeps updating its status to the master node. In turn, this helps the corresponding controller to monitor and take the necessary actions to handle failures.
The key components of a node include – a kubelet, a kube-proxy and the container runtime, as shown.
Kubelet
The primary ‘node agent’, managing the key runtime activities of the node.
It’s the key agent that carries out a number of activities to manage the local Pods, containers and the node itself, and provides their status to the master.
Create, monitor and update Pod health: The agent ensures the Pods and their containers on the node are running healthy and as per their specifications. It manages their local fail-over restarts and updates their availability to the control plane.
Monitor and update Node status: Provides key status updates on the resource availability and health of the node.
Activities are configurable: We configure the agent’s activities when starting the node server. Besides this, K8s also allows us to dynamically change these configurations using KubeletConfiguration objects.
Some examples of the supported configurations are:
- fileCheckFrequency: How often to check the kubelet’s config files (such as static Pod manifests) for updates.
- nodeStatusUpdateFrequency: How often the kubelet posts node status to the master.
- volumeStatsAggPeriod: Specifies the interval at which the kubelet calculates and caches the disk usage for all pods and volumes.
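A KubeletConfiguration fragment setting the fields above might look like this sketch (the values shown are illustrative, not recommendations):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
fileCheckFrequency: 20s          # Check the kubelet's config files every 20 seconds
nodeStatusUpdateFrequency: 10s   # Post node status to the master every 10 seconds
volumeStatsAggPeriod: 60s        # Recalculate and cache volume disk usage every minute
```

Shorter intervals give the master a fresher picture of the node at the cost of more status traffic.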
Kube-Proxy
It’s the network proxy implementing the Service concept.
It manages the cluster IPs and the network routing rules.
The virtual network implemented through kube-proxy greatly simplifies the network routing involved in setting up application services. It helps us avoid many of the firewall requests, IP and port dependencies that we face in traditional development environments.
Container Runtime
This is where we instantiate the application instances from their images.
K8s supports multiple container runtimes. Apart from Docker, it supports CRI-O, containerd, Frakti, etc.