A Deployment object defines the desired specification of your application instances, including the scaling requirements.
The key lies in the way the Deployment Controller manages Deployment objects. The controller transitions the existing state of your application instances to the new desired state as per your latest deployment specification, giving you an automated deployment process. The transition is done in a controlled way using a rolling update mechanism, which ensures minimal impact on the availability of the application during a deployment.
We will look at these with some examples in the sections below.
Key Features of Deployment
- Supports the deployment of scalable applications
- Manages the changes to the Pod template
- Manages the changes to the scaling requirements
- Supports the auto scaling of the applications
- Manages deployment history and enables you to roll back to earlier revisions
- Provides Rolling Update of the application instances
- Supports Pausing and Resuming of the rolling deployments
The following sections describe how K8s provides these features in more detail.
Deploying an Application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deploy
  labels:
    app: demo-deploy
spec:
  replicas: 4
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: k8s.gcr.io/echoserver:1.9
        ports:
        - containerPort: 8080
This is our sample deployment, named demo-deploy.
The key components of the deployment are:
- replicas – Defines the scaling
- template – Defines the desired specification of the pod instances
- selector – Defines the criteria to select and keep track of the instances deployed across the cluster nodes. Here, the label is ‘app: demo-app’
Let's deploy it and explore the resulting components:
$ kubectl apply -f demo-deploy.yaml --record=true
deployment.apps/demo-deploy created

$ kubectl get deployment,rs,pod
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/demo-deploy   4/4     4            4           42s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/demo-deploy-595d5466b5   4         4         4       42s

NAME                               READY   STATUS    RESTARTS   AGE
pod/demo-deploy-595d5466b5-5v8kj   1/1     Running   0          42s
pod/demo-deploy-595d5466b5-hhtrr   1/1     Running   0          42s
pod/demo-deploy-595d5466b5-jf4ks   1/1     Running   0          42s
pod/demo-deploy-595d5466b5-w6sgx   1/1     Running   0          42s
- The Deployment has created a ReplicaSet. Each deployment revision is managed by a separate ReplicaSet.
- The ReplicaSet used the template to create 4 pods, as per the scaling requirement.
- Each ReplicaSet shares the same pod-template-hash (here 595d5466b5) with its associated pods, visible in their names.
- The pods also carry an additional random suffix to avoid naming conflicts.
- The ReplicaSet uses the selector criteria to select and monitor the pods.
To look at the details of our deployment and its associated events we can use the following command. Let us look at the top part for now:
$ kubectl describe deployments | head -15
Name:                   demo-deploy
Namespace:              default
CreationTimestamp:      Wed, 08 Jul 2020 13:36:42 +0000
Labels:                 app=demo-deploy
Selector:               app=demo-app
Replicas:               4 desired | 4 updated | 4 total | 4 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=demo-app
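The RollingUpdateStrategy values above (25% max unavailable, 25% max surge) are the defaults; they can also be set explicitly in the manifest. A minimal sketch of the relevant section, assuming the same demo-deploy spec:

```yaml
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most 1 of the 4 existing pods may be down during a rollout
      maxSurge: 25%         # at most 1 extra pod may be created above the desired count
```

Tightening these values trades rollout speed for availability; for example, maxUnavailable: 0 forces every new pod to become ready before an old one is removed.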
Understanding Rolling Update
Now let us trigger a failed deployment by setting an invalid image on the deployment:
$ kubectl set image deployment.v1.apps/demo-deploy demo-app=k8s.gcr.io/echoserver:1.9.xy --record=true
deployment.apps/demo-deploy image updated
The status of the various components of the deployment is shown below. Recall the 'RollingUpdateStrategy' highlighted in the describe output above and try to match it against the outcome below:
$ kubectl get deployment,rs,pod
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/demo-deploy   3/4     2            3           6m58s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/demo-deploy-595d5466b5   3         3         3       6m58s
replicaset.apps/demo-deploy-fd86759fb    2         2         0       4s

NAME                               READY   STATUS             RESTARTS   AGE
pod/demo-deploy-595d5466b5-5v8kj   1/1     Terminating        0          6m58s
pod/demo-deploy-595d5466b5-hhtrr   1/1     Running            0          6m58s
pod/demo-deploy-595d5466b5-jf4ks   1/1     Running            0          6m58s
pod/demo-deploy-595d5466b5-w6sgx   1/1     Running            0          6m58s
pod/demo-deploy-fd86759fb-jjfr7    0/1     ImagePullBackOff   0          4s
pod/demo-deploy-fd86759fb-nqgwf    0/1     ImagePullBackOff   0          4s
- Only one pod (25% max unavailable of scale = 4) of the existing ReplicaSet is terminating.
- 2 pods (1 replacing the terminating pod and 1 allowed by the 25% max surge) are coming up in the new ReplicaSet.
- Since this is a rolling update and the new pods will never reach Running status due to the image issue, no more pods of the existing ReplicaSet will be taken down; the deployment stays stuck at this stage.
- Advantage: the application remains available with 75% of its pods because of the rolling update strategy.
Now fix the image version and redeploy as follows:

$ kubectl set image deployment.v1.apps/demo-deploy demo-app=k8s.gcr.io/echoserver:1.10 --record=true
Running the following commands shows that the deployment succeeded and the latest ReplicaSet has all 4 pods ready:
$ kubectl rollout status deployment.v1.apps/demo-deploy
Waiting for deployment "demo-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "demo-deploy" rollout to finish: 3 of 4 updated replicas are available...
deployment "demo-deploy" successfully rolled out

$ kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
demo-deploy-595d5466b5   0         0         0       11m
demo-deploy-5f6f56cd49   4         4         4       118s
demo-deploy-fd86759fb    0         0         0       5m2s
Exploring Deployment History and Rolling Back
Now, let's say the new version has a serious bug and we want to roll back. First, we check the history of the deployment:
$ kubectl rollout history deployment.v1.apps/demo-deploy
deployment.apps/demo-deploy
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=pod.yaml --record=true
2         kubectl set image deployment.v1.apps/demo-deploy demo-app=k8s.gcr.io/echoserver:1.9.xy --record=true
3         kubectl set image deployment.v1.apps/demo-deploy demo-app=k8s.gcr.io/echoserver:1.10 --record=true
The change cause has been recorded because of the --record=true flag used during the deployments.
A simple 'rollout undo' would take the deployment back to the previous revision, which here is the failed revision 2. To go back to an older version we specify the revision explicitly:
$ kubectl rollout undo deployment.v1.apps/demo-deploy --to-revision=1
deployment.apps/demo-deploy rolled back

$ kubectl rollout history deployment.v1.apps/demo-deploy
deployment.apps/demo-deploy
REVISION  CHANGE-CAUSE
2         kubectl set image deployment.v1.apps/demo-deploy demo-app=k8s.gcr.io/echoserver:1.9.xy --record=true
3         kubectl set image deployment.v1.apps/demo-deploy demo-app=k8s.gcr.io/echoserver:1.10 --record=true
4         kubectl apply --filename=pod.yaml --record=true
The history now shows that the rollback reused revision 1 and renamed it to revision 4. Moreover, as we can see below, it reused the saved 'pod-template-hash', the identity (595d5466b5) given by that deployment revision.
$ kubectl rollout history deployment.v1.apps/demo-deploy --revision=4
deployment.apps/demo-deploy with revision #4
Pod Template:
  Labels:       app=demo-app
                pod-template-hash=595d5466b5
  Annotations:  kubernetes.io/change-cause: kubectl apply --filename=pod.yaml --record=true
  Containers:
   demo-app:
    Image:      k8s.gcr.io/echoserver:1.9
Other Key Features – Pause, Resume, Scale, Autoscale
As we have seen so far, the Deployment manages the desired state changes and keeps track of each state in its history to support rollbacks.
The above examples showed how we can change the image version. For bigger changes we can also apply an updated version of the deployment file, keeping the name of the deployment the same:
# deploying an updated version of the app
$ kubectl apply -f demo-app-v-1.1.yaml
deployment.apps/demo-deploy configured
For scaling, autoscaling, pausing, resuming, and adding comments to the deployment history, here are some useful commands:
# To pause a deployment in the middle
kubectl rollout pause deployment.v1.apps/demo-deploy

# To resume a paused deployment
kubectl rollout resume deployment.v1.apps/demo-deploy

# To scale a deployment
kubectl scale deployment.v1.apps/demo-deploy --replicas=8

# To autoscale a deployment as the CPU usage increases beyond 90%
kubectl autoscale deployment.v1.apps/demo-deploy --min=10 --max=20 --cpu-percent=90

# To add a comment to the latest deployment history change-cause manually
kubectl annotate deployment.v1.apps/demo-deploy kubernetes.io/change-cause="successful roll-over to image-1.10"
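The kubectl autoscale command above creates a HorizontalPodAutoscaler object behind the scenes. The same result can be expressed declaratively; a minimal sketch using the autoscaling/v1 API, assuming a metrics server is running in the cluster:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: demo-deploy
spec:
  scaleTargetRef:          # points the autoscaler at our Deployment
    apiVersion: apps/v1
    kind: Deployment
    name: demo-deploy
  minReplicas: 10
  maxReplicas: 20
  targetCPUUtilizationPercentage: 90
```

Keeping the autoscaler as a manifest lets it be versioned alongside the deployment file rather than living only as an imperative command.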
The Deployment is used to manage the desired state changes of deployed applications, and it is best suited to long-running stateless applications.
The default restart policy of a pod in a deployment is Always.
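If you want to make this explicit, the restart policy sits in the pod template; note that Always is in fact the only value a Deployment permits. A minimal fragment:

```yaml
spec:
  template:
    spec:
      restartPolicy: Always   # the default, and the only value a Deployment allows
      containers:
      - name: demo-app
        image: k8s.gcr.io/echoserver:1.10
```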
The other key workload types supported by Kubernetes are:
- Job & CronJob : Meant for managing jobs which are meant to run only until completion.
- StatefulSet : Meant for managing long-running stateful applications.
- DaemonSet : Meant for managing applications which are required to be present on all the desired nodes. These do not need scaling requirements as in the other workloads, since one pod runs per node.
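As an illustration, a DaemonSet manifest looks much like a Deployment but has no replicas field, because one pod is scheduled per eligible node. A hedged sketch, reusing the echoserver image from the examples above (the name demo-daemon is hypothetical):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-daemon        # hypothetical name for illustration
spec:
  selector:
    matchLabels:
      app: demo-daemon
  template:                # no replicas field: one pod per eligible node
    metadata:
      labels:
        app: demo-daemon
    spec:
      containers:
      - name: demo-daemon
        image: k8s.gcr.io/echoserver:1.10
        ports:
        - containerPort: 8080
```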