Create a Rolling Update Kubernetes Deployment in 3 ways


Introduction

Kubernetes Deployments are one of the most useful features available in Kubernetes. In the past, we had to take long downtime windows to upgrade an application. Since Kubernetes came into the picture, that long wait has been reduced to zero downtime.

A Kubernetes Deployment provisions a set of pods through the ReplicaSet it manages. In real production, a single standalone pod won't help. To overcome that limitation, a Deployment provides more features and options by deploying a set of pods to match our requirements. For instance, if we need to spin up a WordPress website, we need Nginx and MariaDB, all of which can be described in Deployments.

Creating a Kubernetes Deployment

Creating a Kubernetes Deployment is as easy as running a single kubectl command with a few options and arguments. However, to customise more deployment options, we can have kubectl generate a YAML file for us and edit it before creating the Deployment.

Creating in a single command

Let's begin by creating our first Deployment with an older nginx image, nginx:1.17-perl; later we will upgrade it to nginx:1.18.0.

# kubectl create deployment linuxsys-deploy --image=nginx:1.17-perl

This will create a Kubernetes Deployment with 1 replica, without exposing any ports from the containers.
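If we also want the container port declared at creation time, newer kubectl versions accept a --port flag on create deployment. A sketch, assuming a reachable cluster and the same deployment name:

```shell
# Variant of the command above: also declare container port 80 at creation
# time, so the generated pod template includes a containerPort entry.
kubectl create deployment linuxsys-deploy --image=nginx:1.17-perl --port=80
```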

How do I change the image in Kubernetes deployment?

$ kubectl set image deployment/linuxsys-deploy nginx=nginx:1.19.0-perl --record
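To confirm the image actually changed, we can read it back from the live object. A sketch using jsonpath, assuming the single-container pod template created above:

```shell
# Print the image of the first container in the Deployment's pod template.
kubectl get deployment linuxsys-deploy \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```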

Generating Deployment YAML file

To generate the YAML file we can use --dry-run=client with -o yaml and redirect the output to a file. We can then modify the file to our requirements before creating the Deployment.

# kubectl create deployment linuxsys-deploy --image=nginx:1.17-perl --dry-run=client -o yaml > linuxsys-deploy.yaml

The output will be as shown below.

root@k8mas1:~# kubectl create deployment linuxsys-deploy --image=nginx:1.17-perl --dry-run=client -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: linuxsys-deploy
  name: linuxsys-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linuxsys-deploy
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: linuxsys-deploy
    spec:
      containers:
      - image: nginx:1.17-perl
        name: nginx
        resources: {}
status: {}
root@k8mas1:~# 

The third way to create a Deployment is the hard way, writing the YAML file from scratch, as we discussed in our previous guide.

Modifying Deployment YAML file

Let's modify the file to use 10 replicas, container port 80, and an update strategy of RollingUpdate with both maxSurge and maxUnavailable set to 25%. Once modified, it will look like the below.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: linuxsys-deploy
  name: linuxsys-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: linuxsys-deploy
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        app: linuxsys-deploy
    spec:
      containers:
      - image: nginx:1.17-perl
        name: nginx
        ports:
        - containerPort: 80
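With 10 replicas, the two 25% values translate into absolute pod counts: maxSurge is rounded up and maxUnavailable is rounded down. A quick shell sketch of that arithmetic:

```shell
# Percentage-based rollingUpdate values resolve against the replica count:
# maxSurge rounds up, maxUnavailable rounds down.
replicas=10
surge_pct=25
unavail_pct=25

max_surge=$(( (replicas * surge_pct + 99) / 100 ))   # ceil(10 * 0.25) = 3
max_unavail=$(( replicas * unavail_pct / 100 ))      # floor(10 * 0.25) = 2

echo "at most $(( replicas + max_surge )) pods during the rollout"   # 13
echo "at least $(( replicas - max_unavail )) pods kept running"      # 8
```

So during a rollout the Deployment may briefly run up to 13 pods, while never dropping below 8 of the old-plus-new pods.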

Creating Deployment from YAML file

Now, let's create the deployment from our YAML file using the kubectl command.

root@k8mas1:~# kubectl create -f linuxsys-deploy.yaml 
deployment.apps/linuxsys-deploy created
root@k8mas1:~#

Once created, it will take a bit of time to deploy all the pods.

root@k8mas1:~# kubectl get deployments linuxsys-deploy 
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
linuxsys-deploy   7/10    10           7           29s
root@k8mas1:~#

As we specified 10 replicas, the Deployment creates a ReplicaSet with the same count.

root@k8mas1:~# kubectl get replicasets
NAME                         DESIRED   CURRENT   READY   AGE
linuxsys-deploy-6bf7f5fb8b   10        10        10      50s
root@k8mas1:~#

Additionally, the pods created for the Deployment can be listed using get pods.

root@k8mas1:~# kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
linuxsys-deploy-6bf7f5fb8b-6ql92   1/1     Running   0          58s
linuxsys-deploy-6bf7f5fb8b-d8dx8   1/1     Running   0          57s
linuxsys-deploy-6bf7f5fb8b-f6l98   1/1     Running   0          57s
linuxsys-deploy-6bf7f5fb8b-glzgv   1/1     Running   0          57s
linuxsys-deploy-6bf7f5fb8b-jdp2n   1/1     Running   0          58s
linuxsys-deploy-6bf7f5fb8b-l447x   1/1     Running   0          57s
linuxsys-deploy-6bf7f5fb8b-m8zv5   1/1     Running   0          57s
linuxsys-deploy-6bf7f5fb8b-r84vn   1/1     Running   0          57s
linuxsys-deploy-6bf7f5fb8b-tw6rv   1/1     Running   0          57s
linuxsys-deploy-6bf7f5fb8b-zvw6b   1/1     Running   0          58s
root@k8mas1:~#

To list the same pods with their label shown as an extra column, use --label-columns=app.

root@k8mas1:~# kubectl get pods --label-columns=app
NAME                               READY   STATUS    RESTARTS   AGE     APP
linuxsys-deploy-6bf7f5fb8b-6ql92   1/1     Running   0          5m47s   linuxsys-deploy
linuxsys-deploy-6bf7f5fb8b-d8dx8   1/1     Running   0          5m46s   linuxsys-deploy
linuxsys-deploy-6bf7f5fb8b-f6l98   1/1     Running   0          5m46s   linuxsys-deploy
linuxsys-deploy-6bf7f5fb8b-glzgv   1/1     Running   0          5m46s   linuxsys-deploy
linuxsys-deploy-6bf7f5fb8b-jdp2n   1/1     Running   0          5m47s   linuxsys-deploy
linuxsys-deploy-6bf7f5fb8b-l447x   1/1     Running   0          5m46s   linuxsys-deploy
linuxsys-deploy-6bf7f5fb8b-m8zv5   1/1     Running   0          5m46s   linuxsys-deploy
linuxsys-deploy-6bf7f5fb8b-r84vn   1/1     Running   0          5m46s   linuxsys-deploy
linuxsys-deploy-6bf7f5fb8b-tw6rv   1/1     Running   0          5m46s   linuxsys-deploy
linuxsys-deploy-6bf7f5fb8b-zvw6b   1/1     Running   0          5m47s   linuxsys-deploy
root@k8mas1:~#
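Besides showing the label as a column, we can filter pods by that same label with a selector. A sketch, assuming the app=linuxsys-deploy label from the manifest above:

```shell
# List only the pods carrying the Deployment's label.
kubectl get pods -l app=linuxsys-deploy
```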

Verifying the Created Kubernetes Deployment

Describe the deployment to get more information. Here we should see everything about the Deployment: the replica state, the rolling update strategy, the images used, and so on.

root@k8mas1:~# kubectl describe deployment linuxsys-deploy 
Name:                   linuxsys-deploy
Namespace:              default
CreationTimestamp:      Sun, 24 May 2020 22:35:37 +0000
Labels:                 app=linuxsys-deploy
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=linuxsys-deploy
Replicas:               10 desired | 10 updated | 10 total | 10 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=linuxsys-deploy
  Containers:
   nginx:
    Image:        nginx:1.17-perl
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   linuxsys-deploy-6bf7f5fb8b (10/10 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  8m13s  deployment-controller  Scaled up replica set linuxsys-deploy-6bf7f5fb8b to 10
root@k8mas1:~# 

Scale Up and Scale Down

Scale with commands

The number of replicas in our Kubernetes Deployment can be scaled up on the fly. Let's verify the current replica count first.

root@k8mas1:~# kubectl get deployment linuxsys-deploy 
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
linuxsys-deploy   10/10   10           10          12m
root@k8mas1:~#

Let’s scale up the count to 30.

root@k8mas1:~# kubectl scale deployment --replicas=30 linuxsys-deploy 
deployment.apps/linuxsys-deploy scaled
root@k8mas1:~#

This will scale up the number of replicas in our Deployment. For more detail, we can use the describe option and check under the Events section.

root@k8mas1:~# kubectl get deployment linuxsys-deploy 
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
linuxsys-deploy   30/30   30           30          13m
root@k8mas1:~#

root@k8mas1:~# kubectl describe deployments linuxsys-deploy 
Name:                   linuxsys-deploy
Namespace:              default
CreationTimestamp:      Sun, 24 May 2020 22:35:37 +0000
Labels:                 app=linuxsys-deploy
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=linuxsys-deploy
Replicas:               30 desired | 30 updated | 30 total | 30 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=linuxsys-deploy
  Containers:
   nginx:
    Image:        nginx:1.17-perl
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   linuxsys-deploy-6bf7f5fb8b (30/30 replicas created)
Events:
  Type    Reason             Age                From                   Message
  ----    ------             ----               ----                   -------
  Normal  ScalingReplicaSet  40m                deployment-controller  Scaled up replica set linuxsys-deploy-6bf7f5fb8b to 10
  Normal  ScalingReplicaSet  23m                deployment-controller  Scaled down replica set linuxsys-deploy-6bf7f5fb8b to 10
  Normal  ScalingReplicaSet  19m                deployment-controller  Scaled up replica set linuxsys-deploy-6bf7f5fb8b to 20
  Normal  ScalingReplicaSet  11m (x2 over 27m)  deployment-controller  Scaled up replica set linuxsys-deploy-6bf7f5fb8b to 30
root@k8mas1:~#

To scale down, just replace the 30 with 20 or 10.

root@k8mas1:~# kubectl scale deployment --replicas=10 linuxsys-deploy 
deployment.apps/linuxsys-deploy scaled
root@k8mas1:~#

root@k8mas1:~# kubectl get deployment linuxsys-deploy 
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
linuxsys-deploy   10/10   10           10          17m
root@k8mas1:~#
 
root@k8mas1:~# kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
linuxsys-deploy-6bf7f5fb8b-6ql92   1/1     Running   0          17m
linuxsys-deploy-6bf7f5fb8b-d8dx8   1/1     Running   0          17m
linuxsys-deploy-6bf7f5fb8b-f6l98   1/1     Running   0          17m
linuxsys-deploy-6bf7f5fb8b-glzgv   1/1     Running   0          17m
linuxsys-deploy-6bf7f5fb8b-jdp2n   1/1     Running   0          17m
linuxsys-deploy-6bf7f5fb8b-l447x   1/1     Running   0          17m
linuxsys-deploy-6bf7f5fb8b-m8zv5   1/1     Running   0          17m
linuxsys-deploy-6bf7f5fb8b-r84vn   1/1     Running   0          17m
linuxsys-deploy-6bf7f5fb8b-tw6rv   1/1     Running   0          17m
linuxsys-deploy-6bf7f5fb8b-zvw6b   1/1     Running   0          17m
root@k8mas1:~#
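kubectl scale also accepts a --current-replicas precondition, which makes the scale conditional: the command fails instead of scaling if the live count differs. A sketch using the counts from above:

```shell
# Scale down to 10 only if the Deployment currently runs 30 replicas;
# otherwise kubectl exits with an error and leaves the count unchanged.
kubectl scale deployment linuxsys-deploy --current-replicas=30 --replicas=10
```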

Scale using YAML file

To scale using the existing Kubernetes deployment YAML file, edit the file and replace the number of replicas.

root@k8mas1:~# vim linuxsys-deploy.yaml
root@k8mas1:~#
root@k8mas1:~# grep "replicas" linuxsys-deploy.yaml
replicas: 20
root@k8mas1:~#

Once the file is modified, use the apply option.

root@k8mas1:~# kubectl apply -f linuxsys-deploy.yaml
deployment.apps/linuxsys-deploy configured
root@k8mas1:~#

List and verify the changes.

root@k8mas1:~# kubectl get deployments linuxsys-deploy 
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
linuxsys-deploy   20/20   20           20          20m
root@k8mas1:~#

Scale by editing Deployment

One more available option is to edit the Deployment directly, as shown below.

root@k8mas1:~# kubectl edit deployment linuxsys-deploy
deployment.apps/linuxsys-deploy edited
root@k8mas1:~#

Once we save and exit the editor, it will scale the replicas to the new count.

root@k8mas1:~# kubectl get deployment linuxsys-deploy 
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
linuxsys-deploy   30/30   30           30          28m
root@k8mas1:~# 

Thus, scaling up and down is super easy in Kubernetes Deployment.

Rolling Updates with Zero Downtime

It's time to begin with the truly cool thing about Kubernetes: there is no downtime for our application or web server while changing our code or replacing our Nginx version from nginx:1.17-perl to nginx:1.18.0.

Right now we are running nginx:1.17-perl. Before replacing the image, let's describe the deployment to record its current state.

root@k8mas1:~# kubectl describe deployments linuxsys-deploy 
Name:                   linuxsys-deploy
Namespace:              default
CreationTimestamp:      Sun, 24 May 2020 22:35:37 +0000
Labels:                 app=linuxsys-deploy
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               app=linuxsys-deploy
Replicas:               10 desired | 10 updated | 10 total | 10 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=linuxsys-deploy
  Containers:
   nginx:
    Image:        nginx:1.17-perl
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   linuxsys-deploy-6bf7f5fb8b (10/10 replicas created)
Events:
  Type    Reason             Age                   From                   Message
  ----    ------             ----                  ----                   -------
  Normal  ScalingReplicaSet  54m                   deployment-controller  Scaled up replica set linuxsys-deploy-6bf7f5fb8b to 10
  Normal  ScalingReplicaSet  37m                   deployment-controller  Scaled down replica set linuxsys-deploy-6bf7f5fb8b to 10
root@k8mas1:~#

Applying Rolling Update

To replace the nginx image version, we edit the YAML file (changing the image to nginx:1.18.0) and apply the changes.

root@k8mas1:~# kubectl apply -f linuxsys-deploy.yaml
deployment.apps/linuxsys-deploy configured
root@k8mas1:~#

Once applied, run rollout status to check the rolling update progress. As our StrategyType is RollingUpdate, it won't replace all the pods at once; instead, it replaces them in batches. At most 25% of the pods are unavailable at any moment during the rolling update, which means there is zero downtime for our application.

root@k8mas1:~# kubectl rollout status deployment linuxsys-deploy 
Waiting for deployment "linuxsys-deploy" rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "linuxsys-deploy" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 8 of 10 updated replicas are available...
Waiting for deployment "linuxsys-deploy" rollout to finish: 9 of 10 updated replicas are available...
deployment "linuxsys-deploy" successfully rolled out
root@k8mas1:~#
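If something looks wrong mid-rollout, the rollout can be paused for inspection and resumed afterwards. A sketch:

```shell
# Pause the rollout: no further pods are replaced until resumed.
kubectl rollout pause deployment linuxsys-deploy

# ...inspect pods, logs, endpoints...

# Resume: the RollingUpdate continues from where it stopped.
kubectl rollout resume deployment linuxsys-deploy
```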

Verify the changes by running the describe option and looking for the changed image version.

root@k8mas1:~# kubectl describe deployments linuxsys-deploy 
Name:                   linuxsys-deploy
Namespace:              default
CreationTimestamp:      Sun, 24 May 2020 22:35:37 +0000
Labels:                 app=linuxsys-deploy
Annotations:            deployment.kubernetes.io/revision: 6
Selector:               app=linuxsys-deploy
Replicas:               10 desired | 10 updated | 10 total | 10 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=linuxsys-deploy
  Containers:
   nginx:
    Image:        nginx:1.18.0
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   linuxsys-deploy-7598bfc (10/10 replicas created)
Events:
  Type    Reason             Age                   From                   Message
  ----    ------             ----                  ----                   -------
  Normal  ScalingReplicaSet  45m                   deployment-controller  Scaled down replica set linuxsys-deploy-6bf7f5fb8b to 10
  Normal  ScalingReplicaSet  42m                   deployment-controller  Scaled up replica set linuxsys-deploy-6bf7f5fb8b to 20
root@k8mas1:~# 

Rollback the Update

In case you need to roll back the changes, it's possible with the undo option.

root@k8mas1:~# kubectl rollout undo deployment linuxsys-deploy 
deployment.apps/linuxsys-deploy rolled back
root@k8mas1:~#
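undo without arguments rolls back to the previous revision. To target an older revision, rollout undo takes a --to-revision flag; the revision number below is illustrative and would come from kubectl rollout history:

```shell
# Roll back to a specific revision listed by `kubectl rollout history`.
kubectl rollout undo deployment linuxsys-deploy --to-revision=1
```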

Check the rollout status.

root@k8mas1:~# kubectl rollout status deployment linuxsys-deploy 
Waiting for deployment "linuxsys-deploy" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "linuxsys-deploy" rollout to finish: 8 of 10 updated replicas are available...
Waiting for deployment "linuxsys-deploy" rollout to finish: 9 of 10 updated replicas are available...
deployment "linuxsys-deploy" successfully rolled out
root@k8mas1:~# 

Once the rollout completes successfully, after a few seconds the nginx image version we used earlier is restored.

root@k8mas1:~# kubectl describe deployments linuxsys-deploy 
Name:                   linuxsys-deploy
Namespace:              default
CreationTimestamp:      Sun, 24 May 2020 22:35:37 +0000
Labels:                 app=linuxsys-deploy
Annotations:            deployment.kubernetes.io/revision: 7
Selector:               app=linuxsys-deploy
Replicas:               10 desired | 10 updated | 10 total | 10 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=linuxsys-deploy
  Containers:
   nginx:
    Image:        nginx:1.17-perl
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   linuxsys-deploy-6bf7f5fb8b (10/10 replicas created)
Events:
  Type    Reason             Age                  From                   Message
  ----    ------             ----                 ----                   -------
  Normal  ScalingReplicaSet  48m                  deployment-controller  Scaled down replica set linuxsys-deploy-6bf7f5fb8b to 10
  Normal  ScalingReplicaSet  44m                  deployment-controller  Scaled up replica set linuxsys-deploy-6bf7f5fb8b to 20
root@k8mas1:~#

Keeping history of Rolling Updates

To keep a history of rolling updates and downgrades, use the --record option when running kubectl commands. (Note that --record has since been deprecated in newer kubectl versions.)

root@k8mas1:~# kubectl apply -f linuxsys-deploy.yaml --record
deployment.apps/linuxsys-deploy configured
root@k8mas1:~#

We can then go through that history, provided we used --record for the changes to our Kubernetes Deployment.

root@k8mas1:~# kubectl rollout history deployment 
deployment.apps/linuxsys-deploy 
REVISION  CHANGE-CAUSE
7         
8         kubectl apply --filename=linuxsys-deploy.yaml --record=true
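The details of a single revision, including the pod template it would restore, can be inspected with --revision (using revision 8 from the listing above):

```shell
# Show the pod template recorded for revision 8 of the Deployment.
kubectl rollout history deployment linuxsys-deploy --revision=8
```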

Deleting a Kubernetes Deployment

Deleting a Kubernetes Deployment is straightforward.

root@k8mas1:~# kubectl delete deployments linuxsys-deploy
deployment.apps "linuxsys-deploy" deleted
root@k8mas1:~#

This will terminate all the pods and containers.

root@k8mas1:~# kubectl get deployments
No resources found in default namespace.
root@k8mas1:~#

root@k8mas1:~# kubectl get pods
No resources found in default namespace.
root@k8mas1:~#
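Since the Deployment was created from a YAML file, it can equally be deleted by pointing kubectl at that same file:

```shell
# Delete every resource defined in the manifest (here, just the Deployment).
kubectl delete -f linuxsys-deploy.yaml
```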

That's it, we have successfully completed our Kubernetes Deployment guide.

Conclusion

We have seen how to manage a Deployment end to end: creating, scaling, rolling updates, undoing an update, and much more. We will come up with other concepts similar to Kubernetes Deployments. Subscribe to our newsletter and share your valuable feedback in the comment section below.