Upgrade Kubernetes Cluster with zero downtime in 5 easy steps

Introduction

Upgrading a Kubernetes cluster is a requirement you can expect in any production environment, and everyone is happier when production sees no downtime.

Today let's upgrade my home lab Kubernetes cluster, which was built with kubeadm. The lab runs on three virtual machines, but the same steps apply to any number of nodes in a critical production environment. The cluster can be upgraded with zero downtime by shifting the workload from one node to another.


There are only a few steps to carry out to complete a version upgrade, and the upgrade path should be one minor version at a time. For instance, if the latest available version is v1.18 and the setup is running v1.15, it can only be upgraded one release ahead, to v1.16. First we upgrade the master node, then the remaining worker nodes. Let's proceed with the upgrade.
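
As a quick check, assuming a Debian/Ubuntu host with the Kubernetes apt repository configured (as used throughout this guide), the running version and the package versions on offer can be listed like this:

$ kubectl version --short        # client and server versions currently running
$ apt-cache madison kubeadm      # kubeadm package versions available from the apt repository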


Additional Prerequisites

If you are running in an isolated environment, it is worth looking at how to set up an apt-cache server for local use. I run a local apt-cache server in my home lab, which saves a lot of bandwidth and speeds up the upgrade because the packages are not downloaded from the internet for every node.
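
For reference, a minimal way to point apt at such a cache is an apt proxy snippet. This is a sketch assuming apt-cacher-ng listening on 192.168.0.20:3142, the address visible in the package outputs later in this guide:

$ echo 'Acquire::http::Proxy "http://192.168.0.20:3142";' | sudo tee /etc/apt/apt.conf.d/01proxy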

Upgrading Master Node

Preparing for Upgrade

The current version is v1.17.11; let's plan to upgrade the cluster to 1.18.8-00. To check the running version, run the following commands.

# kubectl get nodes
# kubeadm version
root@k8mas1:~# kubectl get nodes 
NAME                          STATUS   ROLES    AGE   VERSION
k8mas1.linuxsysadmins.local   Ready    master   60m   v1.17.11
k8nod1.linuxsysadmins.local   Ready    <none>   59m   v1.17.11
k8nod2.linuxsysadmins.local   Ready    <none>   58m   v1.17.11
root@k8mas1:~#

root@k8mas1:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.11", GitCommit:"ea5f00d93211b7c80247bf607cfa422ad6fb5347", GitTreeState:"clean", BuildDate:"2020-08-13T15:17:52Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
root@k8mas1:~#

We are running a few deployments and other pods; in any case, they should not be disturbed during the cluster version upgrade.

ansible@k8mas1:~$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
app-logistic-7d84bdb669-9j4xq     1/1     Running   0          9h
app-logistic-7d84bdb669-g2c4s     1/1     Running   0          9h
app-logistic-7d84bdb669-zz5tw     1/1     Running   0          9h
prod-web-srv01-6bd997976c-9dgpw   1/1     Running   0          9h
webserver                         1/1     Running   0          9h
ansible@k8mas1:~$ 
ansible@k8mas1:~$ kubectl get deployments.apps 
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
app-logistic     3/3     3            3           9h
prod-web-srv01   1/1     1            1           9h
ansible@k8mas1:~$ 
ansible@k8mas1:~$ kubectl get svc
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
app-logistic   ClusterIP   10.100.142.100   <none>        80/TCP    9h
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP   10h
ansible@k8mas1:~$
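
Since zero downtime relies on workloads running multiple replicas, a PodDisruptionBudget is an optional safeguard that stops a drain from evicting too many replicas at once. The sketch below is not part of the original setup and assumes the app-logistic pods carry the label app=app-logistic:

ansible@k8mas1:~$ kubectl apply -f - <<EOF
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: app-logistic-pdb
spec:
  minAvailable: 2        # keep at least 2 of the 3 app-logistic replicas running during drains
  selector:
    matchLabels:
      app: app-logistic  # assumed label; adjust to match your deployment
EOF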

Let's see how to prepare the nodes and start our upgrade.

Searching for Available Upgrades

Let's search for the available upgrade using the apt search command.

ansible@k8mas1:~$ sudo apt search kubeadm
Sorting… Done
Full Text Search… Done
kubeadm/kubernetes-xenial 1.18.8-00 amd64 [upgradable from: 1.17.11-00]
Kubernetes Cluster Bootstrapping Tool
ansible@k8mas1:~$

The currently installed version is 1.17.11-00 and the available upgrade is 1.18.8-00.

To prevent the packages from being upgraded automatically, we had marked them as held. Let's unhold them so they can be upgraded to the latest available version.

ansible@k8mas1:~$ sudo apt-mark unhold kubeadm kubelet kubectl
Canceled hold on kubeadm.
Canceled hold on kubelet.
Canceled hold on kubectl.
ansible@k8mas1:~$
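
If you are unsure which packages are currently held, apt-mark can list them; with everything unheld the output should be empty:

ansible@k8mas1:~$ apt-mark showhold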

Now we are ready to start the upgrade.

Installing the Updates

Start by installing the new package versions with the apt install command. To pin a specific version, append "=major.minor.patch-revision", for example "=1.18.8-00". Upgrading kubelet is not part of this first step; we will perform it later.

ansible@k8mas1:~$ sudo apt install kubeadm=1.18.8-00 kubectl=1.18.8-00 -y
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following packages will be upgraded:
kubeadm kubectl
2 upgraded, 0 newly installed, 0 to remove and 154 not upgraded.
Need to get 17.0 MB of archives.
After this operation, 991 kB of additional disk space will be used.
Get:1 http://192.168.0.20:3142/apt.kubernetes.io kubernetes-xenial/main amd64 kubectl amd64 1.18.8-00 [8,827 kB]
Get:2 http://192.168.0.20:3142/apt.kubernetes.io kubernetes-xenial/main amd64 kubeadm amd64 1.18.8-00 [8,163 kB]
Fetched 17.0 MB in 1s (8,540 kB/s)
(Reading database … 61100 files and directories currently installed.)
Preparing to unpack …/kubectl_1.18.8-00_amd64.deb …
Unpacking kubectl (1.18.8-00) over (1.17.11-00) …
Preparing to unpack …/kubeadm_1.18.8-00_amd64.deb …
Unpacking kubeadm (1.18.8-00) over (1.17.11-00) …
Setting up kubectl (1.18.8-00) …
Setting up kubeadm (1.18.8-00) …
ansible@k8mas1:~$

Make sure the required version has been installed.

ansible@k8mas1:~$ sudo apt list kubeadm
Listing… Done
kubeadm/kubernetes-xenial,now 1.18.8-00 amd64 [installed]
N: There are 167 additional versions. Please use the '-a' switch to see them.
ansible@k8mas1:~$

The installed version can be identified from the above output.
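
Alternatively, kubeadm itself can print just its version string; after this install it is expected to report v1.18.8:

$ kubeadm version -o short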

Kubeadm Upgrade Plan

To start the upgrade, kubeadm provides a plan. Following the plan lets us proceed safely and smoothly.

It prints the current cluster version and the latest stable version available, lists the Kubernetes components that can be upgraded, and finally prints the command to apply the upgrade.

ansible@k8mas1:~$ sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.17.11
[upgrade/versions] kubeadm version: v1.18.8
[upgrade/versions] Latest stable version: v1.18.8
[upgrade/versions] Latest stable version: v1.18.8
[upgrade/versions] Latest version in the v1.17 series: v1.17.11
[upgrade/versions] Latest version in the v1.17 series: v1.17.11

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT        AVAILABLE
Kubelet     3 x v1.17.11   v1.18.8

Upgrade to the latest stable version:

COMPONENT            CURRENT    AVAILABLE
API Server           v1.17.11   v1.18.8
Controller Manager   v1.17.11   v1.18.8
Scheduler            v1.17.11   v1.18.8
Kube Proxy           v1.17.11   v1.18.8
CoreDNS              1.6.5      1.6.7
Etcd                 3.4.3      3.4.3-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.18.8

_____________________________________________________________________

ansible@k8mas1:~$

Let’s upgrade now.

Upgrade Kubernetes Cluster

Run the command we got from the above output. This will pull the required images and start upgrading the cluster.

ansible@k8mas1:~$ sudo kubeadm upgrade apply v1.18.8
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.18.8"
[upgrade/versions] Cluster version: v1.17.11
[upgrade/versions] kubeadm version: v1.18.8
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.8"...
Static pod: kube-apiserver-k8mas1.linuxsysadmins.local hash: 85ac2d193955cfa5384abc90b44d920d
Static pod: kube-controller-manager-k8mas1.linuxsysadmins.local hash: a25ad8b4e9809a46b4e82307a9fe5e0b
Static pod: kube-scheduler-k8mas1.linuxsysadmins.local hash: 40349d74337f170cb51a41e1a5157fae
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.8" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests908874508"
W0821 10:19:22.955051   89418 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-08-21-10-19-21/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8mas1.linuxsysadmins.local hash: 85ac2d193955cfa5384abc90b44d920d
Static pod: kube-apiserver-k8mas1.linuxsysadmins.local hash: 85ac2d193955cfa5384abc90b44d920d
Static pod: kube-apiserver-k8mas1.linuxsysadmins.local hash: fc25e05898b5aaa639a6b9873d38e935
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-08-21-10-19-21/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8mas1.linuxsysadmins.local hash: a25ad8b4e9809a46b4e82307a9fe5e0b
Static pod: kube-controller-manager-k8mas1.linuxsysadmins.local hash: d6c2ae1a188089d020fcb202502cb4a2
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-08-21-10-19-21/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8mas1.linuxsysadmins.local hash: 40349d74337f170cb51a41e1a5157fae
Static pod: kube-scheduler-k8mas1.linuxsysadmins.local hash: 11d568725920e6d8ffb50523909146aa
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.8". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
ansible@k8mas1:~$

That's it, we have upgraded the control plane. Let's list the nodes to verify the version.

ansible@k8mas1:~$ kubectl get nodes
NAME                          STATUS   ROLES    AGE   VERSION
k8mas1.linuxsysadmins.local   Ready    master   10h   v1.17.11
k8nod1.linuxsysadmins.local   Ready    <none>   10h   v1.17.11
k8nod2.linuxsysadmins.local   Ready    <none>   10h   v1.17.11
ansible@k8mas1:~$ 

We still see the older version because the kubelet has not been upgraded yet.

Install the required version of kubelet. The kubelet version should match the kubeadm version and must not be newer than the control plane.

ansible@k8mas1:~$ sudo apt install kubelet=1.18.8-00 -y
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be upgraded:
  kubelet
1 upgraded, 0 newly installed, 0 to remove and 153 not upgraded.
Need to get 0 B/19.4 MB of archives.
After this operation, 1,609 kB of additional disk space will be used.
(Reading database ... 61100 files and directories currently installed.)
Preparing to unpack .../kubelet_1.18.8-00_amd64.deb ...
Unpacking kubelet (1.18.8-00) over (1.17.11-00) ...
Setting up kubelet (1.18.8-00) ...
ansible@k8mas1:~$

Right after installing the updates, make sure to hold the kubeadm, kubelet and kubectl packages again. This prevents them from being upgraded automatically during a regular OS/distribution upgrade.

ansible@k8mas1:~$ sudo apt-mark hold kubeadm kubelet kubectl
kubeadm set on hold.
kubelet set on hold.
kubectl set on hold.
ansible@k8mas1:~$

Restart the kubelet service for the change to take effect.

$ sudo systemctl restart kubelet
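
On some systems the kubelet package ships updated systemd unit or drop-in files, so reloading systemd before the restart is a harmless extra precaution (an optional step, not shown in the original outputs):

$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet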

Once again, run the get command to print the version. Now we can see the master node upgraded to v1.18.8.

ansible@k8mas1:~$ kubectl get nodes 
NAME                          STATUS   ROLES    AGE   VERSION
k8mas1.linuxsysadmins.local   Ready    master   10h   v1.18.8
k8nod1.linuxsysadmins.local   Ready    <none>   10h   v1.17.11
k8nod2.linuxsysadmins.local   Ready    <none>   10h   v1.17.11
ansible@k8mas1:~$ 

That’s it for the master node part in our “Upgrade Kubernetes cluster” guide.


Upgrading Worker Nodes

Now it's time to start the upgrade on the worker nodes.

Before starting on the worker nodes, list the pods with $ kubectl get pods and verify which nodes they are currently running on. Use the -o wide option to include the node information.

ansible@k8mas1:~$ kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP                NODE                          NOMINATED NODE   READINESS GATES
app-logistic-7d84bdb669-9j4xq     1/1     Running   0          9h    192.168.88.15     k8nod2.linuxsysadmins.local   <none>           <none>
app-logistic-7d84bdb669-g2c4s     1/1     Running   0          9h    192.168.145.206   k8nod1.linuxsysadmins.local   <none>           <none>
app-logistic-7d84bdb669-zz5tw     1/1     Running   0          9h    192.168.145.207   k8nod1.linuxsysadmins.local   <none>           <none>
prod-web-srv01-6bd997976c-9dgpw   1/1     Running   0          10h   192.168.145.193   k8nod1.linuxsysadmins.local   <none>           <none>
webserver                         1/1     Running   0          10h   192.168.88.1      k8nod2.linuxsysadmins.local   <none>           <none>
ansible@k8mas1:~$

There are some pods running on the worker node. Let's drain the node, ignoring DaemonSet-managed pods with --ignore-daemonsets.

ansible@k8mas1:~$ kubectl drain k8nod1.linuxsysadmins.local --ignore-daemonsets 
node/k8nod1.linuxsysadmins.local already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-hwfqp, kube-system/kube-proxy-j5l2p
evicting pod default/app-logistic-7d84bdb669-g2c4s
evicting pod default/app-logistic-7d84bdb669-zz5tw
evicting pod default/prod-web-srv01-6bd997976c-9dgpw
evicting pod kube-system/coredns-66bff467f8-t64wt
pod/app-logistic-7d84bdb669-zz5tw evicted
pod/app-logistic-7d84bdb669-g2c4s evicted
pod/coredns-66bff467f8-t64wt evicted
pod/prod-web-srv01-6bd997976c-9dgpw evicted
node/k8nod1.linuxsysadmins.local evicted
ansible@k8mas1:~$ 
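
If the drain refuses to evict pods that use emptyDir volumes, the --delete-local-data flag (renamed --delete-emptydir-data in later kubectl releases) can be added; this is an optional variant and was not needed for this lab:

$ kubectl drain k8nod1.linuxsysadmins.local --ignore-daemonsets --delete-local-data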

After draining the node, scheduling is disabled on k8nod1.linuxsysadmins.local.

ansible@k8mas1:~$ kubectl get nodes 
NAME                          STATUS                     ROLES    AGE   VERSION
k8mas1.linuxsysadmins.local   Ready                      master   10h   v1.18.8
k8nod1.linuxsysadmins.local   Ready,SchedulingDisabled   <none>   10h   v1.17.11
k8nod2.linuxsysadmins.local   Ready                      <none>   10h   v1.17.11
ansible@k8mas1:~$

Once again verify where the pods are residing.

ansible@k8mas1:~$ kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP              NODE                          NOMINATED NODE   READINESS GATES
app-logistic-7d84bdb669-9j4xq     1/1     Running   0          10h   192.168.88.15   k8nod2.linuxsysadmins.local   <none>           <none>
app-logistic-7d84bdb669-t85t9     1/1     Running   0          20s   192.168.88.21   k8nod2.linuxsysadmins.local   <none>           <none>
app-logistic-7d84bdb669-vf2bk     1/1     Running   0          20s   192.168.88.20   k8nod2.linuxsysadmins.local   <none>           <none>
prod-web-srv01-6bd997976c-kvznd   1/1     Running   0          20s   192.168.88.22   k8nod2.linuxsysadmins.local   <none>           <none>
webserver                         1/1     Running   0          10h   192.168.88.1    k8nod2.linuxsysadmins.local   <none>           <none>
ansible@k8mas1:~$

Now we can see that no application pods are running on k8nod1.linuxsysadmins.local.

Installing Packages on the Worker Node

SSH into the worker node.

$ ssh ansible@192.168.0.27

Unhold the kubeadm, kubelet and kubectl packages.

ansible@k8nod1:~$ sudo apt-mark unhold kubeadm kubelet kubectl
Canceled hold on kubeadm.
Canceled hold on kubelet.
Canceled hold on kubectl.
ansible@k8nod1:~$ 

Install the same package versions using apt.

ansible@k8nod1:~$ sudo apt install kubeadm=1.18.8-00 kubectl=1.18.8-00 -y
[sudo] password for ansible: 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be upgraded:
  kubeadm kubectl
2 upgraded, 0 newly installed, 0 to remove and 154 not upgraded.
Need to get 17.0 MB of archives.
After this operation, 991 kB of additional disk space will be used.
Get:1 http://192.168.0.20:3142/apt.kubernetes.io kubernetes-xenial/main amd64 kubectl amd64 1.18.8-00 [8,827 kB]
Get:2 http://192.168.0.20:3142/apt.kubernetes.io kubernetes-xenial/main amd64 kubeadm amd64 1.18.8-00 [8,163 kB]
Fetched 17.0 MB in 0s (114 MB/s)
(Reading database ... 61100 files and directories currently installed.)
Preparing to unpack .../kubectl_1.18.8-00_amd64.deb ...
Unpacking kubectl (1.18.8-00) over (1.17.11-00) ...
Preparing to unpack .../kubeadm_1.18.8-00_amd64.deb ...
Unpacking kubeadm (1.18.8-00) over (1.17.11-00) ...
Setting up kubectl (1.18.8-00) ...
Setting up kubeadm (1.18.8-00) ...
ansible@k8nod1:~$
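
The upstream kubeadm procedure also runs kubeadm upgrade node on each worker at this point to refresh the local kubelet configuration; it is a quick extra step that the outputs in this walkthrough do not show:

ansible@k8nod1:~$ sudo kubeadm upgrade node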

Install kubelet to the same version.

ansible@k8nod1:~$ sudo apt install kubelet=1.18.8-00 -y
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be upgraded:
  kubelet
1 upgraded, 0 newly installed, 0 to remove and 153 not upgraded.
Need to get 19.4 MB of archives.
After this operation, 1,609 kB of additional disk space will be used.
Get:1 http://192.168.0.20:3142/apt.kubernetes.io kubernetes-xenial/main amd64 kubelet amd64 1.18.8-00 [19.4 MB]
Fetched 19.4 MB in 0s (121 MB/s) 
(Reading database ... 61100 files and directories currently installed.)
Preparing to unpack .../kubelet_1.18.8-00_amd64.deb ...
Unpacking kubelet (1.18.8-00) over (1.17.11-00) ...
Setting up kubelet (1.18.8-00) ...
ansible@k8nod1:~$

Hold the packages again once we are done with the upgrade.

ansible@k8nod1:~$ sudo apt-mark hold kubeadm kubelet kubectl
kubeadm set on hold.
kubelet set on hold.
kubectl set on hold.
ansible@k8nod1:~$

Restart the kubelet service.

$ sudo systemctl restart kubelet

Return to the master node by typing exit on the worker node k8nod1.linuxsysadmins.local.

Verify the Version

Once the kubelet service has restarted, exit from the worker node and run $ kubectl get nodes to check the version.

ansible@k8mas1:~$ kubectl get nodes 
NAME                          STATUS                     ROLES    AGE   VERSION
k8mas1.linuxsysadmins.local   Ready                      master   11h   v1.18.8
k8nod1.linuxsysadmins.local   Ready,SchedulingDisabled   <none>   11h   v1.18.8
k8nod2.linuxsysadmins.local   Ready                      <none>   11h   v1.17.11
ansible@k8mas1:~$ 

Mark Worker Node as Schedulable

After confirming the version, uncordon the worker node to mark it as schedulable again. This can be verified by running $ kubectl describe node k8nod1.linuxsysadmins.local and checking that "Unschedulable" shows false.

ansible@k8mas1:~$ kubectl uncordon k8nod1.linuxsysadmins.local
node/k8nod1.linuxsysadmins.local uncordoned
ansible@k8mas1:~$
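
A quick way to confirm the flag is cleared is to grep the node description for the Unschedulable field, which should now report false:

ansible@k8mas1:~$ kubectl describe node k8nod1.linuxsysadmins.local | grep -i unschedulable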

Once again, verify by listing the nodes. We have successfully completed the upgrade on worker node 1.

ansible@k8mas1:~$ kubectl get nodes 
NAME                          STATUS   ROLES    AGE   VERSION
k8mas1.linuxsysadmins.local   Ready    master   11h   v1.18.8
k8nod1.linuxsysadmins.local   Ready    <none>   11h   v1.18.8
k8nod2.linuxsysadmins.local   Ready    <none>   11h   v1.17.11
ansible@k8mas1:~$ 

Note that right after marking worker node 1 as schedulable, the pods do not immediately move back to it. They will land there when new pods are created or when the second worker node is marked unschedulable for its upgrade.

ansible@k8mas1:~$ kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP                NODE                          NOMINATED NODE   READINESS GATES
app-logistic-7d84bdb669-8k82m     1/1     Running   0          50s   192.168.145.210   k8nod1.linuxsysadmins.local   <none>           <none>
app-logistic-7d84bdb669-d64mm     1/1     Running   0          50s   192.168.145.211   k8nod1.linuxsysadmins.local   <none>           <none>
app-logistic-7d84bdb669-tq4sb     1/1     Running   0          50s   192.168.145.213   k8nod1.linuxsysadmins.local   <none>           <none>
prod-web-srv01-6bd997976c-wb8st   1/1     Running   0          50s   192.168.145.212   k8nod1.linuxsysadmins.local   <none>           <none>
ansible@k8mas1:~$
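
If you want to rebalance the pods sooner instead of waiting for new pods, one option (an extra step, not part of the original procedure) is to trigger a rolling restart of a deployment so its replicas are rescheduled across the schedulable nodes:

ansible@k8mas1:~$ kubectl rollout restart deployment app-logistic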

Upgrading the 2nd Worker Node

Follow the same steps we performed on worker node 1 to upgrade the remaining worker nodes. Once done, all the worker nodes should report the same version, as shown below.
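
As a condensed recap, the sequence for the second worker node looks roughly like this (a sketch of the steps above; adjust the node name and SSH target for your environment):

ansible@k8mas1:~$ kubectl drain k8nod2.linuxsysadmins.local --ignore-daemonsets
ansible@k8mas1:~$ ssh ansible@k8nod2.linuxsysadmins.local
ansible@k8nod2:~$ sudo apt-mark unhold kubeadm kubelet kubectl
ansible@k8nod2:~$ sudo apt install kubeadm=1.18.8-00 kubectl=1.18.8-00 kubelet=1.18.8-00 -y
ansible@k8nod2:~$ sudo apt-mark hold kubeadm kubelet kubectl
ansible@k8nod2:~$ sudo systemctl restart kubelet
ansible@k8nod2:~$ exit
ansible@k8mas1:~$ kubectl uncordon k8nod2.linuxsysadmins.local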

ansible@k8mas1:~$ kubectl get nodes 
NAME                          STATUS   ROLES    AGE   VERSION
k8mas1.linuxsysadmins.local   Ready    master   11h   v1.18.8
k8nod1.linuxsysadmins.local   Ready    <none>   11h   v1.18.8
k8nod2.linuxsysadmins.local   Ready    <none>   11h   v1.18.8
ansible@k8mas1:~$

ansible@k8mas1:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:10:16Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
ansible@k8mas1:~$
 

As worker node 2 was marked unschedulable during its upgrade, the pods now reside on worker node 1.

ansible@k8mas1:~$ kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP                NODE                          NOMINATED NODE   READINESS GATES
app-logistic-7d84bdb669-8k82m     1/1     Running   0          7m40s   192.168.145.210   k8nod1.linuxsysadmins.local   <none>           <none>
app-logistic-7d84bdb669-d64mm     1/1     Running   0          7m40s   192.168.145.211   k8nod1.linuxsysadmins.local   <none>           <none>
app-logistic-7d84bdb669-tq4sb     1/1     Running   0          7m40s   192.168.145.213   k8nod1.linuxsysadmins.local   <none>           <none>
prod-web-srv01-6bd997976c-wb8st   1/1     Running   0          7m40s   192.168.145.212   k8nod1.linuxsysadmins.local   <none>           <none>
ansible@k8mas1:~$ 

Verify that all our pods, deployments and services still look fine.

ansible@k8mas1:~$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
app-logistic-7d84bdb669-8k82m     1/1     Running   0          8m39s
app-logistic-7d84bdb669-d64mm     1/1     Running   0          8m39s
app-logistic-7d84bdb669-tq4sb     1/1     Running   0          8m39s
prod-web-srv01-6bd997976c-wb8st   1/1     Running   0          8m39s
ansible@k8mas1:~$

ansible@k8mas1:~$ kubectl get deployments.apps 
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
app-logistic     3/3     3            3           11h
prod-web-srv01   1/1     1            1           11h
ansible@k8mas1:~$
 
ansible@k8mas1:~$ kubectl get service
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
app-logistic   ClusterIP   10.100.142.100   <none>        80/TCP    11h
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP   11h
ansible@k8mas1:~$ 

That's it, we have successfully upgraded the Kubernetes cluster version without impacting any running applications.

Conclusion

Upgrading a Kubernetes cluster is a key requirement for a system administrator in any production environment. To benefit from the latest Kubernetes features, it is advisable to keep the cluster on an up-to-date version. By following the five steps for each node, we can complete the full upgrade process. For more Kubernetes topics, refer to the related articles from the author below, and subscribe to our newsletter to stay posted.