Install Kubernetes Cluster on Ubuntu 22.04 LTS using kubeadm

A self-reference guide for setting up a vanilla Kubernetes cluster on top of 4 virtual machines.
The current setup runs the latest Kubernetes release, v1.28.1.

All new packages are hosted from pkgs.k8s.io. If you are looking to set up a Kubernetes cluster after the legacy repository deprecation notice, this guide will help you set one up.

https://kubernetes.io/blog/2023/08/31/legacy-package-repository-deprecation/
https://kubernetes.io/blog/2023/08/15/pkgs-k8s-io-introduction/

Kubernetes Server setup

8 vCPUs
16 GB memory for each virtual machine
300 GB x 2 disks

The OS version is Ubuntu 22.04 LTS Jammy

ansible@k8smaster1:~$ cat /etc/os-release 
PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
ansible@k8smaster1:~$ 

IPs are configured using Netplan

root@k8smaster1:~# cat /etc/netplan/00-installer-config.yaml 
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens18:
      addresses:
      - 192.168.0.20/24
      nameservers:
        addresses:
        - 192.168.0.101
        - 192.168.0.102
        search:
        - linuxsysadmins.lan
      routes:
      - to: default
        via: 192.168.0.1
  version: 2
root@k8smaster1:~#

Below are the 4 servers.

192.168.0.20/24 k8smaster1.linuxsysadmins.lan
192.168.0.21/24 k8sworker1.linuxsysadmins.lan
192.168.0.22/24 k8sworker2.linuxsysadmins.lan
192.168.0.23/24 k8sworker3.linuxsysadmins.lan

DNS resolution is handled by Red Hat IDM.
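
If a dedicated DNS server such as IDM is not available in your lab, a simple fallback is to add the same four hosts to /etc/hosts on every node (these entries mirror the server list above):

$ sudo tee -a /etc/hosts << EOF
192.168.0.20 k8smaster1.linuxsysadmins.lan k8smaster1
192.168.0.21 k8sworker1.linuxsysadmins.lan k8sworker1
192.168.0.22 k8sworker2.linuxsysadmins.lan k8sworker2
192.168.0.23 k8sworker3.linuxsysadmins.lan k8sworker3
EOF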

Disable swap

The first step is to disable swap and remove its entry from /etc/fstab.

$ sudo swapoff -a
$ sudo rm -fv /swap.img

Then remove or comment out the swap entry in /etc/fstab; a one-liner example follows.
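
A minimal sketch, assuming the default Ubuntu swap entry /swap.img; afterwards verify that no swap is active:

$ sudo sed -ri '/\sswap\s/ s/^#?/#/' /etc/fstab
$ swapon --show     # no output means swap is fully disabled
$ free -m           # the Swap row should show 0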

Update the System

Update the apt cache, run the upgrade, and then reboot.

$ sudo apt update
$ sudo apt upgrade -y
$ sudo shutdown -r now

Enabling Kernel Parameters

Enable IPv4 forwarding and let iptables see bridged traffic.
Enable the kernel modules required for the Kubernetes installation.

$ cat << EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

If you are not going to reboot before running the bootstrap, make sure to load the modules manually.

# modprobe overlay
# modprobe br_netfilter

Verify by listing the modules.

# lsmod | grep overlay
# lsmod | grep br_netfilter

Configure the sysctl parameters below so bridged traffic is passed to iptables and IPv4 forwarding is enabled.

$ cat << EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

Networking sysctl references

bridge-nf-call-iptables - BOOLEAN
  1 : pass bridged IPv4 traffic to iptables' chains.
  0 : disable this.
  Default: 1

bridge-nf-call-ip6tables - BOOLEAN
  1 : pass bridged IPv6 traffic to ip6tables' chains.
  0 : disable this.
  Default: 1

ip_forward - BOOLEAN
  0 - disabled (default)
  not 0 - enabled

  Forward Packets between interfaces.

  This variable is special, its change resets all configuration
  parameters to their default state (RFC1122 for hosts, RFC1812
  for routers)

Apply the kernel parameters at runtime without a reboot.

$ sudo sysctl --system
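
To confirm the values took effect, query them directly; all three should return 1:

$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward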

Installing and configuring the Container Runtime

Set up the Docker repository to install containerd.

Import the Key

$ sudo install -m 0755 -d /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$ sudo chmod a+r /etc/apt/keyrings/docker.gpg

Enable the Docker repository

$ echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Update the cache and install the container runtime

$ sudo apt-get update
$ sudo apt-get install -y ca-certificates curl gnupg containerd.io

Refer to the official Docker installation guide at https://docs.docker.com/engine/install/ubuntu/ for more details.

Configure Containerd

The packaged config is minimal and not suitable for Kubernetes as-is, so generate a full default configuration with the command below.
Make sure to redirect the output to the existing config file, /etc/containerd/config.toml.

# containerd config default | tee /etc/containerd/config.toml

Configuring a cgroup driver

In Kubernetes v1.28, you can enable automatic detection of the cgroup driver as an alpha feature.

Navigate to around line number 125 of config.toml and replace "false" with "true" for the SystemdCgroup option under the runc runtime:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]

Change SystemdCgroup = false to SystemdCgroup = true, either manually or with the sed command below.

# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
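
A quick check that the change landed (the exact line number may differ between containerd versions):

# grep -n 'SystemdCgroup' /etc/containerd/config.toml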

Enable and restart the containerd service.

# systemctl daemon-reload
# systemctl enable containerd.service 
# systemctl restart containerd.service 
# systemctl status containerd.service 
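
Optionally, confirm the daemon answers over its socket; the ctr client ships with containerd, so no extra packages are needed:

# ctr version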

Setup Kubernetes Repositories

Set up the Kubernetes repositories and install the required supporting packages.

$ sudo apt-get install -y apt-transport-https ca-certificates curl net-tools wget vim bash-completion tcpdump
$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
$ echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
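
Before installing, you can optionally confirm which package versions the new pkgs.k8s.io repository offers:

$ apt-cache policy kubeadm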

Install K8s Packages

Install the Cluster setup packages.

# apt-get install -y kubelet kubeadm kubectl
# apt-mark hold kubelet kubeadm kubectl

Verify the installed version

# kubeadm version
# kubectl version --client 

Enable the kubelet service persistently.

# systemctl enable kubelet.service 
# systemctl status kubelet.service 

Initializing and Bootstrapping the K8s Cluster

Before initializing the Kubernetes cluster, list and pre-pull the required images.

# kubeadm config images list
# kubeadm config images pull

Bootstrap with Kubeadm

root@k8smaster1:~# kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.28.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster1.linuxsysadmins.lan kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.20]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster1.linuxsysadmins.lan localhost] and IPs [192.168.0.20 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster1.linuxsysadmins.lan localhost] and IPs [192.168.0.20 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.003004 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8smaster1.linuxsysadmins.lan as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8smaster1.linuxsysadmins.lan as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: tfdwxd.376dt1k1ee8isg28
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.20:6443 --token tfdwxd.376dt1k1ee8isg28 \
  --discovery-token-ca-cert-hash sha256:6f610e7fd68415ec880a4046b0f66bb2e326d60a2867d7e6bcf9cc17937759c8 
root@k8smaster1:~# 

A workaround for an inconsistent container runtime error

During bootstrap, if you get the error "container runtime is inconsistent", update the sandbox image to registry.k8s.io/pause:3.9 as follows.

$ sudo vim /etc/containerd/config.toml

Replace sandbox_image = "registry.k8s.io/pause:3.6" with sandbox_image = "registry.k8s.io/pause:3.9".
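
Alternatively, the same change can be made non-interactively; a minimal sketch assuming the generated config still references pause:3.6:

$ sudo sed -i 's|registry.k8s.io/pause:3.6|registry.k8s.io/pause:3.9|' /etc/containerd/config.toml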

Restart the containerd service to make the change effective.

$ sudo systemctl restart containerd.service 
$ sudo systemctl status containerd.service  

Start using the cluster

Exit from the root user and run the commands below from the normal user account you plan to manage the cluster from.

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Setup auto complete

$ echo 'source <(kubectl completion bash)' >>~/.bashrc
$ source ~/.bashrc
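
Optionally, a short alias can reuse the same completion; the k alias here is just a personal preference, not a requirement:

$ echo 'alias k=kubectl' >>~/.bashrc
$ echo 'complete -o default -F __start_kubectl k' >>~/.bashrc
$ source ~/.bashrc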

Installing Network CNI

Now we should install and set up the network CNI before joining the worker nodes.
Choose and install any one of the network add-ons; here we deploy Flannel.

Deploying Flannel with kubectl

$ wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

If a custom Pod CIDR was used when bootstrapping, edit the manifest and replace the Pod network CIDR 10.244.0.0/16 with your own, as shown below.

$ vim kube-flannel.yml

Change 10.244.0.0/16 to xxx.xxx.xxx.xxx/xx (your custom Pod CIDR).
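
For example, a one-liner assuming a hypothetical custom CIDR of 10.100.0.0/16 was passed to kubeadm init (replace it with your actual CIDR):

$ sed -i 's|10.244.0.0/16|10.100.0.0/16|g' kube-flannel.yml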

$ kubectl create -f kube-flannel.yml 

Reference: https://kubernetes.io/docs/concepts/cluster-administration/addons/

ansible@k8smaster1:~$ kubectl get nodes -o wide
NAME                            STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8smaster1.linuxsysadmins.lan   Ready    control-plane   5m32s   v1.28.1   192.168.0.20   <none>        Ubuntu 22.04.3 LTS   5.15.0-83-generic   containerd://1.6.22
ansible@k8smaster1:~$ 

After installing the network add-on, pods in an error state will be terminated and the CoreDNS pods will be recreated automatically.
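
You can watch them settle with the -w (watch) flag and interrupt it once both CoreDNS pods report 1/1 Ready:

$ kubectl get pods -n kube-system -w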

Adding Worker nodes

It's time to start adding the worker nodes. As the root user, run the command below on all the worker nodes.

# kubeadm join 192.168.0.20:6443 --token 05upkj.cf7cmzy5n08hhx9s --discovery-token-ca-cert-hash sha256:b7e53a2a8f358c6530a95374dc6ea4ca8a4a155598e3a550eface3a5ba0bc521

Right after running the above command, we are done interacting with the worker nodes for now.
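
If the bootstrap token has expired (tokens are valid for 24 hours by default), generate a fresh join command on the control-plane node:

# kubeadm token create --print-join-command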

Check the cluster status

Check the cluster status by running the command below.

ansible@k8smaster1:~$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.0.20:6443
CoreDNS is running at https://192.168.0.20:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
ansible@k8smaster1:~$ 

Verifying the Bootstrap

List all available nodes in the Kubernetes cluster.

ansible@k8smaster1:~$ kubectl get nodes -o wide
NAME                            STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8smaster1.linuxsysadmins.lan   Ready    control-plane   5m32s   v1.28.1   192.168.0.20   <none>        Ubuntu 22.04.3 LTS   5.15.0-83-generic   containerd://1.6.22
k8smaster2.linuxsysadmins.lan   Ready    <none>          64s     v1.28.1   192.168.0.22   <none>        Ubuntu 22.04.3 LTS   5.15.0-83-generic   containerd://1.6.22
k8smaster3.linuxsysadmins.lan   Ready    <none>          60s     v1.28.1   192.168.0.23   <none>        Ubuntu 22.04.3 LTS   5.15.0-83-generic   containerd://1.6.22
k8sworker1.linuxsysadmins.lan   Ready    <none>          66s     v1.28.1   192.168.0.21   <none>        Ubuntu 22.04.3 LTS   5.15.0-83-generic   containerd://1.6.22
ansible@k8smaster1:~$ 

List the kube-system pods

ansible@k8smaster1:~$ kubectl get pods -A
NAMESPACE     NAME                                                    READY   STATUS    RESTARTS      AGE
kube-system   coredns-5dd5756b68-2frsb                                0/1     Running   3 (14s ago)   5m44s
kube-system   coredns-5dd5756b68-zt8s8                                0/1     Running   3 (17s ago)   5m44s
kube-system   etcd-k8smaster1.linuxsysadmins.lan                      1/1     Running   1             5m59s
kube-system   kube-apiserver-k8smaster1.linuxsysadmins.lan            1/1     Running   1             5m59s
kube-system   kube-controller-manager-k8smaster1.linuxsysadmins.lan   1/1     Running   0             6m1s
kube-system   kube-proxy-4bnqz                                        1/1     Running   0             94s
kube-system   kube-proxy-7hzlh                                        1/1     Running   0             90s
kube-system   kube-proxy-jq89s                                        1/1     Running   0             96s
kube-system   kube-proxy-xwvm5                                        1/1     Running   0             5m44s
kube-system   kube-scheduler-k8smaster1.linuxsysadmins.lan            1/1     Running   1             5m59s
ansible@k8smaster1:~$ 

We could see that all the pods related to Flannel are up and running.

ansible@k8smaster1:~$ kubectl get pods -A -o wide
NAMESPACE      NAME                                                    READY   STATUS    RESTARTS       AGE     IP             NODE                            NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-4hzpc                                   1/1     Running   0              47s     192.168.0.23   k8smaster3.linuxsysadmins.lan   <none>           <none>
kube-flannel   kube-flannel-ds-5cbq8                                   1/1     Running   0              47s     192.168.0.22   k8smaster2.linuxsysadmins.lan   <none>           <none>
kube-flannel   kube-flannel-ds-b297x                                   1/1     Running   0              47s     192.168.0.21   k8sworker1.linuxsysadmins.lan   <none>           <none>
kube-flannel   kube-flannel-ds-ghbgk                                   1/1     Running   0              47s     192.168.0.20   k8smaster1.linuxsysadmins.lan   <none>           <none>
kube-system    coredns-5dd5756b68-2frsb                                0/1     Running   3 (108s ago)   7m18s   192.168.0.5    k8smaster1.linuxsysadmins.lan   <none>           <none>
kube-system    coredns-5dd5756b68-zt8s8                                0/1     Running   4 (1s ago)     7m18s   192.168.0.4    k8smaster1.linuxsysadmins.lan   <none>           <none>
kube-system    etcd-k8smaster1.linuxsysadmins.lan                      1/1     Running   1              7m33s   192.168.0.20   k8smaster1.linuxsysadmins.lan   <none>           <none>
kube-system    kube-apiserver-k8smaster1.linuxsysadmins.lan            1/1     Running   1              7m33s   192.168.0.20   k8smaster1.linuxsysadmins.lan   <none>           <none>
kube-system    kube-controller-manager-k8smaster1.linuxsysadmins.lan   1/1     Running   0              7m35s   192.168.0.20   k8smaster1.linuxsysadmins.lan   <none>           <none>
kube-system    kube-proxy-4bnqz                                        1/1     Running   0              3m8s    192.168.0.22   k8smaster2.linuxsysadmins.lan   <none>           <none>
kube-system    kube-proxy-7hzlh                                        1/1     Running   0              3m4s    192.168.0.23   k8smaster3.linuxsysadmins.lan   <none>           <none>
kube-system    kube-proxy-jq89s                                        1/1     Running   0              3m10s   192.168.0.21   k8sworker1.linuxsysadmins.lan   <none>           <none>
kube-system    kube-proxy-xwvm5                                        1/1     Running   0              7m18s   192.168.0.20   k8smaster1.linuxsysadmins.lan   <none>           <none>
kube-system    kube-scheduler-k8smaster1.linuxsysadmins.lan            1/1     Running   1              7m33s   192.168.0.20   k8smaster1.linuxsysadmins.lan   <none>           <none>
ansible@k8smaster1:~$ 

Now create pods or deployments on the new cluster, for example with the quick smoke test below.
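
A quick smoke test, assuming the public nginx image is reachable from the cluster (the deployment name and NodePort service are only an example):

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get deployments,pods,svc -o wide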

That’s it.