Introduction to Kubernetes

Kubernetes (k8s) is a trending open-source container orchestration solution used globally in nearly every IT environment. Its initial release came in mid-2015, about five years ago, and it is licensed under the Apache License 2.0. Shortly after the release, Google donated it to the Cloud Native Computing Foundation (CNCF), which is part of The Linux Foundation. K8s provides container orchestration with features such as application deployment, automated rollouts, scaling and much more.

Docker is the most common container runtime used with Kubernetes; however, K8s supports other container runtimes as well.

A few of the components that make up a Kubernetes cluster

  • Nodes
  • Pods
  • Selectors
  • Controllers
  • Labels
  • Services
Component Descriptions
Nodes Nodes (also called minions) can be virtual machines or physical servers. Docker is installed on each node and runs several containers within the managed cluster. The cluster also relies on etcd, a distributed key-value store used by Kubernetes to store all cluster data, such as cluster status and metadata, and to exchange messages about the state of the cluster.
Pods As we have seen, nodes are virtual machines or physical servers; pods are groups of one or more containers scheduled together so they can share resources. Each pod gets its own IP address within the cluster. A pod can also have shared disk volumes that are accessible to all containers within the pod.
Selectors Selectors are queries that match objects against their labels.
Controllers Controllers manage a set of pods according to the desired configuration state. They handle replication and scaling, and if any pod fails they bring the cluster back to the desired state.
Labels Labels are key-value pairs attached to any object in the system.
Services A Service enables network access to a set of Pods in Kubernetes. Moreover, Services provide important features that are standardized across the cluster, like load balancing, service discovery between applications and support for zero-downtime application deployments. By default, a Service is only exposed inside the cluster; however, it can also be exposed outside the cluster as required.
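To see how labels, selectors and Services fit together, here is a minimal sketch of a Service manifest; the name my-app and the port numbers are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical service name
spec:
  selector:
    app: my-app           # selector: matches Pods carrying the label app=my-app
  ports:
    - protocol: TCP
      port: 80            # port exposed inside the cluster
      targetPort: 8080    # port the containers listen on
  type: ClusterIP         # default: reachable only from inside the cluster
```

Traffic sent to the Service on port 80 is load-balanced across all Pods whose labels match the selector.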

Host Information

We are using a total of three servers in our setup. One server acts as the master and the remaining two will be worker nodes.

192.168.107.100 | k8smaster
192.168.107.101 | k8sworker1
192.168.107.102 | k8sworker2

Configure Hostname

Set the hostname on each of the servers in our setup (run the matching command on its respective server).

$ sudo hostnamectl set-hostname k8smaster.linuxsysadmins.local
$ sudo hostnamectl set-hostname k8sworker1.linuxsysadmins.local
$ sudo hostnamectl set-hostname k8sworker2.linuxsysadmins.local

To avoid losing connectivity in case of any DNS issue, make sure to add the entries to the local hosts file.

$ sudo vim /etc/hosts
192.168.107.100 k8smaster.linuxsysadmins.local  k8smaster
192.168.107.101 k8sworker1.linuxsysadmins.local k8sworker1
192.168.107.102 k8sworker2.linuxsysadmins.local k8sworker2
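As a quick sanity check that the entries are well-formed, you can grep for each short name. The snippet below works against a sample copy so it is runnable anywhere; on the real servers, query the system resolver with "getent hosts k8smaster" instead.

```shell
# Sample copy of the host entries above; on a real server use /etc/hosts.
cat > /tmp/hosts.sample <<'EOF'
192.168.107.100 k8smaster.linuxsysadmins.local  k8smaster
192.168.107.101 k8sworker1.linuxsysadmins.local k8sworker1
192.168.107.102 k8sworker2.linuxsysadmins.local k8sworker2
EOF
# Each short name should appear on exactly one line.
for h in k8smaster k8sworker1 k8sworker2; do
  grep -cw "$h" /tmp/hosts.sample
done
```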

Configure Static IP Address

Make sure to assign a static IP address to each of our servers using netplan.

$ sudo vim /etc/netplan/01-netcfg.yaml

Edit the file, change “dhcp4: yes” to “dhcp4: no” and configure a static IP as shown below. After saving the file, apply the changes with “sudo netplan apply”.

sysadmins@k8smaster:~$ cat /etc/netplan/01-netcfg.yaml 
 # This file describes the network interfaces available on your system
 # For more information, see netplan(5).
 network:
   version: 2
   renderer: networkd
   ethernets:
     ens33:
       dhcp4: no
       addresses: 
         - 192.168.107.100/24
       gateway4: 192.168.107.2
       nameservers:
               addresses: [192.168.107.2, 8.8.8.8, 8.8.4.4]      
sysadmins@k8smaster:~$

Disable Swap

For performance reasons, we need to disable swap on all Kubernetes cluster nodes. The idea of Kubernetes is to pack instances as close to 100% utilized as possible, with all deployments pinned with CPU/memory limits. So if the scheduler sends a pod to a node, that node should never rely on swap; swapping would only slow things down.

$ sudo swapon -s
$ sudo swapoff -a
$ sudo swapon -s

Once disabled, verify the status.

sysadmins@k8smaster:~$ sudo swapon -s
 Filename       Type        Size     Used   Priority
 /dev/dm-1      partition   1003516  0      -2
sysadmins@k8smaster:~$ sudo swapoff -a
sysadmins@k8smaster:~$ 
sysadmins@k8smaster:~$ sudo swapon -s
sysadmins@k8smaster:~$

To make this persistent, remove the swap entry from /etc/fstab or disable it by commenting out the entry.

$ sudo vim /etc/fstab

Comment out the fstab entry with “#” and reboot the server.
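Commenting the entry can also be scripted. The snippet below is a sketch that operates on a sample copy first (the /dev/dm-1 device name is taken from the swapon output above); once you are happy with the result, run the same sed against /etc/fstab with sudo.

```shell
# Sample fstab copy; on the real server, target /etc/fstab with sudo.
printf '%s\n' \
  'UUID=0000-demo /      ext4 defaults 0 1' \
  '/dev/dm-1      none   swap sw       0 0' > /tmp/fstab.sample
# Prefix any uncommented swap line with "#", keeping a .bak backup.
sed -i.bak -E 's|^([^#].*[[:space:]]swap[[:space:]])|#\1|' /tmp/fstab.sample
cat /tmp/fstab.sample
```

Only the swap line gains a leading “#”; all other mounts are left untouched.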

Resolving Prerequisites for Kubernetes

Begin with installing the prerequisites for Kubernetes. Update the package cache and install a few dependencies.

$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates curl software-properties-common

Install Docker, then start the service and enable it so it persists across reboots.

$ sudo apt install docker.io
$ sudo systemctl enable docker
$ sudo systemctl start docker
$ sudo systemctl status docker

Here we are installing the Docker package available from the official Ubuntu repository.
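One optional tweak: the worker join step later warns that Docker's default “cgroupfs” cgroup driver differs from the “systemd” driver that kubelet recommends. Docker's documented configuration file, /etc/docker/daemon.json, can align the two. The sketch below writes the sample to /tmp so it is runnable anywhere; copy it into place and restart Docker to apply.

```shell
# Sample written to /tmp; on a real node, apply it with:
#   sudo cp /tmp/daemon.json /etc/docker/daemon.json && sudo systemctl restart docker
tee /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
```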

Installing Kubernetes (k8s)

Download and install the GPG key for Kubernetes packages.

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Create a repository file for Kubernetes containing the below URL.

$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

Once the source is added, update the cache by running apt-get update, then install the Kubernetes packages. Optionally, pin their versions afterwards with “sudo apt-mark hold kubelet kubeadm kubectl” so an unattended upgrade does not break the cluster.

$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
  • kubelet runs on all servers across the cluster; it starts the pods and containers.
  • kubeadm helps to bootstrap the cluster.
  • kubectl is the command-line utility used to talk to our cluster.

Once the installation is completed, check the installed version of kubeadm.

# kubeadm version
root@k8smaster:~# kubeadm version
 kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:11:18Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
root@k8smaster:~#

Initialize the Kubernetes Cluster

We are good so far. Let's initialize the cluster by running kubeadm with the init option, defining a pod network CIDR and the API server advertise address.

# kubeadm init --pod-network-cidr=192.168.108.0/24 --apiserver-advertise-address=192.168.107.100

Output for your reference

root@k8smaster:~# sudo kubeadm init --pod-network-cidr=192.168.108.0/24 --apiserver-advertise-address=192.168.107.100
 [init] Using Kubernetes version: v1.15.3
 [preflight] Running pre-flight checks
 [preflight] Pulling images required for setting up a Kubernetes cluster
 [preflight] This might take a minute or two, depending on the speed of your internet connection
 [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
 [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
 [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
 [kubelet-start] Activating the kubelet service
 [certs] Using certificateDir folder "/etc/kubernetes/pki"
 [certs] Generating "ca" certificate and key
 [certs] Generating "apiserver-kubelet-client" certificate and key
 [certs] Generating "apiserver" certificate and key
 [certs] apiserver serving cert is signed for DNS names [k8smaster.linuxsysadmins.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.107.100]
 [certs] Generating "front-proxy-ca" certificate and key
 [certs] Generating "front-proxy-client" certificate and key
 [certs] Generating "etcd/ca" certificate and key
 [certs] Generating "etcd/server" certificate and key
 [certs] etcd/server serving cert is signed for DNS names [k8smaster.linuxsysadmins.local localhost] and IPs [192.168.107.100 127.0.0.1 ::1]
 [certs] Generating "etcd/peer" certificate and key
 [certs] etcd/peer serving cert is signed for DNS names [k8smaster.linuxsysadmins.local localhost] and IPs [192.168.107.100 127.0.0.1 ::1]
 [certs] Generating "etcd/healthcheck-client" certificate and key
 [certs] Generating "apiserver-etcd-client" certificate and key
 [certs] Generating "sa" key and public key
 [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
 [kubeconfig] Writing "admin.conf" kubeconfig file
 [kubeconfig] Writing "kubelet.conf" kubeconfig file
 [kubeconfig] Writing "controller-manager.conf" kubeconfig file
 [kubeconfig] Writing "scheduler.conf" kubeconfig file
 [control-plane] Using manifest folder "/etc/kubernetes/manifests"
 [control-plane] Creating static Pod manifest for "kube-apiserver"
 [control-plane] Creating static Pod manifest for "kube-controller-manager"
 [control-plane] Creating static Pod manifest for "kube-scheduler"
 [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
 [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
 [kubelet-check] Initial timeout of 40s passed.
 [apiclient] All control plane components are healthy after 44.506738 seconds
 [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
 [kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
 [upload-certs] Skipping phase. Please see --upload-certs
 [mark-control-plane] Marking the node k8smaster.linuxsysadmins.local as control-plane by adding the label "node-role.kubernetes.io/master=''"
 [mark-control-plane] Marking the node k8smaster.linuxsysadmins.local as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
 [bootstrap-token] Using token: ig9g6d.kdc3wtk08wqiwz34
 [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
 [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
 [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
 [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
 [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
 [addons] Applied essential addon: CoreDNS
 [addons] Applied essential addon: kube-proxy
 Your Kubernetes control-plane has initialized successfully!
 To start using your cluster, you need to run the following as a regular user:
 mkdir -p $HOME/.kube
   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
   sudo chown $(id -u):$(id -g) $HOME/.kube/config
 You should now deploy a pod network to the cluster.
 Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
   https://kubernetes.io/docs/concepts/cluster-administration/addons/
 Then you can join any number of worker nodes by running the following on each as root:
 kubeadm join 192.168.107.100:6443 --token ig9g6d.kdc3wtk08wqiwz34 --discovery-token-ca-cert-hash sha256:fdb5b09bb65283c2b3c328e0839fd63b9c24b16c279115de3d55b1efacbb512b
root@k8smaster:~#

To start using your cluster, you need to run the following as a regular user

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If you missed copying the above token, it is possible to retrieve it by running the following command. (If the token has expired, a fresh join command can be generated with “kubeadm token create --print-join-command”.)

# kubeadm token list
root@k8smaster:~# kubeadm token list
 TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
 ig9g6d.kdc3wtk08wqiwz34   23h       2019-09-06T18:47:12+04:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
root@k8smaster:~#

We can notice the same token in the output of the above command.
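The --discovery-token-ca-cert-hash value can also be recomputed at any time: per the kubeadm documentation, it is the sha256 digest of the cluster CA's public key in DER form. The demo below runs the same pipeline against a throwaway self-signed CA so it is runnable anywhere; on the master, point it at /etc/kubernetes/pki/ca.crt instead.

```shell
# Throwaway CA for demonstration; use /etc/kubernetes/pki/ca.crt on the master.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
  -subj "/CN=demo-ca" -days 1 -out /tmp/ca.crt 2>/dev/null
# sha256 over the DER-encoded public key, as kubeadm computes it.
openssl x509 -pubkey -in /tmp/ca.crt -noout \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```

The result is a 64-character hex string; against the real cluster CA it matches the sha256:… value printed by kubeadm init.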

Deploy a Pod Network

We have already defined a pod network CIDR during init; now we need to deploy a network add-on so that pods can communicate across nodes. Depending on your environment, choose the appropriate network add-on. In our setup we are using flannel, which provides an overlay network for Kubernetes (k8s). Note that the flannel manifest defaults to the 10.244.0.0/16 pod network, so if you initialized with a different --pod-network-cidr (as we did above), adjust the net-conf.json in the flannel ConfigMap to match.

https://kubernetes.io/docs/concepts/cluster-administration/addons/
https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md

As discussed, we will use flannel in this guide.

Create the pod network by applying the flannel YAML manifest.

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Output for your reference

root@k8smaster:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
 podsecuritypolicy.policy/psp.flannel.unprivileged created
 clusterrole.rbac.authorization.k8s.io/flannel created
 clusterrolebinding.rbac.authorization.k8s.io/flannel created
 serviceaccount/flannel created
 configmap/kube-flannel-cfg created
 daemonset.apps/kube-flannel-ds-amd64 created
 daemonset.apps/kube-flannel-ds-arm64 created
 daemonset.apps/kube-flannel-ds-arm created
 daemonset.apps/kube-flannel-ds-ppc64le created
 daemonset.apps/kube-flannel-ds-s390x created
root@k8smaster:~#

It’s time to join the clients.

Verify the Nodes

Once done with all the above steps, verify the available nodes by running kubectl. To get more detailed information, use the -o option with the “wide” argument.

# kubectl get nodes
# kubectl get nodes -o wide

For your reference: currently we have only one node in our cluster, the master.

root@k8smaster:~# kubectl get nodes
 NAME                             STATUS   ROLES    AGE     VERSION
 k8smaster.linuxsysadmins.local   Ready    master   4m19s   v1.15.3
root@k8smaster:~#
 
root@k8smaster:~# kubectl get nodes -o wide
 NAME                             STATUS   ROLES    AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
 k8smaster.linuxsysadmins.local   Ready    master   4m35s   v1.15.3   192.168.107.100   <none>        Ubuntu 18.04.3 LTS   4.15.0-60-generic   docker://18.9.7
root@k8smaster:~#

While creating a Kubernetes cluster, the below system pods are created by default. To list the pods across all namespaces:

# kubectl get pods --all-namespaces
root@k8smaster:~# kubectl get pods --all-namespaces
 NAMESPACE     NAME                                                     READY   STATUS    RESTARTS   AGE
 kube-system   coredns-5c98db65d4-45twb                                 1/1     Running   0          3m37s
 kube-system   coredns-5c98db65d4-mzrg9                                 1/1     Running   0          3m37s
 kube-system   etcd-k8smaster.linuxsysadmins.local                      1/1     Running   0          3m
 kube-system   kube-apiserver-k8smaster.linuxsysadmins.local            1/1     Running   0          2m50s
 kube-system   kube-controller-manager-k8smaster.linuxsysadmins.local   1/1     Running   0          3m7s
 kube-system   kube-flannel-ds-amd64-q92hn                              1/1     Running   0          75s
 kube-system   kube-proxy-zbq8q                                         1/1     Running   0          3m37s
 kube-system   kube-scheduler-k8smaster.linuxsysadmins.local            1/1     Running   0          2m44s
root@k8smaster:~#

Client Setup

Join Workers (Clients) with K8s Master

At the end of initializing the cluster, we received a join token. Using that token, let us join our worker nodes to the master. Copying the join command from the master's output and pasting it into the worker node's terminal is enough to join a worker node to the master.

# kubeadm join 192.168.107.100:6443 --token ig9g6d.kdc3wtk08wqiwz34 --discovery-token-ca-cert-hash sha256:fdb5b09bb65283c2b3c328e0839fd63b9c24b16c279115de3d55b1efacbb512b

The output while joining a worker is short, as shown below.

root@k8sworker1:~# kubeadm join 192.168.107.100:6443 --token ig9g6d.kdc3wtk08wqiwz34 --discovery-token-ca-cert-hash sha256:fdb5b09bb65283c2b3c328e0839fd63b9c24b16c279115de3d55b1efacbb512b
 [preflight] Running pre-flight checks
     [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
 [preflight] Reading configuration from the cluster…
 [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
 [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
 [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
 [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
 [kubelet-start] Activating the kubelet service
 [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap…
 This node has joined the cluster:
 Certificate signing request was sent to apiserver and a response was received.
 The Kubelet was informed of the new secure connection details. 
 Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
root@k8sworker1:~#

We have completed the client-side configuration.

Print and Verify the Nodes

Once the worker nodes have joined, go back to the master and run the below command to verify.

# kubectl get nodes

We can notice that we now have all three nodes, including the master, and they are in the Ready state.

root@k8smaster:~# kubectl get nodes
 NAME                              STATUS   ROLES    AGE     VERSION
 k8smaster.linuxsysadmins.local    Ready    master   7m41s   v1.15.3
 k8sworker1.linuxsysadmins.local   Ready    <none>   51s     v1.15.3
 k8sworker2.linuxsysadmins.local   Ready    <none>   63s     v1.15.3
root@k8smaster:~#

For each node we add as a worker, we will notice additional kube-system pods (a kube-proxy and a flannel pod per node). In the output below, the new flannel pods are still in CrashLoopBackOff while the network add-on initializes on the freshly joined nodes.

# kubectl get pods --all-namespaces
root@k8smaster:~# kubectl get pods --all-namespaces
 NAMESPACE     NAME                                                     READY   STATUS             RESTARTS   AGE
 kube-system   coredns-5c98db65d4-45twb                                 1/1     Running            0          7m44s
 kube-system   coredns-5c98db65d4-mzrg9                                 1/1     Running            0          7m44s
 kube-system   etcd-k8smaster.linuxsysadmins.local                      1/1     Running            0          7m7s
 kube-system   kube-apiserver-k8smaster.linuxsysadmins.local            1/1     Running            0          6m57s
 kube-system   kube-controller-manager-k8smaster.linuxsysadmins.local   1/1     Running            0          7m14s
 kube-system   kube-flannel-ds-amd64-82trt                              0/1     CrashLoopBackOff   3          77s
 kube-system   kube-flannel-ds-amd64-m7glg                              0/1     CrashLoopBackOff   3          89s
 kube-system   kube-flannel-ds-amd64-q92hn                              1/1     Running            0          5m22s
 kube-system   kube-proxy-2wbvq                                         1/1     Running            0          77s
 kube-system   kube-proxy-b9xhf                                         1/1     Running            0          89s
 kube-system   kube-proxy-zbq8q                                         1/1     Running            0          7m44s
 kube-system   kube-scheduler-k8smaster.linuxsysadmins.local            1/1     Running            0          6m51s
root@k8smaster:~#

That’s it, we have completed installing and configuring a Kubernetes (k8s) cluster.

Conclusion

Kubernetes (k8s) is trending nowadays, and many production environments have moved to it. Start installing Kubernetes and let us know the outcome in the comment section below. Subscribe to our newsletter and stay tuned for more Kubernetes-related articles.
