Install Kubernetes Cluster with Ansible on Ubuntu in 5 minutes

Introduction

Installing a Kubernetes cluster with Ansible is the way to go when we need to automate the whole installation process and finish it in a few minutes. Let's go through the process step by step, so we become familiar with the entire setup and workflow.

If you need the manual installation steps, have a look at the guide How to Install and configure Kubernetes (k8s) on Ubuntu 18.04 LTS. For more Kubernetes-related articles, click here.

The playbook has been updated for containerd as the container runtime; jump to the containerd playbook below.


System Requirements

I’m using the below system configuration in my home lab.

OPERATING SYSTEM VERSION                  HOSTNAME                       IP ADDRESS    SYSTEM CPU  SYSTEM MEMORY  KUBEADM VERSION
Ubuntu 16.04.6 LTS or Ubuntu 18.04 LTS    k8mas1.linuxsysadmins.local    192.168.0.26  4 vCPU      8 GB           v1.19.0
Ubuntu 16.04.6 LTS or Ubuntu 18.04 LTS    k8nod1.linuxsysadmins.local    192.168.0.27  4 vCPU      8 GB           v1.19.0
Ubuntu 16.04.6 LTS or Ubuntu 18.04 LTS    k8nod2.linuxsysadmins.local    192.168.0.28  4 vCPU      8 GB           v1.19.0
CentOS Linux release 8.2.2004 (Core)      gateway.linuxsysadmins.local   192.168.0.16  1 vCPU      1 GB           NA

Setting Up Ansible Inventory

In the first place, decide which user is going to handle this installation and which user will manage the Kubernetes cluster. In my case, I’m about to use “ansible” as my user for both.


Moreover, I'm using my default Ansible host inventory file for this guide, with the hosts segregated into separate masters and workers groups so they can be targeted by separate playbooks.

$ cat /etc/ansible/hosts
[users]
k8mas1.linuxsysadmins.local
k8nod1.linuxsysadmins.local
k8nod2.linuxsysadmins.local

[masters]
master ansible_host=k8mas1.linuxsysadmins.local ansible_user=ansible

[workers]
worker1 ansible_host=k8nod1.linuxsysadmins.local ansible_user=ansible
worker2 ansible_host=k8nod2.linuxsysadmins.local ansible_user=ansible
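
Before running anything, it is worth confirming that Ansible can actually reach every host in these groups. A minimal pre-flight playbook (a sketch, not part of the original guide; any filename works) could look like this:

```yaml
# Pre-flight check (sketch): confirm Ansible can SSH to every node
# in the masters and workers groups before running the real playbooks.
---
- hosts: "masters, workers"
  gather_facts: no
  tasks:
    - name: Verify connectivity to each node.
      ping:
...
```

If any host is unreachable, fix SSH access or the inventory entry before continuing.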

Creating User Account

To install the Kubernetes cluster and manage it afterwards, let's create an account. The playbook below prompts for the username to be created on the remote servers.

---
- hosts: users
  become: yes

  vars_prompt:

   - name: "new_user"
     prompt: "Enter the account name to be created on the remote servers."
     private: no

  tasks:
    - name: creating the user {{ new_user }}.
      user:
        name: "{{ new_user }}"
        createhome: yes
        shell: /bin/bash
        append: yes
        state: present  

    - name: Create a dedicated sudo entry file for the user.
      file:
        path: "/etc/sudoers.d/{{ new_user }}"
        state: touch
        mode: '0600'

    - name: "Setting up Sudo without Password for user {{ new_user }}."
      lineinfile:
        dest: "/etc/sudoers.d/{{ new_user }}"
        line: '{{ new_user }}  ALL=(ALL) NOPASSWD: ALL'
        validate: 'visudo -cf %s'

    - name: Set authorized key for user {{ new_user }} by copying it from the current user.
      authorized_key:
         user: "{{ new_user }}"
         state: present
         key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/id_rsa.pub') }}"

    - name: Print the created user.
      shell: id "{{ new_user }}"
      register: new_user_created

    - debug:
        msg: "{{ new_user_created.stdout_lines[0] }}"
...

Save the above YAML in a file and run it as follows.

$ ansible-playbook create_user.yaml -k -K

The playbook creates the account, adds a sudoers entry allowing passwordless sudo to root, and copies the SSH authorized key from the Ansible host to the remote servers.
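
To double-check the result, a small follow-up play (a sketch; "ansibleuser1" is just the example account from the output shown here, replace it with whatever name you entered at the prompt) can confirm that passwordless sudo works on every host:

```yaml
# Sketch: log in as the newly created account and confirm it can
# escalate to root without a password prompt.
---
- hosts: users
  remote_user: ansibleuser1
  become: yes
  tasks:
    - name: Confirm passwordless sudo works for the new account.
      command: whoami
      register: sudo_check

    - debug:
        msg: "Escalated to {{ sudo_check.stdout }}"
...
```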

TASK [debug] ******************************************************************************************************
ok: [k8mas1.linuxsysadmins.local] => {
    "msg": "uid=1011(ansibleuser1) gid=1011(ansibleuser1) groups=1011(ansibleuser1)"
}
ok: [k8nod1.linuxsysadmins.local] => {
    "msg": "uid=1011(ansibleuser1) gid=1011(ansibleuser1) groups=1011(ansibleuser1)"
}
ok: [k8nod2.linuxsysadmins.local] => {
    "msg": "uid=1011(ansibleuser1) gid=1011(ansibleuser1) groups=1011(ansibleuser1)"
}

PLAY RECAP ********************************************************************************************************
k8mas1.linuxsysadmins.local : ok=7    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
k8nod1.linuxsysadmins.local : ok=7    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
k8nod2.linuxsysadmins.local : ok=7    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

[ansible@gateway ~]$ 

Install Kubernetes & Docker Packages

Right after creating the user, start installing and configuring the required packages on all the master and worker nodes. The playbook also disables swap and resolves dependencies, and at the end of the installation it reboots all the nodes.

---
- hosts: "masters, workers"
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh
  
  tasks:
   
   - name: Make the Swap inactive
     command: swapoff -a

   - name: Remove Swap entry from /etc/fstab.
     lineinfile:
       dest: /etc/fstab
       regexp: swap
       state: absent

   - name: Installing Prerequisites for Kubernetes
     apt: 
       name:
         - apt-transport-https
         - ca-certificates
         - curl
         - gnupg-agent
         - vim
         - software-properties-common
       state: present

   - name: Add Docker’s official GPG key
     apt_key:
       url: https://download.docker.com/linux/ubuntu/gpg
       state: present

   - name: Add Docker Repository
     apt_repository:
       repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
       state: present
       filename: docker
       mode: 0600

   - name: Install Docker Engine.
     apt: 
       name:
         - docker-ce
         - docker-ce-cli
         - containerd.io
       state: present

   - name: Enable service docker, and enable persistently
     service: 
       name: docker
       enabled: yes

   - name: Add Google official GPG key
     apt_key:
       url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
       state: present

   - name: Add Kubernetes Repository
     apt_repository:
       repo: deb http://apt.kubernetes.io/ kubernetes-xenial main 
       state: present
       filename: kubernetes
       mode: 0600

   - name: Installing Kubernetes Cluster Packages.
     apt: 
       name:
         - kubeadm
         - kubectl
         - kubelet
       state: present

   - name: Enable service kubelet, and enable persistently
     service: 
       name: kubelet
       enabled: yes

   - name: Reboot all the kubernetes nodes.
     reboot:
       post_reboot_delay: 10
       reboot_timeout: 40
       connect_timeout: 60
       test_command: uptime
...

Save the YAML to a file and run it.

$ ansible-playbook package_install.yaml -k -K

Output for your reference.

TASK [Reboot all the kubernetes nodes.] ***************************************************************************
changed: [worker2]
changed: [master]
changed: [worker1]

PLAY RECAP *******************************************************************************************************
master                     : ok=13   changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
worker1                    : ok=13   changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
worker2                    : ok=13   changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

[ansible@gateway ~]$ 

The next step is to set up the master node.

Setting up Kubernetes Master Server

It's now time to start the master server initialization. Before running the playbook, decide which pod network CIDR to use and which pod network add-on (and its network policy) you want to deploy.

Flannel Network

If you need to use Flannel as your network, use:

https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Calico Network

In case your choice is Calico, get it from here.

While writing this guide, we used v3.16.

https://docs.projectcalico.org/v3.16/manifests/calico.yaml

The list of release versions is here.

https://docs.projectcalico.org/releases

The latest version while updating this guide:

https://docs.projectcalico.org/v3.18/manifests/calico.yaml

A newer version may be out by tomorrow; find the latest at the URL below.

https://docs.projectcalico.org/manifests/calico.yaml

If you are looking to learn more about Calico, find it here: learn, test your skills, and get certified as a Calico Operator Level 1.

https://academy.tigera.io/course/certified-calico-operator-level-1/

Weave Network

For Weave, get the manifest from here.

https://cloud.weave.works/k8s/net

RBAC Manifest from Calico

Role-based access control

https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml

When prompted, supply the required network and RBAC manifest URLs.

---
- hosts: masters
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh

  vars_prompt:

   - name: "pod_network_cidr"
     prompt: "Enter the Pod Network CIDR, example: 192.168.100.0/24"
     private: no

   - name: "k8s_master_ip"
     prompt: "Enter the Apiserver advertise address, example: 192.168.0.26"
     private: no

   - name: "pod_network_manifest_file"
     prompt: "Enter the Pod network manifest file URL, Your choice could be flannel, weave or calico, etc."
     private: no

   - name: "rbac_manifest_file"
     prompt: "Enter the RBAC manifest file URL"
     private: no 

  tasks:

   - name: Initializing Kubernetes Cluster
     command: kubeadm init --pod-network-cidr "{{ pod_network_cidr }}"  --apiserver-advertise-address "{{ k8s_master_ip }}"
     run_once: true
     delegate_to: "{{ k8s_master_ip }}"

   - pause: seconds=30

   - name: Create directory for kube config.
     become_user: ansible
     become_method: sudo
     become: yes
     file: 
       path: /home/{{ ansible_user }}/.kube
       state: directory
       owner: "{{ ansible_user }}"
       group: "{{ ansible_user }}"
       mode: 0755

   - name: Copy /etc/kubernetes/admin.conf to user's home directory /home/{{ ansible_user }}/.kube/config.
     become_user: root
     become_method: sudo
     become: yes
     copy:
       src: /etc/kubernetes/admin.conf
       dest: /home/{{ ansible_user }}/.kube/config
       remote_src: yes
       owner: "{{ ansible_user }}"
       group: "{{ ansible_user }}"
       mode: '0644'

   - pause: seconds=10

   - name: Remove the cache directory.
     become_user: ansible
     become_method: sudo
     become: yes
     file: 
       path: /home/{{ ansible_user }}/.kube/cache
       state: absent

   - name: Create Pod Network & RBAC.
     become_user: ansible
     become_method: sudo
     become: yes
     command: "{{ item }}"
     with_items: 
        - kubectl apply -f {{ pod_network_manifest_file }}
        - kubectl apply -f {{ rbac_manifest_file }}

   - pause: seconds=30

   - name: Get the token for joining the nodes with Kubernetes master.
     shell: kubeadm token create  --print-join-command
     register: kubernetes_join_command

   - debug:
       msg: "{{ kubernetes_join_command.stdout }}"

   - name: Copy join command to local file.
     become: false
     local_action: copy content="{{ kubernetes_join_command.stdout_lines[0] }}" dest="/tmp/kubernetes_join_command" mode=0777
...

The long output is truncated below.

[ansible@gateway ~]$ ansible-playbook cluster_init.yaml -k -K
SSH password: 
BECOME password[defaults to SSH password]: 
Enter the Pod Network CIDR, example: 192.168.100.0/24: 192.168.100.0/24
Enter the Apiserver advertise address, example: 192.168.0.26: 192.168.0.26
Enter the Pod network manifest file URL, Your choice could be flannel, weave or calico, etc.: https://docs.projectcalico.org/v3.16/manifests/calico.yaml
Enter the RBAC manifest file URL: https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml

PLAY [masters] ***************************************************************************************************

TASK [debug] *****************************************************************************************************
ok: [master] => {
    "msg": "kubeadm join 192.168.0.26:6443 --token h6s1ox.kqqfpl5to1ga0bo6     --discovery-token-ca-cert-hash sha256:19c2bc5db8dca256f44e9a992c599c6455ce243148edf9b170d75b9f4b3b5712 "
}

TASK [Copy join command to local file.] **************************************************************************
changed: [master]

PLAY RECAP *******************************************************************************************************
master                     : ok=12   changed=6    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

[ansible@gateway ~]$ 

As before, save it in a file and run it.

$ ansible-playbook cluster_init.yaml -k -K

At the end of the master server setup, it will print the worker node join command.

Setting Up Worker Nodes

Once the master server setup is complete, join the worker nodes to the master by running the below playbook.

---
- hosts: workers
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh
       
  tasks:

   - name: Copy join command to worker nodes.
     become: yes
     become_method: sudo
     become_user: root
     copy:
       src: /tmp/kubernetes_join_command
       dest: /tmp/kubernetes_join_command
       mode: 0777   

   - name: Join the Worker nodes with the master.
     become: yes
     become_method: sudo
     become_user: root
     command: sh /tmp/kubernetes_join_command
     register: joined_or_not

   - debug:
       msg: "{{ joined_or_not.stdout }}"

- hosts: masters
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh
       
  tasks:

   - name: Configure kubectl command auto-completion.
     lineinfile:
       dest: /home/{{ ansible_user }}/.bashrc
       line: 'source <(kubectl completion bash)'
       insertafter: EOF
...

Output for reference.

TASK [debug] ****************************************************************************************************
ok: [worker1] => {
    "msg": "[preflight] Running pre-flight checks\n[preflight] Reading configuration from the cluster...\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Starting the kubelet\n[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the control-plane to see this node join the cluster."
}
ok: [worker2] => {
    "msg": "[preflight] Running pre-flight checks\n[preflight] Reading configuration from the cluster...\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Starting the kubelet\n[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the control-plane to see this node join the cluster."
}

PLAY [masters] **************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************
ok: [master]

TASK [Configure kubectl command auto completion.] ***************************************************************
changed: [master]

PLAY RECAP *******************************************************************************************************
master                     : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
worker1                    : ok=4    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
worker2                    : ok=4    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

[ansible@gateway ~]$ 

Finally, copy the worker node setup YAML, save it in a file named worker.yaml, and run it.

$ ansible-playbook worker.yaml -k -K

That’s it, we are good with setting up the Kubernetes cluster.
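
Before moving on, a quick sanity check never hurts. The play below is an optional sketch, not part of the original playbooks: it keeps polling kubectl from the master until every node reports Ready, assuming kubectl was configured for the "ansible" user by the cluster_init playbook.

```yaml
# Optional verification (sketch): retry until all nodes are Ready.
---
- hosts: masters
  remote_user: ansible
  tasks:
    - name: Wait until every node reports Ready.
      shell: kubectl get nodes --no-headers | awk '$2 != "Ready" {exit 1}'
      register: nodes_ready
      retries: 10
      delay: 30
      until: nodes_ready.rc == 0
...
```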

1 Playbook to Install Kubernetes cluster with Ansible

If you are interested in putting all the plays into a single playbook, you can do so as shown below.

This guide was updated on 11th December 2020, replacing Docker with containerd.

Playbook with Containerd as Container Runtimes

To install and configure the cluster with containerd instead of Docker:

---
- hosts: "masters, workers"
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh
  
  tasks:
   
   - name: Make the Swap inactive
     command: swapoff -a

   - name: Remove Swap entry from /etc/fstab.
     lineinfile:
       dest: /etc/fstab
       regexp: swap
       state: absent

   - name: Create an empty file for the containerd module.
     copy:
       content: ""
       dest: /etc/modules-load.d/containerd.conf
       force: no

   - name: Configure module for containerd.
     blockinfile:
       path: /etc/modules-load.d/containerd.conf 
       block: |
            overlay
            br_netfilter

   - name: Create an empty file for the Kubernetes sysctl params.
     copy:
       content: ""
       dest: /etc/sysctl.d/99-kubernetes-cri.conf
       force: no

   - name: Configure sysctl params for Kubernetes.
     lineinfile:
       path: /etc/sysctl.d/99-kubernetes-cri.conf 
       line: "{{ item }}"
     with_items:
       - 'net.bridge.bridge-nf-call-iptables  = 1'
       - 'net.ipv4.ip_forward                 = 1'
       - 'net.bridge.bridge-nf-call-ip6tables = 1'

   - name: Apply sysctl params without reboot.
     command: sysctl --system

   - name: Installing Prerequisites for Kubernetes
     apt: 
       name:
         - apt-transport-https
         - ca-certificates
         - curl
         - gnupg-agent
         - vim
         - software-properties-common
       state: present

   - name: Add Docker’s official GPG key
     apt_key:
       url: https://download.docker.com/linux/ubuntu/gpg
       state: present

   - name: Add Docker Repository
     apt_repository:
       repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
       state: present
       filename: docker
       update_cache: yes

   - name: Install containerd.
     apt: 
       name:
         - containerd.io
       state: present

   - name: Configure containerd.
     file:
       path: /etc/containerd
       state: directory

   - name: Configure containerd.
     shell: /usr/bin/containerd config default > /etc/containerd/config.toml

   - name: Enable containerd service, and start it.
     systemd: 
       name: containerd
       state: restarted
       enabled: yes
       daemon-reload: yes

   - name: Add Google official GPG key
     apt_key:
       url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
       state: present

   - name: Add Kubernetes Repository
     apt_repository:
       repo: deb http://apt.kubernetes.io/ kubernetes-xenial main 
       state: present
       filename: kubernetes
       mode: 0600

   - name: Installing Kubernetes Cluster Packages.
     apt: 
       name:
         - kubeadm
         - kubectl
         - kubelet
       state: present

   - name: Enable service kubelet, and enable persistently
     service: 
       name: kubelet
       enabled: yes

   - name: Reboot all the kubernetes nodes.
     reboot:
       post_reboot_delay: 10
       reboot_timeout: 40
       connect_timeout: 60
       test_command: uptime

   - pause: seconds=20

- hosts: masters
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh

  vars_prompt:

   - name: "pod_network_cidr"
     prompt: "Enter the Pod Network CIDR, example: 192.168.100.0/24"
     private: no

   - name: "k8s_master_ip"
     prompt: "Enter the Apiserver advertise address, example: 192.168.0.26"
     private: no

   - name: "pod_network_manifest_file"
     prompt: "Enter the Pod network manifest file URL, Your choice could be flannel, weave or calico, etc."
     private: no

   - name: "rbac_manifest_file"
     prompt: "Enter the RBAC manifest file URL"
     private: no 


  tasks:

   - name: Initializing Kubernetes Cluster
     command: kubeadm init --pod-network-cidr "{{ pod_network_cidr }}"  --apiserver-advertise-address "{{ k8s_master_ip }}"
     run_once: true
     delegate_to: "{{ k8s_master_ip }}"

   - pause: seconds=30

   - name: Create directory for kube config.
     become_user: ansible
     become_method: sudo
     become: yes
     file: 
       path: /home/{{ ansible_user }}/.kube
       state: directory
       owner: "{{ ansible_user }}"
       group: "{{ ansible_user }}"
       mode: 0755

   - name: Copy /etc/kubernetes/admin.conf to user home directory /home/{{ ansible_user }}/.kube/config.
     become_user: root
     become_method: sudo
     become: yes
     copy:
       src: /etc/kubernetes/admin.conf
       dest: /home/{{ ansible_user }}/.kube/config
       remote_src: yes
       owner: "{{ ansible_user }}"
       group: "{{ ansible_user }}"
       mode: '0644'

   - pause: seconds=10

   - name: Remove the cache directory.
     become_user: ansible
     become_method: sudo
     become: yes
     file: 
       path: /home/{{ ansible_user }}/.kube/cache
       state: absent

   - name: Create Pod Network & RBAC.
     become_user: ansible
     become_method: sudo
     become: yes
     command: "{{ item }}"
     with_items: 
        - kubectl apply -f {{ pod_network_manifest_file }}
        - kubectl apply -f {{ rbac_manifest_file }}

   - pause: seconds=30

   - name: Get the token for joining the nodes with Kubernetes master.
     shell: kubeadm token create  --print-join-command
     register: kubernetes_join_command

   - debug:
       msg: "{{ kubernetes_join_command.stdout }}"

   - name: Copy join command to local file.
     become: false
     local_action: copy content="{{ kubernetes_join_command.stdout_lines[0] }}" dest="/tmp/kubernetes_join_command" mode=0777

- hosts: workers
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh
       
  tasks:

   - name: Copy join command to worker nodes.
     become: yes
     become_method: sudo
     become_user: root
     copy:
       src: /tmp/kubernetes_join_command
       dest: /tmp/kubernetes_join_command
       mode: 0777   

   - name: Join the Worker nodes with master.
     become: yes
     become_method: sudo
     become_user: root
     command: sh /tmp/kubernetes_join_command
     register: joined_or_not

   - debug:
       msg: "{{ joined_or_not.stdout }}"

- hosts: masters
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh
       
  tasks:

   - name: Configure kubectl command auto completion.
     lineinfile:
       dest: /home/{{ ansible_user }}/.bashrc
       line: 'source <(kubectl completion bash)'
       insertafter: EOF
...

Version of containerd

ansible@k8mas1:~$ containerd --version
containerd containerd.io 1.4.3 269548fa27e0089a8b8278fc4fc781d7f65a939b
ansible@k8mas1:~$


ansible@k8mas1:~$ kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
k8mas1   Ready    control-plane,master   45m   v1.20.0
k8nod1   Ready    <none>                 44m   v1.20.0
k8nod2   Ready    <none>                 44m   v1.20.0
ansible@k8mas1:~$


ansible@k8mas1:~$ kubectl get nodes -o wide
NAME     STATUS   ROLES                  AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
k8mas1   Ready    control-plane,master   107m   v1.20.0   192.168.0.26   <none>        Ubuntu 18.04.5 LTS   4.15.0-112-generic   containerd://1.4.3
k8nod1   Ready    <none>                 105m   v1.20.0   192.168.0.27   <none>        Ubuntu 18.04.5 LTS   4.15.0-112-generic   containerd://1.4.3
k8nod2   Ready    <none>                 105m   v1.20.0   192.168.0.28   <none>        Ubuntu 18.04.5 LTS   4.15.0-112-generic   containerd://1.4.3
ansible@k8mas1:~$ 

Playbook with Docker as Container Runtimes

The playbook below is the older version, with Docker as the container runtime.

---
- hosts: "masters, workers"
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh
  
  tasks:
   
   - name: Make the Swap inactive
     command: swapoff -a

   - name: Remove Swap entry from /etc/fstab.
     lineinfile:
       dest: /etc/fstab
       regexp: swap
       state: absent

   - name: Installing Prerequisites for Kubernetes
     apt: 
       name:
         - apt-transport-https
         - ca-certificates
         - curl
         - gnupg-agent
         - vim
         - software-properties-common
       state: present

   - name: Add Docker’s official GPG key
     apt_key:
       url: https://download.docker.com/linux/ubuntu/gpg
       state: present

   - name: Add Docker Repository
     apt_repository:
       repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
       state: present
       filename: docker
       mode: 0600

   - name: Install Docker Engine.
     apt: 
       name:
         - docker-ce
         - docker-ce-cli
         - containerd.io
       state: present

   - name: Enable service docker, and enable persistently
     service: 
       name: docker
       enabled: yes

   - name: Add Google official GPG key
     apt_key:
       url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
       state: present

   - name: Add Kubernetes Repository
     apt_repository:
       repo: deb http://apt.kubernetes.io/ kubernetes-xenial main 
       state: present
       filename: kubernetes
       mode: 0600

   - name: Installing Kubernetes Cluster Packages.
     apt: 
       name:
         - kubeadm
         - kubectl
         - kubelet
       state: present

   - name: Enable service kubelet, and enable persistently
     service: 
       name: kubelet
       enabled: yes

   - name: Reboot all the kubernetes nodes.
     reboot:
       post_reboot_delay: 10
       reboot_timeout: 40
       connect_timeout: 60
       test_command: uptime

   - pause: seconds=20

- hosts: masters
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh

  vars_prompt:

   - name: "pod_network_cidr"
     prompt: "Enter the Pod Network CIDR, example: 192.168.100.0/24"
     private: no

   - name: "k8s_master_ip"
     prompt: "Enter the Apiserver advertise address, example: 192.168.0.26"
     private: no

   - name: "pod_network_manifest_file"
     prompt: "Enter the Pod network manifest file URL, Your choice could be flannel, weave or calico, etc."
     private: no

   - name: "rbac_manifest_file"
     prompt: "Enter the RBAC manifest file URL"
     private: no 

  tasks:

   - name: Initializing Kubernetes Cluster
     command: kubeadm init --pod-network-cidr "{{ pod_network_cidr }}"  --apiserver-advertise-address "{{ k8s_master_ip }}"
     run_once: true
     delegate_to: "{{ k8s_master_ip }}"

   - pause: seconds=30

   - name: Create directory for kube config.
     become_user: ansible
     become_method: sudo
     become: yes
     file: 
       path: /home/{{ ansible_user }}/.kube
       state: directory
       owner: "{{ ansible_user }}"
       group: "{{ ansible_user }}"
       mode: 0755

   - name: Copy /etc/kubernetes/admin.conf to user home directory /home/{{ ansible_user }}/.kube/config.
     become_user: root
     become_method: sudo
     become: yes
     copy:
       src: /etc/kubernetes/admin.conf
       dest: /home/{{ ansible_user }}/.kube/config
       remote_src: yes
       owner: "{{ ansible_user }}"
       group: "{{ ansible_user }}"
       mode: '0644'

   - pause: seconds=10

   - name: Remove the cache directory.
     become_user: ansible
     become_method: sudo
     become: yes
     file: 
       path: /home/{{ ansible_user }}/.kube/cache
       state: absent

   - name: Create Pod Network & RBAC.
     become_user: ansible
     become_method: sudo
     become: yes
     command: "{{ item }}"
     with_items: 
        - kubectl apply -f {{ pod_network_manifest_file }}
        - kubectl apply -f {{ rbac_manifest_file }}

   - pause: seconds=30

   - name: Get the token for joining the nodes with Kubernetes master.
     shell: kubeadm token create  --print-join-command
     register: kubernetes_join_command

   - debug:
       msg: "{{ kubernetes_join_command.stdout }}"

   - name: Copy join command to local file.
     become: false
     local_action: copy content="{{ kubernetes_join_command.stdout_lines[0] }}" dest="/tmp/kubernetes_join_command" mode=0777

- hosts: workers
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh
       
  tasks:

   - name: Copy join command to worker nodes.
     become: yes
     become_method: sudo
     become_user: root
     copy:
       src: /tmp/kubernetes_join_command
       dest: /tmp/kubernetes_join_command
       mode: 0777   

   - name: Join the Worker nodes with the master.
     become: yes
     become_method: sudo
     become_user: root
     command: sh /tmp/kubernetes_join_command
     register: joined_or_not

   - debug:
       msg: "{{ joined_or_not.stdout }}"

- hosts: masters
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh
       
  tasks:

   - name: Configure kubectl command auto-completion.
     lineinfile:
       dest: /home/{{ ansible_user }}/.bashrc
       line: 'source <(kubectl completion bash)'
       insertafter: EOF
...

Running the all-in-one playbook works fine too. Output for your reference:

TASK [Configure kubectl command auto completion.] ****************************************************************
changed: [master]

PLAY RECAP *******************************************************************************************************
master                     : ok=28   changed=15   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
worker1                    : ok=17   changed=10   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
worker2                    : ok=17   changed=10   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

[ansible@gateway ~]$ 

If the installation fails at any stage, run the below command on all three nodes and re-run the playbook.

$ sudo kubeadm reset --ignore-preflight-errors=all
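
Rather than logging in to each node, the reset can itself be a short play. This is a sketch using the same inventory groups as above:

```yaml
# Sketch: reset kubeadm state on every node so the install
# playbooks can be re-run from a clean slate.
---
- hosts: "masters, workers"
  remote_user: ansible
  become: yes
  tasks:
    - name: Reset kubeadm state.
      command: kubeadm reset --force --ignore-preflight-errors=all
...
```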

That's it, we have successfully completed installing the Kubernetes cluster with Ansible.

Conclusion

To ease the cluster setup, we have automated it with Ansible playbooks. The manual step-by-step installation usually takes much longer; with Ansible, it can be completed within 5 minutes. Subscribe to our newsletter to receive more automation-related articles.

3 thoughts on “Install Kubernetes Cluster with Ansible on Ubuntu in 5 minutes”

  1. This is great stuff, thank you!

    In replicating it myself, I’ve tweaked your “Installing Kubernetes & Docker Packages” play to only report as “changed” when the swap has actually been disabled. It’s a bit hacky and I’m sure there’s a neater way but in case you’re interested:

    - name: Make the Swap inactive
      shell: |
        before=$(wc -l /proc/swaps)
        swapoff -a
        after=$(wc -l /proc/swaps)
        if [ "$before" != "$after" ]; then echo CHANGED; fi
      register: swapoff
      changed_when: '"CHANGED" in swapoff.stdout'
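
A tidier variant of the same idea (a sketch) is to lean on gathered facts instead of parsing /proc/swaps; the ansible_swaptotal_mb fact is already collected when gather_facts is enabled, as it is in the playbooks above:

```yaml
# Sketch: skip the task entirely when swap is already off, using
# the ansible_swaptotal_mb fact instead of a shell comparison.
- name: Make the Swap inactive
  command: swapoff -a
  when: ansible_swaptotal_mb > 0
```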
