Pod Placement Strategies with Node Affinity in Kubernetes

Introduction

In Kubernetes, efficiently placing pods on nodes can significantly impact performance and resource utilization. Node affinity provides a powerful mechanism for controlling pod placement based on node attributes, enabling administrators to optimize workload distribution and enhance cluster efficiency. Let’s explore how node affinity strategies can be leveraged to achieve these goals effectively.

The goal is to deploy the web application pod on a worker node with a non-SSD disk type and run the DB server pod on an SSD-based worker node.

Creating Namespace

Create the necessary namespace to deploy the app and db pods.

$ kubectl create namespace web-app-ns
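
To confirm the namespace exists before moving on:

$ kubectl get namespace web-app-ns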

Labeling Nodes

Label the nodes as per the requirement. Here we label worker1 as the SSD node and worker2 as the non-SSD node.

$ kubectl label nodes k8swor1.linuxsysadmins.lan ssd=true
$ kubectl label nodes k8swor2.linuxsysadmins.lan ssd=false

Verify the node labels

[ansible@k8smas1 ~]$ kubectl describe nodes k8swor1.linuxsysadmins.lan | grep -i ssd
                    ssd=true
[ansible@k8smas1 ~]$ 
[ansible@k8smas1 ~]$ kubectl describe nodes k8swor2.linuxsysadmins.lan | grep -i ssd
                    ssd=false
[ansible@k8smas1 ~]$ 
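
Alternatively, the -L (label columns) option of kubectl get nodes shows the value of the ssd label for all nodes in a single listing:

$ kubectl get nodes -L ssd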

Types of Node Affinity

There are two types of node affinity:

  • requiredDuringSchedulingIgnoredDuringExecution
  • preferredDuringSchedulingIgnoredDuringExecution

With requiredDuringSchedulingIgnoredDuringExecution, the scheduler can’t schedule the Pod unless the rule is met.
With preferredDuringSchedulingIgnoredDuringExecution, the scheduler tries to find a node that meets the rule. If a matching node is not available, the scheduler still schedules the Pod.
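
As a structural reference, both rule types sit under pod.spec.affinity.nodeAffinity; a minimal sketch using the ssd label from this exercise looks like the following (complete manifests appear in the later steps):

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: ssd
            operator: In
            values:
            - 'true'
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: ssd
            operator: In
            values:
            - 'true'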

If you need a brief description of nodeAffinity, run the command below.

$ kubectl explain pod.spec.affinity.nodeAffinity

Generating Pod Content

Run the imperative command to generate the required YAML content for the app pod.

$ kubectl run web-app-pod --image nginx --namespace web-app-ns --port=80 --dry-run=client -o yaml > web-app-pod.yaml

The generated content is shown below. Remove the unnecessary fields such as creationTimestamp, status, and the empty resources block to keep the YAML clean.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: web-app-pod
  name: web-app-pod
  namespace: web-app-ns
spec:
  containers:
  - image: nginx
    name: web-app-pod
    ports:
    - containerPort: 80
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

After cleaning it up, add the node affinity under spec so the web pod runs on the non-SSD worker node.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web-app
  name: web-app
  namespace: web-app-ns
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: ssd
            operator: In
            values:
            - 'false'
  containers:
  - image: nginx
    name: web-app
    ports:
    - containerPort: 80

Verify that the web app pod is placed on the non-SSD worker node.

[ansible@k8smas1 ~]$ kubectl get nodes --show-labels | grep ssd=
k8swor1.linuxsysadmins.lan   Ready    <none>          11d   v1.29.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8swor1.linuxsysadmins.lan,kubernetes.io/os=linux,ssd=true
k8swor2.linuxsysadmins.lan   Ready    <none>          11d   v1.29.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8swor2.linuxsysadmins.lan,kubernetes.io/os=linux,ssd=false
[ansible@k8smas1 ~]$ 
[ansible@k8smas1 ~]$ kubectl get pods -n web-app-ns web-app -o wide
NAME      READY   STATUS    RESTARTS   AGE    IP              NODE                         NOMINATED NODE   READINESS GATES
web-app   1/1     Running   0          121m   172.16.39.209   k8swor2.linuxsysadmins.lan   <none>           <none>
[ansible@k8smas1 ~]$ 
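
If a pod ever stays in Pending instead, describing it shows the scheduling events and explains why no node satisfied the affinity rule:

$ kubectl describe pod web-app -n web-app-ns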

Run the imperative command to generate the required YAML content for the DB pod.

$ kubectl run web-db-pod --image mysql --namespace web-app-ns --port=3306 --dry-run=client -o yaml > web-db.yaml

The generated YAML content is shown below.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: web-db-pod
  name: web-db-pod
  namespace: web-app-ns
spec:
  containers:
  - image: mysql
    name: web-db-pod
    ports:
    - containerPort: 3306
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Clean up the generated content and add the node affinity, this time using the preferred type, to place the DB pod on the SSD-based worker node.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web-db
  name: web-db
  namespace: web-app-ns
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: ssd
            operator: In
            values:
            - 'true'
  containers:
  - name: web-db
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: Xy20934hJ#4Z482jHD
    ports:
    - containerPort: 3306
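
Create the DB pod from the edited manifest (assuming the changes were saved back to web-db.yaml):

$ kubectl apply -f web-db.yaml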

Let’s verify where the DB pod is scheduled.

[ansible@k8smas1 ~]$ kubectl get pods -n web-app-ns web-db -o wide 
NAME     READY   STATUS    RESTARTS   AGE     IP              NODE                         NOMINATED NODE   READINESS GATES
web-db   1/1     Running   0          3m43s   172.16.99.145   k8swor1.linuxsysadmins.lan   <none>           <none>
[ansible@k8smas1 ~]$

That’s it. We can see that the DB pod is placed on the SSD-based worker node by using node affinity.
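
Once done testing, the namespace can be deleted to remove both pods in one go:

$ kubectl delete namespace web-app-ns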
