Node Affinity in Kubernetes

In this entry on Kubernetes Node Affinity, we will explore how to apply and work with node affinity. This article is a continuation of the previous one about Node Selector in Kubernetes.

What is Node Affinity in Kubernetes?

Both node affinity and node selector serve the same purpose but in different ways. While Node Selector restricts where a pod will run using an exact label match, Node Affinity establishes a richer set of rules to determine where those pods are scheduled. In other words, it allows you to specify an “affinity” for placing a group of pods on a set of nodes; the rule lives in the pod’s specification, and the node has no control over which pods it attracts.
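As a point of comparison, here is a minimal sketch of the nodeSelector approach covered in the previous article (the size label is illustrative); node affinity expresses the same kind of constraint with richer operators:

apiVersion: v1
kind: Pod
metadata:
  name: node-selector
spec:
  # nodeSelector only supports exact key/value matches
  nodeSelector:
    size: big
  containers:
  - name: node-selector
    image: nginx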

Configuring Node Affinity in Kubernetes

The best way to understand how node affinity works in Kubernetes is by looking at the configuration of a pod.

To implement node affinity, we can use two types of rules: a “preferred rule,” a “required rule,” or both. Note that if we use both options, a node must first satisfy the required rule, and the scheduler will then favor nodes that also match the preferred rule. Let’s look at an example using the “required rule”:

apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  affinity:
    nodeAffinity:                                       # (1)
      requiredDuringSchedulingIgnoredDuringExecution:   # (2)
        nodeSelectorTerms:
        - matchExpressions:
          - key: size                                   # (3)
            operator: NotIn                             # (4)
            values:
            - very-big                                  # (3)
            - very-small                                # (3)
  containers:
  - name: node-affinity
    image: nginx

  1. We set the nodeAffinity property.
  2. Definition of a required rule.
  3. The key and values that the rule evaluates against the node’s labels.
  4. The operator that will be applied, representing the relationship between the node label and the set of values in the pod’s specification. This value can be In, NotIn, Exists, DoesNotExist, Lt, or Gt.

Types of Node Affinity

We previously mentioned two types of node affinity: preferred and required.

These rule types determine how strictly the scheduler treats the placement. In the case of “required,” the rule must be satisfied before a pod can be scheduled on a node. Both types can also be combined in a single pod specification, as in the sketch below.
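Here is a minimal sketch of such a combined affinity stanza, assuming illustrative size and disktype node labels:

spec:
  affinity:
    nodeAffinity:
      # Hard constraint: only nodes labeled size=big are eligible
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - big
      # Soft constraint: among eligible nodes, prefer disktype=ssd
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd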

Configuring a Required Rule

In the example above, we defined a rule of this type. Let’s describe the steps in more detail:

spec:
  affinity:
    nodeAffinity:  
      requiredDuringSchedulingIgnoredDuringExecution: 
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: NotIn 
            values:
            - very-big 
            - very-small 
  containers:
  - name: node-affinity
    image: nginx

In the pod, we specify the rule type by adding the requiredDuringSchedulingIgnoredDuringExecution parameter.

Once we specify the rule type, we must assign the key and value, as seen in the example with key: size and values.

Finally, we assign the operator we want to use. In this case, we use NotIn to indicate that the node’s size label must not carry any of the listed values.

Because the rule is required, if no node satisfies it, the pod will not be scheduled anywhere and will remain in the Pending state.
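We can check how the scheduler handled the pod with standard kubectl commands (the pod name matches the example above):

# Shows the node the pod was assigned to, or Pending if it was not
kubectl get pod node-affinity -o wide

# The Events section at the bottom explains any scheduling failure
kubectl describe pod node-affinity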

Configuring a Preferred Rule

With this type of rule, the scheduler will try to enforce the preference but does not guarantee compliance: the pod is scheduled even when no node matches.

Here’s an example configuration:

      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: e2e-az-name
            operator: In
            values:
            - e2e-az3

In this rule, we add the preferredDuringSchedulingIgnoredDuringExecution parameter.

Next, we define a weight, which assigns a priority to the preference. This number ranges from 1 to 100; for every node that matches the expression, the weight is added to that node’s scheduling score, so higher values indicate a stronger preference.

Once the weight is defined, we add the key and value, similar to the required rule.

When using preferredDuringSchedulingIgnoredDuringExecution, if no node label matches, the pod will still run: the preference only influences how nodes are ranked, and it never blocks placement.
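For completeness, here is where that fragment sits in a full pod manifest, extended with a second, lower-priority preference to show how weights combine; a minimal sketch (the pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: node-affinity-preferred
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      # Strongest preference: zone e2e-az3 adds 80 to a node's score
      - weight: 80
        preference:
          matchExpressions:
          - key: e2e-az-name
            operator: In
            values:
            - e2e-az3
      # Weaker fallback: any node carrying the label adds 20 to its score
      - weight: 20
        preference:
          matchExpressions:
          - key: e2e-az-name
            operator: Exists
  containers:
  - name: node-affinity-preferred
    image: nginx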

Example of Node Affinity in Kubernetes

Now that we’ve explained the different node types and how they work, let’s look at a step-by-step example. We’ll have a node with matching labels.

List nodes:

kubectl get nodes

Assign a label to the selected node:

kubectl label nodes <node-name> <label-key>=<label-value>
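For example, the nodegroup-type label used in the deployment below could be applied to the node from our cluster like this (the node name comes from the kubectl get nodes output):

kubectl label nodes ip-192-168-143-94.ec2.internal nodegroup-type=microservices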

In our case, we’ll work with the following node:

Labels:             alpha.eksctl.io/cluster-name=eks-dev-cluster
                    alpha.eksctl.io/instance-id=i-0100bdd9c9bd76461
                    alpha.eksctl.io/nodegroup-name=mixed-instances-stateless
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=c5.2xlarge
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/region=us-east-1
                    failure-domain.beta.kubernetes.io/zone=us-east-1c
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=ip-192-168-143-94.ec2.internal
                    kubernetes.io/os=linux
                    node.kubernetes.io/instance-type=c5.2xlarge
                    nodegroup-instances=mixed-instances-stateless
                    nodegroup-type=microservices
                    topology.kubernetes.io/region=us-east-1
                    topology.kubernetes.io/zone=us-east-1c

We will assign node affinity to a deployment so that its pods land on nodes carrying the nodegroup-type=microservices label:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: locations
  namespace: microservices
spec:
  selector:
    matchLabels:
      app: locations
  replicas: 1
  template:
    metadata:
      labels:
        app: locations
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: nodegroup-type
                    operator: In
                    values:
                      - microservices
      containers:
        - name: locations
          image: locations:latest
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 240
            periodSeconds: 5

With the above deployment, we assign node affinity so that the pod is only scheduled on nodes labeled nodegroup-type=microservices.
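After applying the manifest, we can confirm where the pod landed (the file name is illustrative; the namespace and label come from the deployment above):

kubectl apply -f locations-deployment.yaml

# The NODE column should show a node from the microservices node group
kubectl get pods -n microservices -l app=locations -o wide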

What happens if the label in the node affinity doesn’t exist?

Since we chose the requiredDuringSchedulingIgnoredDuringExecution option, if the label doesn’t match any node, the pod will remain in the “Pending” state upon creation. If, on the other hand, we had chosen the preferredDuringSchedulingIgnoredDuringExecution option, the pod would still be scheduled even without a match, since the preference only affects node ranking.
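In the required case, the failure is easy to diagnose; the output below is an illustrative sketch of what kubectl typically reports (the pod hash and node count will vary):

kubectl get pods -n microservices
# NAME                         READY   STATUS    RESTARTS   AGE
# locations-6d5f7c9b8-xk2pq    0/1     Pending   0          2m

kubectl describe pod -n microservices -l app=locations
# Events show a FailedScheduling message along the lines of:
#   0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector.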

Conclusion

In this entry about Node Affinity in Kubernetes, we’ve seen how to schedule a pod onto one node or another based on labels. This practice is widely used in Kubernetes, since nodes often have different characteristics, and it helps us distribute our pods effectively.
