When to use ClusterIP vs. LoadBalancer vs. NodePort vs. Ingress in Kubernetes

I’ve often seen that when defining a service, the access type chosen for connectivity isn’t the right one, or it isn’t clear which one to assign. In this post on when to use ClusterIP vs. LoadBalancer vs. NodePort vs. Ingress in Kubernetes, we will look at what each type is and when to use it, depending on what we need when creating a service.

ClusterIP in Kubernetes

ClusterIP is the service type that Kubernetes establishes by default. This type of service is only accessible within your cluster by other services or applications, meaning it won’t have external access.

For example, a service definition with ClusterIP could be:

apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
spec:
  selector:
    app: myservice
  ports:
    - protocol: TCP
      port: 10007
      targetPort: 10007
  type: ClusterIP
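
Because the service only gets a cluster-internal IP, other Pods reach it through its DNS name. As a quick sanity check you could run a temporary Pod and curl it (a minimal sketch; the curl image and the assumption that the application answers on / are illustrative):

kubectl run tmp-curl --rm -it --restart=Never --image=curlimages/curl -- \
  curl http://myservice.mynamespace.svc.cluster.local:10007/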

If you want to reach it from outside the cluster, you can use the Kubernetes API proxy (kubectl proxy), which forwards requests to the service through the API server. Of course, this is mainly for non-production or testing environments.

To launch the proxy server, execute the following command:

$ kubectl proxy --port=8080

With the proxy running, you can inspect the Kubernetes API and access the service using curl:

http://localhost:8080/api/v1/namespaces/<NAMESPACE>/services/<SERVICE-NAME>/proxy/
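
Applied to the myservice definition above, and assuming the proxy from the previous command is listening on port 8080, the call could look like this (the :10007 suffix selects the service port and can be omitted when the service exposes a single port):

curl http://localhost:8080/api/v1/namespaces/mynamespace/services/myservice:10007/proxy/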

For example, let’s install the Kubernetes dashboard by executing:

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/alternative/kubernetes-dashboard.yaml

Run Kubernetes proxy:

kubectl proxy --address 0.0.0.0 --accept-hosts '.*'

And from your browser or using curl, you can access:

http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/#!/overview

As mentioned earlier, the Kubernetes proxy should not be used in production environments; it is mainly useful for viewing a dashboard, reaching internal-only services, or debugging a service.

NodePort in Kubernetes

NodePort is one of the most basic ways to route external traffic to your service. What NodePort does is open a port on every node of the cluster, and all traffic arriving at that port is directed to the defined service.

Let’s take a look at the definition of a service with NodePort; you’ll notice that the type field is what makes the difference:

apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myapp
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30000
    protocol: TCP

As seen in the service definition above, we’ve specified the port to open on the nodes. If not specified, a random port will be selected.
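
Once the service is created, any node’s IP answers on that port. A quick way to test it from outside the cluster (the node IP is a placeholder you would take from the first command):

kubectl get nodes -o wide
curl http://<NODE-IP>:30000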

Using NodePort exposes your service directly on every node’s network, so it’s not recommended for production environments.

Considerations when using NodePort in Kubernetes:

  • Ports that can be used (in the example, we used 30000) range from 30000 to 32767.
  • If the node’s IP changes, reconfiguration will be needed.
  • Only one service can be exposed per port.

When to use NodePort in Kubernetes?

As mentioned before, NodePort gives direct network access to your service, making it not very suitable for production environments. However, it can be useful for occasional access to a service you’ve defined, such as demonstrating it to a client.

LoadBalancer in Kubernetes

A LoadBalancer in Kubernetes externally exposes the service, typically using a cloud provider’s Load Balancer. The creation of a load balancer happens asynchronously, and information about the balancer is published in the status.loadBalancer field.

Let’s look at an example of a service with a LoadBalancer; the type field is what makes the difference:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  clusterIP: 10.0.121.221
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.0.1.127

External traffic is directed to the Pods, and the cloud provider decides how to balance it in each case. Some providers allow you to specify the loadBalancerIP; otherwise, an ephemeral IP is assigned.
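
For providers that support it, requesting a fixed address is just a matter of adding loadBalancerIP to the spec. A sketch (the address is a placeholder, and some providers ignore the field or rely on their own annotations instead):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # placeholder; only honored by some providers
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376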

When to use LoadBalancer in Kubernetes?

Using LoadBalancer allows you to directly expose a service. Traffic is directed to the port specified in the service. The main drawback is that each service gets its own IP, and some cloud providers may charge for each exposed service, which could become expensive.

Ingress Controller in Kubernetes

Unlike the service types discussed earlier, Ingress is not actually a service type; it acts as a “front door” sitting in front of many services and serves as the entry point to your cluster.

There are many different Ingress controllers, and depending on the cloud provider, you can use the provider’s own. One of the most common is NGINX.
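
If your cluster does not already ship one, a common way to install the NGINX Ingress controller is through its official Helm chart (just one option among several; the release name and namespace here are arbitrary choices):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace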

Additionally, with Ingress you can add various functionalities and plugins, such as cert-manager, to serve your services over SSL/TLS.
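
For instance, the letsencrypt-prod issuer referenced in the Ingress below could be defined with a ClusterIssuer along these lines (a sketch that assumes cert-manager is already installed; the email address is a placeholder):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com        # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx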

Here’s an example of an Ingress definition:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    meta.helm.sh/release-name: api-ingress
    meta.helm.sh/release-namespace: my-namespace
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  labels:
    app: api-ingress
  name: api-ingress
  namespace: my-namespace
spec:
  tls:
  - hosts:
    - api.services
    secretName: api-ingress-tls
  rules:
  - host: api.services
    http:
      paths:
      - backend:
          serviceName: myserviceA
          servicePort: 10030
        path: /my-service-a/(.*)
      - backend:
          serviceName: myserviceB
          servicePort: 10031
        path: /my-service-b/(.*)      
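
With this definition, requests under /my-service-a/ are routed to myserviceA and requests under /my-service-b/ to myserviceB, with the rewrite-target annotation stripping the prefix before forwarding. Assuming api.services resolves to the Ingress controller’s address (and /health is just an illustrative endpoint), you could test the routing with:

curl -k https://api.services/my-service-a/health
curl -k https://api.services/my-service-b/health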

When to use an Ingress Controller?

Ingress is the most common choice when you want to expose multiple services through the same IP address and those services speak a Layer 7 (application-layer) protocol, typically HTTP/HTTPS.

Using Ingress also reduces costs with a cloud provider, since you pay for only one load balancer. Additionally, you can add functionality such as SSL termination, authentication, and more.

Conclusion

The service type you choose in Kubernetes determines how your applications can be reached, from inside or outside the cluster. In this article on when to use ClusterIP vs. LoadBalancer vs. NodePort vs. Ingress in Kubernetes, we’ve reviewed what each option does and when it makes sense to choose one over the other.
