Getting external traffic into Kubernetes – ClusterIP, NodePort, LoadBalancer, and Ingress

Estimated reading time: 6 minutes

For the last few months, I have been acting as Developer Advocate for the OVH Managed Kubernetes beta, following our beta testers, getting feedback, writing docs and tutorials, and generally helping to make sure the product matches our users’ needs as closely as possible.

In the next few posts, I am going to tell you some stories about this beta phase. We’ll be taking a look at feedback from some of our beta testers, technical insights, and some fun anecdotes about the development of this new service.

Today, we’ll start with one of the most frequent questions I got during the early days of the beta: How do I route external traffic into my Kubernetes service? The question came up a lot as our customers began to explore Kubernetes, and when I tried to answer it, I realised that part of the problem was the sheer number of possible answers, and the concepts needed to understand them.

Related to that question was a feature request: most users wanted a load balancing tool. As the beta phase is all about confirming the stability of the product and validating the feature set prioritisation, we were able to quickly confirm LoadBalancer as a key feature of our first commercial release.

To try to better answer the external traffic question, and to make the adoption of LoadBalancer easier, we wrote a tutorial and added some drawings, which got nice feedback. This helped people to understand the concepts underlying the routing of external traffic on Kubernetes.

This blog post is an expanded version of this tutorial. We hope that you will find it useful!

Some concepts: ClusterIP, NodePort, Ingress and LoadBalancer

When you begin to use Kubernetes for real-world applications, one of the first questions to ask is how to get external traffic into your cluster. The official documentation offers a comprehensive (but rather dry) explanation of this topic, but here we are going to explain it in a more practical, need-to-know way.

There are several ways to route external traffic into your cluster:

  • Using Kubernetes proxy and ClusterIP: The default Kubernetes ServiceType is ClusterIP, which exposes the Service on a cluster-internal IP. To reach the ClusterIP from an external source, you can open a Kubernetes proxy between the external source and the cluster. This is usually only used for development.

  • Exposing services as NodePort: Declaring a Service as NodePort exposes it on each Node’s IP at a static port (referred to as the NodePort). You can then access the Service from outside the cluster by requesting <NodeIP>:<NodePort>. This can also be used for production, albeit with some limitations.

  • Exposing services as LoadBalancer: Declaring a Service as LoadBalancer exposes it externally, using a cloud provider’s load balancer solution. The cloud provider will provision a load balancer for the Service, and map it to its automatically assigned NodePort. This is the most widely used method in production environments.

Using Kubernetes proxy and ClusterIP

The default Kubernetes ServiceType is ClusterIP, which exposes the Service on a cluster-internal IP. To reach the ClusterIP from an external computer, you can open a Kubernetes proxy between the external computer and the cluster.

You can use kubectl to create such a proxy. When the proxy is up, you’re directly connected to the cluster, and you can use the internal IP (ClusterIP) for that Service.
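As a sketch, a minimal Service manifest of the default ClusterIP type could look like this (the name my-app, the label selector, and the port numbers are illustrative assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP      # the default; this line can be omitted
  selector:
    app: my-app        # routes to Pods labelled app=my-app
  ports:
    - port: 80         # port exposed on the cluster-internal IP
      targetPort: 8080 # port the Pods actually listen on
```

Once you run kubectl proxy, a Service like this one is typically reachable through the API server at a URL of the form http://localhost:8001/api/v1/namespaces/default/services/my-app:80/proxy/.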

kubectl proxy and ClusterIP

This method isn’t suitable for a production environment, but it’s useful for development, debugging, and other quick-and-dirty operations.

Exposing services as NodePort

Declaring a Service as NodePort exposes the Service on each Node’s IP at the NodePort (a fixed port for that Service, in the default range of 30000-32767). You can then access the Service from outside the cluster by requesting <NodeIP>:<NodePort>. Every Service you deploy as NodePort will be exposed on its own port, on every Node.
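A minimal NodePort Service manifest might look like the following sketch (again, the name and ports are illustrative; nodePort can be omitted, in which case Kubernetes assigns one from the default range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80         # cluster-internal port
      targetPort: 8080 # port the Pods listen on
      nodePort: 30080  # must be within the 30000-32767 default range
```

With this in place, the Service is reachable at <NodeIP>:30080 on every Node of the cluster.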

NodePort

It’s rather cumbersome to use NodePort for Services that are in production. As you are using non-standard ports, you often need to set up an external load balancer that listens on the standard ports and redirects the traffic to the <NodeIP>:<NodePort>.

Exposing services as LoadBalancer

Declaring a service of type LoadBalancer exposes it externally using a cloud provider’s load balancer. The cloud provider will provision a load balancer for the Service, and map it to its automatically assigned NodePort. How the traffic from that external load balancer is routed to the Service pods depends on the cluster provider.
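Declaring such a Service is straightforward; here is an illustrative sketch (names and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80         # port exposed by the external load balancer
      targetPort: 8080 # port the Pods listen on
```

Once the cloud provider has provisioned the load balancer, its public address appears in the EXTERNAL-IP column of kubectl get service my-app.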

LoadBalancer

The LoadBalancer is the best option for a production environment, with two caveats:

  • Every Service that you deploy as LoadBalancer will get its own IP.
  • The LoadBalancer is usually billed based on the number of exposed services, which can be expensive.

We are currently offering the OVH Managed Kubernetes LoadBalancer service as a free preview, until the end of summer 2019.

What about Ingress?

According to the official documentation, an Ingress is an API object that manages external access to the services in a cluster (typically HTTP). So what’s the difference between this and LoadBalancer or NodePort?

Ingress isn’t a type of Service, but rather an object that acts as a reverse proxy and single entry-point to your cluster, routing requests to different Services. One of the most widely used implementations is the NGINX Ingress Controller, where NGINX takes on the role of reverse proxy, while also handling SSL/TLS termination.
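An Ingress resource is essentially a set of routing rules. As a sketch, using the extensions/v1beta1 API (current at the time of writing; the exact API version and the hostname, path, and service names below are illustrative assumptions):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: example.com          # requests for this host...
      http:
        paths:
          - path: /app           # ...and this path...
            backend:
              serviceName: my-app  # ...are routed to this Service
              servicePort: 80
```

Note that an Ingress resource does nothing by itself: an Ingress controller (such as the NGINX one) must be running in the cluster to act on these rules.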

Ingress

Ingress is exposed to the outside of the cluster via ClusterIP and Kubernetes proxy, NodePort, or LoadBalancer, and routes incoming traffic according to the configured rules.

Ingress behind LoadBalancer

The main advantage of using an Ingress behind a LoadBalancer is the cost: you can have lots of services behind a single LoadBalancer.

Which one should I use?

Well, that’s the million-dollar question, and one which will probably elicit a different response depending on who you ask!

You could go 100% LoadBalancer, getting an individual LoadBalancer for each service. Conceptually, it’s simple: every service is independent, with no extra configuration needed. The downside is the price (you will be paying for one LoadBalancer per service), and also the difficulty of managing lots of different IPs.

You could also use only one LoadBalancer and an Ingress behind it. All your services would be under the same IP, each one in a different path. It’s a cheaper approach, as you only pay for one LoadBalancer, but if your services don’t have a logical relationship, it can quickly become chaotic.

If you want my personal opinion, I would try to use a combination of the two…

An approach I like is having a LoadBalancer for every related set of services, and then routing to those services using an Ingress behind the LoadBalancer. For example, let’s say you have two different microservice-based APIs, each one with around 10 services. I would put one LoadBalancer in front of one Ingress for each API, the LoadBalancer being the single public entry-point, and the Ingress routing traffic to the API’s different services.
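For one such API, the Ingress behind the LoadBalancer could route by path, along these lines (the hostname, paths, and service names are illustrative assumptions, using the extensions/v1beta1 API current at the time of writing):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /users             # api.example.com/users ...
            backend:
              serviceName: users-service
              servicePort: 80
          - path: /orders            # api.example.com/orders ...
            backend:
              serviceName: orders-service
              servicePort: 80
```

Here, a single LoadBalancer (the one exposing the Ingress controller) serves as the public entry-point for all of the API’s services.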

But if your architecture is quite complex (especially if you’re using microservices), you will soon find that manually managing everything with LoadBalancer and Ingress is rather cumbersome. If that’s the case, the answer could be to delegate those tasks to a service mesh…

What’s a service mesh?

You may have heard of Istio or Linkerd, and how they make it easier to build microservice architectures on Kubernetes, adding nifty perks like A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.

Istio, Linkerd, and similar tools are service meshes, which allow you to build networks of microservices and define their interactions, while simultaneously adding some high-value features that make the setup and operation of microservice-based architectures easier.

There’s a lot to talk about when it comes to using service meshes on Kubernetes, but as they say, that’s a story for another time…


Developer Evangelist at OVH Platform, Spaniard lost in Brittany, speaker, coder, dreamer and all-around geek. His job is all around the technical communities: developers, devops, ops... He writes blog posts, prepares tutorials, organizes events, speaks at conferences and meetups...
