
Understanding Kubernetes Ingress

To get traffic into your Kubernetes cluster, you need an ingress controller. To properly address security, availability, and developer workflows in a Kubernetes environment, you need more. Learn why you really need an Edge Stack API Gateway to securely and effectively manage traffic to your Kubernetes application.

What is Kubernetes Ingress?

Kubernetes ingress is a collection of routing rules that govern how external users access services running in a Kubernetes cluster.


A typical Kubernetes application has pods running inside a cluster and a load balancer outside. The load balancer takes connections from the internet and routes the traffic to an edge proxy that sits inside your cluster.


This edge proxy is then responsible for routing traffic into your pods. The edge proxy is commonly called an ingress controller because it is typically configured using ingress resources in Kubernetes; however, it can also be configured with custom resource definitions (CRDs) or annotations.
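
As a sketch of the CRD style of configuration, here is what a route might look like using the Ambassador Edge Stack Mapping resource (the names `quote-backend` and `quote` are illustrative): requests whose path begins with `/backend/` are routed to a Service named `quote`.

```yaml
# Illustrative CRD-based configuration for an Envoy-based edge proxy,
# assuming Ambassador's Mapping resource.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: "*"        # match any host
  prefix: /backend/    # match requests whose path starts with /backend/
  service: quote       # forward matched traffic to the "quote" Service
```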


NodePort, Load Balancers, and Ingress Controllers

In Kubernetes, there are three general approaches to exposing your application:

NodePort

A NodePort is an open port on every node of your cluster. Kubernetes transparently routes incoming traffic on the NodePort to your service, even if your application runs on a different node.
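
A minimal NodePort Service might look like the following sketch (the names, labels, and port numbers are illustrative):

```yaml
# Exposes the pods labeled app=my-app on a static port (30080 here)
# on every node in the cluster.
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80          # port the Service listens on inside the cluster
      targetPort: 8080  # port the application container listens on
      nodePort: 30080   # static port opened on every node (30000-32767 range)
```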

Load Balancer

Using a Load Balancer service type automatically deploys an external load balancer. This external load balancer is associated with a specific IP address and routes external traffic to a Kubernetes service in your cluster.
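
In manifest form, this is a one-field change from the NodePort example; the sketch below (illustrative names and ports) asks the cloud provider to provision an external load balancer and forward its traffic to the Service:

```yaml
# On a cloud provider, this provisions an external load balancer with
# its own IP address that routes traffic to the matching pods.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```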

Ingress Controllers

Kubernetes supports a high-level abstraction called Ingress, which allows simple host or URL-based HTTP routing. An ingress controller is responsible for reading the Ingress Resource information and processing that data accordingly.
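
A minimal Ingress resource showing host- and path-based routing might look like this (host and Service names are illustrative): requests for `example.com/api` go to the `api` Service, and everything else goes to `web`.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```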

The Evolution of Layer 7 Proxies

NGINX was initially designed as a web server in 2004. HAProxy, a competing high-availability load balancer and proxy, had been released in 2001.

Efforts to manage layer 7 began in 2010 in the form of smart RPC libraries: Finagle from Twitter, Hystrix from Netflix, and gRPC from Google. Three years later, Airbnb announced SmartStack, the spiritual ancestor of the modern-day service mesh.

2016 was a major year for proxies and service meshes, with Buoyant announcing Linkerd and Lyft announcing Envoy. Since the release of Envoy, the space has continued to change rapidly, with responses from the incumbents NGINX and HAProxy, as well as new projects based on Envoy Proxy. Today, there are 13 Envoy-based projects referenced on the Envoy website.


A Proxy Powers Your Ingress Controller

If an ingress controller is a car, a proxy is the engine. Proxies have evolved from web servers in the mid-2000s to high powered layer 7 proxies today to account for the new world of microservices and cloud native applications. Today, NGINX, HAProxy, and Envoy are the most popular proxies powering Kubernetes Ingress Controllers.

NGINX

NGINX has evolved since its original release as a web server to support more traditional proxy use cases. NGINX has two variants, NGINX Plus, a commercial offering, and NGINX open source. Per NGINX, NGINX Plus “extend[s] NGINX into the role of a frontend load balancer and application delivery controller.” NGINX open source has a number of limitations, including limited observability and health checks.


NGINX-Powered Ingress Controllers:

NGINX Ingress Controller

Kong Ingress Controller

HAProxy

Originally released in 2001, HAProxy is a reliable, fast, and proven proxy. However, the internet operated very differently then, and HAProxy has been slowly catching up to the new world of microservices. For example, support for hitless reloads wasn't fully addressed until the end of 2017.

HAProxy-Powered Ingress Controllers:

HAProxy Ingress

Envoy Proxy

Envoy Proxy is the newest and fastest-growing proxy on the scene. Envoy was designed from the ground up for microservices with features such as hitless reloads, observability, resilience, and advanced load balancing. Lyft, Apple, Google, Salesforce, and many more companies use Envoy in production and the CNCF provides an independent home to Envoy.

Envoy-Powered Ingress Controllers:

Ambassador Edge Stack

Istio Gateway

Custom Proxy Ingress Controllers

Some ingress controllers are powered by custom-built proxies, like Tyk or Traefik. Because these proxies are custom-built, they typically have smaller communities and therefore slower feature momentum. In many cases, they lack the depth of the more widely adopted proxies that have been battle-tested in more environments.


Self-Powered Ingress Controllers:

Skipper

Other Ingress Controllers

Some ingress controllers run outside of your Kubernetes cluster. In many cases, these are provided by a hardware vendor, like Citrix or F5. If you're running in AWS, you may choose to deploy the AWS ALB Ingress Controller outside of your cluster and route traffic to an ingress controller running inside your Kubernetes cluster, like Ambassador.


Other Ingress Controllers:

AWS ALB Ingress Controller

Citrix Ingress Controller

When an Ingress Controller Isn't Enough

The cloud presents new challenges for cloud native applications. Whether you are building a greenfield app or migrating a legacy app, your cloud native application will have many more microservices at the edge. These microservices will typically be managed by different teams and therefore have diverse requirements. Envoy Proxy and Ambassador were created to address these exact challenges.


The Kubernetes documentation and many other resources will recommend a simple ingress controller to get you started getting traffic into your Kubernetes cluster. However, as your application becomes more advanced, there is a lot of additional functionality you will expect from your ingress controller, including:

Security

  • Transport Layer Security (TLS) Termination
  • Authentication
  • Rate Limiting
  • WAF Integration
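
Of these, TLS termination is the one piece built into the standard Ingress resource itself. The sketch below (illustrative names) terminates TLS at the ingress, assuming a pre-created Secret named `example-tls` that holds the certificate and key:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-tls  # Secret containing tls.crt and tls.key
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```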

Availability

  • Different Load Balancing Algorithms
  • Circuit Breakers
  • Timeouts
  • Automatic Retries
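
Timeouts and automatic retries are typically declared per route. As a hedged sketch using Ambassador's Mapping CRD (route and Service names are illustrative), a route can be given a request timeout and a retry policy for upstream 5xx errors:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-resilient
spec:
  hostname: "*"
  prefix: /quote/
  service: quote
  timeout_ms: 3000        # fail the request if the upstream takes >3s
  retry_policy:
    retry_on: "5xx"       # retry when the upstream returns a 5xx error
    num_retries: 3
```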

Progressive Delivery

  • Canary Releases
  • Traffic Shadowing
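
As one concrete sketch of a canary release, the NGINX Ingress Controller expresses this with canary annotations; in the illustrative example below, roughly 10% of traffic for `example.com` is shifted to a `web-canary` Service while the rest continues to the stable Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"  # ~10% of traffic
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-canary
                port:
                  number: 80
```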

Self-Service Configuration

  • Declarative Configuration
  • Advanced Policy Specification
  • GitOps-Friendly

Next Steps

Edge Stack API Gateway delivers scalability, security, and simplicity for some of the world's largest Kubernetes installations.