

Why Edge Stack uses CRDs instead of the Kubernetes Ingress Resource

Cindy Mullins
September 3, 2024 | 8 min read

I’m Cindy Mullins, the Community Manager here at Ambassador. Our Community Corner segments on LinkedIn feature a weekly deep dive into common questions we get in our Community across all of our products: Edge Stack API Gateway, Telepresence, and Blackbird API Development.

In this session, I wanted to explore how Edge Stack API Gateway resources are defined in Kubernetes and why we take the approach of using custom resource definitions (or CRDs) rather than relying, as some API Gateways do, on the Kubernetes Ingress resource.


Shortcomings of the Kubernetes Ingress Resource

Within the Kubernetes space, different API Gateways take a variety of approaches to configuration recommendations and best practices. One implementation difference is how best to configure the API Gateway to receive and route traffic. A common way to do this is by using the Kubernetes Ingress resource.

Users are familiar with the Ingress resource, so when they start using Edge Stack, a common assumption is that they'll configure Edge Stack the same way. That's a misconception; let me explain.

The Ingress resource is one of many Kubernetes resource types. Its function is to connect requests from outside the cluster to services inside Kubernetes clusters. Developers can use the Ingress resource to expose their services and APIs by defining routes directly in this resource. Many commercial controllers, like NGINX and HAProxy, implement the model of relying on the Ingress resource for their traffic management.

Here’s an example of an Ingress resource that routes traffic directed at /foo/ to service1:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  ingressClassName: ambassador
  rules:
  - http:
      paths:
      - path: /foo/
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80

You can see at a glance that the syntax here is somewhat cumbersome. An Ingress needs the apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. That’s all pretty standard.

Although the Ingress resource is widely known and accepted, its capabilities are rather basic. They are confined mostly to simpler routing scenarios based on the path and host defined in the request. The Ingress resource is also limited in the traffic protocols it handles: its rules only direct HTTP(S) traffic.

In an effort to enhance these basic capabilities, some vendors offer Ingress controllers that collect multiple Kubernetes Ingress resources behind a common IP. By doing this they can implement more advanced traffic routing logic and features like load balancing.

The Ingress resource also frequently relies on annotations to configure the available options. These annotations hang off the Ingress like footnotes: they're not compartmentalized, and they're not very searchable. Importantly, different proprietary Ingress controllers support different annotations, and often these don't work together.
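As an illustration, here's what controller-specific annotations look like in practice. The NGINX annotation names below are real, but the service and path are hypothetical; a different controller would silently ignore these annotations:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: annotated-ingress
  annotations:
    # NGINX-specific annotations; other Ingress controllers ignore them
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-body-size: 8m
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-svc
            port:
              number: 80
```

Move this Ingress to a controller that doesn't understand the `nginx.ingress.kubernetes.io` prefix and the rewrite and body-size behavior quietly disappear.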

But by “scaffolding” these annotations onto the Ingress resource, or combining resources behind an Ingress controller, configurations often become non-portable across different controllers. That also makes these resources harder to maintain and harder to scale as your deployment grows.

So the TL;DR is: the Kubernetes Ingress resource is limited in functionality, and building on it to achieve greater functionality becomes tricky and cumbersome. We wanted an easier, more scalable option for our users. For this reason, Edge Stack instead relies on custom resource definitions, or CRDs.

The CRDs handle the job of “ingress” - that is, directing requests from outside the cluster to the services inside it - with a cleaner configuration that is modular, integrated by design, and more transparent.

To be clear: if you do require the Ingress resource for a particular use case, Edge Stack still supports it. But in the vast majority of cases it makes more sense - and provides far more advanced, better-integrated functionality - to define your routes and traffic management specifications using CRDs.

The CRD Advantage

CRDs (Custom Resource Definitions) allow developers to extend the Kubernetes API with custom resources tailored to specific needs, enabling greater flexibility and customization. They support a declarative management approach, making it easier to automate the deployment and scaling of applications. CRDs integrate seamlessly with Kubernetes' native features, providing consistency, reusability, and security. Additionally, they are widely supported by the Kubernetes community and ecosystem, which fosters innovation and ease of adoption.

Kubernetes has had an extensible API since version 1.7, when custom resource definitions were first introduced. Based on the above, it’s clear we can do a lot more with Kubernetes by taking advantage of these available extensions. Let’s explore.

Clean, Simple, Modular

Edge Stack is designed to work with CRDs, which keep each piece of your config clean and modular. You define CRDs like Hosts and Mappings individually rather than grouping all of your config together in an Ingress or multiple Ingresses with annotations.
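For instance, a Host resource declares a domain and its TLS setup in its own small, self-contained object, separate from your routing rules. This is a sketch following the Edge Stack Host CRD; the hostname and email are hypothetical:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: example-host
spec:
  hostname: example.com
  # Request a certificate automatically via ACME (e.g. Let's Encrypt)
  acmeProvider:
    email: admin@example.com
```

Because the Host lives on its own, you can add or change domains without touching any Mapping.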

This is the equivalent configuration of the Ingress we looked at a minute ago using a Mapping CRD instead. You can see the syntax for basic routing is simpler. And there are lots of additional specifications available for more advanced routing scenarios.

---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: test-ingress-0-0
spec:
  hostname: '*'
  prefix: /foo/
  service: service1:80

Advanced Traffic Management Options

With CRDs you can get very specific with the traffic routing and matching logic and utilize a wider range of functionality for dealing with different routing scenarios. On the Mapping itself, you can define, for example, regex matches and path rewrites, as well as things like upgrading traffic to non-HTTP protocols, originating TLS, and setting precedence.
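Here's a sketch of a Mapping that uses some of those options. The field names come from the Edge Stack Mapping reference; the service name and paths are hypothetical:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-regex
spec:
  hostname: '*'
  prefix: '/quote/[0-9]+/'    # regex match on the request path
  prefix_regex: true
  rewrite: /backend/          # rewrite the matched prefix before proxying
  precedence: 10              # evaluate ahead of lower-precedence Mappings
  service: https://quote:443  # originate TLS to the upstream service
```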

The 3.x series of Edge Stack also introduces the Listener CRD, which lets you specify the ports you want Edge Stack to listen for traffic on as well as the traffic protocols you want to serve. On the Listener CRD you can also configure the security model, which defines how the Listener decides whether a request is secure or insecure, and the L7 depth for when layer 7 proxies sit in front of Edge Stack.
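A sketch of a Listener, using field names from the Edge Stack Listener reference (the name and port are hypothetical):

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: https-listener
spec:
  port: 8443
  protocol: HTTPS
  securityModel: XFP  # decide secure vs. insecure from X-Forwarded-Proto
  l7Depth: 1          # one trusted L7 proxy sits in front of Edge Stack
  hostBinding:
    namespace:
      from: ALL       # associate with Hosts in all namespaces
```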

Easier Maintenance

Another advantage is that CRDs are easier to maintain as your deployment grows. If you need to add new Services or even new deployments, you don’t need to redefine and potentially break your whole Ingress. Instead, you can target specific Hosts, Mappings, Listeners, and Module CRDs individually to customize and expand the scale of your deployments.

You can run a kubectl get command to output all of your Mappings or Hosts for example, so when you’re updating an API version on your deployment, it’s easier to view and update these specs without having to comb through configuration details that have been tacked on as annotations.
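For example, on a cluster where Edge Stack's CRDs are installed, the standard kubectl verbs work on these resources directly:

```
kubectl get mappings -A                    # list Mappings across all namespaces
kubectl get hosts,listeners -n ambassador  # inspect Hosts and Listeners together
```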

Supports GitOps

CRDs make your Edge Stack configuration fully declarative, and this structure is also more compatible with GitOps workflows and practices like continuous deployment. Platform engineers tend to like CRDs because they can take active ownership over them, updating and maintaining them as part of their GitOps workflow.

CRDs All Day

I hope this helps explain why Edge Stack depends on CRDs and how this is an advantage over other API Gateways that rely on the more limited Ingress Resource. If you want to learn more, check out our Edge Stack documentation.
