

Choosing an API Gateway: Defining Kubernetes-Native

Community Corner with Cindy Mullins

Cindy Mullins
July 31, 2024 | 9 min read

I’m Cindy Mullins, the Community Manager here at Ambassador. Our new Community Corner segments on LinkedIn feature a weekly deep dive into common questions we get in our Community across all of our products: Edge Stack, Telepresence, and Blackbird. One of the most common questions I get, especially from new users, is how to define “Kubernetes-Native” as it relates to our flagship product, Edge Stack API Gateway.

There are many API Gateways, but I’d like to focus on three types in this blog. First, Legacy API Gateways provide access to applications running on legacy systems. Then there are what you might call ‘agnostic’ Gateways that are not Kubernetes specific, although they can work with Kubernetes. Lastly, there are Kubernetes-native API Gateways. So, what’s the difference?

First: A Note About K8s

Over the past several years, as I’m sure you know, Kubernetes has become the leading container orchestration platform for deploying microservice applications. The Kubernetes environment is pretty demanding, with complex transactions and long-lived connections, and modern microservice architectures are highly dynamic and ephemeral. It makes sense that this has created a need for API Gateways built specifically for that environment. So when new users ask me, “How does a Kubernetes-native API Gateway differ from other API Gateways?” the answer really comes down to architecture, configurability, and maintainability.

Legacy API Gateways

MuleSoft and Apigee are examples of API Gateways that can be used with legacy systems. They provide a secure, scalable way to expose legacy systems as APIs and to control access to legacy applications through an API gateway. However, these gateways were not designed for the highly dynamic environments that run on Kubernetes. You can make them work with Kubernetes, but they require additional infrastructure and design effort to be highly available and production-ready. They’re also often deployed centrally, which runs counter to the highly distributed nature of modern cloud-based applications.

While legacy options serve as reliable workhorses for traditional API management, they come with their own set of drawbacks, especially when compared to modern, cloud-native solutions like Edge Stack. Here are a few of the drawbacks of legacy gateways:

  • Lack of Cloud-Native Integration: Most obviously, legacy API gateways were designed for on-premises environments and often lack integration with modern cloud-native technologies and architectures. This can make it difficult to leverage cloud-native features such as auto-scaling, service discovery, and seamless deployment across multiple environments (e.g., hybrid and multi-cloud setups), and it can hinder the adoption of microservices and containerization strategies.
  • Limited Flexibility and Scalability: These gateways are typically built for static and monolithic applications, with limited flexibility to adapt to the dynamic scaling requirements of microservices. This can result in inefficient resource utilization and difficulty scaling applications to meet varying demands, leading to performance bottlenecks and increased operational costs.
  • Complex Configuration and Management: Legacy API gateways often require extensive manual configuration and management, which can be time-consuming and prone to errors. This increases the operational overhead and the potential for misconfigurations, making it challenging to maintain and update the gateway in fast-paced development environments.
  • Cost of Maintenance and Upgrades: Maintaining and upgrading legacy API gateways can be costly and resource-intensive, especially as they become increasingly outdated. Organizations may face higher operational costs (and higher enterprise-level costs overall) and increased complexity when integrating new features or updates, making it harder to keep up with evolving business needs and technological advancements.

Agnostic API Gateways

Other API Gateways, like Kong or Gloo, are not Kubernetes-specific; they can be deployed in various environments, including Kubernetes. However, since they’re not built specifically for Kubernetes, they demand more manual configuration and may not fully leverage Kubernetes' features. Although they are adaptable, these gateways often don’t provide the same degree of Kubernetes functionality, such as automatic service discovery, load balancing, and dynamic routing, as a Kubernetes-native solution would. As a result, they may require more hands-on management to deploy and maintain. Here are a few of the issues you’ll run into with agnostic API gateways:

  • Lack of Native Integration: Agnostic API gateways are designed to operate in various environments, often meaning they don't integrate deeply with Kubernetes-specific features. This can lead to missing out on benefits like seamless scaling, service discovery, and native load balancing that Kubernetes offers.
  • Complex Configurations: These gateways might require more manual setup to work efficiently within a Kubernetes environment. The increased complexity can lead to more potential misconfigurations and additional operational overhead, making management more cumbersome.
  • Performance Overhead: Because agnostic gateways must support multiple environments, they may not be optimized for Kubernetes-specific performance enhancements. This can result in higher latency and less efficient resource utilization compared to their Kubernetes-optimized counterparts.
  • Inconsistent Security: Security policies and mechanisms in agnostic gateways may not align well with Kubernetes security practices. This inconsistency can lead to potential security gaps or require extra effort to ensure comprehensive security coverage.
  • Monitoring and Observability: Agnostic gateways integrate less smoothly with Kubernetes-native monitoring and logging tools such as Prometheus and Grafana. This can complicate achieving consistent and effective observability, making it harder to track performance and troubleshoot issues.

Kubernetes-Native API Gateways

By contrast, Kubernetes-native API Gateways are purpose-built to operate within Kubernetes clusters. They can automatically discover services, route traffic intelligently, and adapt to changes in real time, which helps ensure consistency.

These API Gateways also provide fine-grained control over traffic routing and security policies, making it easier to implement microservices-based architectures and enforce security best practices. They offer enhanced observability as well, with metrics, logs, and monitoring capabilities that give insight into API performance and health.
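
For example, in Edge Stack routing is expressed declaratively as a Kubernetes custom resource that the gateway picks up automatically. Here’s a minimal sketch of a Mapping that routes requests under a URL prefix to a Kubernetes Service; the /backend/ prefix and the quote service are just illustrative names, not part of your cluster:

    # Minimal Edge Stack Mapping sketch (illustrative names; adjust to your services)
    apiVersion: getambassador.io/v3alpha1
    kind: Mapping
    metadata:
      name: quote-backend
      namespace: default
    spec:
      hostname: "*"          # accept any host
      prefix: /backend/      # requests under this prefix...
      service: quote         # ...are routed to the "quote" Service in the cluster

Because it’s just another Kubernetes resource, you apply it with kubectl and manage it with the same GitOps workflows as the rest of your cluster configuration.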

Ambassador Edge Stack API Gateway is an example of a Kubernetes-native API Gateway. It relies entirely on Kubernetes for reliability, availability, and scalability, so those capabilities come built in. Edge Stack lets you manage high traffic volumes and distribute incoming requests across multiple backend services, ensuring reliable application performance. It also protects your APIs from unauthorized access and malicious attacks with robust security features, including a WAF, rate limiting, IP whitelisting, and more.
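
Those security controls are declarative Kubernetes configuration too. As one small illustration, here’s a sketch of an IP allow list using the ambassador Module’s ip_allow setting; the CIDR ranges are placeholders, and the exact policy you need will depend on your environment:

    # Sketch of an IP allow list via the ambassador Module (placeholder CIDRs)
    apiVersion: getambassador.io/v3alpha1
    kind: Module
    metadata:
      name: ambassador
      namespace: ambassador
    spec:
      config:
        ip_allow:
        - peer: 127.0.0.1      # allow local health checks
        - remote: 10.0.0.0/8   # allow clients from this internal range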

Another note: Edge Stack uses declarative YAML. Every object, resource, custom resource, application, middleware, workload, image, and so on is designed to run on the Kubernetes platform, configured with its own YAML or a shared YAML file. You can also easily scale Edge Stack by changing the replicas in your Deployment or, for example, by using a horizontal or vertical pod autoscaler.
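
For instance, a standard HorizontalPodAutoscaler can scale the gateway on CPU load. This is a minimal sketch; the edge-stack Deployment name and ambassador namespace are assumptions that depend on how you installed Edge Stack:

    # Minimal HPA sketch; Deployment name and namespace depend on your install
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: edge-stack
      namespace: ambassador
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: edge-stack     # assumed Deployment name
      minReplicas: 2         # keep at least two replicas for availability
      maxReplicas: 5
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70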

In the end

Kubernetes technology is the way of the future. Because Edge Stack persists all of its state in Kubernetes, you aren’t required to run a separate database for stateful data alongside your deployment. Here at Ambassador, we recognized the importance of K8s technology, so we chose to build Edge Stack to be Kubernetes-native.

When it comes to choosing an API Gateway, you have lots of options, but I hope this clarifies why we decided to build Edge Stack as a Kubernetes-native API Gateway and why it’s a natural choice for running services in a Kubernetes environment.

Try Edge Stack API Gateway Now