Moving to the Cloud

Exploring the API Gateway to Success

What is an API gateway?

An API gateway is a front door to your applications and systems. It’s on the hot path of every user request, and because of this, it needs to be performant, secure, and easily configurable. The fundamentals of API gateway technology have evolved over the past ten years, and adopting cloud native practices and technologies like continuous delivery, Kubernetes, and HTTP/3 adds new dimensions that need to be supported by your chosen implementation.

Moving to the cloud through the lens of API gateways

This article explores the benefits and challenges of moving to the cloud through the lens of API gateways and highlights the new practices and technologies that you will need to embrace.

At Ambassador Labs, we’ve learned a lot about deploying, operating, and configuring cloud native API gateways over the past five years as our Edge Stack API gateway and CNCF Emissary-ingress projects have seen wide adoption across organizations of every size.

Adopting cloud native: Changes, challenges, and choices

Adopting cloud technologies brings many benefits but also introduces new challenges. This is true regardless of the role in which you work. Architects need to understand the changes imposed by the underlying hardware and learn new infrastructure management patterns. Developers and QA specialists need to explore the opportunities presented by container and cloud technologies and also learn new abstractions for interacting with the underlying infrastructure platforms. And platform engineers need to build and operate a supporting platform to enable developers to code, test, ship, and run applications with speed and safety.

You must establish your goals for moving to the cloud early in the process; ideally, this is the first thing you do. Most successful organizations base their goals on improving some or all of the DORA metrics, popularized by the book Accelerate.

DORA metrics are used by DevOps teams to measure their performance and determine where they fall on the spectrum from “low performers” to “elite performers.” The four metrics are deployment frequency (DF), lead time for changes (LT), mean time to recovery (MTTR), and change failure rate (CFR). You want to maximize your deployment frequency while minimizing the other three metrics.

Gateway to speed: Establishing abstractions, separation of concerns, and self-service

At Ambassador Labs, we have seen a high correlation between deployment frequency and successful adoption of cloud native principles and technologies. The ability to rapidly ship new software to customers, both for feature releases and incident resolution, adds value that is easily understood throughout the organization, from the C-level to product, engineering, and support teams.

The key to this is focusing on providing the correct platform abstractions and embracing a self-service mindset. Take the API gateway use case as an example: there are two key personas involved. Platform engineers want to set appropriate guardrails to minimize incidents and maximize their security posture, while developers want to release services and functionality rapidly and configure API endpoints dynamically.

Cloud native API gateway

A cloud native API gateway enables both of these personas and their associated use cases. With Ambassador Edge Stack, for example, we embraced the widely adopted Kubernetes Resource Model (KRM), which enables all of the API gateway’s functionality to be configured through Custom Resources and applied to a cluster in the same manner as any other Kubernetes configuration, such as via build pipelines or a GitOps continuous delivery process.
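To make this concrete, here is a minimal sketch of a Mapping resource; the field names follow recent Edge Stack/Emissary-ingress versions, and the “quote” Service is a hypothetical example. It can be stored in version control and applied like any other Kubernetes manifest:

    # Route requests whose path starts with /backend/ to the
    # (hypothetical) "quote" Kubernetes Service.
    apiVersion: getambassador.io/v3alpha1
    kind: Mapping
    metadata:
      name: quote-backend
    spec:
      hostname: "*"        # match requests for any host
      prefix: /backend/    # URL path prefix to route on
      service: quote       # name of the upstream Kubernetes Service

Because this is just another Kubernetes resource, it can be rolled out with kubectl apply or reconciled automatically by a GitOps controller such as Argo CD or Flux.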


We’ve gone one step further, though, and designed our Custom Resources with the engineering best practice of separation of concerns for platform engineers and developers in mind.

Platform engineers can configure the core API gateway functionality using resources like Listener, Host, and TLSContext. They can also provide a range of authentication and authorization options (using OIDC, JWT, etc.) and rate limiting using the Filter resources. Independently of this, although appropriately coupled at runtime, developers can launch new services and APIs using the Mapping resource. They can also augment their API endpoints with the required authn/authz policies and rate limiting using the FilterPolicy and RateLimit custom resources.
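As an illustrative sketch of this split (the resource fields follow recent Edge Stack versions and may differ in yours; the hostname and TLS secret are hypothetical), a platform engineer might own the following Listener and Host, while Mappings like the one above remain in developer hands:

    # Owned by the platform team: where and how traffic enters.
    apiVersion: getambassador.io/v3alpha1
    kind: Listener
    metadata:
      name: https-listener
    spec:
      port: 8443             # port the gateway listens on
      protocol: HTTPS
      securityModel: XFP     # derive security from X-Forwarded-Proto
      hostBinding:
        namespace:
          from: ALL          # accept Host resources from all namespaces
    ---
    # Owned by the platform team: the domain and its TLS termination.
    apiVersion: getambassador.io/v3alpha1
    kind: Host
    metadata:
      name: api-example-host
    spec:
      hostname: api.example.com    # hypothetical domain
      tlsSecret:
        name: api-example-tls      # certificate managed by the platform team

Developers launching a new service never need to touch these resources; their Mappings are coupled to the platform team’s guardrails at runtime.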

Gateway to the future: Becoming cloud native is a journey

In addition to adopting a good separation of concerns and a self-service approach, there are a number of other factors to consider when adopting a cloud native API gateway. The following articles in this series explore each of these considerations in more detail:


  • Service discovery: Monoliths, microservices, and meshes
  • Load balancing methodologies
  • Load balancing in the cloud
  • Adding support for modern protocols like HTTP/3

Service discovery: API gateway and/or service mesh

When adopting a cloud native approach to service connectivity and communication, a recurring question is which technology should handle how microservice-based applications interact with each other. That is, “should I start with an API gateway or use a service mesh?”


Both technologies are ultimately concerned with the same outcome: the end user’s experience of a successful API call within an environment. They can be seen as two sides of the same coin that differ in how they operate, so it is essential to understand the underlying differences and similarities between them when designing software communication.


In this article, you will learn about service discovery in microservices and also discover when you should use an API gateway and when you should use a service mesh.
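For context, in Kubernetes both technologies typically build on the same discovery primitive: a Service resource that gives a stable DNS name to a set of pods. Here is a minimal sketch, with a hypothetical “quote” workload:

    apiVersion: v1
    kind: Service
    metadata:
      name: quote
    spec:
      selector:
        app: quote          # pods labeled app=quote back this Service
      ports:
        - port: 80          # port that callers connect to
          targetPort: 8080  # port the container listens on

An API gateway resolves this Service to route external (north-south) traffic into the cluster, while a service mesh manages (east-west) calls to the same address between services inside the cluster.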

Kubernetes load balancing methodologies

Load balancing is the process of efficiently distributing network traffic among multiple backend services and is a critical strategy for maximizing scalability and availability. In Kubernetes, there are various choices for load balancing external traffic to pods, each with different tradeoffs.
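As a sketch of the most common starting point, a Service of type LoadBalancer asks the cloud provider to provision an external load balancer in front of the pods; NodePort and ClusterIP (behind a separate proxy) are the main built-in alternatives. The names below are hypothetical:

    apiVersion: v1
    kind: Service
    metadata:
      name: gateway-svc       # hypothetical gateway Service
    spec:
      type: LoadBalancer      # cloud provider provisions an external LB
      selector:
        app: edge-stack       # pods that receive the balanced traffic
      ports:
        - port: 443           # externally exposed port
          targetPort: 8443    # container port on the gateway pods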


This article offers a tour of various load balancing strategies and implementations, with the goal of helping you choose how to get started and how to evolve your approach as your cloud adoption grows.

Cloudy with a chance of load balancing: AWS EKS and API gateways

We’ve helped thousands of developers get their Kubernetes ingress controllers up and running across different cloud providers. AWS users have two options for running Kubernetes: they can deploy and self-manage Kubernetes on EC2 instances, or they can use Amazon’s managed offering, Amazon Elastic Kubernetes Service (EKS).

If you are using EKS Anywhere, the recommended ingress and API gateway is Emissary-ingress. Overall, AWS provides a powerful, customizable platform on which to run Kubernetes. However, the multitude of options for customization often leads to confusion among new users and makes it difficult for them to know when and where to optimize for their particular use case.


After working with many customers to configure their ingress controller successfully on AWS EC2 and Amazon EKS, we found a common set of questions that we were asking users. We took those questions and converted them into a series of key decisions that we’ve presented in this article.
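As one example of those decisions, the kind of AWS load balancer provisioned for a Service can be selected with annotations. The sketch below uses the in-tree cloud controller’s annotation to request a Network Load Balancer; the annotation differs if you use the AWS Load Balancer Controller, so treat this as illustrative:

    apiVersion: v1
    kind: Service
    metadata:
      name: gateway-svc
      annotations:
        # Request an NLB instead of the default Classic Load Balancer.
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    spec:
      type: LoadBalancer
      selector:
        app: edge-stack
      ports:
        - port: 443
          targetPort: 8443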

Embracing modern protocols: To HTTP/3 and beyond

With HTTP/3 supported by more than 70% of browsers (including Chrome, Firefox, and Edge), and with the official spec finalized in June 2022, organizations are now beginning widespread rollouts of the protocol to gain performance and reliability benefits. As leaders in the implementation of the HTTP/3 spec, the Google and Envoy Proxy teams have been working on this rollout for quite some time, and they have learned many lessons along the way.


HTTP/3 is especially beneficial for users with lossy networks, such as cell/mobile-based apps, IoT devices, or apps serving emerging markets. The increased resilience through rapid reconnection and the reduced latency from the new protocol will benefit all types of Internet traffic, such as typical web browsing/search, e-commerce and finance, or the use of interactive web-based applications, all of which can encounter packet loss of 2%+ on the underlying networks.


This article provides details of the HTTP/3 protocol and highlights the benefits and challenges of adding support for it in your applications. We have also conducted preliminary benchmark tests using the Google Chrome web browser and Ambassador Edge Stack 3.0 to study HTTP/3 and compare it against previous versions of the HTTP protocol.
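To give a flavor of the configuration involved, in Edge Stack and Emissary-ingress 3.x HTTP/3 is enabled with a Listener whose protocol stack ends in UDP, since QUIC runs over UDP rather than TCP. The sketch below follows the 3.x documentation; check the docs for your version:

    apiVersion: getambassador.io/v3alpha1
    kind: Listener
    metadata:
      name: listener-8443-http3
    spec:
      port: 8443
      protocolStack:      # QUIC (HTTP/3) runs over UDP, not TCP
        - TLS
        - HTTP
        - UDP
      securityModel: XFP
      hostBinding:
        namespace:
          from: ALL

The gateway’s Service must also expose UDP port 443 alongside TCP so that browsers can discover HTTP/3 support through the Alt-Svc response header.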

Adopting a cloud native API gateway: Focus on speed, safety, and self-service

Choosing to become cloud native is a big decision, and there is much to consider from both an organizational and a technical perspective. As this series has shown, the fundamentals of API gateway technology have evolved over the past ten years, and cloud native practices and technologies like continuous delivery, Kubernetes, and HTTP/3 add new dimensions that your chosen implementation needs to support.


We recommend you focus on speed, safety, and self-service. You want developers to be able to move fast and out-innovate your competitors. You also want platform engineers to provide guardrails and security for your systems. And critically, you don’t want your teams drowning in IT service desk requests and ticket handoffs. Self-service is the only way to move with speed and safety.
