Choosing an API Gateway: Defining Kubernetes-Native
Community Corner with Cindy Mullins
The latest posts and insights about Ambassador Labs - our products, our ecosystem, as well as voices from across our community.
Research shows that nearly 90% of developers use APIs in some way. However, using live APIs during development can pose risks and slow down your workflow. That's where API mocking becomes invaluable. What is API mocking? API mocking simulates the behavior of real APIs, creating a controlled environment for testing and development.
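To make the idea concrete, here is a minimal sketch of API mocking in Python using the standard library's `unittest.mock`. The API URL, the `fetch_user` helper, and the response fields are hypothetical, purely for illustration; the point is that the code under test never touches the network:

```python
import json
from unittest.mock import Mock
from urllib import request

def fetch_user(user_id):
    """Hypothetical call to a live user API; returns the parsed JSON body."""
    with request.urlopen(f"https://api.example.com/users/{user_id}") as resp:
        return json.load(resp)

def user_display_name(user_id, fetch=fetch_user):
    """Code under test: formats a display name from an API response."""
    user = fetch(user_id)
    return f"{user['first_name']} {user['last_name']}"

# The mock stands in for the live API: no network, fully controlled responses.
mock_fetch = Mock(return_value={"first_name": "Ada", "last_name": "Lovelace"})
print(user_display_name(42, fetch=mock_fetch))  # -> Ada Lovelace
mock_fetch.assert_called_once_with(42)
```

Injecting the fetch function as a parameter keeps the example simple; real projects often use `unittest.mock.patch` or a dedicated mock server instead.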
I'm pretty sure we all witnessed, if not lived through the disruption of, one very expensive bad code release and IT outage this summer. As someone who spent decades in software development, I can tell you with confidence that although the actual costs of that mistake were upwards of one billion dollars, the emotional toll on the developers involved was probably greater. It was a lose-lose situation for everyone involved.

I know developers are pushed harder than ever to code faster, release more often, and, "oh, by the way," test their code. Meanwhile, infrastructures grow more complex by the minute as regions, offices, networks, apps, services, machines, and containers multiply. As a former developer, I understand the pain of building, testing, and deploying your own code (which, by the way, is more complicated than ever in today's cloud-native, microservices-driven world). Now, as a CEO, I also understand the pain, or at least the very real fear of the pain, of a potential billion-dollar mistake.

Regardless of whether you relate to either of those positions, on a less sensational note, I think we can all appreciate the role of healthy growth in a business's success today and the critical part that software plays. Take APIs, for example: a subset of all the things developers might be building, but, importantly, an increasingly vital necessity and one of the biggest drivers of volume, urgency, and security. Application Programming Interfaces; it's an awful acronym, I'm sorry. Despite that, it's a heck of a powerful thing. These little pieces of code are the language that enables all of our devices and applications to connect and share information. I'm certainly not the first to say it, but APIs are the essential building blocks of the modern world.
When building applications with APIs, choosing the right architecture for the job is key. APIs can be defined with SOAP, GraphQL, gRPC, and more; in fact, any interface between two pieces of code is an API. After all, that's what the name says: application programming interfaces. Here, we'll examine why RESTful APIs are often the first and best choice: they offer nice, neat GET and POST endpoints with developer-friendly URLs, and REST still holds about 90% of the market (our friends at Postman track those stats in a great annual report). But there are also times when you may want to consider a different API protocol, like gRPC, for your application.
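For contrast, where REST exposes URL-addressable resources, a gRPC API is defined up front in a `.proto` contract. A minimal, hypothetical proto3 service definition (the service and field names here are illustrative, not from any real API) might look like:

```proto
syntax = "proto3";

package users.v1;

// A hypothetical user-lookup service; the REST equivalent
// would be something like GET /users/{id}.
service UserService {
  rpc GetUser(GetUserRequest) returns (GetUserReply);
}

message GetUserRequest {
  int64 id = 1;
}

message GetUserReply {
  int64 id = 1;
  string display_name = 2;
}
```

The typed contract and generated client/server stubs are a big part of gRPC's appeal for service-to-service calls, at the cost of REST's browser-friendly simplicity.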
What is load balancing in Kubernetes? Load balancing is the process of efficiently distributing network traffic among multiple backend services, and it is a critical strategy for maximizing scalability and availability. There are a variety of choices for load balancing external Kubernetes traffic to Pods, each with different tradeoffs. Selecting a load balancing algorithm should not be undertaken lightly, especially if you are using application layer (L7) aware protocols like gRPC. It's all too easy to select an algorithm that leaves a single web server running hot or produces some other form of unbalanced load distribution.
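To illustrate why the algorithm matters, here is a minimal Python sketch (not production code; backend names are made up) of two classic strategies, round-robin and least-connections. With long-lived L7 connections like gRPC, picking a backend only at connection time can still leave one Pod running hot, which is why connection-aware strategies exist:

```python
import itertools

class RoundRobin:
    """Cycle through backends in order, ignoring current load."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        return next(self._cycle)

class LeastConnections:
    """Pick the backend with the fewest active connections."""
    def __init__(self, backends):
        self.active = {b: 0 for b in backends}

    def pick(self):
        backend = min(self.active, key=self.active.get)
        self.active[backend] += 1
        return backend

    def release(self, backend):
        self.active[backend] -= 1

rr = RoundRobin(["pod-a", "pod-b", "pod-c"])
print([rr.pick() for _ in range(4)])  # -> ['pod-a', 'pod-b', 'pod-c', 'pod-a']

lc = LeastConnections(["pod-a", "pod-b"])
first = lc.pick()   # 'pod-a' (both idle; ties go to the first backend)
second = lc.pick()  # 'pod-b' (pod-a now has one active connection)
```

Round-robin is oblivious to how long each connection lives; least-connections tracks load per backend, which maps more closely to what L7-aware proxies do for gRPC traffic.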
A modern API gateway like Edge Stack Kubernetes API gateway empowers organizations with a cost-effective solution to harness the full potential of their microservices architecture by streamlining application development and management in a Kubernetes ecosystem. One area where Edge Stack really shines is in continuous delivery testing on many levels. For example, you can deploy a new service or an upgraded version of a service into production and hide, or “cloak,” this service from end-users via the gateway. This effectively separates the deployment and release process, allowing you to run acceptance and nonfunctional tests on the cloaked service, such as load tests and security analysis. You can also perform canary testing by allowing a small amount of user traffic to flow to this new deployment. There is also potential to use a gateway to “shadow” (duplicate) real production traffic to the new version of the service and hide the responses from the user, and “shift” traffic around to focus load on a specific cluster of your system. Finally, you can use an API gateway to implement and control chaos testing. These techniques allow you to learn how your service will perform under realistic use cases, load, and failure scenarios, which are critical for continuous delivery testing.
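As a sketch of what canary and shadow routing look like in practice, here are two hypothetical Edge Stack `Mapping` resources; the service names, prefix, and percentages are illustrative, and you should check the Edge Stack documentation for the exact fields your version supports:

```yaml
# Canary: send roughly 10% of /backend/ traffic to the new version.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-canary
spec:
  hostname: "*"
  prefix: /backend/
  service: quote-v2   # hypothetical new deployment
  weight: 10
---
# Shadow: duplicate real traffic to the new version and discard its responses.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-shadow
spec:
  hostname: "*"
  prefix: /backend/
  service: quote-v2
  shadow: true
```

Because both behaviors are declared at the gateway, the new deployment can receive realistic traffic while remaining invisible to end users until you raise the canary weight.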