
Blog

The latest posts and insights about Ambassador Labs: our products and ecosystem, as well as voices from across our community.

Rate Limiting, API Gateway

Part 2: Rate Limiting for API gateways

In the first article of this Rate Limiting series, I introduced the motivations for rate limiting and discussed several implementation options (depending on whether or not you own both sides of the communication) and the associated tradeoffs. This article dives a little deeper into the need for rate limiting with API gateways. Why rate limiting with an API gateway? In the first article, I discussed options for where to implement rate limiting: the source, the sink, or middleware (literally, a service in the middle of the source and sink).
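
As a rough illustration of the middleware placement the teaser mentions, here is a minimal Go sketch of an HTTP wrapper that enforces a per-client limit with golang.org/x/time/rate. The limits, the client key, and the function names are illustrative assumptions, not details taken from the article.

```go
// Sketch of the "middleware" option: rate limiting applied in front of a
// backend handler, keyed by client address. Limits are illustrative only.
package main

import (
	"log"
	"net/http"
	"sync"

	"golang.org/x/time/rate"
)

var (
	mu       sync.Mutex
	limiters = map[string]*rate.Limiter{}
)

// limiterFor returns (creating if needed) a token bucket for one client:
// 5 requests per second sustained, bursts of up to 10.
func limiterFor(client string) *rate.Limiter {
	mu.Lock()
	defer mu.Unlock()
	l, ok := limiters[client]
	if !ok {
		l = rate.NewLimiter(5, 10)
		limiters[client] = l
	}
	return l
}

// rateLimit wraps any handler; requests over the limit get HTTP 429.
func rateLimit(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !limiterFor(r.RemoteAddr).Allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	backend := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", rateLimit(backend)))
}
```

In practice an API gateway provides this wrapping for you; the sketch only shows where the limit sits relative to the source and the sink.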

May 8, 2018 | 9 min read

Rate Limiting

Part 1: Rate Limiting: A Useful Tool with Distributed Systems

Within the computing domain, rate limiting is used to control the rate of operations initiated or consumed, or of traffic sent or received. If you have been developing software for more than a year, you have most likely bumped into this concept. However, as with many architectural challenges, there are usually more tradeoffs to consider than may first appear. This article outlines some of the implementations, benefits, and challenges of rate limiting in modern distributed applications. Why implement rate limiting? You implement rate limiting primarily for one of three reasons: to prevent denial of service (intentional or otherwise) through resource exhaustion, to limit the impact (or potential) of cascading failure, or to restrict or meter resource usage.
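
To make the concept concrete, below is a minimal Go sketch of a token bucket, one common way to implement rate limiting. The capacity and refill rate are placeholder values, and the article itself does not prescribe this exact code.

```go
// Minimal token-bucket rate limiter: tokens accumulate at a fixed rate up to a
// cap; each operation spends one token or is rejected. Values are illustrative.
package main

import (
	"fmt"
	"time"
)

type TokenBucket struct {
	capacity   float64   // maximum number of tokens the bucket can hold
	tokens     float64   // tokens currently available
	refillRate float64   // tokens added per second
	lastRefill time.Time // last time the bucket was topped up
}

func NewTokenBucket(capacity, refillRate float64) *TokenBucket {
	return &TokenBucket{capacity: capacity, tokens: capacity, refillRate: refillRate, lastRefill: time.Now()}
}

// Allow refills the bucket based on elapsed time, then tries to spend a token.
func (b *TokenBucket) Allow() bool {
	now := time.Now()
	b.tokens += now.Sub(b.lastRefill).Seconds() * b.refillRate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.lastRefill = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	// Roughly 2 requests per second, with bursts of up to 5.
	bucket := NewTokenBucket(5, 2)
	for i := 0; i < 8; i++ {
		fmt.Printf("request %d allowed: %v\n", i, bucket.Allow())
	}
}
```

A production limiter would also need to be safe for concurrent use and to evict idle buckets; this sketch only shows the core accounting.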

April 26, 2018 | 10 min read

Microservices

9 Questions to Ask When (Continuously) Deploying Microservices

Richard Li
Modern applications are systems, not just code. These applications are built from many different parts. For example, a modern application might consist of a handful of microservices (containing business logic) that use Elasticsearch (for search), Redis (for caching), and a PostgreSQL instance (for data storage). In this applications-are-systems world, existing deployment systems start to show their age. A previously simple task, such as installing your application for local development, now becomes a long wiki document with dozens of steps to set up and configure dozens of different components.

March 27, 2018 | 9 min read

Kubernetes

3 Useful Kubernetes Developer Tools

We have multiple clusters and namespaces: some for production, some for load testing, and some for development. In the course of our day-to-day development, we’ve found a number of useful tools that improve our productivity. (ProTip: a productivity killer is typing kubectl delete ns foo … in the wrong cluster. The first two tools below help address that problem.) The first is kubectx/kubens, which lets you easily switch between clusters and namespaces.

March 12, 2018 | 1 min read

Microservices

NGINX, HAProxy, and the Evolution of L7 Proxies and Microservices

In a microservice architecture, services communicate with each other through L7 protocols such as gRPC and HTTP. Since the network is not reliable (and services can go down!), managing L7 communications is critical for reliability and scale.
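
Because the point here is that the network is unreliable, a typical first mitigation at L7 is a per-request timeout plus a bounded retry. The following Go sketch illustrates the idea; the endpoint, timeout, retry count, and backoff are invented for illustration and are not taken from the article.

```go
// Illustrative handling of an unreliable L7 dependency: bound each attempt
// with a client timeout and retry a small, fixed number of times with backoff.
package main

import (
	"fmt"
	"net/http"
	"time"
)

var client = &http.Client{Timeout: 2 * time.Second} // per-attempt timeout

func fetchWithRetry(url string, attempts int) (*http.Response, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		if err == nil {
			resp.Body.Close()
			lastErr = fmt.Errorf("upstream returned %s", resp.Status)
		} else {
			lastErr = err
		}
		time.Sleep(time.Duration(i+1) * 200 * time.Millisecond) // simple linear backoff
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	// Hypothetical downstream service; in a real system this would be another microservice.
	resp, err := fetchWithRetry("http://payments.internal/healthz", 3)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

Proxies such as NGINX, HAProxy, and Envoy can apply timeouts and retries like these at the infrastructure layer instead of in application code, which is part of the evolution this article traces.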

February 28, 2018 | 4 min read

Kubernetes API Gateway

Kubernetes Ingress 101: NodePort, Load Balancers, and Ingress Controllers

This article was updated in December 2021. It introduces the three general strategies for ingress in Kubernetes and the tradeoffs of each approach. I’ll then explore some of the more sophisticated requirements of an ingress strategy. Finally, I’ll give some guidelines on how to pick your Kubernetes ingress strategy. What is Kubernetes ingress?

February 28, 2018 | 12 min read