
Blog

The latest posts and insights about Ambassador Labs: our products and our ecosystem, as well as voices from across our community.

Kubernetes

Distributed Tracing with Java “MicroDonuts”, Kubernetes and the Edge Stack API Gateway

Distributed tracing is increasingly seen as an essential component for observing microservice-based applications. As a result, many modern microservice language frameworks provide support for tracing implementations such as OpenZipkin, Jaeger, OpenCensus, and LightStep xPM. Google was one of the first organisations to talk about its use of distributed tracing in the paper describing its Dapper implementation, and one of the requirements it identified was the need for ubiquitous deployment of the tracing system: "Ubiquity is important since the usefulness of a tracing infrastructure can be severely impacted if even small parts of the system are not being monitored." As I've written about previously, many engineers beginning a greenfield project or exploring a migration to a microservice architecture start by deploying a front proxy or API gateway at the edge to route traffic dynamically to independent services. Since every inbound request flows through this component, an edge gateway naturally needs to support distributed tracing, ideally using a well-established open protocol.
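To make this concrete, here is a minimal sketch of wiring an OpenTracing-compatible Zipkin tracer into a Java service, in the spirit of the MicroDonuts tutorial. It assumes the Brave and brave-opentracing libraries are on the classpath; the service name, span name, and collector URL are illustrative, not taken from the article.

```java
// Minimal sketch: an OpenTracing Tracer backed by Brave/Zipkin.
// Assumptions: brave, brave-opentracing, and zipkin-sender-okhttp3 dependencies;
// a Zipkin collector reachable at http://zipkin:9411 (e.g. in the same cluster).
import brave.Tracing;
import brave.opentracing.BraveTracer;
import io.opentracing.Span;
import io.opentracing.Tracer;
import zipkin2.reporter.AsyncReporter;
import zipkin2.reporter.okhttp3.OkHttpSender;

public class TracingSetup {
    public static Tracer zipkinTracer() {
        // Asynchronously ship finished spans to the Zipkin collector.
        OkHttpSender sender = OkHttpSender.create("http://zipkin:9411/api/v2/spans");
        Tracing braveTracing = Tracing.newBuilder()
                .localServiceName("donut-service")            // hypothetical service name
                .spanReporter(AsyncReporter.create(sender))
                .build();
        return BraveTracer.create(braveTracing);              // OpenTracing bridge over Brave
    }

    public static void main(String[] args) {
        Tracer tracer = zipkinTracer();
        Span span = tracer.buildSpan("bake-donut").start();   // trace one unit of work
        span.finish();                                        // reported to Zipkin on finish
    }
}
```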

August 16, 2018 | 14 min read

Envoy, Edge Stack API Gateway

Envoy vs NGINX vs HAProxy: Why the Edge Stack API Gateway chose Envoy

NGINX, HAProxy, and Envoy are all battle-tested L4 and L7 proxies. So why did we choose Envoy as the core proxy when developing the Edge Stack API Gateway for applications deployed into Kubernetes? It's an L7 world. In today's cloud-centric world, business logic is commonly distributed into ephemeral microservices, and these services need to communicate with each other over the network. The core network protocols used by these services are so-called "Layer 7" protocols, e.g., HTTP, HTTP/2, gRPC, Kafka, and MongoDB. These protocols build on top of transport-layer protocols such as TCP. Managing and observing L7 traffic is crucial to any cloud application, since a large part of application semantics and resiliency depends on it.

June 21, 2018 | 9 min read

Rate Limiting, Edge Stack API Gateway

Part 3: Implementing a Java Rate Limiting Service for Edge Stack API Gateway

The rate limiting functionality offered by Edge Stack, the Kubernetes API Gateway, is fully customizable, allowing any service that implements a gRPC endpoint to decide whether a request should be limited or not. In this article, which builds on part 1 and part 2, you will learn how to build and deploy a simple Java-based rate limiting service for Edge Stack and see how rate limiting works in practice. Getting set up: the Docker Java Shop. In my previous tutorial, "Deploying Java Apps with Kubernetes and the Edge Stack API Gateway," I added the open source Edge Stack API gateway to an existing series of Java (Dropwizard and Spring Boot) based services deployed into Kubernetes. If you haven't seen this, I recommend working through that tutorial and the others in the series to familiarize yourself with the fundamentals. The rest of this article assumes you're comfortable building Java-based microservices and deploying them to Kubernetes, and that you have all of the prerequisites installed (I'm using Docker for Mac Edge, with built-in Kubernetes support, but the principles should be similar if you are using minikube or a remote cluster).
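As a rough illustration of the gRPC contract described above, the sketch below implements a trivial decision endpoint. It assumes Java stubs generated from Envoy's ratelimit.proto (a RateLimitServiceGrpc base class and RateLimitRequest/RateLimitResponse messages); the port and the toy allow-the-first-100-requests policy are illustrative, not the tutorial's actual service.

```java
// Minimal sketch of a gRPC rate limit decision service for Envoy/Edge Stack.
// Assumption: message and stub classes generated from the ratelimit proto
// (service RateLimitService, rpc ShouldRateLimit) for your Envoy version.
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.stub.StreamObserver;
import java.util.concurrent.atomic.AtomicInteger;

public class RateLimitServer extends RateLimitServiceGrpc.RateLimitServiceImplBase {

    private final AtomicInteger counter = new AtomicInteger();

    @Override
    public void shouldRateLimit(RateLimitRequest request,
                                StreamObserver<RateLimitResponse> responseObserver) {
        // Toy policy: allow the first 100 requests, then start limiting.
        RateLimitResponse.Code code = counter.incrementAndGet() <= 100
                ? RateLimitResponse.Code.OK
                : RateLimitResponse.Code.OVER_LIMIT;
        responseObserver.onNext(
                RateLimitResponse.newBuilder().setOverallCode(code).build());
        responseObserver.onCompleted();
    }

    public static void main(String[] args) throws Exception {
        // Port 50051 is an arbitrary choice; Edge Stack is pointed at this endpoint.
        Server server = ServerBuilder.forPort(50051)
                .addService(new RateLimitServer())
                .build();
        server.start();
        server.awaitTermination();
    }
}
```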

May 17, 2018 | 14 min read

Rate Limiting, Edge Stack API Gateway

Part 2: Rate Limiting for API gateways

In the first article of this rate limiting series, I introduced the motivations for rate limiting and discussed several implementation options (depending on whether you own both sides of the communication or not) along with the associated tradeoffs. This article dives a little deeper into the need for rate limiting with API gateways. Why rate limiting with an API gateway? In the first article, I discussed the options for where to implement rate limiting: the source, the sink, or middleware (literally, a service in the middle between the source and the sink).
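To illustrate the "source" option from that taxonomy, here is a minimal sketch of a client that throttles itself before sending requests, rather than relying on the sink or a gateway to push back. Guava's RateLimiter is used purely as an example; the rate and the request loop are hypothetical.

```java
// Minimal sketch: source-side rate limiting, where the caller throttles itself.
// Assumption: Guava on the classpath; 5 requests/second is an arbitrary budget.
import com.google.common.util.concurrent.RateLimiter;

public class ThrottledClient {
    private static final RateLimiter LIMITER = RateLimiter.create(5.0); // permits/second

    public static void main(String[] args) {
        for (int i = 0; i < 20; i++) {
            LIMITER.acquire();                       // blocks until a permit is available
            System.out.println("sending request " + i);
        }
    }
}
```

Middleware-based limiting works the same way conceptually, but the decision is made in a shared component (such as an API gateway) instead of in every caller.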

May 8, 2018 | 9 min read

Rate Limiting

Part 1: Rate Limiting: A Useful Tool with Distributed Systems

Within the computing domain, rate limiting is used to control the rate at which operations are initiated or consumed, or at which traffic is sent or received. If you have been developing software for more than a year, you have most likely bumped into this concept. However, as with many architectural challenges, there are usually more tradeoffs to consider than may first appear. This article outlines some of the implementations, benefits, and challenges of rate limiting in modern distributed applications. Why implement rate limiting? You implement rate limiting primarily for one of three reasons: to prevent denial of service (intentional or otherwise) through resource exhaustion, to limit the impact (or potential) of cascading failure, or to restrict or meter resource usage.
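One common way to implement this kind of control is a token bucket: each operation consumes a token, and tokens are replenished at a fixed rate up to a maximum capacity, which allows short bursts while capping the sustained rate. The sketch below is a minimal, illustrative Java version, not code from the article; the capacity and refill rate are assumptions.

```java
// Minimal token-bucket rate limiter sketch (illustrative only).
public class TokenBucket {
    private final long capacity;         // maximum burst size
    private final double refillPerNano;  // tokens added per nanosecond
    private double tokens;
    private long lastRefill;

    public TokenBucket(long capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = tokensPerSecond / 1_000_000_000.0;
        this.tokens = capacity;
        this.lastRefill = System.nanoTime();
    }

    /** Returns true and consumes a token if the operation is allowed. */
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        // Refill based on elapsed time, never exceeding capacity.
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;   // caller should reject, queue, or back off
    }
}
```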

April 26, 2018 | 10 min read

Microservices

9 Questions to Ask When (Continuously) Deploying Microservices

Richard Li

Modern applications are systems, not just code. These applications are built from many different parts. For example, a modern application might consist of a handful of microservices (containing business logic) that use Elasticsearch (for search), Redis (for caching), and a PostgreSQL instance (for data storage). In this applications-are-systems world, existing deployment systems start to show their age. A previously simple task, such as installing your application for local development, now becomes a long wiki document with dozens of steps to set up and configure all of the different components.

March 27, 2018 | 9 min read