Envoy Proxy 101: What it is and why it matters

Envoy Proxy is a modern, high-performance, small-footprint edge and service proxy. Envoy is most comparable to software load balancers such as NGINX and HAProxy. It is built to handle the complex networking challenges of modern microservice-based architectures.


Originally written and deployed at Lyft, Envoy now has a vibrant contributor base and is an official Cloud Native Computing Foundation (CNCF) project.

Background

As organizations adopt microservices, a state-of-the-art L7 proxy becomes a crucial component of deploying and managing them. An L7 proxy provides observability, resilience, and transparent routing in front of your services. As a Layer 7 proxy, Envoy can be deployed as a sidecar alongside application services. It abstracts away networking complexity and provides a consistent, high-performance data plane. Its modular architecture and extensive feature set make it a powerful tool for building resilient and observable distributed systems.

Introduction to modern network load balancing and proxying

Matt Klein - An excellent primer on load balancing in today's world, covering essential capabilities and why they're important.

Read more

Lyft's Envoy: From Monolith to Service Mesh

Matt Klein - Watch Matt Klein cover Lyft's architectural migration from monolith to a fully distributed service mesh, the origins of Envoy, a high level architectural overview, and future directions.

Watch now

API versioning and evolution with proxies

Cindy Sridharan - An L7 proxy is a powerful tool to help you iterate on your APIs while minimizing user impact. This article discusses one such use case.

Read more

Using API Gateways to Facilitate Your Transition from Monolith to Microservices

Daniel Bryant - An API Gateway like Edge Stack is a proxy deployed at your edge, and is frequently used to facilitate a migration from monolith to microservices.

Key Features and Benefits

  1. Dynamic service discovery: Envoy integrates with various service discovery systems, enabling automatic backend service discovery and routing updates.
  2. Advanced load balancing: Envoy supports multiple load balancing algorithms, including round-robin, least-request, and ring hash, which ensure the efficient distribution of traffic across service instances.
  3. Circuit breaking and fault tolerance: Envoy can detect and isolate unhealthy service instances, preventing cascading failures and ensuring system resilience (see the cluster sketch after this list).
  4. Observability: Envoy generates detailed metrics, logs, and distributed traces, providing deep visibility into the system's behavior and performance.
  5. Extensibility: Envoy's filter chain mechanism allows custom filters to be plugged in at various points in the request processing pipeline, enabling advanced traffic control and modification.
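
To make features 2 and 3 concrete, the sketch below shows a single upstream cluster definition as it might appear under static_resources in an Envoy bootstrap. The cluster name, hostname, and threshold values are illustrative placeholders rather than recommendations:

```yaml
# Illustrative cluster: least-request load balancing plus circuit breaking
# and passive health checking (outlier detection).
clusters:
- name: inventory                  # hypothetical upstream service
  type: STRICT_DNS
  lb_policy: LEAST_REQUEST
  circuit_breakers:
    thresholds:
    - max_connections: 1024        # stop opening new connections past this
      max_pending_requests: 256
      max_requests: 1024
  outlier_detection:               # eject hosts that keep returning 5xx
    consecutive_5xx: 5
    interval: 10s
    base_ejection_time: 30s
  load_assignment:
    cluster_name: inventory
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: inventory.internal, port_value: 8080 }
```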

Envoy has a highly sophisticated configuration system. For basic setups, it supports static configuration via YAML files; for more advanced, dynamic setups, it exposes a set of gRPC-based APIs (the xDS APIs). The tutorials below walk through the basics of configuring it.
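
For orientation, here is a minimal static bootstrap sketch: one HTTP listener whose route table sends every request to a single upstream cluster. The names, hostnames, and ports are placeholders, and Envoy would be started with something like envoy -c envoy.yaml:

```yaml
# Minimal static bootstrap: one HTTP listener, one route, one upstream cluster.
static_resources:
  listeners:
  - name: listener_http
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: backend_service }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: backend_service           # hypothetical upstream service
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: backend_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: backend.internal, port_value: 8080 }
```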

Tutorials on Using Envoy

Getting started with Envoy Proxy for microservices resilience

A basic introduction to using the Envoy Proxy and configuring it.

Read more

Deploying Envoy with a Python Flask webapp and Kubernetes

Deploy a real application using Kubernetes, Postgres, Flask, and Envoy.

Read more

Deploying Envoy as an API Gateway for Microservices

Learn how you can deploy Envoy as an edge service in Kubernetes.

Read more

Comparing Envoy, NGINX, and HAProxy: Key Features and Capabilities

Load Balancing

  • Envoy: Supports advanced load balancing algorithms, including round-robin, least-request, ring hash, and more.
  • NGINX: Offers load balancing features, including round-robin, least connections, and IP hash.
  • HAProxy: Provides advanced load balancing capabilities, supporting algorithms like round-robin, static-rr, leastconn, and more.

Dynamic Configuration

  • Envoy: Designed for dynamic configuration through APIs, allowing real-time updates without restarts (see the bootstrap sketch after this list).
  • NGINX: Requires configuration file reloads for most changes, limiting dynamic adaptability.
  • HAProxy: Supports a runtime API for some configuration changes, but it is not as extensive as Envoy's dynamic configuration.
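
As a sketch of the Envoy point above: a bootstrap can delegate listener and cluster configuration to a management server over Envoy's gRPC-based xDS APIs, here via ADS (aggregated discovery). The node identity and the xds-server.internal address are assumptions for illustration:

```yaml
# Bootstrap sketch: listeners and clusters come from a management server over
# ADS (aggregated xDS) instead of being written statically.
node:
  id: sidecar-checkout-1           # hypothetical node identity reported to the server
  cluster: checkout
dynamic_resources:
  ads_config:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
    - envoy_grpc: { cluster_name: xds_cluster }
  lds_config: { ads: {}, resource_api_version: V3 }
  cds_config: { ads: {}, resource_api_version: V3 }
static_resources:
  clusters:
  - name: xds_cluster              # the management server itself must be reachable statically
    type: STRICT_DNS
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}   # xDS uses gRPC, which requires HTTP/2
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: xds-server.internal, port_value: 18000 }
```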

Protocol Support

  • Envoy: Extensive support for modern protocols, including HTTP/2, gRPC, and WebSocket.
  • NGINX: Supports HTTP/2 and WebSocket, but gRPC support requires additional modules.
  • HAProxy: Supports HTTP/2 and WebSocket, but gRPC support is limited.

Observability

  • Envoy: Built-in support for detailed metrics, logging, and distributed tracing (see the configuration fragments after this list).
  • NGINX: Provides basic metrics and logging, but advanced observability requires additional modules or tools.
  • HAProxy: Offers metrics and logging capabilities, but they are less extensive than Envoy's built-in observability features.
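
Two illustrative fragments for the Envoy point above: the admin block is valid on its own in a bootstrap, while the access_log block belongs inside an http_connection_manager. Addresses and ports are placeholders:

```yaml
# Fragment 1 - bootstrap admin block: exposes /stats, /stats/prometheus,
# /clusters, and /config_dump for metrics and debugging.
admin:
  address:
    socket_address: { address: 127.0.0.1, port_value: 9901 }

# Fragment 2 - inside an http_connection_manager: emit one access-log line
# per request to stdout, where container platforms can collect it.
access_log:
- name: envoy.access_loggers.stdout
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
```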

Extensibility

  • Envoy: Highly extensible through a filter chain mechanism, allowing custom processing at various stages (see the filter-chain fragment after this list).
  • NGINX: Supports modules and Lua scripting for extensibility but is less flexible than Envoy's filter chain.
  • HAProxy: Provides some extensibility through Lua scripting and plugins, but it is not as comprehensive as Envoy's.
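
A sketch of Envoy's filter chain for the point above: inside an http_connection_manager, requests flow through http_filters in order, ending at the router filter that forwards them upstream. Here a local rate limit filter is inserted ahead of the router; the token-bucket numbers are placeholders:

```yaml
# Fragment of an http_connection_manager: requests pass through each filter
# in order; the router filter comes last and forwards the request upstream.
http_filters:
- name: envoy.filters.http.local_ratelimit   # example extra filter ahead of the router
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
    stat_prefix: http_local_rate_limit
    token_bucket:                  # placeholder limits: 100 requests per second
      max_tokens: 100
      tokens_per_fill: 100
      fill_interval: 1s
    filter_enabled:
      default_value: { numerator: 100, denominator: HUNDRED }
    filter_enforced:
      default_value: { numerator: 100, denominator: HUNDRED }
- name: envoy.filters.http.router
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```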

Service Mesh Integration

  • Envoy: Widely used as the data plane in service mesh solutions like Istio, Consul Connect, and AWS App Mesh.
  • NGINX: Can be used in service mesh architectures, but not as commonly as Envoy.
  • HAProxy: Not as widely used in service mesh solutions as Envoy.

NGINX and HAProxy are powerful proxies with their own strengths, but Envoy's dynamic configuration, modern protocol support, observability features, and extensibility make it particularly well-suited for cloud-native applications and service mesh architectures.

Service Mesh Integration

A service mesh is a transparent layer that adds resilience, observability, and security to your service-to-service communication. Example service meshes include Istio and Linkerd. Istio is closely associated with Envoy because it relies on Envoy for the actual Layer 7 traffic management: Istio is a control plane for a fleet of Envoy proxies deployed next to your microservices.

Service Mesh

What is a service mesh and do I need one when developing microservices?

Daniel Bryant - This talk from MicroXchg covers what service meshes are, why they're well-suited for microservice deployments, and how to use one when deploying microservices.

Watch now

Service mesh data plane vs. control plane

Matt Klein - This article clarifies the distinction between a service mesh's data plane and its control plane, a distinction that matters more as the "service mesh" idea has grown in popularity and the number of entrants into the space has swelled.

Read more

The Mechanics of Deploying Envoy at Lyft

Matt Klein - This talk covers the logistical details of how Envoy was developed and deployed incrementally at Lyft, focusing primarily on the evolution of service mesh configuration management.

Learn more

Using Envoy Proxy

In a typical deployment, each service instance runs in its own container or virtual machine, with an Envoy proxy running alongside it. The application service communicates with its local Envoy proxy via localhost, while Envoy handles the actual network communication with other services and external clients. Envoy's configuration can be managed dynamically through APIs, allowing for real-time updates without requiring restarts. This dynamic configuration capability is crucial in modern, cloud-native environments where services and configurations can change frequently.
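
A minimal Kubernetes Pod sketch of that sidecar pattern is shown below: the application and Envoy share the Pod's network namespace, so they talk over localhost, and Envoy's configuration is mounted from a ConfigMap. Image names, tags, ports, and resource names are illustrative assumptions:

```yaml
# Sidecar sketch: the app and Envoy share the Pod network namespace, so the
# app listens on 127.0.0.1:8080 and Envoy proxies traffic to and from it.
apiVersion: v1
kind: Pod
metadata:
  name: checkout                    # hypothetical service
spec:
  containers:
  - name: app
    image: example/checkout:1.0     # placeholder application image
    ports:
    - containerPort: 8080
  - name: envoy
    image: envoyproxy/envoy:v1.30-latest   # pin a specific release in practice
    args: ["-c", "/etc/envoy/envoy.yaml"]
    ports:
    - containerPort: 10000          # port other services and clients talk to
    volumeMounts:
    - name: envoy-config
      mountPath: /etc/envoy
  volumes:
  - name: envoy-config
    configMap:
      name: checkout-envoy-config   # holds the Envoy bootstrap YAML
```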


Envoy Proxy's versatility and feature set make it suitable for a wide range of use cases in modern application architectures:


  1. Microservices Communication: Envoy can be deployed as a sidecar proxy alongside each service instance, handling inter-service communication securely and efficiently. It provides features like service discovery, load balancing, and fault tolerance, making it easier to manage the complex communication patterns in microservices architectures.
  2. API Gateway: Envoy can act as an API gateway, providing a single entry point for external clients to access multiple backend services. It can handle tasks such as request routing, authentication, rate limiting, and protocol translation, simplifying the management of public-facing APIs.
  3. Ingress and Egress Control: Envoy can be used as an ingress proxy, controlling and securing traffic entering a cluster from external sources. It can also serve as an egress proxy, managing and monitoring traffic leaving the cluster for external services and enabling fine-grained control over outbound requests.
  4. Canary Releases and Traffic Splitting: Envoy's traffic management capabilities allow for the controlled rollout of new service versions through canary releases. It can also split traffic between different service versions, enabling gradual rollouts and reducing the risk of deployments (see the route sketch after this list).
  5. Security and Access Control: Envoy can enforce security policies at the network level, such as authentication and authorization. It supports features like TLS encryption, JSON Web Token (JWT) validation, and role-based access control (RBAC), enhancing the security of inter-service communication.
  6. Hybrid and Multi-Cloud Deployments: Envoy's platform-agnostic nature makes it suitable for hybrid and multi-cloud deployments. It can bridge services across different cloud providers or between on-premises and cloud environments, facilitating seamless communication and migration strategies.
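
As a sketch of the canary use case above (item 4), a route can split traffic across two versions of a service with weighted_clusters; the cluster names and the 95/5 split are placeholders:

```yaml
# Route fragment: send roughly 5% of requests to the canary version.
route_config:
  name: canary_route
  virtual_hosts:
  - name: checkout
    domains: ["*"]
    routes:
    - match: { prefix: "/" }
      route:
        weighted_clusters:
          clusters:
          - name: checkout_v1
            weight: 95
          - name: checkout_v2       # canary version
            weight: 5
```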

Additional Links

Envoy Proxy blog

Visit the official blog to learn more about Envoy and its architecture.

Learn more

Edge Stack API Gateway

Edge Stack is a Kubernetes-native API Gateway built on the Envoy Proxy.

Learn more

Envoy Proxy GitHub

The official GitHub repository. Envoy's APIs are defined in the data-plane-api repository, while the code itself lives in the main envoy repository.

Learn more

Istio

Istio is a service mesh built on the Envoy Proxy.

Learn more