Envoy Proxy 101: What it is and why it matters
Envoy Proxy is a modern, high-performance, small-footprint edge and service proxy. Envoy is most comparable to software load balancers such as NGINX and HAProxy, but it is built to handle the complex networking challenges of modern microservice-based architectures.
Originally written and deployed at Lyft, Envoy now has a vibrant contributor base and is an official Cloud Native Computing Foundation (CNCF) project.
Background
As organizations adopt microservices, a state-of-the-art Layer 7 (L7) proxy becomes a crucial component of deploying and managing them. An L7 proxy provides observability, resilience, and transparent routing for your services. As an L7 proxy, Envoy can be deployed as a sidecar alongside application services, abstracting away networking complexity and providing a consistent, high-performance data plane. Its modular architecture and extensive feature set make it a powerful tool for building resilient and observable distributed systems.
Introduction to modern network load balancing and proxying
Matt Klein - An excellent primer on load balancing in today's world, covering essential capabilities and why they're important.
Lyft's Envoy: From Monolith to Service Mesh
Matt Klein - Watch Matt Klein cover Lyft's architectural migration from monolith to a fully distributed service mesh, the origins of Envoy, a high level architectural overview, and future directions.
API versioning and evolution with proxies
Cindy Sridharan - An L7 proxy is a powerful tool to help you iterate on your APIs while minimizing user impact. This article discusses one such use case.
Using API Gateways to Facilitate Your Transition from Monolith to Microservices
Daniel Bryant - An API Gateway like Edge Stack is a proxy deployed at your edge, and is frequently used to facilitate a migration from monolith to microservices.
Key Features and Benefits
- Dynamic service discovery: Envoy integrates with various service discovery systems, enabling automatic backend service discovery and routing updates.
- Advanced load balancing: Envoy supports multiple load balancing algorithms, including round-robin, least-request, and ring hash, which ensure the efficient distribution of traffic across service instances.
- Circuit breaking and fault tolerance: Envoy can detect and isolate unhealthy service instances, preventing cascading failures and ensuring system resilience.
- Observability: Envoy generates detailed metrics, logs, and distributed traces, providing deep visibility into the system's behavior and performance.
- Extensibility: Envoy's filter chain mechanism allows custom filters to be plugged in at various points in the request processing pipeline, enabling advanced traffic control and modification.
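As an illustration of the load-balancing and resilience features above, a single cluster definition can combine a load-balancing policy with circuit-breaker thresholds and outlier detection. The sketch below is illustrative only; the cluster name, address, and threshold values are assumptions, not recommendations:

```yaml
# Sketch of a resilient upstream cluster (v3 API); names and limits are assumed.
clusters:
- name: backend                   # hypothetical upstream service
  connect_timeout: 1s
  type: STRICT_DNS
  lb_policy: LEAST_REQUEST        # one of Envoy's load-balancing algorithms
  circuit_breakers:
    thresholds:
    - max_connections: 1000
      max_pending_requests: 100
      max_retries: 3
  outlier_detection:              # eject hosts that keep returning 5xx errors
    consecutive_5xx: 5
    base_ejection_time: 30s
    max_ejection_percent: 50
  load_assignment:
    cluster_name: backend
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: backend.internal, port_value: 8080 }
```

With this in place, Envoy stops sending traffic to an instance after repeated 5xx responses and sheds load once the connection or retry budgets are exhausted, rather than letting a failure cascade.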
Envoy also has a highly sophisticated configuration system: proxies are configured declaratively in YAML (or JSON), either statically from a bootstrap file or dynamically over its APIs.
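A minimal static bootstrap sketches the overall shape of that configuration: an admin endpoint, a listener whose HTTP connection manager routes requests, and a cluster of upstream endpoints. All names, ports, and addresses below are illustrative assumptions:

```yaml
# Minimal static Envoy bootstrap sketch (v3 API); names and ports are assumed.
admin:
  address:
    socket_address: { address: 127.0.0.1, port_value: 9901 }
static_resources:
  listeners:
  - name: ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_service }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: local_service
    type: STRICT_DNS
    load_assignment:
      cluster_name: local_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }
```

The filter chain in the listener is where Envoy's extensibility lives: additional HTTP filters (rate limiting, JWT validation, and so on) slot in ahead of the terminal router filter.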
Tutorials on Using Envoy
Comparing Envoy, NGINX, and HAProxy: Key Features and Capabilities
Load Balancing
- Envoy: Supports advanced load balancing algorithms, including round-robin, least-request, ring hash, and more.
- NGINX: Offers load balancing features, including round-robin, least-connected, and IP hash.
- HAProxy: Provides advanced load balancing capabilities, supporting algorithms like round-robin, static-rr, leastconn, and more.
Dynamic Configuration
- Envoy: Designed for dynamic configuration through APIs, allowing real-time updates without restarts.
- NGINX: Requires configuration file reloads for most changes, limiting dynamic adaptability.
- HAProxy: Supports a runtime API for some configuration changes, but it is not as extensive as Envoy's.
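Envoy's dynamic configuration comes from its xDS APIs: the bootstrap file only points listener and cluster discovery at a management server, and everything else is pushed to the proxy at runtime without restarts. As a sketch (the management server name and port are assumptions):

```yaml
# Sketch of an xDS bootstrap using aggregated discovery (ADS); addresses assumed.
dynamic_resources:
  ads_config:                    # aggregated xDS over a single gRPC stream
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
    - envoy_grpc: { cluster_name: xds_cluster }
  lds_config: { ads: {}, resource_api_version: V3 }   # listeners via ADS
  cds_config: { ads: {}, resource_api_version: V3 }   # clusters via ADS
static_resources:
  clusters:
  - name: xds_cluster            # the management server itself
    type: STRICT_DNS
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}   # gRPC requires HTTP/2
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: xds-server.internal, port_value: 18000 }
```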
Protocol Support
- Envoy: Extensive support for modern protocols, including HTTP/2, gRPC, and WebSocket.
- NGINX: Supports HTTP/2 and WebSocket, but gRPC support requires additional modules.
- HAProxy: Supports HTTP/2 and WebSocket, but gRPC support is limited.
Observability
- Envoy: Built-in support for detailed metrics, logging, and distributed tracing.
- NGINX: Provides basic metrics and logging, but advanced observability requires additional modules or tools.
- HAProxy: Offers metrics and logging capabilities, but they are less extensive than Envoy's built-in observability features.
Extensibility
- Envoy: Highly extensible through a filter chain mechanism, allowing custom processing at various stages.
- NGINX: Supports modules and Lua scripting for extensibility but is less flexible than Envoy's filter chain.
- HAProxy: Provides some extensibility through Lua scripting and plugins, but it is not as comprehensive as Envoy's.
Service Mesh Integration
- Envoy: Widely used as the data plane in service mesh solutions like Istio, Consul Connect, and AWS App Mesh.
- NGINX: Can be used in service mesh architectures, but not as commonly as Envoy.
- HAProxy: Not as widely used in service mesh solutions as Envoy.
NGINX and HAProxy are powerful proxies with their own strengths, but Envoy's dynamic configuration, modern protocol support, observability features, and extensibility make it particularly well-suited for cloud-native applications and service mesh architectures.
Service Mesh Integration
A service mesh is a transparent layer that adds resilience, observability, and security to your service-to-service communication. Example service meshes include Istio and Linkerd. Istio is closely associated with Envoy because it relies on Envoy for the actual Layer 7 traffic management: Istio itself is a control plane for a fleet of Envoy proxies deployed next to your microservices.
Service Mesh
What is a service mesh and do I need one when developing microservices?
Daniel Bryant - This talk from MicroXchg covers what service meshes are, why they're well-suited to microservice deployments, and how to use a service mesh when you're deploying microservices.
Using Envoy Proxy
In a typical deployment, each service instance runs in its own container or virtual machine, with an Envoy proxy running alongside it. The application service communicates with its local Envoy proxy via localhost, while Envoy handles the actual network communication with other services and external clients. Envoy's configuration can be managed dynamically through APIs, allowing for real-time updates without requiring restarts. This dynamic configuration capability is crucial in modern, cloud-native environments where services and configurations can change frequently.
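As a sketch of that sidecar pattern, the application connects to its local Envoy over localhost, and Envoy forwards the connection to the remote service. The listener port, service name, and addresses below are assumptions:

```yaml
# Sketch of a sidecar egress listener: the app dials localhost:9001 and
# Envoy proxies to the "billing" service. Names and ports are assumed.
static_resources:
  listeners:
  - name: egress_billing
    address:
      socket_address: { address: 127.0.0.1, port_value: 9001 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: egress_billing
          cluster: billing
  clusters:
  - name: billing
    type: STRICT_DNS
    load_assignment:
      cluster_name: billing
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: billing.internal, port_value: 8080 }
```

Because the application only ever sees localhost, concerns like TLS, retries, and service discovery can move into Envoy without any application code changes.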
Envoy Proxy's versatility and feature set make it suitable for a wide range of use cases in modern application architectures:
- Microservices Communication: Envoy can be deployed as a sidecar proxy alongside each service instance, handling inter-service communication securely and efficiently. It provides features like service discovery, load balancing, and fault tolerance, making it easier to manage the complex communication patterns in microservices architectures.
- API Gateway: Envoy can act as an API gateway, a single entry point for external clients to access multiple backend services. It can handle tasks such as request routing, authentication, rate limiting, and protocol translation, simplifying the management of public-facing APIs.
- Ingress and Egress Control: Envoy can be used as an ingress proxy, controlling and securing traffic entering a cluster from external sources. It can also serve as an egress proxy, managing and monitoring traffic leaving the cluster for external services, enabling fine-grained control over outbound requests.
- Canary Releases and Traffic Splitting: Envoy's traffic management capabilities allow for the controlled rollout of new service versions through canary releases. It can also split traffic between different service versions, enabling gradual rollouts and reducing deployment risk.
- Security and Access Control: Envoy can enforce security policies at the network level, such as authentication and authorization. It supports features like TLS encryption, JSON Web Token (JWT) validation, and role-based access control (RBAC), enhancing the security of inter-service communication.
- Hybrid and Multi-Cloud Deployments: Envoy's platform-agnostic nature makes it suitable for hybrid and multi-cloud deployments. It can bridge services across different cloud providers or between on-premises and cloud environments, facilitating seamless communication and migration strategies.
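As an illustration of the canary-release use case above, a single route can split traffic between two service versions with weighted clusters. The cluster names and weights here are assumptions:

```yaml
# Sketch of a 95/5 canary split in a route configuration; names are assumed.
route_config:
  virtual_hosts:
  - name: backend
    domains: ["*"]
    routes:
    - match: { prefix: "/" }
      route:
        weighted_clusters:
          clusters:
          - name: service_v1     # stable version receives most traffic
            weight: 95
          - name: service_v2     # canary version
            weight: 5
```

Shifting the weights over time (and watching Envoy's per-cluster metrics) turns a risky cutover into a gradual, observable rollout.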