Why Emissary-ingress?
Emissary-ingress gives platform engineers a comprehensive, self-service edge stack for managing the boundary between end-users and Kubernetes. Built on the Envoy Proxy and fully Kubernetes-native, Emissary-ingress is designed to support multiple, independent teams that need to rapidly publish, monitor, and update services for end-users. A true edge stack, Emissary-ingress can also handle the functions of an API Gateway, a Kubernetes ingress controller, and a layer 7 load balancer (for more, see this blog post).
How does Emissary-ingress work?
Emissary-ingress is an open-source, Kubernetes-native microservices API gateway built on the Envoy Proxy. It is built from the ground up to support multiple, independent teams that need to rapidly publish, monitor, and update services for end-users, and it can also handle the functions of a Kubernetes ingress controller and load balancer (for more, see this blog post).
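To make this concrete, here is a minimal sketch of how routing is configured: Emissary-ingress watches declarative Kubernetes resources such as the Mapping below and translates them into Envoy routing configuration. The hostname, prefix, and service name are illustrative placeholders, not prescribed values.

```yaml
# Minimal sketch: route requests under /backend/ to an in-cluster
# "quote" Service. Names and prefix are illustrative.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: "*"          # accept any host; narrow this in real deployments
  prefix: /backend/      # URL prefix to match
  service: quote         # Kubernetes Service to route to
```

Applying a resource like this with kubectl (or through a GitOps pipeline) is enough for Emissary-ingress to begin routing the new prefix.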
Cloud-native applications today
Traditional cloud applications were built using a monolithic approach. These applications were designed, coded, and deployed as a single unit. Today's cloud-native applications, by contrast, consist of many individual (micro)services. This results in an architecture that is:
- Heterogeneous: Services are implemented using multiple (polyglot) languages, they are designed using multiple architecture styles, and they communicate with each other over multiple protocols.
- Dynamic: Services are frequently updated and released (often without coordination), which results in a constantly changing application.
- Decentralized: Services are managed by independent product-focused teams, with different development workflows and release cadences.
Heterogeneous services
Emissary-ingress is commonly used to route traffic to a wide variety of services. It supports:
- configuration on a per-service basis, enabling fine-grained control of timeouts, rate limiting, authentication policies, and more (see the sketch after this list).
- a wide range of L7 protocols natively, including HTTP, HTTP/2, gRPC, gRPC-Web, and WebSockets.
- raw TCP routing for services that use protocols not directly supported by Emissary-ingress.
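For example, per-route settings such as timeouts are declared directly on the Mapping for that service. A rough sketch, with hypothetical names and values:

```yaml
# Per-service configuration sketch: this route gets its own request
# timeout without affecting any other team's Mappings.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: search
spec:
  hostname: "*"
  prefix: /search/
  service: search-service   # hypothetical backend Service
  timeout_ms: 5000          # request timeout for this route only (5s)
```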
Dynamic services
Service updates result in a constantly changing application. The dynamic nature of cloud-native applications introduces new challenges around configuration updates, release, and testing. Emissary-ingress:
- Enables progressive delivery, with support for canary routing and traffic shadowing (sketched after this list).
- Exposes high-resolution observability metrics, providing insight into service behavior.
- Uses a zero-downtime configuration architecture, so configuration changes have no end-user impact.
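As a sketch of canary routing, two Mappings can share the same prefix while a weight steers a fraction of traffic to the new version; traffic shadowing is declared similarly with the shadow field. The service names and percentage below are illustrative.

```yaml
# Canary sketch: roughly 90% of /myservice/ traffic goes to the stable
# backend and roughly 10% to the canary. Names are placeholders.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: myservice-stable
spec:
  hostname: "*"
  prefix: /myservice/
  service: myservice-stable
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: myservice-canary
spec:
  hostname: "*"
  prefix: /myservice/
  service: myservice-canary
  weight: 10   # send about 10% of matching requests to the canary
```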
Decentralized workflows
Independent teams can create their own workflows for developing and releasing functionality, optimized for their specific service(s). With Emissary-ingress, teams can:
- Leverage a declarative configuration model, making it easy to understand the canonical configuration and implement GitOps-style best practices (see the sketch after this list).
- Independently configure different aspects of Emissary-ingress, eliminating the need to request configuration changes through a centralized operations team.
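A sketch of what this looks like in practice: each team keeps Mappings like the one below in its own namespace and Git repository (all names here are hypothetical) and applies them through its usual pipeline, without routing changes going through a central operations team.

```yaml
# Self-service sketch: owned by a hypothetical "payments" team, stored in
# their repo and namespace, and applied by their own CI/CD pipeline.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: payments
  namespace: payments
spec:
  hostname: "*"
  prefix: /payments/
  service: payments.payments   # Service "payments" in the "payments" namespace
```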
Emissary-ingress is engineered for Kubernetes
Emissary-ingress takes full advantage of Kubernetes and Envoy Proxy.
- All of the state required for Emissary-ingress is stored directly in Kubernetes, eliminating the need for an additional database.
- The Emissary-ingress team has invested extensive engineering effort and integration testing to ensure optimal performance and scale with Envoy and Kubernetes.
For more information
Interested in learning more? Deploy Emissary-ingress today and join the community Slack channel.