Using intercepts
Blackbird cluster (powered by Telepresence) allows you to intercept traffic from a Kubernetes service and route it to your local machine, enabling your local environment to function as if it were running in the cluster. There are two types of intercepts:
- Global intercept: This is the default. A global intercept redirects traffic from the Kubernetes service to the version running on your local machine.
- Personal intercept: A personal intercept allows you to selectively intercept a portion of the traffic to a service without interfering with the rest. The end user won't experience the change, but you can observe and debug using your development tools. This allows you to share a cluster with others on your team without interfering with their work.
Using this page, you can learn about:
- Specifying a namespace for an intercept
- Creating an intercept
- Creating a personal intercept
- Creating an intercept when the service has multiple ports
- Creating an intercept when multiple services match your workload
- Intercepting multiple ports
- Port-forwarding an intercepted container's sidecars
- Intercepting headless services
- Intercepting without a service
- Specifying the intercept traffic target
- Environment variables and intercept specifications
Prerequisites
- You downloaded the Blackbird CLI. For more information, see Getting started with the Blackbird CLI.
- You installed the Traffic Manager. For more information, see Using the Traffic Manager.
- You're connected to a cluster. For more information, see Using connects.
- You can access a Kubernetes cluster using the Kubernetes CLI (kubectl) or the OpenShift CLI (oc).
- Your application is deployed in the cluster and accessible using a Kubernetes service.
- You have a local copy of the service ready to run on your local machine.
Specifying a namespace for an intercept
You can specify the name of the namespace when you connect using the `--namespace` option.
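For example, to bind the connection to a hypothetical staging namespace:

```shell
blackbird cluster connect --namespace staging
```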
Importing environment variables
Blackbird can import environment variables from the Pod that's being intercepted. For more information, see Environment variables.
Creating a global intercept
The following command redirects all traffic destined for the service to your laptop, acting as a proxy. It includes traffic routed through the ingress controller, so use this option with caution to avoid disrupting production environments.
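A minimal sketch, assuming a service named echo-server listening on port 8080 (both values are illustrative):

```shell
blackbird cluster intercept echo-server --port 8080
```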
Creating a personal intercept
The following command creates a personal intercept. Blackbird will then generate a header that uniquely identifies your intercept. Requests that don't contain this header will not be affected by your intercept.
This command outputs an HTTP header that you can set on your request for the traffic to be intercepted.
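A sketch of what this might look like, assuming Blackbird follows Telepresence's `--http-header=auto` convention for requesting an auto-generated personal-intercept header (that flag is an assumption, not confirmed by this page):

```shell
blackbird cluster intercept echo-server --port 8080 --http-header=auto
```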
You can then run `blackbird cluster status` to see the list of active intercepts, and `blackbird cluster leave <name of intercept>` to stop an intercept.
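For example, with an intercept named echo-server (the name is illustrative):

```shell
blackbird cluster status
blackbird cluster leave echo-server
```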
Bypassing the ingress configuration
You can bypass the ingress configuration by setting the relevant parameters using flags. If any of the following flags are set, the interactive dialog is skipped and the flag values are used instead. If any of the required flags are missing, an error occurs.
Flag | Description | Required |
---|---|---|
--ingress-host | The IP address for the ingress. | yes |
--ingress-port | The port for the ingress. | yes |
--ingress-tls | Whether TLS should be used. | no |
--ingress-l5 | Whether a different IP address should be used in request headers. | no |
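A sketch combining the required flags (the service name and ingress values are illustrative):

```shell
blackbird cluster intercept echo-server --port 8080 \
  --ingress-host ambassador.ambassador \
  --ingress-port 80
```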
Creating an intercept when the service has multiple ports
You can intercept a service that has multiple ports by telling Blackbird which service port you want to intercept. Specifically, you can either use the name of the service port or the port number itself. To see which options might be available to you and your service, use kubectl to describe your service or look in the object's YAML. For more information on multiple ports, see Multi-port services in the Kubernetes documentation.
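A minimal sketch, assuming the Telepresence-style `--port <local-port>:<service-port>` syntax for selecting a service port by name or number (all values are illustrative):

```shell
# Intercept the service port named "http", forwarding to local port 8080
blackbird cluster intercept multi-port-svc --port 8080:http

# Alternatively, select the service port by number
blackbird cluster intercept multi-port-svc --port 8080:80
```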
When intercepting a service with multiple ports, the intercepted service port name is displayed. To change the intercepted port, create a new intercept using the same method as above. This will update the selected service port.
Creating an intercept when multiple services match your workload
In many cases, a service has a one-to-one relationship with a workload, allowing Blackbird to automatically determine which service to intercept based on the targeted workload. However, when using tools like Argo, multiple services might share the same labels to manage traffic between a canary and a stable service, which can affect auto-detection.
If you know which service you want to use when intercepting a workload, you can use the `--service` flag. Using the example above, if you want to intercept your workload using the `echo-stable` service, your command would be as follows.
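A sketch of that command (the workload name and local port are illustrative):

```shell
blackbird cluster intercept echo --service echo-stable --port 8080
```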
Intercepting multiple ports
You can intercept more than one service and/or service port that are using the same workload by creating more than one intercept that identifies the same workload using the `--workload` flag. In the following example, there's a service `multi-echo` with the two ports `http` and `grpc`. They're both targeting the same `multi-echo` deployment.
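A sketch of two intercepts against that deployment, assuming Telepresence-style intercept names and the `--port <local-port>:<service-port>` syntax (intercept names and local ports are illustrative):

```shell
blackbird cluster intercept multi-echo-http --workload multi-echo --port 8080:http
blackbird cluster intercept multi-echo-grpc --workload multi-echo --port 8443:grpc
```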
Port-forwarding an intercepted container's sidecars
Sidecars are containers that are in the same Pod as an application container. Typically, they provide auxiliary functionality to an application and can usually be reached at `localhost:${SIDECAR_PORT}`. For example, a common use case for a sidecar is to proxy requests to a database. Your application would connect to `localhost:${SIDECAR_PORT}`, and the sidecar would then connect to the database, possibly augmenting the connection with TLS or authentication.
When intercepting a container that uses sidecars, you might want to have the sidecar ports available to your local application at `localhost:${SIDECAR_PORT}`, as if running in-cluster. Blackbird's `--to-pod ${PORT}` flag implements this behavior, adding port-forwards for the port given. If there are multiple ports that you need to forward, simply repeat the flag (`--to-pod=<sidecarPort0> --to-pod=<sidecarPort1>`).
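For example, a sketch forwarding two hypothetical sidecar ports alongside an intercept:

```shell
blackbird cluster intercept echo-server --port 8080 --to-pod=8081 --to-pod=8082
```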
Intercepting headless services
Kubernetes allows you to create services without a ClusterIP. When these services include a Pod selector, they provide a DNS record that directly resolves to the backing Pods.
You can intercept it like any other service.
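For reference, a minimal sketch of a headless service (note `clusterIP: None`) backed by a Pod selector; all names are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo-headless
spec:
  clusterIP: None        # headless: DNS resolves directly to the backing Pods
  selector:
    app: echo
  ports:
    - port: 8080
      targetPort: 8080
```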
Note: This option utilizes an `initContainer` that requires `NET_ADMIN` capabilities. If your cluster administrator has disabled them, you must use numeric ports with the agent injector. This option also requires the Traffic Agent to run as GID `7777`. By default, this is disabled on OpenShift clusters. To enable running as GID `7777` in a specific OpenShift namespace, run: `oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE`
Note: Blackbird doesn't support intercepting headless services without a selector.
Intercepting without a service
You can intercept a workload without a service by adding an annotation that informs Blackbird about what container ports are eligible for intercepts. Blackbird will then inject a Traffic Agent when the workload is deployed, and you can intercept the given ports as if they were service ports. The annotation value is a comma-separated list of port identifiers, each consisting of either the name or the port number of a container port, optionally suffixed with `/TCP` or `/UDP`.
To intercept without a service:
Deploy an annotation similar to the following.
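A sketch of such a Deployment, assuming Blackbird honors the Telepresence-style annotation key `telepresence.getambassador.io/inject-container-ports` (the key is an assumption based on the Telepresence lineage; verify it against your installation):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-no-svc
spec:
  selector:
    matchLabels:
      app: echo-no-svc
  template:
    metadata:
      labels:
        app: echo-no-svc
      annotations:
        # Assumed annotation key; the value is a comma-separated list of
        # container port names/numbers, optionally suffixed with /TCP or /UDP
        telepresence.getambassador.io/inject-container-ports: http
    spec:
      containers:
        - name: echo
          image: jmalloc/echo-server
          ports:
            - name: http
              containerPort: 8080
```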
Connect to the cluster.
List your intercept-eligible workloads. If the annotation is correct, the deployment will display in the list.
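A sketch, assuming a Telepresence-style list subcommand (the exact subcommand is an assumption):

```shell
blackbird cluster list
```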
Start a local intercept handler that receives the incoming traffic. The following is an example using a simple Python HTTP service.
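For example, Python's built-in HTTP server works as a stand-in handler:

```shell
python3 -m http.server 8080
```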
Create an intercept.
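A sketch of the intercept, reusing the echo-no-svc workload from the annotation example above:

```shell
blackbird cluster intercept echo-no-svc --port 8080
```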
Note: The response contains an address that you can curl to reach the intercepted pod. You won't be able to curl the name "echo-no-svc". Because there's no service by that name, there's no DNS entry for it.
Curl the intercepted workload.
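For example, using the Pod address reported by the intercept output (the address here is a placeholder):

```shell
curl 10.1.0.42:8080
```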
Note: An intercept without a service utilizes an `initContainer` that requires `NET_ADMIN` capabilities. If your cluster administrator has disabled them, you can't intercept services using numeric target ports.
Specifying the intercept traffic target
By default, your local application is reachable on `127.0.0.1`, and intercepted traffic will be sent to that IP at the port given by `--port`. If you want to change this behavior and send traffic to a different IP address, you can use the `--address` parameter to `blackbird cluster intercept`. For example, if your local machine is configured to respond to HTTP requests for an intercept on `172.16.0.19:8080`, you would use the following.
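A sketch of that command (the service name is illustrative):

```shell
blackbird cluster intercept echo-server --port 8080 --address 172.16.0.19
```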
Environment variables and intercept specifications
Environment variables
You can import environment variables from the cluster pod when running an intercept and apply them to the code running on your local machine for the intercepted service.
There are several options available:
- `blackbird cluster intercept [service] --port [port] --env-file=[FILENAME]`: This writes the environment variables to a file. The file can be used when starting containers locally. The `--env-syntax` option allows control over the syntax of the file. Valid syntaxes include "docker", "compose", "sh", "csh", "cmd", and "ps", where "sh", "csh", and "ps" can be suffixed with ":export".
- `blackbird cluster intercept [service] --port [port] --env-json=[FILENAME]`: This writes the environment variables to a JSON file. The file can be injected into other build processes.
- `blackbird cluster intercept [service] --port [port] -- [COMMAND]`: This runs a command locally with the pod's environment variables set on your local machine. After the command quits, the intercept stops (as if `blackbird cluster leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod using a ConfigMap. Another option is running a subshell, such as Bash: `blackbird cluster intercept [service] --port [port] -- /bin/bash`. This starts the intercept and then launches the subshell on your local machine with the same variables set as on the pod.
- `blackbird cluster intercept [service] --docker-run -- [CONTAINER]`: This ensures that the environment is propagated to the container. It also works for `--docker-build` and `--docker-debug`.
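As an illustrative sketch, the file produced by `--env-file` can feed a local container run (service, file, and image names are hypothetical):

```shell
blackbird cluster intercept echo-server --port 8080 --env-file=echo.env --env-syntax=docker
docker run --rm --env-file echo.env -p 8080:8080 echo-server:dev
```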
Telepresence environment variables
You can also import environment variables specific to Telepresence.
- `TELEPRESENCE_ROOT`: The directory where all remote volume mounts are rooted.
- `TELEPRESENCE_MOUNTS`: A colon-separated list of remotely mounted directories.
- `TELEPRESENCE_CONTAINER`: The name of the intercepted container.
- `TELEPRESENCE_INTERCEPT_ID`: The ID of the intercept. This is the same as the "x-intercept-id" HTTP header. This variable is useful when you need custom behavior while intercepting a pod. For example, in pub-sub systems like Kafka, processes without the `TELEPRESENCE_INTERCEPT_ID` can filter out messages containing an `x-intercept-id` header, while those with an ID process only matching headers. This ensures that messages for a specific intercept are always routed to the intercepting process.
Intercept specifications
Intercept specifications can be used to create a standard configuration for intercepts that can be used to start local applications and handle intercepted traffic.
Note: Previously, intercept specifications were referred to as saved intercepts.
Templating
The intercept specification supports template expansion in all properties except names that reference other objects within the specification, and it makes all functions from the Masterminds/sprig package available. The following example shows how to provide a header value created from two environment variables.
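A minimal sketch using Sprig's `env` function; the header name and the USER and DEV_SESSION environment variables are hypothetical:

```yaml
workloads:
  - name: echo-server
    intercepts:
      - headers:
          - name: x-dev-session
            value: '{{ env "USER" }}-{{ env "DEV_SESSION" }}'
```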
Blackbird also provides its own set of properties. This is limited to the following.
Options | Type | Description |
---|---|---|
.Telepresence.Username | string | The name of the user running the specification. |
Root
This intercept specification can create a standard configuration to easily run tasks, start an intercept, and start your local application to handle the intercepted traffic.
Options | Description |
---|---|
name | The name of the specification. |
connection | The connection properties to use when Telepresence connects to the cluster. |
handlers | The local processes to handle traffic. |
prerequisites | Items to set up prior to starting any intercepts, and items to remove once the intercept is complete. |
workloads | Remote workloads that are intercepted, keyed by workload name. |
Name
The name is optional. If you don't specify a name, the filename of the specification file is used.
Connection
The connection defines how Blackbird establishes connections to a cluster. Connections established during the execution of an intercept specification will be temporary and terminate with the completion of the specification, while pre-existing connections are discovered and retained for future use.
A connection can be declared in singular form as the following:
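A minimal sketch of the singular form, using fields from the options table below (values are illustrative):

```yaml
connection:
  namespace: default
  context: my-kube-context
```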
It can also be declared when more than one connection is necessary, in plural form, such as the following:
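A sketch of the plural form; the `connections` key and overall shape are assumptions based on the surrounding text, and each entry is named as required:

```yaml
connections:
  - name: alpha
    namespace: alpha
  - name: beta
    namespace: beta
```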
When multiple connections are used, all intercept handlers must run in Docker and all connections must have a name.
You can pass the most common parameters from the `blackbird cluster connect` command (`blackbird cluster connect --help`) using a camel case format.
Commonly used options include the following:
Options | Type | Format | Description |
---|---|---|---|
namespace | string | [a-z0-9][a-z0-9-]{1,62} | The namespace that this connection is bound to. Defaults to the default appointed by the context. |
mappedNamespaces | string list | [a-z0-9][a-z0-9-]{1,62} | The namespaces that Blackbird will be concerned with. |
managerNamespace | string | [a-z0-9][a-z0-9-]{1,62} | The namespace where the traffic manager is to be found. |
context | string | N/A | The Kubernetes context to use. |
hostname | string | N/A | Docker only. Hostname used by the connection container. |
expose | string | [IP:][port:]container-port | Docker only. Make a connection container port available to services outside of Docker. |
name | string | N/A | The name used when referencing the connection. |
Handlers
A handler is code running locally. It can receive traffic for an intercepted service or set up prerequisites to run before/after the intercept itself.
When it's intended as an intercept handler (i.e., to handle traffic), it's usually the service you're working on or another dependency (e.g., database, another third-party service) running on your local machine. A handler can be a Docker container or an application running natively.
The following example creates an intercept handler with the name `echo-server` and uses a Docker container. The container will automatically have access to the ports, environment, and mounted directories of the intercepted container.
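A sketch of such a handler (the image is illustrative):

```yaml
handlers:
  - name: echo-server
    docker:
      image: jmalloc/echo-server
```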
If you don't want to use Docker containers, you can still configure your handlers to start using a regular script. The following shows how to create a handler called `echo-server` that sets an environment variable of `PORT=8080` and starts the application.
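A sketch of that handler; the start command is hypothetical:

```yaml
handlers:
  - name: echo-server
    environment:
      - name: PORT
        value: "8080"
    script:
      shell: bash
      run: |
        ./echo-server --port "$PORT"
```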
If you don't want to use Docker containers or scripts, but still want all the essential data (including volumes and environment variables) made available so that a separate process can handle the intercepted traffic, you can set up an external handler. An external handler writes this metadata to a specified output without executing anything itself.
The following shows how to establish this type of handler with the name `echo-server`. This configuration not only sets an environment variable defined as `PORT=8080`, but it also generates a file encompassing all pertinent metadata.
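A sketch using the external parameters from the table below (the output path is illustrative):

```yaml
handlers:
  - name: echo-server
    environment:
      - name: PORT
        value: "8080"
    external:
      isDocker: false
      outputFormat: json
      outputPath: /tmp/echo-server.json
```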
The following table defines the parameters that can be used within the handlers section.
Options | Type | Format | Description |
---|---|---|---|
name | string | [a-zA-Z][a-zA-Z0-9_-]* | The name of your handler that the intercepts use to reference it. |
environment | map list | N/A | The environment variables in your handler. |
environment[*].name | string | [a-zA-Z_][a-zA-Z0-9_]* | The name of the environment variable. |
environment[*].value | string | N/A | The value for the environment variable. |
script | map | N/A | Tells the handler to run as a script, mutually exclusive to docker and external. |
docker | map | N/A | Tells the handler to run as a Docker container, mutually exclusive to script and external. |
external | map | N/A | Tells the handler to run as an external, mutually exclusive to script and Docker. |
Script
The following table defines the parameters of the handler's script element.
Options | Type | Format | Description |
---|---|---|---|
run | string | N/A | The script to run. It can use multiple lines. |
shell | string | bash|zsh|sh | The shell that will parse and run the script. It can be "bash", "zsh", or "sh". It defaults to the value of the SHELL environment variable. |
Docker
The following table defines the parameters of the handler's docker element. The `build` and `image` parameters are mutually exclusive.
Options | Type | Format | Description |
---|---|---|---|
build | map | N/A | Defines how to build the image from source using the docker build command.
compose | map | N/A | Defines how to integrate with an existing Docker Compose file.
image | string | image | Defines which image to use.
ports | int list | N/A | The ports that should be exposed to the host.
options | string list | N/A | Additional options for the docker run command.
command | string | N/A | An optional command to run.
args | string list | N/A | Optional arguments for the command.
External
The following table defines the parameters of the handler's external element.
Options | Type | Format | Description |
---|---|---|---|
isDocker | boolean | N/A | Indicates if the runner is in a Docker container (true/false). |
outputFormat | string | json|yaml | Sets the output format to either JSON or YAML. |
outputPath | string | N/A | Specifies output destination: "stdout", "stderr", or a file path. |
Build
The following table defines the parameters of the docker build element.
Options | Type | Format | Description |
---|---|---|---|
context | string | N/A | Defines either a path to a directory containing a Dockerfile or a URL to a Git repository. |
args | string list | N/A | Additional arguments for the Docker build command. |
For additional information on these parameters, see docker container run.
Compose
The Docker Compose element defines the way to integrate with the tool of the same name.
Options | Type | Format | Description |
---|---|---|---|
context | string | N/A | (Optional) The Docker context, that is, the path to the directory containing your Docker Compose file. |
services | map list | N/A | The services to use with the Telepresence integration. |
spec | map | compose spec | (Optional) Embedded Docker Compose specification. |
Service
The service describes how to integrate with each service from your Docker Compose file, and it can be seen as an override functionality. A service is normally not provided when you want to keep the original behavior, but it can be provided for documentation purposes using the `local` behavior.
A service can be declared either as a property of `compose` in the intercept specification or as an `x-telepresence` extension in the Docker Compose specification. The syntax is the same in both cases, but the `name` property can't be used together with `x-telepresence` because it's implicit.
Options | Type | Format | Description |
---|---|---|---|
name | string | [a-zA-Z][a-zA-Z0-9_-]* | The name of your service in the compose file. |
behavior | string | interceptHandler|remote|local | Behavior of the service in the context of the intercept. |
mapping | map | N/A | Optional mapping to a cluster service. Only applicable for behavior: remote. |
Behavior
Value | Description |
---|---|
interceptHandler | The service runs locally and will receive traffic from the intercepted pod. |
remote | The service will not run as part of Docker Compose. Instead, traffic is redirected to a service in the cluster. |
local | The service runs locally without modifications. This is the default. |
Mapping
Options | Type | Description |
---|---|---|
name | string | The name of the cluster service to link the compose service with. |
namespace | string | (Optional) The cluster namespace for service. It defaults to the namespace of the intercept. |
Examples
Considering the following Docker Compose file:
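For illustration, assume a Compose file like this (service names and images are hypothetical):

```yaml
services:
  myapp:
    build: .
    ports:
      - "8080:8080"
  postgres:
    image: postgres:14
```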
The following will use the myapp service as the interceptor.
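A sketch of the corresponding handler declaration (the handler name is illustrative):

```yaml
handlers:
  - name: my-handler
    docker:
      compose:
        services:
          - name: myapp
            behavior: interceptHandler
```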
Due to the possibility of multiple workloads using different connections utilizing the same compose handler, the services designated as `interceptHandler` within the compose spec might operate on distinct connections. When this is the case, the connection must be explicitly specified within each service.
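A sketch of what that might look like; the per-service `connection` property is an assumption inferred from the text above:

```yaml
handlers:
  - name: my-handler
    docker:
      compose:
        services:
          - name: myapp
            behavior: interceptHandler
            connection: alpha   # assumed property: names the connection this service uses
```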
The following will prevent the service from running locally. DNS will point to the service in the cluster with the same name.
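A sketch marking the postgres service as remote:

```yaml
handlers:
  - name: my-handler
    docker:
      compose:
        services:
          - name: postgres
            behavior: remote
```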
Adding mapping allows you to select the cluster service more accurately by indicating to Telepresence that the postgres service should be mapped to the psql service in the big-data namespace.
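A sketch of that mapping, using the psql service and big-data namespace named above:

```yaml
        services:
          - name: postgres
            behavior: remote
            mapping:
              name: psql
              namespace: big-data
```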
As an alternative, the `services` can be added as `x-telepresence` extensions in the Docker Compose file:
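A sketch of the extension form inside the Compose file itself (note that `name` is implicit here):

```yaml
services:
  myapp:
    build: .
    x-telepresence:
      behavior: interceptHandler
  postgres:
    image: postgres:14
    x-telepresence:
      behavior: remote
      mapping:
        name: psql
        namespace: big-data
```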
Prerequisites
When you're creating an intercept specification, there's an option to include prerequisites.
Prerequisites give you the ability to run scripts for setup, build binaries to run as your intercept handler, and more. Prerequisites is an array, so it can handle many options prior to starting your intercept and running your intercept handlers. The elements of the `prerequisites` array correspond to `handlers`.
The following example declares that `build-binary` and `rm-binary` are two handlers; the first will be run before any intercepts, and the second will run after cleaning up the intercepts.
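A sketch of that declaration, assuming `build-binary` and `rm-binary` are defined under `handlers`:

```yaml
prerequisites:
  - create: build-binary
    delete: rm-binary
```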
If a prerequisite create succeeds, the corresponding delete is guaranteed to run even if the other steps in the spec fail.
The following table defines the parameters available within the prerequisites section.
Options | Description |
---|---|
create | The name of a handler to run before the intercept. |
delete | The name of a handler to run after the intercept. |
Workloads
Workloads define the services in your cluster that will be intercepted.
The following example creates an intercept on a service called `echo-server` on port 8080. It creates a personal intercept with the header of `x-intercept-id: foo` and routes its traffic to a handler called `echo-server`.
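A sketch of that workload entry; the `handler` property is inferred from the description above and may differ in your version:

```yaml
workloads:
  - name: echo-server
    intercepts:
      - headers:
          - name: x-intercept-id
            value: foo
        service: echo-server
        port: 8080
        handler: echo-server   # inferred property: routes traffic to the named handler
```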
When multiple connections are used, the name of the workload must be prefixed with the name of the connection and a slash, as in the following example.
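For example, with a connection named alpha:

```yaml
workloads:
  - name: alpha/echo-server
    intercepts:
      - port: 8080
        handler: echo-server
```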
The following table defines the parameters available within a workload.
Options | Type | Format | Description | Default |
---|---|---|---|---|
name | string | ^([a-z0-9][a-z0-9-]{0,62}/)?[a-z][a-z0-9-]{0,62}$ | The name of the workload to intercept (optionally prefixed with a connection name). | N/A |
intercepts | intercept list | N/A | The list of intercepts associated to the workload. | N/A |
Intercepts
The following table defines the parameters available for each intercept.
Options | Type | Format | Description | Default |
---|---|---|---|---|
enabled | boolean | N/A | If set to false, it disables this intercept. | true |
headers | header list | N/A | The headers that filter the intercept. | Auto generated |
service | name | [a-z][a-z0-9-]{1,62} | The name of the service to intercept. | N/A |
localPort | integer|string | 1-65535 | The port for the service being intercepted. | N/A |
port | integer | 1-65535 | The port the service in the cluster is running on. | N/A |
pathPrefix | string | N/A | The path prefix filter for the intercept. Defaults to "/". | / |
replace | boolean | N/A | Determines if the app container should be stopped. | false |
global | boolean | N/A | If true, intercept all TCP/UDP traffic. Mutually exclusive with headers and pathXxx properties. | true |
mountPoint | string | N/A | The local directory or drive where the remote volumes are mounted. | false |
Header
You can define headers to filter the requests that should end up on your local machine when intercepting.
Options | Type | Format | Description | Default |
---|---|---|---|---|
name | string | N/A | The name of the header. | N/A |
value | string | N/A | The value of the header. | N/A |
Usage
Running your specification from the CLI
After you've written your intercept specification, you can run it.
To start your intercept, use the following command.
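A sketch, assuming the spec runner mirrors Telepresence's run subcommand (the subcommand name is an assumption, and my-spec.yaml is a placeholder):

```shell
blackbird cluster intercept run my-spec.yaml
```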
This validates and runs your specification. If you only want to validate it, you can use the following command.
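Again as an assumption, a validate subcommand in the same style:

```shell
blackbird cluster intercept validate my-spec.yaml
```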
Using and sharing your specification as a CRD
You can use this specification if you want to share specifications across your team or your organization. You can save specifications as CRDs inside your cluster.
Note: The intercept specification CRD requires Kubernetes 1.22 or higher. If you're using an older cluster, you'll need to install using Helm directly and use the `--disable-openapi-validation` flag.
Install the CRD object in your cluster. This is a one-time installation.
Deploy the specification in your cluster as a CRD.
The `echo-server` example looks like this:
Now, every person that's connected to the cluster can start your intercept by using the following command.
You can also list available specifications.
Integrating with Docker
An intercept specification can be used within the Docker extension if you're using a YAML file and a Docker runtime as handlers.
Integrating with your IDE
You can integrate JSON schemas into your IDE to provide autocompletion and hints while writing your intercept specification. There are two schemas available. To add a schema, follow the instructions for your IDE.