
Using intercepts

Blackbird cluster (powered by Telepresence) allows you to intercept traffic from a Kubernetes service and route it to your local machine, enabling your local environment to function as if it were running in the cluster. There are two types of intercepts:

  • Global intercept: This is the default. A global intercept redirects traffic from the Kubernetes service to the version running on your local machine.
  • Personal intercept: A personal intercept allows you to selectively intercept a portion of traffic to a service without interfering with the rest of traffic. The end user won't experience the change, but you can observe and debug using your development tools. This allows you to share a cluster with others on your team without interfering with their work.

Prerequisites

  • You downloaded the Blackbird CLI. For more information, see Getting started with the Blackbird CLI.
  • You installed the Traffic Manager. For more information, see Using the Traffic Manager.
  • You're connected to a cluster. For more information, see Using connects.
  • You can access a Kubernetes cluster using the Kubernetes CLI (kubectl) or the OpenShift CLI (oc).
  • Your application is deployed in the cluster and accessible using a Kubernetes service.
  • You have a local copy of the service ready to run on your local machine.

Specifying a namespace for an intercept

You can specify the name of the namespace when you connect using the --namespace option.
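
For example, assuming a namespace named staging (an illustrative name):

```
blackbird cluster connect --namespace staging
```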

Importing environment variables

Blackbird can import environment variables from the Pod that's being intercepted. For more information, see Environment variables.

Creating a global intercept

The following command redirects all traffic destined for the service to your laptop, acting as a proxy. It includes traffic routed through the ingress controller, so use this option with caution to avoid disrupting production environments.
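
A minimal sketch, assuming a service named echo-server and a local copy listening on port 8080 (both names are illustrative):

```
blackbird cluster intercept echo-server --port 8080
```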

Creating a personal intercept

The following command creates a personal intercept. Blackbird will then generate a header that uniquely identifies your intercept. Requests that don't contain this header will not be affected by your intercept.
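
A sketch, assuming Blackbird mirrors Telepresence's --http-header flag for personal intercepts (verify with blackbird cluster intercept --help); echo-server is an illustrative service name:

```
blackbird cluster intercept echo-server --port 8080 --http-header=auto
```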

This command outputs an HTTP header that you can set on your request for the traffic to be intercepted.
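
For example, if the generated header were x-intercept-id with a unique value, you could route a request through your intercept with curl (the host and header value are placeholders):

```
curl http://echo-server.default/ -H 'x-intercept-id: <generated-value>'
```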

You can then run blackbird cluster status to see the list of active intercepts.

When you're done, run blackbird cluster leave <name of intercept> to stop the intercept.
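
For example, using the illustrative echo-server intercept from above:

```
blackbird cluster status
blackbird cluster leave echo-server
```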

Bypassing the ingress configuration

You can bypass the ingress configuration by setting the relevant parameters using flags. If any of the following flags are set, the interactive dialog is skipped and the flag values are used instead. If any required flags are missing, an error occurs.

| Flag | Description | Required |
|------|-------------|----------|
| --ingress-host | The IP address for the ingress. | yes |
| --ingress-port | The port for the ingress. | yes |
| --ingress-tls | Whether TLS should be used. | no |
| --ingress-l5 | Whether a different IP address should be used in request headers. | no |
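
A sketch that supplies the ingress parameters up front; the host and port values are illustrative:

```
blackbird cluster intercept echo-server --port 8080 \
  --ingress-host ambassador.ambassador \
  --ingress-port 80
```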

Creating an intercept when the service has multiple ports

You can intercept a service that has multiple ports by telling Blackbird which service port you want to intercept. Specifically, you can either use the name of the service port or the port number itself. To see which options might be available to you and your service, use kubectl to describe your service or look in the object's YAML. For more information on multiple ports, see Multi-port services in the Kubernetes documentation.
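
A sketch, assuming Blackbird uses Telepresence's local-port:service-port syntax; multi-port-service and http are illustrative names:

```
blackbird cluster intercept multi-port-service --port 8080:http
```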

When intercepting a service with multiple ports, the intercepted service port name is displayed. To change the intercepted port, create a new intercept using the same method as above. This will update the selected service port.

Creating an intercept when multiple services match your workload

In many cases, a service has a one-to-one relationship with a workload, allowing Blackbird to automatically determine which service to intercept based on the targeted workload. However, when using tools like Argo, multiple services might share the same labels to manage traffic between a canary and a stable service, which can affect auto-detection.

If you know which service you want to use when intercepting a workload, you can use the --service flag. Using the example above, if you want to intercept your workload using the echo-stable service your command would be as follows.
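
A sketch, assuming the workload from that example is named echo (the original example isn't shown here, so the name is illustrative):

```
blackbird cluster intercept echo --port 8080 --service echo-stable
```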

Intercepting multiple ports

You can intercept more than one service and/or service port that are using the same workload by creating more than one intercept that identifies the same workload using the --workload flag. In the following example, there's a service multi-echo with the two ports: http and grpc. They're both targeting the same multi-echo deployment.
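
A sketch that creates two intercepts against the same deployment; the intercept names and local ports are illustrative, and the local:remote port syntax is assumed from Telepresence:

```
blackbird cluster intercept multi-echo-http --workload multi-echo --port 8080:http
blackbird cluster intercept multi-echo-grpc --workload multi-echo --port 8443:grpc
```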

Port-forwarding an intercepted container's sidecars

Sidecars are containers that are in the same Pod as an application container. Typically, they provide auxiliary functionality to an application and can usually be reached at localhost:${SIDECAR_PORT}. For example, a common use case for a sidecar is to proxy requests to a database. Your application would connect to localhost:${SIDECAR_PORT}, and the sidecar would then connect to the database, possibly augmenting the connection with TLS or authentication.

When intercepting a container that uses sidecars, you might want to have the sidecar ports available to your local application at localhost:${SIDECAR_PORT}, as if running in-cluster. Blackbird's --to-pod ${PORT} flag implements this behavior, adding port-forwards for the port given.

If there are multiple ports that you need to forward, simply repeat the flag (--to-pod=<sidecarPort0> --to-pod=<sidecarPort1>).
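
A sketch, assuming an illustrative database-proxy sidecar listening on port 5432:

```
blackbird cluster intercept echo-server --port 8080 --to-pod 5432
```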

Intercepting headless services

Kubernetes allows you to create services without a ClusterIP. When these services include a Pod selector, they provide a DNS record that directly resolves to the backing Pods.
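
For reference, a minimal headless service with a Pod selector might look like this (the names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo-headless
spec:
  clusterIP: None   # makes the service headless
  selector:
    app: echo
  ports:
    - port: 8080
      targetPort: 8080
```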

You can intercept it like any other service.
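
For example:

```
blackbird cluster intercept echo-headless --port 8080
```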

Note: This option utilizes an initContainer that requires NET_ADMIN capabilities. If your cluster administrator has disabled them, you must use numeric ports with the agent injector. This option also requires the Traffic Agent to run as GID 7777. By default, this is disabled on OpenShift clusters. To enable running as GID 7777 in a specific OpenShift namespace, run: oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE

Note: Blackbird doesn't support intercepting headless services without a selector.

Intercepting without a service

You can intercept a workload without a service by adding an annotation that informs Blackbird about what container ports are eligible for intercepts. Blackbird will then inject a Traffic Agent when the workload is deployed, and you can intercept the given ports as if they were service ports. The annotation value is a comma-separated list of port identifiers consisting of either the name or the port number of a container port, optionally suffixed with /TCP or /UDP.

To intercept without a service:

  1. Deploy an annotation similar to the following.
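
A sketch of such a deployment, assuming Blackbird honors Telepresence's telepresence.getambassador.io/inject-container-ports annotation key; the workload name and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-no-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-no-svc
  template:
    metadata:
      labels:
        app: echo-no-svc
      annotations:
        telepresence.getambassador.io/inject-container-ports: http   # assumed annotation key
    spec:
      containers:
        - name: echo-server
          image: jmalloc/echo-server:latest   # illustrative image
          ports:
            - name: http
              containerPort: 8080
```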

  2. Connect to the cluster.
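
For example:

```
blackbird cluster connect
```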

  3. List your intercept eligible workloads. If the annotation is correct, the deployment will display in the list.
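
Assuming Blackbird mirrors Telepresence's list command (verify with blackbird cluster --help):

```
blackbird cluster list
```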

  4. Start a local intercept handler that receives the incoming traffic. The following is an example using a simple Python HTTP service.
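
For example, Python's built-in HTTP server, serving the current directory on port 8080:

```
python3 -m http.server 8080
```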

  5. Create an intercept.
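
For example, using the illustrative echo-no-svc deployment from step 1:

```
blackbird cluster intercept echo-no-svc --port 8080
```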

Note: The response contains an address that you can curl to reach the intercepted pod. You won't be able to curl the name "echo-no-svc". Because there's no service by that name, there's no DNS entry for it.

  6. Curl the intercepted workload.
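
For example, substituting the address printed by the intercept command:

```
curl <address-from-intercept-output>:8080
```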

Note: An intercept without a service utilizes an initContainer that requires NET_ADMIN capabilities. If your cluster administrator has disabled them, you can't intercept services using numeric target ports.

Specifying the intercept traffic target

By default, your local application is reachable on 127.0.0.1, and intercepted traffic will be sent to that IP at the port given by --port. If you want to change this behavior and send traffic to a different IP address, you can use the --address parameter to blackbird cluster intercept. For example, if your local machine is configured to respond to HTTP requests for an intercept on 172.16.0.19:8080, you would use the following.
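
Using the values from that example (the service name is illustrative):

```
blackbird cluster intercept echo-server --address 172.16.0.19 --port 8080
```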

Environment variables and intercept specifications

Environment variables

You can import environment variables from the cluster pod when running an intercept and apply them to the code running on your local machine for the intercepted service.

There are several options available:

  • blackbird cluster intercept [service] --port [port] --env-file=[FILENAME]

    This writes the environment variables to a file. The file can be used when starting containers locally. The option --env-syntax allows control over the syntax of the file. Valid syntaxes include "docker", "compose", "sh", "csh", "cmd", and "ps" where "sh", "csh", and "ps" can be suffixed with ":export".

  • blackbird cluster intercept [service] --port [port] --env-json=[FILENAME]

    This writes the environment variables to a JSON file. The file can be injected into other build processes.

  • blackbird cluster intercept [service] --port [port] -- [COMMAND]

    This runs a command locally with the pod's environment variables set on your local machine. After the command quits, the intercept stops (as if blackbird cluster leave [service] was run). This can be used in conjunction with a local server command, such as python [FILENAME] or node [FILENAME] to run a service locally while using the environment variables that were set on the pod using a ConfigMap.

    Another option is to run a subshell, such as Bash:

    blackbird cluster intercept [service] --port [port] -- /bin/bash

    This starts the intercept and then launches the subshell on your local machine with the same variables set as on the pod.

  • blackbird cluster intercept [service] --docker-run -- [CONTAINER]

    This ensures that the environment is propagated to the container. It also works for --docker-build and --docker-debug.
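
For example, a sketch that captures the pod's environment in Docker syntax and reuses it for a local container (the file and image names are illustrative):

```
blackbird cluster intercept echo-server --port 8080 --env-file=echo.env --env-syntax=docker
docker run --rm --env-file=echo.env echo-server:local
```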

Telepresence environment variables

You can also import environment variables specific to Telepresence.

  • TELEPRESENCE_ROOT

    The directory where all remote volume mounts are rooted.

  • TELEPRESENCE_MOUNTS

    A colon-separated list of remotely mounted directories.

  • TELEPRESENCE_CONTAINER

    The name of the intercepted container.

  • TELEPRESENCE_INTERCEPT_ID

    The ID of the intercept. This is the same as the "x-intercept-id" HTTP header. This variable is useful when you need custom behavior while intercepting a pod. For example, in pub-sub systems like Kafka, processes without the TELEPRESENCE_INTERCEPT_ID can filter out messages containing an x-intercept-id header, while processes with an ID handle only the messages whose header matches it. This ensures that messages for a specific intercept are always routed to the intercepting process.

Intercept specifications

Intercept specifications can be used to create a standard configuration for intercepts that can be used to start local applications and handle intercepted traffic.

Note: Previously, intercept specifications were referred to as saved intercepts.

Templating

The intercept specification supports template expansion in all properties except names that reference other objects within the specification, and it makes all functions from the Masterminds/sprig package available. The following example shows how to provide a header value created from two environment variables.
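
A sketch (the header name and environment variables are illustrative), using sprig's env function:

```yaml
headers:
  - name: who
    value: '{{ env "USER" }}-{{ env "HOSTNAME" }}'
```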

Blackbird also provides its own set of properties. This is limited to the following.

| Options | Type | Description |
|---------|------|-------------|
| .Telepresence.Username | string | The name of the user running the specification. |

Root

This intercept specification can create a standard configuration to easily run tasks, start an intercept, and start your local application to handle the intercepted traffic.

| Options | Description |
|---------|-------------|
| name | The name of the specification. |
| connection | The connection properties to use when Telepresence connects to the cluster. |
| handlers | The local processes to handle traffic. |
| prerequisites | Items to set up prior to starting any intercepts, and items to remove once the intercept is complete. |
| workloads | Remote workloads that are intercepted, keyed by workload name. |

Name

The name is optional. If you don't specify the name, it will use the filename of the specification file.

Connection

The connection defines how Blackbird establishes connections to a cluster. Connections established during the execution of an intercept specification will be temporary and terminate with the completion of the specification, while pre-existing connections are discovered and retained for future use.

A connection can be declared in singular form as the following:
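
For example (the namespace is illustrative):

```yaml
connection:
  namespace: my-ns
```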

It can also be declared when more than one connection is necessary, in plural form, such as the following:
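
For example (the names and namespaces are illustrative):

```yaml
connections:
  - name: alpha
    namespace: alpha
  - name: beta
    namespace: beta
```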

When multiple connections are used, all intercept handlers must run in Docker and all connections must have a name.

You can pass the most common parameters from the blackbird cluster connect command (blackbird cluster connect --help) using a camel case format.

Commonly used options include the following:

| Options | Type | Format | Description |
|---------|------|--------|-------------|
| namespace | string | [a-z0-9][a-z0-9-]{1,62} | The namespace that this connection is bound to. Defaults to the namespace designated by the context. |
| mappedNamespaces | string list | [a-z0-9][a-z0-9-]{1,62} | The namespaces that Blackbird will be concerned with. |
| managerNamespace | string | [a-z0-9][a-z0-9-]{1,62} | The namespace where the Traffic Manager is to be found. |
| context | string | N/A | The Kubernetes context to use. |
| hostname | string | N/A | Docker only. Hostname used by the connection container. |
| expose | string | [IP:][port:]container-port | Docker only. Make a connection container port available to services outside of Docker. |
| name | string | N/A | The name used when referencing the connection. |

Handlers

A handler is code running locally. It can receive traffic for an intercepted service or set up prerequisites to run before/after the intercept itself.

When it's intended as an intercept handler (i.e., to handle traffic), it's usually the service you're working on or another dependency (e.g., database, another third-party service) running on your local machine. A handler can be a Docker container or an application running natively.

The following example creates an intercept handler with the name echo-server and uses a Docker container. The container will automatically have access to the ports, environment, and mounted directories of the intercepted container.
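
A sketch (the image is illustrative):

```yaml
handlers:
  - name: echo-server
    docker:
      image: jmalloc/echo-server:latest   # illustrative image
```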

If you don't want to use Docker containers, you can still configure your handlers to start using a regular script. The following shows how to create a handler called echo-server that sets an environment variable of PORT=8080 and starts the application.
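
A sketch (the run command is an illustrative path to your local binary):

```yaml
handlers:
  - name: echo-server
    environment:
      - name: PORT
        value: "8080"
    script:
      run: bin/echo-server   # illustrative local command
```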

If you don't want to use Docker containers or scripts, but still want all the essential data (volumes, environment variables, and other metadata) made available to a process that handles intercepted traffic, without Blackbird executing anything itself, you can set up an external handler.

The following shows how to establish this type of handler with the name echo-server. This configuration not only sets an environment variable defined as PORT=8080, but it also generates a file encompassing all pertinent metadata.
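
A sketch (the output path is illustrative):

```yaml
handlers:
  - name: echo-server
    environment:
      - name: PORT
        value: "8080"
    external:
      isDocker: true
      outputFormat: json
      outputPath: /tmp/echo-server-metadata.json   # illustrative path
```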

The following table defines the parameters that can be used within the handlers section.

| Options | Type | Format | Description |
|---------|------|--------|-------------|
| name | string | [a-zA-Z][a-zA-Z0-9_-]* | The name of your handler that the intercepts use to reference it. |
| environment | map list | N/A | The environment variables in your handler. |
| environment[*].name | string | [a-zA-Z_][a-zA-Z0-9_]* | The name of the environment variable. |
| environment[*].value | string | N/A | The value for the environment variable. |
| script | map | N/A | Tells the handler to run as a script; mutually exclusive with docker and external. |
| docker | map | N/A | Tells the handler to run as a Docker container; mutually exclusive with script and external. |
| external | map | N/A | Tells the handler to run as an external process; mutually exclusive with script and docker. |

Script

The following table defines the parameters for the handler's script element.

| Options | Type | Format | Description |
|---------|------|--------|-------------|
| run | string | N/A | The script to run. It can use multiple lines. |
| shell | string | bash\|zsh\|sh | The shell that parses and runs the script. It can be "bash", "zsh", or "sh". It defaults to the value of the SHELL environment variable. |

Docker

The following table defines the parameters for the handler's docker element. The build and image parameters are mutually exclusive.

| Options | Type | Format | Description |
|---------|------|--------|-------------|
| build | map | N/A | Defines how to build the image from source using the docker build command. |
| compose | map | N/A | Defines how to integrate with an existing Docker Compose file. |
| image | string | image | Defines which image to use. |
| ports | int list | N/A | The ports that should be exposed to the host. |
| options | string list | N/A | Options for the docker run command. |
| command | string | N/A | An optional command to run. |
| args | string list | N/A | Optional command arguments. |

External

The following table defines the parameters for the handler's external element.

| Options | Type | Format | Description |
|---------|------|--------|-------------|
| isDocker | boolean | N/A | Indicates whether the runner is in a Docker container (true/false). |
| outputFormat | string | json\|yaml | Sets the output format to either JSON or YAML. |
| outputPath | string | N/A | Specifies the output destination: "stdout", "stderr", or a file path. |

Build

The following table defines the parameters for the Docker build element.

| Options | Type | Format | Description |
|---------|------|--------|-------------|
| context | string | N/A | Defines either a path to a directory containing a Dockerfile or a URL to a Git repository. |
| args | string list | N/A | Additional arguments for the docker build command. |

For additional information on these parameters, see docker container run.

Compose

The Docker Compose element defines the way to integrate with the tool of the same name.

| Options | Type | Format | Description |
|---------|------|--------|-------------|
| context | string | N/A | (Optional) Docker context, meaning the path to, or the directory containing, your Docker Compose file. |
| services | map list | N/A | The services to use with the Telepresence integration. |
| spec | map | compose spec | (Optional) Embedded Docker Compose specification. |

Service

The service describes how to integrate with each service from your Docker Compose file, and it can be seen as an override functionality. A service is normally not provided when you want to keep the original behavior, but it can be provided for documentation purposes using the local behavior.

A service can be declared either as a property of compose in the intercept specification or as an x-telepresence extension in the Docker Compose specification. The syntax is the same in both cases, but the name property can't be used together with x-telepresence because it's implicit.

| Options | Type | Format | Description |
|---------|------|--------|-------------|
| name | string | [a-zA-Z][a-zA-Z0-9_-]* | The name of your service in the compose file. |
| behavior | string | interceptHandler\|remote\|local | Behavior of the service in the context of the intercept. |
| mapping | map | N/A | Optional mapping to a cluster service. Only applicable for behavior: remote. |

Behavior

| Value | Description |
|-------|-------------|
| interceptHandler | The service runs locally and will receive traffic from the intercepted pod. |
| remote | The service will not run as part of Docker Compose. Instead, traffic is redirected to a service in the cluster. |
| local | The service runs locally without modifications. This is the default. |

Mapping

| Options | Type | Description |
|---------|------|-------------|
| name | string | The name of the cluster service to link the compose service with. |
| namespace | string | (Optional) The cluster namespace for the service. It defaults to the namespace of the intercept. |

Examples

Considering the following Docker Compose file:
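
A hypothetical Docker Compose file used by the examples that follow:

```yaml
services:
  myapp:
    build: .
    ports:
      - "8080:8080"
  postgres:
    image: postgres:15
    ports:
      - "5432:5432"
```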

The following will use the myapp service as the interceptor.
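
A sketch (the handler name is illustrative):

```yaml
handlers:
  - name: compose-handler
    docker:
      compose:
        services:
          - name: myapp
            behavior: interceptHandler
```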

Because multiple workloads on different connections can use the same compose handler, the services designated as interceptHandler within the compose spec might operate on distinct connections. When this is the case, the connection must be explicitly specified within each service.

The following will prevent the service from running locally. DNS will point to the service in the cluster with the same name.
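
A sketch, within the handler's compose services list:

```yaml
services:
  - name: postgres
    behavior: remote
```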

Adding mapping allows you to select the cluster service more accurately by indicating to Telepresence that the postgres service should be mapped to the psql service in the big-data namespace.
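
A sketch using the names from that description:

```yaml
services:
  - name: postgres
    behavior: remote
    mapping:
      name: psql
      namespace: big-data
```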

As an alternative, the services can be added as x-telepresence extensions in the Docker Compose file:
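
A sketch of the same overrides embedded in the compose file (the name property is implicit):

```yaml
services:
  myapp:
    build: .
    ports:
      - "8080:8080"
    x-telepresence:
      behavior: interceptHandler
  postgres:
    image: postgres:15
    x-telepresence:
      behavior: remote
      mapping:
        name: psql
        namespace: big-data
```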

Prerequisites

When you're creating an intercept specification, there's an option to include prerequisites.

Prerequisites give you the ability to run scripts for setup, build binaries to run as your intercept handler, and more. The prerequisites property is an array, so it can handle many steps prior to starting your intercept and running your intercept handlers. The elements of the prerequisites array correspond to handlers.

The following example declares that build-binary and rm-binary are two handlers; the first will be run before any intercepts, and the second will run after cleaning up the intercepts.
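
A sketch that pairs the prerequisites with two script handlers (the build commands are illustrative):

```yaml
handlers:
  - name: build-binary
    script:
      run: go build -o build/echo-server .   # illustrative build step
  - name: rm-binary
    script:
      run: rm -f build/echo-server
prerequisites:
  - create: build-binary
    delete: rm-binary
```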

If a prerequisite create succeeds, the corresponding delete is guaranteed to run even if the other steps in the spec fail.

The following table defines the parameters available within the prerequisites section.

| Options | Description |
|---------|-------------|
| create | The name of a handler to run before the intercept. |
| delete | The name of a handler to run after the intercept. |

Workloads

Workloads define the services in your cluster that will be intercepted.

The following example creates an intercept on a service called echo-server on port 8080. It creates a personal intercept with the header of x-intercept-id: foo and routes its traffic to a handler called echo-server.
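
A sketch; the handler property is assumed from Telepresence's intercept specification format:

```yaml
workloads:
  - name: echo-server
    intercepts:
      - headers:
          - name: x-intercept-id
            value: foo
        port: 8080
        handler: echo-server   # assumed property, mirroring Telepresence specs
```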

When multiple connections are used, the name of the workload must be prefixed with the name of the connection and a slash, as in the following sketch (the names are illustrative):
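
```yaml
workloads:
  - name: alpha/echo-server   # "alpha" is the connection name
    intercepts:
      - port: 8080
        handler: echo-server
```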

The following table defines the parameters available within a workload.

| Options | Type | Format | Description | Default |
|---------|------|--------|-------------|---------|
| name | string | ^([a-z0-9][a-z0-9-]{0,62}/)?[a-z][a-z0-9-]{0,62}$ | The name of the workload to intercept (optionally prefixed with a connection name). | N/A |
| intercepts | intercept list | N/A | The list of intercepts associated with the workload. | N/A |

Intercepts

The following table defines the parameters available for each intercept.

| Options | Type | Format | Description | Default |
|---------|------|--------|-------------|---------|
| enabled | boolean | N/A | If set to false, disables this intercept. | true |
| headers | header list | N/A | The headers that filter the intercept. | Auto-generated |
| service | name | [a-z][a-z0-9-]{1,62} | The name of the service to intercept. | N/A |
| localPort | integer\|string | 1-65535 | The port for the service being intercepted. | N/A |
| port | integer | 1-65535 | The port the service in the cluster is running on. | N/A |
| pathPrefix | string | N/A | The path prefix filter for the intercept. | / |
| replace | boolean | N/A | Determines whether the app container should be stopped. | false |
| global | boolean | N/A | If true, intercepts all TCP/UDP traffic. Mutually exclusive with headers and pathXxx properties. | true |
| mountPoint | string | N/A | The local directory or drive where the remote volumes are mounted. | false |

You can define headers to filter the requests that should end up on your local machine when intercepting.

| Options | Type | Format | Description | Default |
|---------|------|--------|-------------|---------|
| name | string | N/A | The name of the header. | N/A |
| value | string | N/A | The value of the header. | N/A |

Usage

Running your specification from the CLI

After you've written your intercept specification, you can run it.

To start your intercept, use the following command.
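
Assuming Blackbird mirrors Telepresence's intercept run subcommand (verify with blackbird cluster intercept --help; the file name is illustrative):

```
blackbird cluster intercept run my-spec.yaml
```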

This validates and runs your specification. If you only want to validate it, you can use the following command.
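
A hypothetical invocation; the exact subcommand may differ:

```
blackbird cluster intercept validate my-spec.yaml
```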

Using and sharing your specification as a CRD

If you want to share specifications across your team or organization, you can save them as CRDs inside your cluster.

Note: The intercept specification CRD requires Kubernetes 1.22 or higher. If you're using an older cluster, you'll need to install using Helm directly and use the --disable-openapi-validation flag.

  1. Install the CRD object in your cluster. This is a one-time installation.

  2. Deploy the specification in your cluster as a CRD.
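
Because the specification is a regular Kubernetes object at this point, kubectl apply works (the file name is illustrative):

```
kubectl apply -f echo-server-spec.yaml
```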

The echo-server example looks like this:

Now, anyone connected to the cluster can start your intercept using the following command.
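
A hypothetical invocation, assuming the spec can be referenced by the name it was deployed under:

```
blackbird cluster intercept run echo-server-spec
```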

You can also list available specifications.

Integrating with Docker

An intercept specification can be used within the Docker extension if you're using a YAML file and a Docker runtime as handlers.

Integrating with your IDE

You can integrate JSON schemas into your IDE to provide autocompletion and hints while writing your intercept specification. There are two schemas available:

To add a schema, follow your IDE's instructions for registering JSON schemas. For example:
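
A sketch for VS Code with the YAML extension, registering a schema for intercept specification files in settings.json (the schema URL and file glob are placeholders):

```json
{
  "yaml.schemas": {
    "https://<schema-host>/intercept-specification-schema.json": "*.intercept.yaml"
  }
}
```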