Configuring intercept using CLI

Specifying a namespace for an intercept

The namespace of the intercepted workload is specified during connect using the --namespace option.
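For example, assuming a namespace called staging (a placeholder, substitute your own), the connect command could look like this:

    telepresence connect --namespace staging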

Importing environment variables

Telepresence can import the environment variables from the pod that is being intercepted, see this doc for more details.
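As a quick sketch, the --env-file flag writes the intercepted pod's environment to a local file; the service name, port, and file name below are placeholders:

    telepresence intercept example-service --port 8080 --env-file example-service.env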

Creating an intercept

The following command will intercept all traffic bound to the service and proxy it to your laptop. This includes traffic coming through your ingress controller, so use this option carefully so as not to disrupt production environments.
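A minimal sketch, assuming a service named example-service and a local application listening on port 8080 (both placeholders):

    telepresence intercept example-service --port 8080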

Creating a personal intercept

If you want to do a personal intercept, you can use the --http-header auto option. Telepresence will then generate a header that uniquely identifies your intercept. Requests that don't contain this header will not be affected by your intercept.

A preview URL will also be generated that automatically performs a redirect to the intercepted workload. That redirect contains the header. The preview URL can be disabled using the flag --preview-url=false.
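A sketch of a personal intercept, with the service name and local port as placeholders; append --preview-url=false if you don't want a preview URL generated:

    telepresence intercept example-service --port 8080 --http-header auto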

This will output an HTTP header that you can set on your request for that traffic to be intercepted:
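For example, using the header name and value printed by the intercept command (both are placeholders below), a request sent through your ingress can be marked for interception like this:

    curl -H "<generated-header-name>: <generated-header-value>" http://<your-ingress-host>/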

Run telepresence status to see the list of active intercepts.

Finally, run telepresence leave <name of intercept> to stop the intercept.

Skipping the ingress dialogue

You can skip the ingress dialogue by setting the relevant parameters using flags. If any of the following flags are set, the dialogue will be skipped and the flag values will be used instead. If any of the required flags are missing, an error will be thrown.

Flag             Description                                                        Required
--ingress-host   The IP address for the ingress                                     yes
--ingress-port   The port for the ingress                                           yes
--ingress-tls    Whether TLS should be used                                         no
--ingress-l5     Whether a different IP address should be used in request headers   no
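For example, assuming an ingress reachable at ambassador.ambassador on port 80 (both assumptions for your cluster) and placeholder service name and port, the two required flags could be supplied like this; --ingress-tls and --ingress-l5 can be added in the same way if needed:

    telepresence intercept example-service --port 8080 \
      --ingress-host ambassador.ambassador \
      --ingress-port 80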

Creating an intercept when a service has multiple ports

If you are trying to intercept a service that has multiple ports, you need to tell Telepresence which service port you are trying to intercept. To specify it, you can use either the name of the service port or the port number itself. To see which options are available for your service, use kubectl to describe your service or look in the object's YAML. For more information on multiple ports, see the Kubernetes documentation.
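For example, assuming a local app on port 8080 and a service port named http that is also exposed as port 80 (all placeholders), either identifier can be appended to --port after a colon:

    telepresence intercept example-service --port 8080:http
    telepresence intercept example-service --port 8080:80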

When intercepting a service that has multiple ports, the name of the service port that has been intercepted is also listed.

If you want to change which port has been intercepted, you can create a new intercept the same way you did above and it will change which service port is being intercepted.

Creating an intercept when multiple services match your workload

Oftentimes, there's a 1-to-1 relationship between a service and a workload, so Telepresence is able to auto-detect which service it should intercept based on the workload you are trying to intercept. But if you use something like Argo, there may be two services (that use the same labels) to manage traffic between a canary and a stable service.

Fortunately, if you know which service you want to use when intercepting a workload, you can use the --service flag. So in the aforementioned example, if you wanted to use the echo-stable service when intercepting your workload, your command would look like this:
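(The workload name and local port below are placeholders; only --service echo-stable is taken from the example.)

    telepresence intercept example-workload --port 8080 --service echo-stable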

Intercepting multiple ports

It is possible to intercept more than one service and/or service port that are using the same workload. You do this by creating more than one intercept, each identifying the same workload via the --workload flag.

Let's assume that we have a service multi-echo with the two ports http and grpc. They are both targeting the same multi-echo deployment.
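As a sketch, two intercepts could be created like this; the intercept names and local ports are arbitrary choices, while multi-echo and the http/grpc port names come from the example above:

    telepresence intercept multi-echo-http --workload multi-echo --port 8080:http
    telepresence intercept multi-echo-grpc --workload multi-echo --port 8443:grpc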

Port-forwarding an intercepted container's sidecars

Sidecars are containers that sit in the same pod as an application container; they usually provide auxiliary functionality to an application and can usually be reached at localhost:${SIDECAR_PORT}. For example, a common use case for a sidecar is to proxy requests to a database: your application connects to localhost:${SIDECAR_PORT}, and the sidecar then connects to the database, perhaps augmenting the connection with TLS or authentication.

When intercepting a container that uses sidecars, you might want those sidecars' ports to be available to your local application at localhost:${SIDECAR_PORT}, exactly as they would be if running in-cluster. Telepresence's --to-pod ${PORT} flag implements this behavior, adding port-forwards for the port given.

If there are multiple ports that you need forwarded, simply repeat the flag (--to-pod=<sidecarPort0> --to-pod=<sidecarPort1>).
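For example, assuming sidecars listening on ports 8087 and 6379 (placeholders), and a placeholder service name and local port:

    telepresence intercept example-service --port 8080 --to-pod 8087 --to-pod 6379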

Intercepting headless services

Kubernetes supports creating services without a ClusterIP, which, when they have a pod selector, serve to provide a DNS record that will directly point to the service's backing pods. Telepresence supports intercepting these headless services as it would a regular service with a ClusterIP. So, for example, if you have the following service:
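A sketch of such a headless service, with the name, selector, and port as placeholders:

    apiVersion: v1
    kind: Service
    metadata:
      name: echo-headless
    spec:
      clusterIP: None
      selector:
        app: echo
      ports:
        - port: 8080
          targetPort: 8080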

You can intercept it like any other:
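(Service name and port as in the sketch above.)

    telepresence intercept echo-headless --port 8080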

Intercepting without a service

You can intercept a workload without a service by adding an annotation that informs Telepresence which container ports are eligible for intercepts. Telepresence will then inject a traffic-agent when the workload is deployed, and you will be able to intercept the given ports as if they were service ports. The annotation is shown in the sketch below.

The annotation value is a comma-separated list of port identifiers, each consisting of either the name or the number of a container port, optionally suffixed with /TCP or /UDP.
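For example, on a Deployment's pod template (the annotation key below reflects recent Telepresence versions and should be verified against yours; http is a placeholder container port name):

    spec:
      template:
        metadata:
          annotations:
            telepresence.getambassador.io/inject-container-ports: http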

Let's try it out!

  1. Deploy an annotated workload similar to the one in the sketch after this list to your cluster.

  2. Connect Telepresence.

  3. List your intercept-eligible workloads. If the annotation is correct, the deployment should show up in the list.

  4. Start an intercept handler locally that will receive the incoming traffic, for example a simple Python HTTP server.

  5. Create an intercept.

Note that the output of the intercept command contains an "Address" that you can curl to reach the intercepted pod. You will not be able to curl the name "echo-no-svc": since there's no service by that name, there's no DNS entry for it either.

  6. Curl the intercepted workload at the reported address.
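A sketch of the walkthrough above; the deployment name echo-no-svc matches the note above, while the labels, image, container port, and local port are assumptions:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: echo-no-svc
      labels:
        app: echo-no-svc
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: echo-no-svc
      template:
        metadata:
          labels:
            app: echo-no-svc
          annotations:
            telepresence.getambassador.io/inject-container-ports: http
        spec:
          containers:
            - name: echo
              image: jmalloc/echo-server   # placeholder image
              ports:
                - name: http
                  containerPort: 8080

With that deployed, the remaining steps could look like this:

    telepresence connect
    telepresence list                                     # the deployment should appear here
    python3 -m http.server 8080                           # a simple local intercept handler, run in a separate terminal
    telepresence intercept echo-no-svc --port 8080:http
    curl <address-from-intercept-output>:8080             # use the Address printed by the intercept command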

Sharing intercepts with teammates

Once a combination of flags to easily intercept a service has been found, it's useful to share it with teammates. You can do that by going to Ambassador Cloud -> Intercepts history, picking the intercept command from the history tab, and creating a Saved Intercept by giving it a name; the intercept command will then be easily accessible to all your teammates. Note that this requires the free enhanced client to be installed and that you are logged in (telepresence login).

To instantiate an intercept based on a saved intercept, simply run telepresence intercept --use-saved-intercept <saved-intercept-name>. When logged in, the command will first check for a saved intercept in Ambassador Cloud and use it if found; otherwise an error will be returned.

Saved Intercepts can be managed through Ambassador Cloud.

Specifying the intercept traffic target

By default, it's assumed that your local app is reachable on 127.0.0.1, and intercepted traffic will be sent to that IP at the port given by --port. If you wish to change this behavior and send traffic to a different IP address, you can use the --address parameter to telepresence intercept. Say your machine is configured to respond to HTTP requests for an intercept on 172.16.0.19:8080. You would run this as:
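(The workload name below is a placeholder; the address and port come from the example.)

    telepresence intercept example-service --address 172.16.0.19 --port 8080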

Replacing a running workload

By default, your application keeps running as Telepresence intercepts it, even if it doesn't receive any traffic (or receives only a subset, as with personal intercepts). This can pose a problem for applications that are active even when they're not receiving requests. For instance, if your application consumes from a message queue as soon as it starts up, intercepting it won't stop the pod from consuming from the queue.

To work around this issue, telepresence intercept allows you to pass a --replace flag that stops every application container from running in your pod. When you pass --replace, Telepresence restarts your application with a dummy application container that just sleeps, and instead places a traffic agent that redirects traffic to your local machine. The application container is restored as soon as you leave the intercept.
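A minimal sketch, with the workload name and local port as placeholders:

    telepresence intercept example-service --port 8080 --replace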