Cluster-side configuration
For the most part, Telepresence doesn't require any special
configuration in the cluster and can be used right away in any
cluster (as long as the user has adequate RBAC permissions
and the cluster's server version is 1.19.0
or higher).
Helm Chart configuration
Some cluster-specific configuration can be provided when installing or upgrading the Telepresence cluster installation using Helm. Once installed, the Telepresence client will configure itself from values that it receives when connecting to the Traffic Manager.
See the Helm chart README for a full list of available configuration settings.
Values
To add configuration, create a YAML file with the configuration values and then pass it when executing telepresence helm install [--upgrade] --values <values yaml>
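For example, a minimal values file might look like the following (the settings shown here are described in the sections below; the values chosen are illustrative):

```yaml
# values.yaml -- example overrides for the Telepresence Helm chart
agent:
  logLevel: debug          # raise traffic-agent verbosity (illustrative)
  image:
    registry: docker.io/datawire
```

You would then install or upgrade with `telepresence helm install --upgrade --values values.yaml`.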
Client Configuration
It is possible for the Traffic Manager to automatically push config to all connecting clients. To learn more about this, please see the client config docs
Traffic Manager Configuration
The trafficManager structure of the Helm chart configures the behavior of the Telepresence Traffic Manager.
Service Mesh
The trafficManager.serviceMesh structure is used to configure Telepresence's integration with service meshes. You should configure this if your cluster is running a compatible service mesh, as it is often needed in order to intercept all workloads. Currently, only istio is supported.
See the page on service meshes for more information.
The valid settings are:
Setting | Meaning |
---|---|
type | The type of service mesh in use by your cluster. Supports none (the default) and istio. |
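For example, to tell Telepresence that the cluster runs Istio, a values file could contain:

```yaml
trafficManager:
  serviceMesh:
    type: istio   # default is "none"
```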
Agent Configuration
The agent structure of the Helm chart configures the behavior of the Telepresence agents.
Image Configuration
The agent.image structure contains the following values:
Setting | Meaning |
---|---|
registry | Registry used when downloading the image. Defaults to docker.io/datawire. |
name | The name of the image. Defaults to tel2. |
tag | The tag of the image. Defaults to 2.18.0. |
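For instance, to pull the agent image from a private mirror instead of the default registry, a values file might contain (the mirror name here is hypothetical):

```yaml
agent:
  image:
    registry: registry.example.com/mirror   # hypothetical private mirror
    name: tel2
    tag: 2.18.0
```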
Log level
The agent.logLevel setting controls the log level of the traffic-agent. See Log Levels for more info.
Resources
The agent.resources and agent.initResources structures will be used as the resources element when injecting traffic-agents and init-containers.
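These follow standard Kubernetes resource syntax; a sketch with illustrative values (the actual requests and limits should be tuned to your cluster):

```yaml
agent:
  resources:          # applied to the injected traffic-agent container
    requests:
      cpu: 50m
      memory: 64Mi
    limits:
      memory: 128Mi
  initResources:      # applied to the injected init-container
    requests:
      cpu: 10m
      memory: 16Mi
```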
Mutating Webhook
Telepresence uses a Mutating Webhook to inject the Traffic Agent sidecar container and update the port definitions. This means that an intercepted workload (Deployment, StatefulSet, ReplicaSet) will remain untouched and in sync as far as GitOps workflows (such as ArgoCD) are concerned.
The injection will happen on demand the first time an attempt is made to intercept the workload.
If you want to prevent the injection from ever happening, simply add the telepresence.getambassador.io/inject-traffic-agent: disabled annotation to your workload template's annotations:
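For example, on a Deployment (note that the annotation goes on the pod template, not the workload's own metadata; the workload name here is hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service        # hypothetical workload name
spec:
  template:
    metadata:
      annotations:
        telepresence.getambassador.io/inject-traffic-agent: disabled
```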
Service Name and Port Annotations
Telepresence will automatically find all services and all ports that connect to a workload and make them available for an intercept, but you can explicitly define that only one service and/or port can be intercepted.
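A sketch of what this looks like in a workload's pod template, assuming the injector's inject-service-name and inject-service-port annotations (service name and port here are hypothetical):

```yaml
spec:
  template:
    metadata:
      annotations:
        telepresence.getambassador.io/inject-service-name: my-service  # hypothetical
        telepresence.getambassador.io/inject-service-port: "8080"      # hypothetical
```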
Ignore Certain Volume Mounts
An annotation telepresence.getambassador.io/inject-ignore-volume-mounts can be used to make the injector ignore certain volume mounts, denoted by a comma-separated string. The specified volume mounts from the original container will not be appended to the agent sidecar container.
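For example (the volume-mount names below are hypothetical):

```yaml
spec:
  template:
    metadata:
      annotations:
        # "cache" and "secrets" are hypothetical volume-mount names
        telepresence.getambassador.io/inject-ignore-volume-mounts: "cache,secrets"
```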
Note on Numeric Ports
If the targetPort of your intercepted service points at a port number then, in addition to injecting the Traffic Agent sidecar, Telepresence will also inject an initContainer that reconfigures the pod's firewall rules to redirect traffic to the Traffic Agent.
For example, the following service is using a numeric port, so Telepresence would inject an initContainer into it:
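A sketch of such a service (names are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: proxied-service    # hypothetical
spec:
  selector:
    app: proxied-app       # hypothetical
  ports:
    - port: 80
      targetPort: 8080     # numeric targetPort, so an initContainer is injected
```

Had targetPort instead referenced a named container port, the initContainer would not be needed.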
Excluding Environment Variables
If your pod contains sensitive variables, like a database password or a third-party API key, you may want to exclude those from being propagated through an intercept. Telepresence allows you to configure this through a ConfigMap that the Traffic Manager reads in order to remove the sensitive variables.
This can be done in two ways:
When installing your traffic-manager through Helm you can use the --set flag and pass a comma-separated list of variables:
telepresence helm install --set intercept.environment.excluded="{DATABASE_PASSWORD,API_KEY}"
This also applies when upgrading:
telepresence helm upgrade --set intercept.environment.excluded="{DATABASE_PASSWORD,API_KEY}"
Once this is done, the excluded environment variables will no longer appear in the environment file created by an intercept.
The other way to accomplish this is in your custom values.yaml. Customizing your traffic-manager through a values file can be viewed here.
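The values-file equivalent of the --set flags shown above would be:

```yaml
intercept:
  environment:
    excluded:
      - DATABASE_PASSWORD
      - API_KEY
```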
You can exclude any number of variables; they just need to match the key of the variable within a pod to be excluded.