Telepresence RBAC
This document provides a template for securing and limiting the permissions of Telepresence. It covers the full set of permissions necessary to administer Telepresence components in a cluster.
There are two general categories of cluster permissions with respect to Telepresence: RBAC settings for a User and for an Administrator, described below. The User is expected to have only the minimum cluster permissions necessary to create a Telepresence intercept, and to otherwise be unable to affect Kubernetes resources.
In addition, there is the question of how to manage Users and Groups in Kubernetes, which is outside the scope of this document. This document uses Service Accounts to assign Roles and Bindings. Other methods of RBAC administration and enforcement can be found on the Kubernetes RBAC documentation page.
Requirements
- Kubernetes version 1.16+
- Cluster admin privileges to apply RBAC
Editing your kubeconfig
This guide also assumes that you are using a kubeconfig file specified by the KUBECONFIG environment variable. This is a YAML file that contains the cluster's API endpoint information as well as the user data supplied for authentication. The Service Account used in the examples below is named tp-user. This can be replaced by any value (e.g. john or jane) as long as references to the Service Account are consistent throughout the YAML. After an administrator has applied the RBAC configuration, the user should create a config.yaml in their current directory that looks like the following:
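A minimal kubeconfig for this setup might look like the sketch below. The cluster name my-cluster, the server URL, and the certificate-authority data are placeholders that your cluster administrator must supply; the context name my-context and the user tp-user match the commands used later in this guide.

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster                        # placeholder cluster name
  cluster:
    server: https://127.0.0.1:6443        # placeholder: your cluster's API endpoint
    certificate-authority-data: <ca-data> # placeholder: base64-encoded CA bundle
users:
- name: tp-user
  user:
    token: <service-account-token>        # token obtained from the cluster administrator
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: tp-user
    namespace: ambassador
current-context: my-context
```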
The Service Account token is obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account creates an associated Secret in the same namespace with the name format <service-account-name>-token-<uuid>. The cluster administrator can retrieve the token by running kubectl get secret -n ambassador <service-account-secret-name> -o jsonpath='{.data.token}' | base64 -d. (On Kubernetes 1.24 and later, this Secret is no longer created automatically; the administrator can instead issue a token with kubectl create token tp-user -n ambassador.)
After creating config.yaml in your current directory, point KUBECONFIG at it by running export KUBECONFIG=$(pwd)/config.yaml. You should then be able to switch to this context by running kubectl config use-context my-context.
Administrating Telepresence
Telepresence administration requires permissions for creating Namespaces, ServiceAccounts, ClusterRoles, ClusterRoleBindings, Secrets, Services, and MutatingWebhookConfigurations, and for creating the traffic-manager deployment, which is typically done by a full cluster administrator. The following permissions are needed for the installation and use of Telepresence:
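As a sketch of what such an administrator role could look like, the ClusterRole below grants create/update/delete access on the resource types listed above. The name telepresence-admin is a hypothetical choice, and the exact rules should be checked against the Telepresence version and Helm chart you deploy.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: telepresence-admin   # hypothetical name
rules:
# Core resources used by the traffic-manager installation
- apiGroups: [""]
  resources: ["namespaces", "serviceaccounts", "secrets", "services"]
  verbs: ["get", "list", "create", "update", "delete"]
# RBAC objects for Telepresence components and users
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
  verbs: ["get", "list", "create", "update", "delete"]
# The agent-injector webhook
- apiGroups: ["admissionregistration.k8s.io"]
  resources: ["mutatingwebhookconfigurations"]
  verbs: ["get", "list", "create", "update", "delete"]
# The traffic-manager deployment itself
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "create", "update", "delete"]
```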
There are two ways to install the traffic-manager: using telepresence connect, or installing the Helm chart.
With telepresence connect, Telepresence uses your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. If you want the most introspection into what is being installed, we recommend using the Helm chart to install the traffic-manager.
Cluster-wide telepresence user access
To allow users to make intercepts across all namespaces, but with more limited kubectl permissions, the following ServiceAccount, ClusterRole, and ClusterRoleBinding will allow full telepresence intercept functionality.
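A sketch of such a configuration is shown below. The role name tp-role, the binding name tp-rolebinding, and the exact resource/verb lists are assumptions based on a typical intercept workflow (discovering workloads and services, and reading pods) and should be validated against your Telepresence version.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tp-user
  namespace: ambassador
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tp-role              # hypothetical name
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tp-rolebinding       # hypothetical name
subjects:
- kind: ServiceAccount
  name: tp-user
  namespace: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tp-role
```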
Traffic Manager connect permission
In addition to the cluster-wide permissions, the client will also need the following namespace scoped permissions in the traffic-manager's namespace in order to establish the needed port-forward to the traffic-manager.
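A sketch of a namespace-scoped Role and RoleBinding granting that port-forward is shown below, assuming the traffic-manager runs in the ambassador namespace; the object names and exact rules are assumptions to check against your version.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: traffic-manager-connect   # hypothetical name
  namespace: ambassador           # the traffic-manager's namespace
rules:
# Find the traffic-manager pod
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
# Establish the port-forward to it
- apiGroups: [""]
  resources: ["pods/portforward"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: traffic-manager-connect
  namespace: ambassador
subjects:
- kind: ServiceAccount
  name: tp-user
  namespace: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: traffic-manager-connect
```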
Namespace only telepresence user access
This section covers RBAC for multi-tenant scenarios where multiple development teams share a single cluster and users are constrained to one or more specific namespaces.
For each accessible namespace
The user will also need the Traffic Manager connect permission described above.
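As a sketch, each namespace the user may intercept in (here a hypothetical dev namespace) could get a Role and RoleBinding like the following. The rules mirror the cluster-wide permissions but are scoped to a single namespace; the object names and rule lists are assumptions.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tp-namespace-role     # hypothetical name
  namespace: dev              # hypothetical accessible namespace
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tp-namespace-rolebinding
  namespace: dev
subjects:
- kind: ServiceAccount
  name: tp-user
  namespace: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tp-namespace-role
```

Repeat this pair in every namespace the user should be able to access.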