Version: 1.4.x

Onboarding Clusters

This page explains how to onboard a Kubernetes cluster to an existing Tetrate Service Bridge management plane.

Before you start:

✓ Verify that you're logged in to the management plane. If you're not, follow the steps below.

Log into the Management Plane

If you are already logged in with tctl, you can skip this step.

tctl login

The login command prompts you to set an organization and tenant, and to provide a username and password. For onboarding clusters, you do not need to specify a tenant.

The username you log in with must have permission to create a cluster, as this is required to configure the management plane and onboard the cluster.
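
For reference, an interactive login session looks roughly like the following. The prompts and values shown here are illustrative and may differ slightly between tctl versions:

tctl login
Organization: tetrate
Tenant:
Username: admin
Password: *****
Login Successful!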

Look Up the Organization

To configure a cluster object in the next step, you need to know the organization that the cluster belongs to.

To look up an existing organization name, either use the Web UI, or use tctl get to query for details:

tctl get org

This should print a result similar to the following. Note the name of the organization, and proceed to the next step.

NAME       DISPLAY NAME    DESCRIPTION
tetrate    tetrate
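
If you need the full organization object rather than the table view, you can ask for YAML output (a sketch, assuming your tctl version supports an -o yaml output flag on get):

tctl get org tetrate -o yaml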

Configuring the Management Plane

To create the correct credentials for the cluster to communicate with the management plane, we need to create a cluster object using the management plane API.

Adjust the YAML object below according to your needs and save it to a file called new-cluster.yaml.

apiVersion: api.tsb.tetrate.io/v2
kind: Cluster
metadata:
  name: <cluster-name-in-tsb>
  organization: <organization-name>
spec:
  tokenTtl: "8760h"

Cluster name in TSB

<cluster-name-in-tsb> is the designated name for your cluster in TSB. You use this name in TSB APIs, such as the namespace selectors in workspaces and config groups. You will also use this name when creating a ControlPlane custom resource below.
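
As an illustration of where this name surfaces later, here is a minimal sketch of a Workspace whose namespace selector is scoped to this cluster. The workspace and tenant names are hypothetical; the point is the <cluster-name-in-tsb>/<namespace> convention:

apiVersion: api.tsb.tetrate.io/v2
kind: Workspace
metadata:
  name: example-workspace            # hypothetical workspace name
  organization: <organization-name>
  tenant: <tenant-name>              # hypothetical tenant
spec:
  namespaceSelector:
    names:
      - "<cluster-name-in-tsb>/default"   # cluster name scopes the namespace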

Cluster token TTL

To make sure communication between the TSB management plane and the cluster is not disrupted, you must renew the cluster token before it expires. You can set tokenTtl to a high value (e.g. 8760h, which is one year) to avoid having to renew the cluster token frequently.
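
When the token does near expiry, renewing it amounts to re-running the secrets generation and re-applying the output, using the same commands described later in this guide:

tctl install manifest control-plane-secrets \
  --cluster <cluster-name> | kubectl apply -f -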

Please refer to the reference docs for details on the configurable fields of a Cluster object.

To create the cluster object at the management plane, use tctl to apply the YAML file containing the cluster details.

tctl apply -f new-cluster.yaml
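
You can confirm that the cluster object now exists in the management plane by querying it again, in the same way as the organization above (the exact columns may vary by version):

tctl get cluster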

Deploy Operators

Next, you need to install the necessary components in the cluster to onboard and connect it to the management plane.

There are two operators you must deploy. First, the control plane operator, which is responsible for managing Istio, SkyWalking, Zipkin and various other components. Second, the data plane operator, which is responsible for managing gateways.

tctl install manifest cluster-operators \
--registry <registry-location> > clusteroperators.yaml

The install manifest cluster-operators command outputs the Kubernetes manifests of the required operators. We can then add this to our source control or apply it to the cluster:

kubectl apply -f clusteroperators.yaml
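
Once applied, you can watch the operator pods come up before proceeding. A quick check, assuming the usual namespace layout (the control plane operator in istio-system, the data plane operator in istio-gateway):

kubectl get pods -n istio-system
kubectl get pods -n istio-gateway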

Secrets

The control plane needs secrets in order to authenticate with the management plane. When rendering the control plane secrets manifest, tctl retrieves the necessary tokens from the management plane automatically, so you only need to provide the Elastic credentials, the XCP edge certificate secret, and the cluster name (so that the CLI tool can get tokens with the correct scope). Token generation is safe to run multiple times, as it does not revoke any previously created tokens.

Then you can run the following command to generate the control plane secrets:

tctl install manifest control-plane-secrets \
--cluster <cluster-name> \
> controlplane-secrets.yaml

The install manifest control-plane-secrets command outputs the required Kubernetes secrets. When saved to a file, we can add it to our source control or apply it to the cluster:

kubectl apply -f controlplane-secrets.yaml
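
A quick sanity check that the secrets landed in the control plane namespace (the individual secret names vary by version, so none are listed here):

kubectl get secrets -n istio-system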

For more information, see the CLI reference for the tctl install manifest control-plane-secrets command.

Installation

Finally, you will need to create a ControlPlane custom resource in Kubernetes that describes the control plane you wish to deploy.

For this step, you will be creating a manifest file that must include several variables:

Variable Name            Description
registry-location        URL of your Docker registry
elastic-hostname-or-ip   Address where your Elasticsearch instance is running
elastic-port             Port number where your Elasticsearch instance is listening
elastic-version          The major version number of your Elasticsearch instance (e.g. if the version is 7.13.0, the value should be 7)
tsb-address              Address where your TSB Management Plane is running
tsb-port                 Port number where your TSB Management Plane is listening
cluster-name-in-tsb      Name used when the cluster was registered to the TSB Management Plane

The value for tsb-address is the external IP address returned by the following command (make sure kubectl is pointing at the cluster where the TSB Management Plane has been installed):

kubectl get svc -n tsb envoy

The value for tsb-port should be 8443 unless you have changed it.
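
If the envoy service is exposed as a LoadBalancer, a jsonpath query can extract the address directly (this assumes the load balancer reports an IP; use .hostname instead if it reports a DNS name):

kubectl get svc -n tsb envoy -o jsonpath='{.status.loadBalancer.ingress[0].ip}'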

Elasticsearch configuration for demo install

If you are using the demo profile, values for elastic.host and elastic.port can be the same as tsb-address and tsb-port, as Envoy will properly redirect the traffic to the appropriate Pod.

Set elastic.version to 7 and elastic.selfSigned to true.

The Elasticsearch version can be found by examining the manifest for its Pod. Execute the following command to obtain the manifest:

kubectl get pod -n tsb elasticsearch-0 -o yaml

Within the manifest you should find a line resembling the following. The string after elasticsearch: is the version (6.4.3 in this example).

    image: <repository host or path>/elasticsearch:6.4.3
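
Alternatively, a jsonpath query returns just the image string, from which you can read off the version (this assumes Elasticsearch is the first container in the pod):

kubectl get pod -n tsb elasticsearch-0 -o jsonpath='{.spec.containers[0].image}'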

If you are using a self-signed certificate, replace selfSigned: <is-using-self-signed-CA> with selfSigned: true in the YAML file below. If you are not using a self-signed certificate, you can either omit this field or set it explicitly to false.

apiVersion: install.tetrate.io/v1alpha1
kind: ControlPlane
metadata:
  name: controlplane
  namespace: istio-system
spec:
  hub: <registry-location>
  telemetryStore:
    elastic:
      host: <elastic-hostname-or-ip>
      port: <elastic-port>
      version: <elastic-version>
      selfSigned: <is-using-self-signed-CA>
  managementPlane:
    host: <tsb-address>
    port: <tsb-port>
    clusterName: <cluster-name-in-tsb>
  meshExpansion: {}

For more details on what each of these sections describes and how to configure them, please refer to the reference docs.

Save this to a file called controlplane.yaml. It can then be applied to your Kubernetes cluster:

kubectl apply -f controlplane.yaml
Note

To onboard a cluster, you do not need to create any data plane descriptions at this stage. Data plane descriptions are only needed when adding Gateways. For more information, see the section on Gateways in the usage quickstart guide.
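
Before verifying the pods, you can confirm that the ControlPlane custom resource was accepted by the cluster (a quick check, assuming the CRD registers the controlplane resource name):

kubectl get controlplane -n istio-system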

Verify Onboarded Cluster

To verify that a cluster has been successfully onboarded, check that the pods have all started correctly.

kubectl get pod -n istio-system
NAME                                          READY   STATUS    RESTARTS   AGE
edge-6659df478d-2tkjw                         1/1     Running   0          25s
istio-operator-f8fd7dcd7-w8fjl                1/1     Running   0          2m19s
istiod-8495db5465-fd8kv                       1/1     Running   0          103s
oap-deployment-7c74b86c59-pg2jv               2/2     Running   0          2m19s
otel-collector-b96786f54-zxvz5                2/2     Running   0          2m19s
tsb-operator-control-plane-7dc8d87fd9-tsj5g   1/1     Running   0          8m48s
vmgateway-bcd58bbbd-j7skc                     1/1     Running   0          93s
xcp-operator-edge-54b75dc588-f4p2t            1/1     Running   0          2m18s
zipkin-64b6cf5ff4-wj2t8                       2/2     Running   0          2m18s
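
You can also confirm, from the management plane side, that the cluster is registered and visible (a sketch; the exact output varies by version):

tctl get cluster <cluster-name-in-tsb>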