Onboarding Clusters
This page explains how to onboard a Kubernetes cluster to an existing Tetrate Service Bridge management plane.
Before you start:
✓ Verify that you’re logged in to the management plane. If you're not, follow the steps below.
Log into the Management Plane
If you are already logged in with tctl, you can skip this step.
tctl login
The login command will prompt you to set an organization and tenant, and to provide a username and password. For onboarding clusters, you do not need to specify a tenant.
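As an illustration, an interactive login session might look something like the following; the values shown are placeholders, and the exact prompt text may differ between tctl versions.
$ tctl login
Organization: tetrate
Tenant:
Username: admin
Password: *****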
The username you log in with must have the correct permissions to create a cluster. This will allow you to configure the management plane and onboard a cluster.
Look Up the Organization
To configure a cluster object in the next step, you need to know the organization that the cluster belongs to.
To look up an existing organization name, either use the Web UI or use tctl get to query for details:
tctl get org
This should print a result similar to the following. Note the name of the organization, and proceed to the next step.
NAME      DISPLAY NAME    DESCRIPTION
tetrate   tetrate
Configuring the Management Plane
To create the correct credentials for the cluster to communicate with the management plane, we need to create a cluster object using the management plane API.
Adjust the YAML object below according to your needs and save it to a file called new-cluster.yaml.
apiVersion: api.tsb.tetrate.io/v2
kind: Cluster
metadata:
  name: <cluster-name-in-tsb>
  organization: <organization-name>
spec:
  tokenTtl: "8760h"
Cluster name in TSB
<cluster-name-in-tsb> is the designated name for your cluster in TSB. You use this name in TSB APIs, such as the namespace selectors in workspaces and config groups. You will also use this name when creating the ControlPlane custom resource below.
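For illustration only, a workspace that selects namespaces in this cluster might reference the name in its namespace selector as in the following sketch; the tenant, workspace, and namespace names here are hypothetical.
apiVersion: api.tsb.tetrate.io/v2
kind: Workspace
metadata:
  organization: <organization-name>
  tenant: <tenant-name>
  name: example-workspace
spec:
  namespaceSelector:
    names:
      - "<cluster-name-in-tsb>/example-namespace"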
Cluster token TTL
To make sure communication between the TSB management plane and the cluster is not disrupted, you must renew the cluster token before it expires. You can set tokenTtl to a very high value (e.g. 8760h, or 1 year) to avoid having to renew the cluster token frequently.
Please refer to the reference docs for details on the configurable fields of a Cluster object.
To create the cluster object at the management plane, use tctl to apply the YAML file containing the cluster details.
tctl apply -f new-cluster.yaml
Deploy Operators
Next, you need to install the necessary components in the cluster to onboard and connect it to the management plane.
There are two operators you must deploy: the control plane operator, which is responsible for managing Istio, SkyWalking, Zipkin, and various other components; and the data plane operator, which is responsible for managing gateways.
tctl install manifest cluster-operators \
--registry <registry-location> > clusteroperators.yaml
- Standard
- OpenShift
The install manifest cluster-operators command outputs the Kubernetes manifests of the required operators. We can then add this to our source control or apply it to the cluster:
kubectl apply -f clusteroperators.yaml
As with the management plane operator, we need to add the anyuid SCC to the control plane and data plane operator service accounts.
oc adm policy add-scc-to-user anyuid \
system:serviceaccount:istio-system:tsb-operator-control-plane
oc adm policy add-scc-to-user anyuid \
system:serviceaccount:istio-gateway:tsb-operator-data-plane
The install manifest cluster-operators command outputs the Kubernetes manifests of the required operators. We can then add this to our source control or apply it to the cluster:
oc apply -f clusteroperators.yaml
Secrets
The control plane needs secrets in order to authenticate with the management plane. The manifest render command for the cluster uses the tctl tool to retrieve the tokens for communicating with the management plane automatically, so you only need to provide the Elastic credentials, the XCP edge certificate secret, and the cluster name (so that the CLI tool can get tokens with the correct scope). Token generation is safe to run multiple times, as it does not revoke any previously created tokens.
Then you can run the following command to generate the control plane secrets:
tctl install manifest control-plane-secrets \
--cluster <cluster-name> \
> controlplane-secrets.yaml
The install manifest control-plane-secrets command outputs the required Kubernetes secrets. When saved to a file, we can add it to our source control or apply it to the cluster:
- Standard
- OpenShift
kubectl apply -f controlplane-secrets.yaml
oc apply -f controlplane-secrets.yaml
For more information, see the CLI reference for the tctl install manifest control-plane-secrets command.
Installation
Finally, you will need to create a ControlPlane custom resource in Kubernetes that describes the control plane you wish to deploy.
For this step, you will be creating a manifest file that must include several variables:
| Variable Name | Description |
| --- | --- |
| registry-location | URL of your Docker registry |
| elastic-hostname-or-ip | Address where your Elasticsearch instance is running |
| elastic-port | Port number where your Elasticsearch instance is listening |
| elastic-version | The major version number of your Elasticsearch instance (e.g. if the version is 7.13.0, the value should be 7) |
| tsb-address | Address where your TSB Management Plane is running |
| tsb-port | Port number where your TSB Management Plane is listening |
| cluster-name-in-tsb | Name used when the cluster was registered to the TSB Management Plane |
The value for tsb-address is the external IP address returned by the following command (make sure that kubectl is pointing to the cluster where the TSB Management Plane has been installed):
$ kubectl get svc -n tsb envoy
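The output will look roughly like the following (the addresses shown here are placeholders); use the value in the EXTERNAL-IP column as tsb-address.
NAME    TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)          AGE
envoy   LoadBalancer   10.0.30.196   203.0.113.10   8443:31414/TCP   2d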
The value for tsb-port should be set to 8443 unless it has been changed from the default.
Elasticsearch configuration for demo install
If you are using the demo profile, the values for elastic.host and elastic.port can be the same as tsb-address and tsb-port, as Envoy will properly redirect the traffic to the appropriate Pod. Set elastic.version to 7 and elastic.selfSigned to true.
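For example, assuming the default tsb-port of 8443, the telemetry store section of the ControlPlane resource for a demo installation might look like this sketch:
telemetryStore:
  elastic:
    host: <tsb-address>
    port: 8443
    version: 7
    selfSigned: true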
The Elasticsearch version can be found by examining the manifest of its Pod. Execute the following command to obtain the manifest:
kubectl get pods -n tsb elasticsearch-0 -o yaml
Within the manifest you should find a line resembling the following. The string following elasticsearch is the version.
image: <repository host or path>/elasticsearch:6.4.3
- Standard
- OpenShift
- Mirantis
If you are using a self-signed certificate, replace selfSigned: <is-using-self-signed-CA> with selfSigned: true in the YAML file below. If you are not using a self-signed certificate, you can either omit this field or specify an explicit false value.
apiVersion: install.tetrate.io/v1alpha1
kind: ControlPlane
metadata:
  name: controlplane
  namespace: istio-system
spec:
  hub: <registry-location>
  telemetryStore:
    elastic:
      host: <elastic-hostname-or-ip>
      port: <elastic-port>
      version: <elastic-version>
      selfSigned: <is-using-self-signed-CA>
  managementPlane:
    host: <tsb-address>
    port: <tsb-port>
    clusterName: <cluster-name-in-tsb>
  meshExpansion: {}
For more details on what each of these sections describes and how to configure them, please check out the following links:
This can then be applied to your Kubernetes cluster:
kubectl apply -f controlplane.yaml
On OpenShift, Istio requires the use of the Istio CNI plugin with some specific configuration to make it coexist with the default CNI plugin (Multus). Also, the mechanism for obtaining a TLS certificate for OAP is slightly different, so we need to adjust for that.
If you are using a self-signed certificate, replace selfSigned: <is-using-self-signed-CA> with selfSigned: true in the YAML file below. If you are not using a self-signed certificate, you can either omit this field or specify an explicit false value.
apiVersion: install.tetrate.io/v1alpha1
kind: ControlPlane
metadata:
  name: controlplane
  namespace: istio-system
spec:
  components:
    oap:
      kubeSpec:
        overlays:
        - apiVersion: extensions/v1beta1
          kind: Deployment
          name: oap-deployment
          patches:
          - path: spec.template.spec.containers.[name:oap].env.[name:SW_RECEIVER_GRPC_SSL_CERT_CHAIN_PATH].value
            value: /skywalking/pkin/tls.crt
          - path: spec.template.spec.containers.[name:oap].env.[name:SW_CORE_GRPC_SSL_TRUSTED_CA_PATH].value
            value: /skywalking/pkin/tls.crt
        service:
          annotations:
            service.beta.openshift.io/serving-cert-secret-name: dns.oap-service-account
    istio:
      kubeSpec:
        CNI:
          binaryDirectory: /var/lib/cni/bin
          chained: false
          configurationDirectory: /etc/cni/multus/net.d
          configurationFileName: istio-cni.conf
        overlays:
        - apiVersion: install.istio.io/v1alpha1
          kind: IstioOperator
          name: tsb-istiocontrolplane
          patches:
          - path: spec.meshConfig.defaultConfig.envoyAccessLogService.address
            value: oap.istio-system.svc:11800
          - path: spec.meshConfig.defaultConfig.envoyAccessLogService.tlsSettings.caCertificates
            value: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          - path: spec.values.cni.chained
            value: false
          - path: spec.values.sidecarInjectorWebhook
            value:
              injectedAnnotations:
                k8s.v1.cni.cncf.io/networks: istio-cni
      traceSamplingRate: 100
  hub: <registry-location>
  managementPlane:
    host: <tsb-address>
    port: <tsb-port>
    clusterName: <cluster-name-in-tsb>
  telemetryStore:
    elastic:
      host: <elastic-hostname-or-ip>
      port: <elastic-port>
      version: <elastic-version>
      selfSigned: <is-using-self-signed-CA>
  meshExpansion: {}
For more details on what each of these sections describes and how to configure them, please check out the following links:
Before applying it, bear in mind that you will have to add the service accounts of the different control plane components to your OpenShift authorization policies.
oc adm policy add-scc-to-user anyuid -n istio-system -z istiod-service-account # SA for istiod
oc adm policy add-scc-to-user anyuid -n istio-system -z vmgateway-service-account # SA for vmgateway
oc adm policy add-scc-to-user anyuid -n istio-system -z istio-system-oap # SA for OAP
oc adm policy add-scc-to-user privileged -n istio-system -z xcp-edge # SA for XCP-Edge
This can then be applied to your Kubernetes cluster:
oc apply -f controlplane.yaml
apiVersion: install.tetrate.io/v1alpha1
kind: ControlPlane
metadata:
  name: controlplane
  namespace: istio-system
spec:
  components:
    istio:
      kubeSpec:
        CNI:
          chained: true
          binaryDirectory: /opt/cni/bin
          configurationDirectory: /etc/cni/net.d
        # Depending on the underlying machine OS, you will need to uncomment the following
        # lines if the Istio CNI pods need privileged permissions to run.
        # overlays:
        # - apiVersion: install.istio.io/v1alpha1
        #   kind: IstioOperator
        #   name: tsb-istiocontrolplane
        #   patches:
        #   - path: spec.components.cni.k8s
        #     overlays:
        #     - apiVersion: extensions/v1beta1
        #       kind: DaemonSet
        #       name: istio-cni-node
        #       patches:
        #       - path: spec.template.spec.containers.[name:install-cni].securityContext
        #         value:
        #           privileged: true
  hub: <registry-location>
  managementPlane:
    host: <tsb-address>
    port: <tsb-port>
    clusterName: <cluster-name-in-tsb>
  telemetryStore:
    elastic:
      host: <elastic-hostname-or-ip>
      port: <elastic-port>
      version: <elastic-version>
  meshExpansion: {}
For more details on what each of these sections describes and how to configure them, please check out the following links:
Before applying it, bear in mind that you will have to grant the cluster-admin role to the istio-system:istio-operator service account.
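As a sketch, one way to grant this is with a ClusterRoleBinding; the binding name below is arbitrary.
kubectl create clusterrolebinding istio-operator-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=istio-system:istio-operator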
This can then be applied to your Kubernetes cluster:
kubectl apply -f controlplane.yaml
note
To onboard a cluster, you do not need to create any data plane descriptions at this stage. Data plane descriptions are only needed when adding Gateways. For more information, see the section on Gateways in the usage quickstart guide.
Verify Onboarded Cluster
To verify that a cluster has been successfully onboarded, check that all of the pods have started correctly.
- Standard
- OpenShift
kubectl get pod -n istio-system
NAME                                          READY   STATUS    RESTARTS   AGE
edge-6659df478d-2tkjw                         1/1     Running   0          25s
istio-operator-f8fd7dcd7-w8fjl                1/1     Running   0          2m19s
istiod-8495db5465-fd8kv                       1/1     Running   0          103s
oap-deployment-7c74b86c59-pg2jv               2/2     Running   0          2m19s
otel-collector-b96786f54-zxvz5                2/2     Running   0          2m19s
tsb-operator-control-plane-7dc8d87fd9-tsj5g   1/1     Running   0          8m48s
vmgateway-bcd58bbbd-j7skc                     1/1     Running   0          93s
xcp-operator-edge-54b75dc588-f4p2t            1/1     Running   0          2m18s
zipkin-64b6cf5ff4-wj2t8                       2/2     Running   0          2m18s
oc get pod -n istio-system
NAME                                          READY   STATUS    RESTARTS   AGE
edge-6659df478d-2tkjw                         1/1     Running   0          25s
istio-operator-f8fd7dcd7-w8fjl                1/1     Running   0          2m19s
istiod-8495db5465-fd8kv                       1/1     Running   0          103s
oap-deployment-7c74b86c59-pg2jv               2/2     Running   0          2m19s
otel-collector-b96786f54-zxvz5                2/2     Running   0          2m19s
tsb-operator-control-plane-7dc8d87fd9-tsj5g   1/1     Running   0          8m48s
vmgateway-bcd58bbbd-j7skc                     1/1     Running   0          93s
xcp-operator-edge-54b75dc588-f4p2t            1/1     Running   0          2m18s
zipkin-64b6cf5ff4-wj2t8                       2/2     Running   0          2m18s
Istio Setup for Onboarded Applications
Besides the CNI configuration required in the ControlPlane, you need to be aware that any namespace that is going to have workloads with Istio sidecars will need a NetworkAttachmentDefinition object created so that the pods can be attached to the istio-cni network.
cat <<EOF | oc -n <target-namespace> create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: istio-cni
EOF
Also note that the Envoy sidecars injected into the workloads run as user ID 1337, which is disallowed by default in OpenShift. Hence, we will need to add the anyuid SCC (or any other SCC that allows the aforementioned user ID) to the service accounts used in the application namespace.
oc adm policy add-scc-to-group anyuid \
system:serviceaccounts:<target-namespace>