Onboarding Clusters
This page explains how to onboard a Kubernetes cluster to an existing Tetrate Service Bridge management plane.
Before you start:
✓ Set up the TSB management plane. Visit the requirements and download page
for the how-to guide.
✓ Verify that you're logged in to the management plane. If you're not, follow
the steps below.
Log into the Management Plane
If you are already logged in with tctl, you can skip this step.
tctl login
The login command will prompt you to set an organization and tenant, and to
provide a username and password. For onboarding clusters, you do not need
to specify a tenant.
The username you log in with must have the correct permissions to create a
cluster. This will allow you to configure the management plane and onboard a
cluster.
Configuring the Management Plane
To create the correct credentials for the cluster to communicate with the
management plane, we need to create a cluster object using the management plane
API. To configure a cluster object, adjust the YAML below according to
your needs and save it to a file (for example, new-cluster.yaml).
apiVersion: api.tsb.tetrate.io/v2
kind: Cluster
metadata:
  name: <cluster-name-in-tsb>
  organization: <organization-name>
spec:
  tokenTtl: "8760h"
Cluster name in TSB
<cluster-name-in-tsb> is the designated name for your cluster in TSB. You use this name in TSB APIs, such as the namespace selectors in workspaces and config groups. You will also use it when creating the ControlPlane custom resource below.
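As an illustration of where this name is used, the sketch below shows a workspace whose namespace selector is scoped to the onboarded cluster. The workspace, tenant, and namespace names are placeholders; check the Workspace reference docs for the exact fields.
apiVersion: api.tsb.tetrate.io/v2
kind: Workspace
metadata:
  name: example-workspace
  organization: <organization-name>
  tenant: <tenant-name>
spec:
  namespaceSelector:
    names:
      - "<cluster-name-in-tsb>/example-namespace"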
Cluster token TTL
To make sure communication between the TSB management plane and the cluster is
not disrupted, you must renew the cluster token before it expires. You can set
tokenTtl to a very high value (e.g. 8760h, which is one year) to avoid having to
renew the cluster token frequently.
Please refer to the reference docs for details on the configurable fields of a Cluster object.
To create the cluster object at the management plane, use tctl to apply the
YAML file containing the cluster details.
tctl apply -f new-cluster.yaml
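To double-check that the object was created, you can read it back from the management plane. The command below assumes your version of tctl supports get for cluster objects:
tctl get cluster <cluster-name-in-tsb>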
Deploy Operators
Next, you need to install the necessary components in the cluster to onboard and connect it to the management plane.
There are two operators you must deploy. First, the control plane operator, which is responsible for managing Istio, SkyWalking, Zipkin, and various other components. Second, the data plane operator, which is responsible for managing gateways. These operators work independently of each other, so that gateway upgrades are decoupled from sidecar proxy upgrades.
tctl install manifest cluster-operators \
--registry <registry-location> > clusteroperators.yaml
- Standard
- OpenShift
The install manifest cluster-operators command outputs the Kubernetes manifests of the required operators. We can then add this to our source control or apply it to the cluster:
kubectl apply -f clusteroperators.yaml
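As a quick sanity check (not part of the required steps), you can verify that the operator pods come up in the namespaces referenced later in this guide, istio-system for the control plane operator and istio-gateway for the data plane operator:
kubectl get pods -n istio-system
kubectl get pods -n istio-gateway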
As with the management plane operator, we need to add the anyuid SCC to the
control plane and data plane operator service accounts.
oc adm policy add-scc-to-user anyuid \
system:serviceaccount:istio-system:tsb-operator-control-plane
oc adm policy add-scc-to-user anyuid \
system:serviceaccount:istio-gateway:tsb-operator-data-plane
The install manifest cluster-operators command outputs the Kubernetes manifests of the required operators. We can then add this to our source control or apply it to the cluster:
oc apply -f clusteroperators.yaml
Secrets
The control plane needs fewer secrets than the management plane as it only has
to connect to the TSB management plane and Elasticsearch. The manifest render
command for the cluster uses the tctl
tool to retrieve tokens to communicate
with the management plane automatically, so you only need to provide Elastic
credentials, XCP edge certificate secret, and the cluster name (so that the CLI
tool can get tokens with the correct scope). Token generation is safe to run
multiple times as it does not revoke any previously created tokens.
tctl install manifest control-plane-secrets \
--elastic-password tsb-elastic-password \
--elastic-username tsb \
--cluster <cluster-name> \
> controlplane-secrets.yaml
You need to set up a TLS secret named xcp-edge-cert so that the control plane
can talk to the management plane over mTLS. The certificate must be created
using the same chain of trust that was used to create the xcp-central-cert in
the management plane, and the secret must have the tls.crt, tls.key, and ca.crt
fields set.
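If you are managing the certificate yourself rather than using the cert-manager helper described below, one way to create this secret is directly from the certificate files. The file paths are placeholders, and the istio-system namespace is assumed to be where the control plane will be installed:
kubectl create secret generic xcp-edge-cert -n istio-system \
  --from-file=tls.crt=<path-to-edge-cert> \
  --from-file=tls.key=<path-to-edge-key> \
  --from-file=ca.crt=<path-to-ca-cert>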
cert manager
If you have installed cert-manager in your TSB management plane cluster, we
provide a convenience method in tctl to help create a control plane certificate.
If you have used the demo installation profile for TSB, cert-manager has been
installed for your convenience. In this case, add the following --xcp-certs flag
to the above install manifest command to automatically create your control plane
cluster's xcp-edge-cert.
tctl install manifest control-plane-secrets \
--xcp-certs "$(tctl install cluster-certs --cluster <cluster-name>)" \
...
Please note that you will need to have the current context of kubectl pointing
to your management plane cluster when creating the secrets manifest with tctl
install cluster-certs. When applying the resulting secrets manifest, don't
forget to switch the current context of kubectl back to the onboarding cluster.
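For example, assuming hypothetical kubectl context names mgmt-cluster and app-cluster, the sequence looks like this:
kubectl config use-context mgmt-cluster   # management plane cluster
tctl install manifest control-plane-secrets ... > controlplane-secrets.yaml
kubectl config use-context app-cluster    # onboarding cluster
kubectl apply -f controlplane-secrets.yaml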
The install manifest control-plane-secrets command outputs the required
Kubernetes secrets. Once saved to a file, we can add it to our source control or
apply it to the cluster:
- Standard
- OpenShift
kubectl apply -f controlplane-secrets.yaml
oc apply -f controlplane-secrets.yaml
For more information, see the CLI reference for the tctl install manifest
control-plane-secrets command.
Installation
Finally, we need to create a ControlPlane custom resource in Kubernetes that describes the control plane we wish to deploy.
Cluster name in TSB
Make sure to replace <cluster-name-in-tsb> with the value that you set
previously when creating the cluster object in the TSB management plane.
- Standard
- OpenShift
apiVersion: install.tetrate.io/v1alpha1
kind: ControlPlane
metadata:
  name: controlplane
  namespace: istio-system
spec:
  hub: <registry-location>
  telemetryStore:
    elastic:
      host: <elastic-hostname-or-ip>
      port: <elastic-port>
      version: <elastic-version>
  managementPlane:
    host: <tsb-address>
    port: <tsb-port>
    clusterName: <cluster-name-in-tsb>
  meshExpansion: {}
For more details on what each of these sections describes and how to configure them, please check out the following links:
This can then be applied to your Kubernetes cluster:
kubectl apply -f controlplane.yaml
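If you want to follow the rollout, you can watch the pods in the istio-system namespace until they reach the state shown in the Verify Onboarded Cluster section below:
kubectl get pods -n istio-system -w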
The OpenShift installation needs a couple of adjustments. First, Istio requires the use of the CNI plugin with some specific configuration to make it coexist with the default CNI (Multus). Also, the mechanism for obtaining a TLS certificate for OAP is slightly different, so we need to adjust for that.
apiVersion: install.tetrate.io/v1alpha1
kind: ControlPlane
metadata:
  name: controlplane
  namespace: istio-system
spec:
  components:
    oap:
      kubeSpec:
        overlays:
          - apiVersion: extensions/v1beta1
            kind: Deployment
            name: oap-deployment
            patches:
              - path: spec.template.spec.containers.[name:oap].env.[name:SW_CORE_GRPC_SSL_CERT_CHAIN_PATH].value
                value: /skywalking/pkin/tls.crt
              - path: spec.template.spec.containers.[name:oap].env.[name:SW_CORE_GRPC_SSL_TRUSTED_CA_PATH].value
                value: /skywalking/pkin/tls.crt
        service:
          annotations:
            service.beta.openshift.io/serving-cert-secret-name: dns.oap-service-account
    istio:
      kubeSpec:
        CNI:
          binaryDirectory: /var/lib/cni/bin
          chained: false
          configurationDirectory: /etc/cni/multus/net.d
          configurationFileName: istio-cni.conf
        overlays:
          - apiVersion: install.istio.io/v1alpha1
            kind: IstioOperator
            name: tsb-istiocontrolplane
            patches:
              - path: spec.meshConfig.defaultConfig.envoyAccessLogService.address
                value: oap.istio-system.svc:11800
              - path: spec.meshConfig.defaultConfig.envoyAccessLogService.tlsSettings.caCertificates
                value: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
              - path: spec.values.cni.chained
                value: false
              - path: spec.values.sidecarInjectorWebhook
                value:
                  injectedAnnotations:
                    k8s.v1.cni.cncf.io/networks: istio-cni
      traceSamplingRate: 100
  hub: <registry-location>
  managementPlane:
    host: <tsb-address>
    port: <tsb-port>
    clusterName: <cluster-name-in-tsb>
  telemetryStore:
    elastic:
      host: <elastic-hostname-or-ip>
      port: <elastic-port>
      version: <elastic-version>
  meshExpansion: {}
For more details on what each of these sections describes and how to configure them, please check out the following links:
This can then be applied to your Kubernetes cluster:
oc apply -f controlplane.yaml
Istio setup for onboarded applications
Besides the CNI configuration required in the ControlPlane, be aware that any
namespace that will run workloads with Istio sidecars needs a
NetworkAttachmentDefinition object so that the pods can be attached to the
istio-cni network.
cat <<EOF | oc -n <target-namespace> create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: istio-cni
EOF
Also note that the Envoy sidecars injected into the workloads run as user ID
1337, which is disallowed by default in OpenShift. Hence, we will need to add
the anyuid SCC (or any other SCC that allows the aforementioned user ID) to the
service accounts used in the application namespace. This requirement also
applies to any IngressGateway or Tier1Gateway deployed in the cluster.
oc adm policy add-scc-to-group anyuid system:serviceaccounts:<target-namespace>
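If you prefer not to grant the SCC to every service account in the namespace, you can target a specific service account instead; my-app-sa below is a hypothetical name:
oc adm policy add-scc-to-user anyuid -z my-app-sa -n <target-namespace>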
note
To onboard a cluster, you do not need to create any data plane descriptions at this stage. Data plane descriptions are only needed when adding Gateways. For more information, see the section on Gateways in the usage quickstart guide.
Verify Onboarded Cluster
To verify that a cluster has been successfully onboarded, check that the pods have all started correctly.
- Standard
- OpenShift
kubectl get pod -n istio-system
NAME READY STATUS RESTARTS AGE
edge-6659df478d-2tkjw 1/1 Running 0 25s
istio-operator-f8fd7dcd7-w8fjl 1/1 Running 0 2m19s
istiod-8495db5465-fd8kv 1/1 Running 0 103s
oap-deployment-7c74b86c59-pg2jv 2/2 Running 0 2m19s
otel-collector-b96786f54-zxvz5 2/2 Running 0 2m19s
tsb-operator-control-plane-7dc8d87fd9-tsj5g 1/1 Running 0 8m48s
vmgateway-bcd58bbbd-j7skc 1/1 Running 0 93s
xcp-operator-edge-54b75dc588-f4p2t 1/1 Running 0 2m18s
zipkin-64b6cf5ff4-wj2t8 2/2 Running 0 2m18s
oc get pod -n istio-system
NAME READY STATUS RESTARTS AGE
edge-6659df478d-2tkjw 1/1 Running 0 25s
istio-operator-f8fd7dcd7-w8fjl 1/1 Running 0 2m19s
istiod-8495db5465-fd8kv 1/1 Running 0 103s
oap-deployment-7c74b86c59-pg2jv 2/2 Running 0 2m19s
otel-collector-b96786f54-zxvz5 2/2 Running 0 2m19s
tsb-operator-control-plane-7dc8d87fd9-tsj5g 1/1 Running 0 8m48s
vmgateway-bcd58bbbd-j7skc 1/1 Running 0 93s
xcp-operator-edge-54b75dc588-f4p2t 1/1 Running 0 2m18s
zipkin-64b6cf5ff4-wj2t8 2/2 Running 0 2m18s