From Helm-based installation
TSB release 0.8.0 includes a new operator-based installation method. While this approach greatly simplifies the platform lifecycle for installation and upgrades, some considerations apply when upgrading from a release deployed with the legacy Helm approach.
Starting with TSB 0.8.0, the TSB components have changed their names to fit the Tetrate Service Bridge name, discarding the old TCC acronym. This means some deployments or secrets have changed their names to reflect that.
Management plane
Secrets
Due to the naming changes, the secret containing the certificate used to terminate TLS traffic when accessing TSB has been renamed from tcc-certs to tsb-certs. You will need to clone the tcc-certs secret to tsb-certs, or create tsb-certs again using the same certificates.
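Since the certificates themselves do not change, one way to perform the rename, assuming kubectl and jq are available (any equivalent method that reproduces the secret under the new name works just as well), is:

kubectl -n ${MANAGEMENT_NAMESPACE} get secret tcc-certs -o json \
  | jq '.metadata.name = "tsb-certs" | del(.metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp)' \
  | kubectl apply -f -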
The secrets storing the LDAP credentials, the admin super user credentials, and the synchronization jobs token (teamsync-token) do not change, but they are no longer managed by the TSB installation procedure.
Postgres credentials were added to the TSB Kubernetes deployment itself in previous releases; with the operator approach they need to be stored in a secret called postgres-credentials. Check the secrets requirements section of the installation documentation to create this secret. This secret must exist before you apply the ManagementPlane resource to the cluster, otherwise the TSB API server will not be able to start.
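As a sketch, assuming the secret holds username and password keys (verify the exact key names against the secrets requirements section of the installation documentation for your TSB version):

kubectl -n ${MANAGEMENT_NAMESPACE} create secret generic postgres-credentials \
  --from-literal=username=<postgres-user> \
  --from-literal=password=<postgres-password>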
Persistent storage
After installing the TSB management plane operator as described in the installation documentation, you will need to take the following considerations into account for the telemetry and data stores.
If your installation uses the bundled Elasticsearch instance, deployed as a Kubernetes StatefulSet, the operator will not be able to take ownership of it. The same happens with the bundled Postgres instance. To upgrade successfully from such an installation, you will need to treat both Elasticsearch and Postgres as externally deployed, so that the TSB operator will not manage them:
apiVersion: install.tetrate.io/v1alpha1
kind: ManagementPlane
metadata:
  namespace: tcc
  name: tsbmgmtplane
spec:
  hub: <your-registry>
  tenant: <your-tenant>
  telemetryStore:
    elastic:
      host: elasticsearch
      port: 9200
      protocol: http
      version: 6
  dataStore:
    postgres:
      host: postgres
      port: 5432
      sslMode: disable
      connection_lifetime: 5m
The snippet above describes a ManagementPlane object configured to point to your currently deployed Elasticsearch and Postgres. Note, though, that these become unmanaged by the operator, and you will need to manage them separately.
If your installation already used externally managed Elasticsearch and/or Postgres, you will need to create the ManagementPlane with the settings you already provided to the previous release's Helm charts.
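In either case, the resource is applied like any other Kubernetes object; for example (the filename here is illustrative, and the namespace is taken from the resource's metadata):

kubectl apply -f managementplane.yaml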
Clean up unmanaged resources
Due to the naming changes, some deployments running older versions of the TSB components will be left in the cluster. After applying the ManagementPlane resource to the cluster and checking that the new deployments have started successfully, you will need to manually delete the following objects:
deployment/oap-deployment
deployment/prometheus
deployment/tcc
service/tcc
service/prometheus
secret/tcc-certs
serviceaccount/prometheus
clusterrolebinding/prometheus-tcc
clusterrole/prometheus-tcc
kubectl -n ${MANAGEMENT_NAMESPACE} delete deployment oap-deployment prometheus tcc
kubectl -n ${MANAGEMENT_NAMESPACE} delete service tcc prometheus
kubectl -n ${MANAGEMENT_NAMESPACE} delete secret tcc-certs
kubectl delete clusterrolebinding prometheus-tcc
kubectl delete clusterrole prometheus-tcc
Control plane
Istiod migration
The TSB control plane in 0.8.0 ships with an Istio control plane based on istiod. This implies some changes that you need to be aware of.
Clean up unmanaged resources
In the control plane, the need for cleanup comes from the change to an Istio version using istiod. Istio will now run just a single component grouping the functionality of the previous components, so after applying the ControlPlane resource and checking that the new components are healthy, you will need to remove the following objects:
deployment/istio-citadel
deployment/istio-pilot
deployment/istio-sidecar-injector
deployment/istio-tracing
service/istio-citadel
service/istio-pilot
service/istio-sidecar-injector
service/tracing
serviceaccount/istio-citadel-service-account
serviceaccount/istio-ingressgateway-service-account
serviceaccount/istio-multi
serviceaccount/istio-reader
serviceaccount/istio-sidecar-injector-service-account
serviceaccount/oap-service-account
serviceaccount/tsbd-service-account
clusterrolebinding/istio-citadel-istio-system
clusterrolebinding/istio-multi
clusterrolebinding/istio-reader
clusterrolebinding/istio-sidecar-injector-admin-role-binding-istio-system
clusterrolebinding/tsbd-admin-role-binding-istio-system
clusterrole/istio-citadel-istio-system
clusterrole/istio-reader
clusterrole/istio-sidecar-injector-istio-system
kubectl -n ${CONTROL_NAMESPACE} delete deployment istio-citadel istio-pilot istio-sidecar-injector istio-tracing
kubectl -n ${CONTROL_NAMESPACE} delete service istio-citadel istio-pilot istio-sidecar-injector tracing
kubectl -n ${CONTROL_NAMESPACE} delete serviceaccount istio-citadel-service-account istio-ingressgateway-service-account istio-multi istio-reader istio-sidecar-injector-service-account oap-service-account tsbd-service-account
kubectl delete clusterrolebinding istio-citadel-istio-system istio-multi istio-reader istio-sidecar-injector-admin-role-binding-istio-system tsbd-admin-role-binding-istio-system
kubectl delete clusterrole istio-citadel-istio-system istio-reader istio-sidecar-injector-istio-system
Data plane
Sidecars
The istio-pilot service disappears. This means that all of the deployed sidecars and gateways will be unable to receive further updates, as they are configured to connect to this service. While traffic should not be affected by this change, the sidecars will not receive new configuration (for example, the addition or removal of service endpoints). Because of that, you will need to delete the application pods in a controlled manner so that the new pods are configured to fetch updates from istiod instead of istio-pilot.
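One way to recycle the pods in a controlled manner, assuming your applications are managed by Deployments and you run kubectl 1.15 or newer, is a rolling restart per application namespace:

kubectl -n <app-namespace> rollout restart deployment
kubectl -n <app-namespace> rollout status deployment <deployment-name>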
Ingress gateways
The TSB data plane operator will deploy the TSB gateways whenever a DataPlaneConfig resource is created. The DataPlaneConfig has to be created in the namespace where the TSB gateway is needed; there is a 1:1 mapping between a DataPlaneConfig and the TSB gateway for its namespace. To upgrade your current gateways, you will need to create the corresponding resource for each of them, as sketched below.
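As an illustrative sketch only, assuming the DataPlaneConfig shares the install.tetrate.io/v1alpha1 API group used by the ManagementPlane above (refer to the data plane installation documentation for the actual schema and spec fields):

apiVersion: install.tetrate.io/v1alpha1  # assumed API group; verify against the installation documentation
kind: DataPlaneConfig
metadata:
  namespace: <ns>          # namespace where the existing gateway lives
  name: <gateway-name>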
Clean up unmanaged resources
The deployment of the ingress gateways also changes its name with the new gateways, moving from tsb-gateway to tsb-gateway-<ns> (where <ns> is the namespace the DataPlaneConfig resource is deployed to). When you deploy a DataPlaneConfig resource for one of your existing gateways, the data plane operator will create a new deployment for it while keeping the old deployment in place.
The Kubernetes service that receives the incoming traffic will forward it to both gateway deployments, old and new, so once you have verified that the newly deployed pods work as expected you can clean up the old deployment.
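For example, once the new tsb-gateway-<ns> pods are serving traffic correctly:

kubectl -n <ns> delete deployment tsb-gateway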
Elasticsearch
Due to changes in the SkyWalking templates and in the index and template naming, it is imperative to delete the SkyWalking-related Elasticsearch indices and templates. Please follow the procedure described in the Elasticsearch wipe procedure page to do so.