Management Plane Installation
This page will show you how to install the Tetrate Service Bridge management plane in a production environment.
Before you start:
✓ Make sure that you’ve checked the requirements
✓ You’ve configured all the external dependencies
✓ Downloaded the Tetrate Service Bridge CLI (`tctl`)
✓ Synced the Tetrate Service Bridge images
TSB Management Plane
To keep installation simple while still allowing a wide range of custom
configuration options, we have created a management plane operator. The operator
runs in the cluster and bootstraps the management plane as described in a
`ManagementPlane` custom resource. It watches for changes to that resource and
enacts them. To help you create the right custom resource document, the `tctl`
client can create the base manifests, which you can then modify according to
your required setup. After this you can either apply the manifests directly to
the appropriate clusters or use them in your source-control-operated clusters.
Operators
If you would like to know more about the inner workings of Operators and the Operator Pattern, review the Kubernetes documentation.
Required Certificates
For the TSB management plane to communicate with TSB control planes, you need to
set up TLS secrets. These secrets need to be created from the same trust chain
and have the `tls.crt`, `tls.key`, and `ca.crt` fields set.
In the management plane namespace (default is `tsb`), this secret must be named
`xcp-central-cert`. In the control plane namespaces (default is `istio-system`),
this secret must be named `xcp-edge-cert`.
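As a minimal sketch, assuming you already have PEM-encoded certificate, key, and CA files from your PKI on disk (the local file names below are placeholders), the management plane secret with the required fields can be created like this:

```shell
# Create the management plane certificate secret in the tsb namespace.
# tls.crt, tls.key, and ca.crt are the field names TSB expects; the
# local file names (central.crt, central.key, ca.crt) are placeholders
# for your own PKI output.
kubectl create secret generic xcp-central-cert -n tsb \
  --from-file=tls.crt=central.crt \
  --from-file=tls.key=central.key \
  --from-file=ca.crt=ca.crt
```

The control plane `xcp-edge-cert` secret is created the same way in each control plane namespace, using a leaf certificate issued from the same trust chain.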
If you have installed cert-manager in your TSB management plane cluster, we
provide some convenience methods in `tctl` to help bootstrap these certificates.
Operator Installation
First, create the manifest allowing you to install the management plane operator from your private Docker registry:
tctl install manifest management-plane-operator \
--registry <registry-location> > managementplaneoperator.yaml
- Standard
- OpenShift
The `managementplaneoperator.yaml` file created by the install manifest command
can be applied directly to the appropriate cluster by using the `kubectl` client:
kubectl apply -f managementplaneoperator.yaml
After applying the manifest you will see the operator running in the `tsb` namespace:
kubectl get pod -n tsb
In OpenShift, the TSB operator needs the `anyuid` SCC in order to be able to
start the webhooks for validating and setting defaults to the ManagementPlane
resources.
oc adm policy add-scc-to-user anyuid \
system:serviceaccount:tsb:tsb-operator-management-plane
The `managementplaneoperator.yaml` file created by the install manifest command
can be applied directly to the appropriate cluster by using the `oc` client:
oc apply -f managementplaneoperator.yaml
After applying the manifest you will see the operator running in the `tsb` namespace:
oc get pod -n tsb
Example output:
NAME READY STATUS RESTARTS AGE
tsb-operator-management-plane-d4c86f5c8-b2zb5 1/1 Running 0 8s
Management Plane Installation
The management plane components need some secrets for external communication
purposes. The required secrets are split into five categories, represented by
the flag prefixes: `tsb`, `xcp`, `postgres`, `elastic`, and `ldap`.
These can be generated in the correct format by passing them as command-line flags to the `management-plane-secrets` manifest command.
The management plane communicates with a cluster control plane over mTLS. You will
need to set up a TLS certificate and store it in a secret named `xcp-central-cert`
in the management plane namespace, including the standard `tls.crt`, `tls.key`, and
`ca.crt` fields. The CA certificate must be able to verify the certificate presented
by the control plane, which will be set up in a similar fashion when onboarding a
cluster; that is, this `xcp-central-cert` must be created from the same chain of
trust as the control plane certificates. For the CA, we recommend plugging into your
existing PKI infrastructure. The leaf certificate must have a trust domain of
`xcp.tetrate.io` set in its URI SAN, e.g. `spiffe://xcp.tetrate.io/central`.
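To check that an existing leaf certificate carries the expected SPIFFE identity in its URI SAN, you can inspect it with `openssl` (the certificate file name here is a placeholder for your own):

```shell
# Print the Subject Alternative Name extension of the leaf certificate;
# the output should include URI:spiffe://xcp.tetrate.io/central.
openssl x509 -in tls.crt -noout -text | grep -A1 'Subject Alternative Name'
```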
cert-manager
In case you have installed `cert-manager` in the management plane cluster, you
can have `tctl` automatically install certificates for secure communication with
control planes. To do this, add the `--xcp-certs` flag to the install manifest
command listed below.
The below command represents the minimum required configuration for creating
secrets. See the CLI reference documentation for all available options, such as
providing CA certificates for Elasticsearch, PostgreSQL, and LDAP.
tctl install manifest management-plane-secrets \
--elastic-password <elastic-password> \
--elastic-username <elastic-username> \
--ldap-bind-dn <ldap-bind-dn> \
--ldap-bind-password <ldap-bind-password> \
--postgres-password <postgres-password> \
--postgres-username <postgres-username> \
--tsb-admin-password <tsb-admin-password> \
--tsb-server-certificate "$(cat foo.cert)" \
--tsb-server-key "$(cat foo.key)" > managementplane-secrets.yaml
You can check the bundled explanation from `tctl` by running this help command:
tctl install manifest management-plane-secrets --help
Once you've created your secrets manifest, apply it to your cluster.
Vault Injection
If you’re using Vault
injection for certain components, remove the applicable
secrets from the manifest that you’ve created before applying it to your
cluster.
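As one possible sketch, assuming `yq` (v4) is available and that the Vault-injected secret is named `postgres-credentials` (a hypothetical name; substitute the actual secret name used in your environment), you could filter it out of the generated multi-document manifest before applying:

```shell
# Drop the Vault-managed secret from the generated secrets manifest.
# "postgres-credentials" is a placeholder; use the name of the secret
# that Vault will inject in your cluster.
yq eval 'select(.metadata.name != "postgres-credentials")' \
  managementplane-secrets.yaml > managementplane-secrets-filtered.yaml
```

You would then apply the filtered file instead of the original.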
- Standard
- OpenShift
kubectl apply -f managementplane-secrets.yaml
oc apply -f managementplane-secrets.yaml
Installation
Now we’re ready to deploy the management plane. To deploy it, create a
`ManagementPlane` custom resource in the Kubernetes cluster that describes the
management plane.
Below is a `ManagementPlane` custom resource describing a basic management
plane. Save it as `managementplane.yaml` and adjust it according to your needs:
apiVersion: install.tetrate.io/v1alpha1
kind: ManagementPlane
metadata:
  name: managementplane
  namespace: tsb
spec:
  hub: <registry-location>
  dataStore:
    postgres:
      host: <postgres-hostname-or-ip>
      port: <postgres-port>
      name: <database-name>
  telemetryStore:
    elastic:
      host: <elastic-hostname-or-ip>
      port: <elastic-port>
      version: <elastic-version>
  identityProvider:
    ldap:
      host: <ldap-hostname-or-ip>
      port: <ldap-port>
      search:
        baseDN: dc=tetrate,dc=io
      iam:
        matchDN: "cn=%s,ou=People,dc=tetrate,dc=io"
        matchFilter: "(&(objectClass=person)(uid=%s))"
      sync:
        usersFilter: "(objectClass=person)"
        groupsFilter: "(objectClass=groupOfUniqueNames)"
        membershipAttribute: uniqueMember
  tokenIssuer:
    jwt:
      expiration: 1h
      issuers:
        - name: https://jwt.tetrate.io
          algorithm: RS256
          signingKey: tls.key
For more information on what each of these sections describes and how to configure them, please check out the following links:
Edit the relevant sections, save your configured custom resource to a file and apply it to your Kubernetes cluster.
- Standard
- OpenShift
kubectl apply -f managementplane.yaml
Once applied, ensure that TSB has created a default tenant and onboarded your identity provider.
kubectl create job -n tsb teamsync-bootstrap --from=cronjob/teamsync
oc apply -f managementplane.yaml
Once applied, ensure that TSB has created a default tenant and onboarded your identity provider.
oc create job -n tsb teamsync-bootstrap --from=cronjob/teamsync
Note: TSB will automatically do this every hour, so this command only needs to be run once after the initial installation.
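You can confirm the one-off sync job completed by checking its status and logs (shown here with `kubectl`; use `oc` on OpenShift):

```shell
# Wait for the bootstrap team sync job to finish, then inspect its logs.
kubectl wait --for=condition=complete job/teamsync-bootstrap -n tsb --timeout=120s
kubectl logs -n tsb job/teamsync-bootstrap
```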
Verifying Installation
To verify that your installation succeeded, log in as the admin user: connect
to the TSB UI, or log in with the `tctl` CLI tool.
The TSB UI is reachable on port 8443 of the external IP as returned by the following command:
- Standard
- OpenShift
kubectl get svc -n tsb envoy
oc get svc -n tsb envoy
To configure `tctl`’s default config profile to point to your new TSB cluster,
do the following:
tctl config clusters set default --bridge-address $(kubectl get svc -n tsb envoy --output jsonpath='{.status.loadBalancer.ingress[0].ip}'):8443
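If your cloud provider exposes the load balancer by DNS hostname rather than IP address (as AWS load balancers do), read the `hostname` field from the service status instead:

```shell
# Same as above, but using the load balancer hostname instead of the IP.
tctl config clusters set default --bridge-address \
  $(kubectl get svc -n tsb envoy --output jsonpath='{.status.loadBalancer.ingress[0].hostname}'):8443
```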
Now you can log in with `tctl` and provide the tenant (which will default to
`tetrate`) and admin account credentials.
tctl login
Tenant: tetrate
Username: admin
Password: *****
Login Successful!