Deploying L1 Gateways
L1 Gateways typically run on a separate cluster. They can be used to shift traffic between two different clusters, such as a VM-based cluster and a Kubernetes cluster.
In this documentation, we will assume that you have already created the application and deployed its services on two different clusters. We will deploy a dedicated L1 load balancer for the bookinfo application on a separate cluster that has been onboarded into TSB. You need the load balancer IP or host name of the TSB cluster. Refer to Tetrate Service Bridge Installation for details on how to obtain ${TSBIP}.
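If the TSB management plane is exposed through a Kubernetes LoadBalancer service, one way to capture ${TSBIP} is to read it from the service status. This is a minimal sketch, assuming the management plane service is named envoy and runs in the tsb namespace; adjust both names to your installation.
# Assumed names: service "envoy" in namespace "tsb" -- adjust to your install.
TSBIP=$(kubectl get svc envoy -n tsb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Some cloud providers report a hostname instead of an IP:
# TSBIP=$(kubectl get svc envoy -n tsb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo ${TSBIP}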
1. Creating the L1 Load Balancer Object
The command below updates the bookinfo application to use a dedicated L1 load balancer service running in the bookinfo-l1 namespace. It specifies that the dedicated load balancer will have the label app: tsb-gateway-bookinfo-l1, and use the TLS credentials found in bookinfo-l1-secret to terminate the TLS traffic for bookinfo.acme.com. Traffic for /details will be sent directly to the endpoints of the details-vm service deployed in the VM cluster, while all other traffic will be forwarded to the dedicated L2 load balancer in the bookinfo-front namespace running in another cluster, listening on port 443.
cat >/tmp/l1.json <<EOF
{
  "id": "tsb-gateway-bookinfo-l1",
  "hostname": "tsb-gateway-bookinfo-l1.bookinfo-l1.svc.cluster.local",
  "labels": {
    "app": "tsb-gateway-bookinfo-l1"
  },
  "namespace": "bookinfo-l1",
  "serviceType": "LOADBALANCER",
  "lbSettings": {
    "enableWorkflows": false,
    "loadBalancerClass": "ENVOY",
    "loadBalancerTier": "TIER1",
    "routes": [
      {
        "hostname": "bookinfo.acme.com",
        "tls": {
          "tlsMode": "SIMPLE",
          "secretName": "bookinfo-l1-secret"
        },
        "httpSettings": {
          "routeRules": [
            {
              "match": [
                {
                  "uri": {
                    "prefix": "/details"
                  }
                }
              ],
              "route": {
                "destinations": [
                  {
                    "local": {
                      "service": "details-vm"
                    },
                    "weight": 100,
                    "port": 443
                  }
                ]
              }
            },
            {
              "route": {
                "destinations": [
                  {
                    "remote": {
                      "service": "tsb-gateway-bookinfo-front"
                    },
                    "weight": 100,
                    "port": 443
                  }
                ]
              }
            }
          ]
        }
      }
    ]
  }
}
EOF
curl --request POST -k --url https://${TSBIP}:8443/v1/tenants/tenant1/environments/dev/applications/bookinfo/services \
-u "admin:<credential>" \
--header 'accept: application/json' \
--header 'content-type: application/json' \
--data @/tmp/l1.json
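To confirm that the service object was registered, you can query the same endpoint with a GET request. This is a sketch under the assumption that the services collection supports listing via GET, mirroring the POST path used above.
curl --request GET -k --url https://${TSBIP}:8443/v1/tenants/tenant1/environments/dev/applications/bookinfo/services \
  -u "admin:<credential>" \
  --header 'accept: application/json'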
1.1. Install the L1 Load Balancers in the Application's Namespace
When installing a dedicated L1 load balancer, we need to provide the helm installer with the Tenant ID and the Environment ID (refer to Creating Tenants and Environments), and the Cluster ID (refer to Onboarding Application Clusters). We shall refer to these three values as ${TENANT}, ${ENV} and ${CLUSTER}.
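For the commands that follow, it helps to export these values once in your shell. The tenant and environment below match the ones used in the API call above; the cluster ID is a placeholder to replace with the value reported for your onboarded cluster.
export TENANT=tenant1
export ENV=dev
export CLUSTER=<cluster-id>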
OpenShift
OpenShift users need to add the load balancer service account to the gateway security context constraint.
oc adm policy add-scc-to-user gw-scc -z tsb-gateway-service-account -n <gateway-namespace>
Create a data plane resource YAML file as described below:
API compatibility
The current DataPlaneConfig API matches the Istio operator API, but this will change in future releases of TSB.
cat <<EOYAML > bookinfo-l1-lb.yaml
---
apiVersion: install.tetrate.io/v1alpha1
kind: DataPlaneConfig
metadata:
  namespace: bookinfo-l1
  name: bookinfo-l1-gateway
spec:
  hub: ${HUB}
  components:
    ingressGateways:
      - namespace: bookinfo-l1
        name: tsb-gateway-bookinfo-l1
        enabled: true
  values:
    gateways:
      istio-ingressgateway:
        labels:
          app: tsb-gateway-bookinfo-l1
  unvalidatedValues:
    global:
      tcc:
        enabled: true
        tenant: ${TENANT}
        cluster: ${CLUSTER}
        environment: ${ENV}
EOYAML
Apply the bookinfo-l1-lb.yaml file to the cluster.
kubectl apply -f bookinfo-l1-lb.yaml
The TSB data plane operator will pick up this new configuration and deploy the gateways accordingly.
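Once the operator has reconciled the resource, the gateway pods and service should appear in the bookinfo-l1 namespace. As a quick sanity check (the label below matches the one set in the DataPlaneConfig above):
kubectl get pods -n bookinfo-l1 -l app=tsb-gateway-bookinfo-l1
kubectl get svc -n bookinfo-l1 tsb-gateway-bookinfo-l1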
NodePorts
To expose the load balancer through node ports, change the spec.values.gateways.istio-ingressgateway.type key.
values:
  gateways:
    istio-ingressgateway:
      type: NodePort
To change the assigned nodePort, set the spec.values.gateways.istio-ingressgateway.ports key as follows.
values:
  gateways:
    istio-ingressgateway:
      type: NodePort
      ports:
        - port: 80
          nodePort: <selected-node-port>
          name: http2
        - port: 443
          name: https
          nodePort: <selected-node-port>
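To verify which node ports were actually assigned, you can inspect the gateway service. A sketch, assuming the service keeps the name tsb-gateway-bookinfo-l1 used earlier:
kubectl get svc tsb-gateway-bookinfo-l1 -n bookinfo-l1 \
  -o jsonpath='{range .spec.ports[*]}{.name}{": "}{.nodePort}{"\n"}{end}'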
1.1.1. Node Filters for tier1 to tier2 Communication
For the use case where you route traffic from tier1 to tier2 gateways, and where you want tier1 to route to the NodePort of the tier2 gateways but limited to specific nodes of the tier2 cluster, you need to add node filters as a service annotation on the tier2 gateways. Extending the NodePort example, the snippet below adds serviceAnnotations with the key 'traffic.tetrate.io/nodeSelector' and, as its value, a JSON blob such as '{"kubernetes.io/hostname": "gke-prod-gke-us-west1-b-larger-pool-b28c726d-w90f"}'. This JSON blob contains the node labels of the tier2 cluster that should serve as the node selector.
To add the node filter, set the spec.values.gateways.istio-ingressgateway.serviceAnnotations key as below, with the node label as a JSON blob.
values:
  gateways:
    istio-ingressgateway:
      type: NodePort
      ports:
        - port: 80
          nodePort: <selected-node-port>
          name: http2
        - port: 443
          name: https
          nodePort: <selected-node-port>
      serviceAnnotations:
        traffic.tetrate.io/nodeSelector: '{"kubernetes.io/hostname": "gke-prod-gke-us-west1-b-larger-pool-b28c726d-w90f"}'
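To find the label value to put in the JSON blob, you can list the nodes of the tier2 cluster together with their labels and pick the label(s) you want the selector to match:
# Run against the tier2 cluster; choose label(s) for the nodeSelector blob.
kubectl get nodes --show-labels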