VM Sidecars
Every VM service can also host an application sidecar that enables monitoring, traffic control, and traffic steering between Kubernetes and VM workloads.
VM Application Sidecar - Requirements
The following requirements must be met for sidecar proxies deployed next to VM workloads to function correctly.
- The VM workload should listen on 127.0.0.1. More than one service can run on the same VM, but each should listen on a separate port if separate sidecars are desired
- The Docker engine and docker-compose binaries must be installed on the VM; the sidecar Envoy proxy and node-agent run as Docker containers
- No `iptables` rules are used. Docker host network mode must be enabled so that sidecars can intercept traffic and proxy it to applications
- Certificates must be extracted from Kubernetes/OpenShift and transferred to the VM manually by the user. The node-agent uses these certificates on behalf of the sidecar. To bootstrap, the node-agent certs must be available in a fixed directory on the VM, with the path provided in docker-compose (see the sketch after this list)
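The actual docker-compose file is generated by the helm template in the next section; purely as an illustration of how these requirements fit together, a minimal sketch might look like the following. The image names, service names, and certificate path here are assumptions, not the real generated output.

```bash
# Hypothetical sketch only -- the real docker-compose.yaml is produced by
# `helm template` (next section). Image names and paths are assumptions.
cat > docker-compose.yaml <<'EOF'
version: "3"
services:
  node-agent:
    image: <your-internal-docker-registry>/node-agent:latest  # assumed image
    network_mode: host                 # host networking, per the requirement above
    volumes:
      - /tmp/certs:/etc/certs          # fixed directory holding the bootstrap certs
  sidecar:
    image: <your-internal-docker-registry>/proxyv2:latest     # assumed image
    network_mode: host                 # lets Envoy bind ports 80 and 15080 on the VM
    depends_on:
      - node-agent
EOF
```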
Generating VM Sidecar docker-compose using helm template
To generate a VM sidecar docker-compose file, run the following helm command.
```bash
REGISTRY="<your-internal-docker-registry>"
helm template tetrate/install/helm/vm --name sidecar \
  --namespace bookinfo-vm \
  --set global.hub=${REGISTRY} \
  --set global.tcc.tenant=${TENANT_ID} \
  --set global.tcc.environment=${ENV_ID} \
  --set global.tcc.cluster=${CLUSTER_ID} \
  --set global.vmgateway.host=1.2.3.4 \
  --set global.sidecar.appNs=bookinfo-vm \
  --set global.sidecar.appId=bookinfo \
  --set global.sidecar.serviceDefinition=details \
  > sidecar.yaml
```
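If docker-compose is available locally, the generated file can be sanity-checked before it is copied to the VM:

```bash
# Validate and print the resolved compose configuration; this fails
# with a parse error if the generated sidecar.yaml is malformed.
docker-compose -f sidecar.yaml config
```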
| Helm Field | Description |
| --- | --- |
| global.vmgateway.host | External IP of the vmgateway service deployed along with the Istio control plane in the istio-system namespace |
| global.sidecar.appNs | Namespace of the VM service, typically the same as that of the Kubernetes tcc-gateway service deployed for VM workloads (string) |
| global.sidecar.appId | Application name (string) |
| global.sidecar.serviceDefinition | Name of the ServiceDefinition created in the VM Service Registry (string) |
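The vmgateway address can typically be read off the service's load-balancer status. A sketch, assuming vmgateway is exposed as a LoadBalancer service in istio-system:

```bash
# Look up the external IP to use for global.vmgateway.host.
kubectl get svc vmgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```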
Generating certs and transferring them to the VM
Before the VM sidecar is started, certificates must be manually extracted from the Kubernetes/OpenShift cluster so that the node-agent can supply them to the sidecar.
Follow these steps to extract the certificates from the Kubernetes/OpenShift cluster.
```bash
SECRET="secrets/istio.default"
OUTDIR=/tmp
NS="istio-system"
mkdir -p ${OUTDIR}/certs   # ensure the target directory exists
# --decode works with both GNU and BSD base64 (-D is macOS-only)
kubectl get ${SECRET} --namespace ${NS} \
  --template='{{index .data "root-cert.pem"}}' | base64 --decode \
  > ${OUTDIR}/certs/root-cert.pem
kubectl get ${SECRET} --namespace ${NS} \
  --template='{{index .data "cert-chain.pem"}}' | base64 --decode \
  > ${OUTDIR}/certs/cert-chain.pem
kubectl get ${SECRET} --namespace ${NS} \
  --template='{{index .data "key.pem"}}' | base64 --decode \
  > ${OUTDIR}/certs/key.pem
cd ${OUTDIR}; tar cvfz certs.tgz ./certs; rm -rf ./certs
```
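Optionally, before the final tar-and-cleanup line, the decoded certificates can be inspected with openssl to catch extraction mistakes early:

```bash
# Run before the tar/cleanup step above. Prints each certificate's subject
# and validity window; a botched decode shows "unable to load certificate".
openssl x509 -in ${OUTDIR}/certs/root-cert.pem -noout -subject -dates
openssl x509 -in ${OUTDIR}/certs/cert-chain.pem -noout -subject -dates
```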
Follow these steps to transfer the certificates to the VM.
```bash
REMOTE_DIR="/tmp"
USER=myuser
HOST=detailsvm.prod.internal.company.com
KEYFILE=$HOME/.ssh/key.pem
scp -i ${KEYFILE} ${OUTDIR}/certs.tgz \
  ${USER}@${HOST}:${REMOTE_DIR}/certs.tgz
ssh -i ${KEYFILE} ${USER}@${HOST} \
  "cd ${REMOTE_DIR}; tar xvfz certs.tgz; rm -f certs.tgz"
rm -rf ${OUTDIR}/certs.tgz ${OUTDIR}/certs
```
Copy the docker-compose file to the VM and start the sidecar proxy.
```bash
# local client: copy the generated docker-compose file
scp -i ${KEYFILE} sidecar.yaml \
  ${USER}@${HOST}:${REMOTE_DIR}/docker-compose.yaml
# on the remote VM
ssh -i ${KEYFILE} ${USER}@${HOST}
docker-compose -f ${REMOTE_DIR}/docker-compose.yaml up -d
```
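To confirm the containers came up (the service names will match whatever the generated compose file defines):

```bash
# Still on the VM: list container state and tail recent logs.
docker-compose -f ${REMOTE_DIR}/docker-compose.yaml ps
docker-compose -f ${REMOTE_DIR}/docker-compose.yaml logs --tail=50
```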
Case 1: VM with sidecar, no iptables
If the VM service does not have `iptables`-based traffic capture, the application process has to either listen on a different port than port 80, or it has to listen only on 127.0.0.1 and not on 0.0.0.0. Let us assume that the application process is listening on port 9080. The sidecar would receive traffic on port 80, do TLS termination, and forward the traffic to the application process on 127.0.0.1:9080. Below is the corresponding `ServiceDefinition`.
```bash
cat <<EOF | kubectl apply -f -
apiVersion: registry.tetrate.io/v1alpha1
kind: ServiceDefinition
metadata:
  name: ratings
  namespace: bookinfo
spec:
  hostname: ratings.prod.internal.company.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
    applicationPort: 9080 # where the app process is listening
  sidecarsPresent: true
  sidecarSettings:
    usingIptablesCapture: false
  endpointSource:
    manual:
      values:
      - address: 3.3.3.3
      - address: 4.4.4.4
EOF
```
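Once the sidecar is up on the VM, a quick local sanity check is to confirm that the application answers on 127.0.0.1:9080 and that Envoy has opened the port 80 listener. This assumes the sidecar exposes the standard Envoy admin interface on port 15000; the /ratings/0 path is an assumption based on the bookinfo ratings API:

```bash
# On the VM: the application should answer locally (path is an assumption)...
curl -s http://127.0.0.1:9080/ratings/0
# ...and Envoy should report a listener on port 80 (assumes admin on :15000).
curl -s http://localhost:15000/listeners
```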
Case 2: VM with sidecar outbound traffic interception
In the example above, outbound traffic from the service does not transit through the sidecar. Let us assume that the application wishes to interact with the `details` VM service in the same namespace using the sidecar. The application process can choose to use the sidecar in the outbound path as well by treating it as an HTTP proxy on `localhost`. By setting the `HTTP_PROXY` environment variable to `http://localhost:15080`, or by using a language-runtime-specific option, all outbound plain-text HTTP traffic from the application process will be forwarded to the sidecar on port 15080. The sidecar would then initiate mutual TLS or simple TLS connections, as appropriate, to other services in the mesh.
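For instance, a one-off smoke test of the proxy path could look like the following; the details hostname here is hypothetical, modeled on the ratings hostname above:

```bash
# Route a single request through the sidecar's HTTP proxy listener on 15080.
# The details hostname is an assumption -- substitute your ServiceDefinition's hostname.
curl -x http://localhost:15080 http://details.prod.internal.company.com/details/0

# Or set it process-wide, so the application's own HTTP client uses the proxy:
export HTTP_PROXY=http://localhost:15080
```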
```bash
cat <<EOF | kubectl apply -f -
apiVersion: registry.tetrate.io/v1alpha1
kind: ServiceDefinition
metadata:
  name: ratings
  namespace: bookinfo
spec:
  hostname: ratings.prod.internal.company.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
    applicationPort: 9080 # where the app process is listening
  sidecarsPresent: true
  sidecarSettings:
    usingIptablesCapture: false
    egressHttpProxyPort: 15080 # outbound traffic explicitly sent here
  endpointSource:
    manual:
      values:
      - address: 3.3.3.3
      - address: 4.4.4.4
EOF
```
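As an end-to-end check, the VM service can be called from a pod inside the mesh. The sleep deployment and the /ratings/0 path below are assumptions borrowed from the standard bookinfo setup:

```bash
# From a mesh-enrolled pod (hypothetical sleep deployment in the bookinfo
# namespace), call the VM service by the hostname in its ServiceDefinition.
SLEEP_POD=$(kubectl get pod -n bookinfo -l app=sleep \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec ${SLEEP_POD} -n bookinfo -c sleep -- \
  curl -s http://ratings.prod.internal.company.com/ratings/0
```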