Version: 1.5.x

Resource Consumption and Capacity Planning

This document describes conservative guidelines for capacity planning of the Tetrate Service Bridge (TSB) Management and Control Planes.

These parameters apply to production installations; TSB will run with minimal resources in a demo-like environment.


The resource provisioning guidelines described in this document are very conservative.

Also, please be aware that the resource provisioning described in this document applies to vertical resource scaling. Multiple replicas of the same TSB component do not share load with each other, so you cannot expect the combined resources of multiple replicas to have the same effect. Replicas of TSB components should be used for high availability purposes only.

For a baseline installation of TSB with 1 registered cluster and 1 deployed service within that cluster, the following resources are recommended.

To reiterate, the memory amounts described below are very conservative. Also, the actual performance delivered by a given number of vCPUs tends to vary depending on your underlying infrastructure. You are advised to verify the results in your environment.

| Component | vCPU # | Memory (MiB) |
| --- | --- | --- |
| TSB server (Management Plane)¹ | 2 | 512 |
| XCP Central Components² | 2 | 128 |
| XCP Edge | 1 | 128 |
| Front Envoy | 1 | 50 |
| TSB UI | 1 | 256 |

¹ Including the Kubernetes operator and persistent data reconciliation processes.
² Including the Kubernetes operator.

The TSB stack is mostly CPU-bound. Additional clusters registered with TSB via XCP increase the CPU utilization by ~4%.
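As a rough illustration of that ~4% figure, the expected Management Plane CPU for a given number of additional registered clusters can be sketched as follows (the function name and the baseline value in the example are hypothetical, not taken from this document):

```python
def mp_cpu_with_clusters(baseline_mcpu: float, extra_clusters: int) -> int:
    """Rough Management Plane CPU estimate in millicores.

    Assumes each additional cluster registered via XCP adds ~4% CPU
    on top of the baseline, per the guideline above.
    """
    return round(baseline_mcpu * (1 + 0.04 * extra_clusters))

# e.g. a 2 vCPU (2000m) baseline with 5 additional clusters
# -> 2000 * 1.20 = 2400 millicores
```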

The effect of additional registered clusters or additional deployed workload services on memory utilization is almost negligible. Likewise, their effect on the resource consumption of most TSB components is mostly negligible, with the notable exceptions of the TSB server, the XCP Central component, the TSB UI, and IAM.


Components that are part of the visibility stack (e.g. OTel, Zipkin) have their resource utilization driven by requests, so their resource scaling should follow your user request rate statistics. As a general rule of thumb, more than 1 vCPU is preferred. It is also important to note that visibility stack performance is largely bound by Elasticsearch performance.

Thus, we recommend vertically scaling these components by 1 vCPU as the number of deployed workloads grows.

Management Plane

Apart from OAP, no components require any resource adjustment. Those components are architected and tested to support very large clusters.

OAP in the Management Plane requires extra CPU and memory: approximately 100 millicores of CPU and 1024 MiB of RAM per 1000 services. For example, 4000 services aggregated in the TSB Management Plane from all TSB clusters would require approximately 400 millicores of CPU and 4096 MiB of RAM in total.
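The per-1000-services guideline above can be sketched as a small sizing helper (the function name is illustrative, not from this document):

```python
def oap_mp_resources(total_services: int) -> tuple[int, int]:
    """Extra OAP resources needed in the Management Plane.

    Guideline above: ~100 millicores of CPU and 1024 MiB of RAM
    per 1000 services aggregated from all TSB clusters.
    Returns (cpu_millicores, memory_mib).
    """
    cpu_mcores = 100 * total_services // 1000
    mem_mib = 1024 * total_services // 1000
    return cpu_mcores, mem_mib

# 4000 services -> (400, 4096), matching the example above
```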

Control Plane Resource Requirements

The following table shows typical peak resource utilization for the TSB Control Plane under the following assumptions:

  • 50 services with sidecars
  • Traffic on the entire cluster is 500 rps
  • Zipkin sampling rate is 1% of the traffic
  • Metrics are captured for every request at every workload.
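Assuming the listed traffic figure means 500 requests per second, the assumptions above imply the following trace volume feeding Zipkin (a back-of-the-envelope sketch; the variable names are illustrative):

```python
# Trace volume implied by the assumptions above:
# 500 requests/second across the cluster, 1% Zipkin sampling.
cluster_rps = 500
zipkin_sampling = 0.01

sampled_traces_per_second = cluster_rps * zipkin_sampling  # 5.0
```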

Note that average CPU utilization would be a fraction of the typical peak value.

| Component | Typical Peak CPU (m) | Typical Peak Memory (Mi) |
| --- | --- | --- |
| XCP Edge | 100 | 100 |
| Istio Operator - Control Plane | 50 | 100 |
| Istio Operator - Data Plane | 150 | 100 |
| TSB Control Plane Operator | 100 | 100 |
| TSB Data Plane Operator | 150 | 100 |
| OTEL Collector | 50 | 100 |

TSB/Istio Operator resource usage per Ingress Gateway

The following table shows the resources used by the TSB Operator and the Istio Operator per Ingress Gateway.


Keep in mind that these are estimates; actual consumption can vary depending on the applications you deploy, but these values give a general idea of the expected consumption.

| Ingress Gateways | TSB Operator CPU (m) | TSB Operator Mem (Mi) | Istio Operator CPU (m) | Istio Operator Mem (Mi) |
| --- | --- | --- | --- | --- |

Component resource utilization

The following tables show how the different components of TSB scale with up to 4000 services, peaking at 60 rpm in the largest test runs. The data is divided between the Management Plane and the Control Plane.

Management Plane

| Services | Gateways | Traffic (rpm) | Central CPU (m) | Central Mem (Mi) | MPC CPU (m) | MPC Mem (Mi) | OAP CPU (m) | OAP Mem (Mi) | Otel CPU (m) | Otel Mem (Mi) | TSB CPU (m) | TSB Mem (Mi) | Zipkin CPU (m) | Zipkin Mem (Mi) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 3 | 39 | 5 | 30 | 37 | 408 | 22 | 108 | 14 | 57 | 2 | 708 |
| 420 | 7 | 600 | 4 | 42 | 15 | 31 | 116 | 736 | 24 | 123 | 50 | 63 | 14 | 835 |
| 820 | 9 | 600 | 4 | 54 | 24 | 34 | 43 | 909 | 26 | 127 | 85 | 75 | 25 | 948 |
| 1220 | 11 | 600 | 4 | 59 | 32 | 41 | 28 | 1141 | 27 | 210 | 213 | 78 | 25 | 954 |
| 1620 | 13 | 600 | 5 | 63 | 44 | 48 | 209 | 1475 | 29 | 249 | 113 | 86 | 25 | 957 |
| 2020 | 15 | 600 | 5 | 73 | 41 | 51 | 51 | 1655 | 24 | 319 | 211 | 91 | 27 | 957 |
| 2420 | 17 | 300 | 4 | 84 | 72 | 62 | 57 | 1910 | 29 | 381 | 227 | 97 | 27 | 755 |
| 2820 | 19 | 60 | 5 | 90 | 73 | 65 | 43 | 2136 | 16 | 466 | 275 | 104 | 27 | 770 |
| 3220 | 21 | 60 | 5 | 106 | 85 | 78 | 89 | 2600 | 43 | 574 | 382 | 108 | 27 | 802 |
| 3620 | 23 | 60 | 5 | 123 | 94 | 71 | 245 | 2772 | 37 | 578 | 625 | 115 | 27 | 825 |
| 4020 | 25 | 60 | 5 | 147 | 90 | 81 | 521 | 3224 | 15 | 704 | 508 | 122 | 27 | 856 |

IAM will peak at 509m/52Mi, LDAP at 2m/17Mi, and the XCP Operator at 9m/37Mi.
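To size for a service count that falls between the measured points, one could linearly interpolate between neighboring rows. A minimal sketch using the OAP memory column of the Management Plane table (the helper function is illustrative; the data points are taken from the rows above):

```python
def interpolate(points: list[tuple[float, float]], x: float) -> float:
    """Piecewise-linear interpolation over sorted (x, y) points."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x is outside the measured range")

# OAP memory (Mi) vs. number of services, from the table above
oap_mem = [(0, 408), (420, 736), (820, 909), (1220, 1141),
           (1620, 1475), (2020, 1655), (2420, 1910), (2820, 2136),
           (3220, 2600), (3620, 2772), (4020, 3224)]

# e.g. ~1782.5 Mi estimated at 2220 services (midway between rows)
```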

Control Plane

| Services | Gateways | Traffic (rpm) | Edge CPU (m) | Edge Mem (Mi) | Istiod CPU (m) | Istiod Mem (Mi) | OAP CPU (m) | OAP Mem (Mi) | Otel CPU (m) | Otel Mem (Mi) | Zipkin CPU (m) | Zipkin Mem (Mi) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 6 | 49 | 9 | 53 | 48 | 610 | 26 | 80 | 25 | 723 |
| 400 | 2 | 600 | 350 | 120 | 600 | 600 | 900 | 1510 | 27 | 86 | 75 | 931 |
| 800 | 4 | 600 | 700 | 230 | 2170 | 1140 | 1720 | 2310 | 32 | 91 | 123 | 1030 |
| 1200 | 6 | 600 | 1010 | 366 | 2680 | 1890 | 2630 | 3280 | 35 | 101 | 139 | 1080 |
| 1600 | 8 | 600 | 1600 | 438 | 2690 | 2490 | 3610 | 4030 | 41 | 180 | 180 | 1070 |
| 2000 | 10 | 600 | 1900 | 514 | 3240 | 3820 | 4470 | 5890 | 43 | 106 | 209 | 1080 |
| 2400 | 12 | 300 | 682 | 628 | 2010 | 4660 | 3910 | 5750 | 37 | 110 | 281 | 1070 |
| 4000 | 20 | 600 | 1470 | 1040 | 3730 | 9790 | 13300 | 35000 | 37 | 135 | 465 | 1100 |

Metric Server will peak at 11m/32Mi, the Onboarding Operator at 6m/38Mi, and the XCP Operator at 11m/46Mi.