For each cluster in your turbopuffer on-prem deployment, you will be provided with an 'on-prem kit' containing all the files required to configure your cluster. This document provides guidance for deploying a new turbopuffer on-prem cluster.
If you don't have your kit yet, you can get a sense of the Terraform and Kubernetes configuration files from the scrubbed example below.
```
onprem-kit
├── gcp # cloud-specific terraform configurations
│   ├── main.tf
│   └── turbopuffer.tfvars
├── aws
│   ├── main.tf
│   └── turbopuffer.tfvars
├── azure
│   ├── main.tf
│   └── turbopuffer.tfvars
├── values.yaml (generated) # configuration file generated by terraform
├── provision-internal-cluster-auth.sh # generates an HMAC key used by turbopuffer to authenticate internal requests
├── tpuf # helm chart for the main turbopuffer deployment
│   ├── Chart.yaml
│   ├── files
│   │   ├── 01_lua_router.yaml
│   │   └── cert-manager-1-13-2.yaml
│   ├── templates
│   │   ├── 00_config_map.yaml
│   │   ├── ...
│   │   └── 08_deployment.yaml
│   └── values.yaml
└── tpuf-ctl-cluster.yaml
```
To deploy a cluster:

1. Ensure you have `terraform`, `kubectl`, and `helm` installed.
2. `cd your-cloud-provider` (one of `gcp`, `aws`, or `azure`).
3. Run `terraform init` to set up the required providers.
4. Review `turbopuffer.tfvars` and adjust the values for your deployment.
5. Run `terraform apply -var-file=turbopuffer.tfvars`.
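For example, assuming a GCP deployment, steps 2–5 look like this (the same flow applies to `aws` and `azure`):

```sh
cd gcp                                         # or aws / azure
terraform init                                 # set up required providers
# edit turbopuffer.tfvars as needed, then:
terraform apply -var-file=turbopuffer.tfvars
```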
6. Configure `kubectl` by adding your new cluster context:
   - GCP: `gcloud container clusters get-credentials CLUSTER_NAME --project PROJECT_ID --region REGION`
   - AWS: `aws eks update-kubeconfig --region REGION --name CLUSTER_NAME`
   - Azure: `az aks get-credentials --name=CLUSTER_NAME --resource-group=RESOURCE_GROUP`
7. Run `kubectl config get-contexts` and confirm the cluster is correct.
8. Run `kubectl get pods` and confirm the command succeeds (no output).
9. Terraform generates a `values.yaml` file in the `onprem-kit` directory, which contains values for Helm. Edit this file and set any other necessary values. Refer to `values.yaml` inside the Helm chart (`tpuf/values.yaml`) for a list of possible values and their default values. Of particular note:
   - `image.registry` determines the registry used for pulling images. This defaults to our secure `turbopuffer-onprem` registry, containing only images approved for on-prem deployments. You can change this to a private registry of your choosing to which you replicate our images.
   - `image.digest` determines the specific image signature that will be deployed. You will be notified by turbopuffer when a new image of interest to you is available; this provides an opportunity for you to perform all required security audits on top of our existing procedures.
   - `certificates.enabled` enables or disables all certificate handling in turbopuffer; defaults to `true`. If you want to terminate TLS yourself, set this to `false` to allow turbopuffer to be accessed over HTTP on port 80.
   - `certificates.mode` determines the mode in which certificates are provisioned. Supports either `manual`, in which you provide the appropriately named certificate in the `turbopuffer` namespace, or `letsencrypt`, which uses `cert-manager.io` and Let's Encrypt to provision certificates. Defaults to `manual`.
   - `ingress.internal` exposes turbopuffer on an internal IP if `true`. This allows you to manage TLS and routing yourself. If this setting is `true`, you should also disable certificates (`enabled: false`) and set the following configuration:

     ```yaml
     "tpuf_config":
       "server":
         "self_endpoint": http://INGRESS_INTERNAL_IP
         "self_endpoint_host_header": CLUSTER_ENDPOINT
     ```
     The value for `INGRESS_INTERNAL_IP` can be obtained by running `kubectl get ingress -n turbopuffer`, while `CLUSTER_ENDPOINT` is the same as `hostname` in your configuration.

     Note: with an internal deployment you will not be able to access your cluster from outside of the VPC subnet that the internal IP is allocated from. If you manually allocate an IP out of a global internal subnet and configure it to be used, then it could be accessed globally.
   - `tpuf_config` contains configuration values suggested by turbopuffer for your on-prem deployment. You can find information about these settings and more in our onprem configuration documentation. A combined example is sketched after this list.
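As a combined sketch of the notable values above (the registry and digest values are placeholders; `tpuf/values.yaml` remains the authoritative reference for the schema):

```yaml
image:
  registry: registry.example.com/turbopuffer   # placeholder: your private mirror, or the default turbopuffer-onprem registry
  digest: "sha256:<digest provided by turbopuffer>"  # placeholder
certificates:
  enabled: true
  mode: manual          # or: letsencrypt
ingress:
  internal: false
tpuf_config:
  # settings suggested by turbopuffer for your on-prem deployment
```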
10. `cd` to the top of the `onprem-kit` directory and run `helm install turbopuffer tpuf --values=values.yaml --values=metrics-keys.yaml -n default` to deploy turbopuffer to your cluster.
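If you want to watch the rollout, the pods land in the `turbopuffer` namespace (as the later `rollout restart` step suggests):

```sh
kubectl get pods -n turbopuffer
```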
If you later change your `values.yaml` file, you will need to run `helm upgrade --values=values.yaml --values=metrics-keys.yaml -n default turbopuffer tpuf` to apply the changes.

11. Run `KUBE_CONTEXT=$(kubectl config current-context) ./provision-internal-cluster-auth.sh`. This generates the HMAC key turbopuffer uses to authenticate internal requests and stores it in the secret `turbopuffer-secrets` in the `turbopuffer` namespace.
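To confirm the script ran successfully, you can check for the secret it creates:

```sh
kubectl get secret turbopuffer-secrets -n turbopuffer
```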
12. Run `python3 scripts/apikey.py` to generate an Org ID and API Key.
13. Update `values.yaml` with `tpuf_config.authentication.allowed_api_keys_sha256`. Refer to `values.yaml` inside the Helm chart (`tpuf/values.yaml`) for an example of this; a rough sketch follows below.
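As an illustration only (the digest below is a placeholder, and the exact shape should be taken from `tpuf/values.yaml`), the entry might look like:

```yaml
tpuf_config:
  authentication:
    allowed_api_keys_sha256:
      # placeholder: sha256 hex digest of the API key from scripts/apikey.py
      - "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
```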
14. Run `helm upgrade -n default turbopuffer tpuf --values=values.yaml --values=metrics-keys.yaml --values=api-values.yaml` to deploy the changes.
15. Run `kubectl rollout restart statefulsets -n turbopuffer`.
16. Run `kubectl apply -f tpuf-ctl-cluster.yaml` to apply our kubernetes configurations.
17. Run `kubectl get pods -n tpuf-ctl-cluster`; you should see a single pod.
18. `TURBOPUFFER_API_KEY=<your_api_key> scripts/sanity.sh` will query your turbopuffer cluster directly, verifying that core operations function. It will not verify certificates, and may encounter a 500 error if the nodes aren't routable yet. A manual smoke test is sketched below.
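As a manual smoke test, you can hit the API directly. This sketch assumes the standard turbopuffer REST API (`/v1/namespaces` with bearer-token auth) and uses `CLUSTER_ENDPOINT` as a stand-in for your cluster's hostname; for internal deployments, use the internal ingress IP and host header discussed above:

```sh
# Lists namespaces; any 2xx response indicates auth and routing work end to end.
curl -s https://CLUSTER_ENDPOINT/v1/namespaces \
  -H "Authorization: Bearer $TURBOPUFFER_API_KEY"
```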
If you want to disable outgoing connections for the cluster, you can allowlist the following IPs:

- `35.234.93.182` (`api.polarsignals.com`)
- `76.76.21.0/24`
- The Datadog IP ranges, which can be listed with:

  ```sh
  curl -s https://ip-ranges.datadoghq.com/ | jq -r '(.apm.prefixes_ipv4 + .global.prefixes_ipv4 + .logs.prefixes_ipv4 + .agents.prefixes_ipv4) | unique[]'
  ```
To upgrade turbopuffer:

1. The turbopuffer team will provide you with a new image digest to use.
2. Update the `image.digest` value in your Helm `values.yaml` file.
3. Run `helm upgrade -n default turbopuffer tpuf --values=values.yaml`.
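The `values.yaml` edit is a single field; for instance (the digest is a placeholder):

```yaml
image:
  digest: "sha256:<new digest provided by turbopuffer>"  # placeholder
```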