
On-Prem Deployment Runlist

For each cluster in your turbopuffer on-prem deployment, you will be provided with an 'on-prem kit' containing all the files required to configure your cluster. This document provides guidance for deploying a new turbopuffer on-prem cluster.

If you don't have your kit yet, you can get a sense of the Terraform and Kubernetes configuration files with this scrubbed example.

Your kit contents

onprem-kit
├── README.md
├── aws
│   ├── main.tf
│   └── turbopuffer.tfvars
├── cosign.pub
├── gcp
│   ├── main.tf
│   └── turbopuffer.tfvars
├── scripts
│   ├── generate_secrets.py
│   └── sanity.sh
├── values.yaml (generated) # configuration file generated by terraform
├── values.secret.yaml (generated) # sensitive configuration file generated by terraform
├── metrics-keys.yaml # configuration file provided by turbopuffer
└── tpuf
    ├── Chart.yaml
    ├── charts
    ├── files
    │   └── ...
    ├── templates
    │   ├── 00_config_map.yaml
    │   ├── ...
    │   └── 12_control_plane_agent.yaml
    ├── values.schema.json
    └── values.yaml

Runlist

  1. Mise en place: Check you have all prerequisites:
    • Verify you have terraform, kubectl, and helm installed.
    • Provision a fresh sub-account for your new cluster.
  2. Cluster configuration: Apply terraform configuration to set up the Kubernetes cluster and bucket
    • cd into your cloud provider's directory (aws or gcp)
    • Run terraform init to set up the required providers
    • Fill in the required values in turbopuffer.tfvars
    • Apply terraform configuration: terraform apply -var-file=turbopuffer.tfvars
  3. kubectl: Add your new cluster context to kubectl.
    • Apply provider specific instructions:
      • GCP: Run gcloud container clusters get-credentials CLUSTER_NAME --project PROJECT_ID --region REGION
      • AWS: Run aws eks update-kubeconfig --region REGION --name CLUSTER_NAME
      • Azure: Run az aks get-credentials --name=CLUSTER_NAME --resource-group=RESOURCE_GROUP
    • Run kubectl config get-contexts and confirm the cluster is correct.
    • Run kubectl get pods and confirm the command succeeds (it should report that no resources exist yet).
  4. Configure Helm: The terraform command will have output a values.yaml file in the onprem-kit directory, which contains values for Helm. Edit this file and set any other necessary values. Refer to values.schema.json for a description of valid configurations.
    • To use provider-managed TLS certificates, see Using cloud provider managed TLS certificates below.
    • tpuf_config contains configuration values suggested by turbopuffer for your on-prem deployment. You can find information about these settings and more in our on-prem configuration documentation.
  5. Generate API keys: Run ./scripts/generate_secrets.py to generate values.secret.yaml. The script generates an org ID and API key, along with a token for intra-cluster communication.
  6. Deploy turbopuffer: cd to the top of the onprem-kit directory. Run helm install turbopuffer tpuf --values=values.yaml --values=values.secret.yaml --values=metrics-keys.yaml -n default to deploy turbopuffer to your cluster.
    • Enable pulling images from our registries.
      • AWS: Provide the turbopuffer team with your AWS account ID.
      • GCP: Provide the turbopuffer team with the service account email used for pulling images: either the default compute service account for the sub-account, or a custom service account (e.g. one used for replicating images into your own registry, or one configured in Kubernetes).
    • If you change anything in the values.yaml file, you will need to run helm upgrade --values=values.yaml --values=values.secret.yaml --values=metrics-keys.yaml -n default turbopuffer tpuf to apply the changes.
  7. Run post-deployment sanity checks
    • TURBOPUFFER_API_KEY=<your_api_key> scripts/sanity.sh will query your turbopuffer cluster directly, verifying that core operations function. It will not verify certificates, and may encounter a 500 error if the nodes aren't routable yet.
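
For reference, steps 2 through 7 condense to the shell sketch below (GCP shown; substitute the AWS or Azure commands from step 3 as appropriate, and fill in CLUSTER_NAME, PROJECT_ID, and REGION):

cd gcp                                         # step 2: your cloud provider's directory
terraform init
terraform apply -var-file=turbopuffer.tfvars
gcloud container clusters get-credentials CLUSTER_NAME --project PROJECT_ID --region REGION  # step 3
cd ..
# step 4: edit the generated values.yaml by hand before installing
./scripts/generate_secrets.py                  # step 5: writes values.secret.yaml
helm install turbopuffer tpuf --values=values.yaml --values=values.secret.yaml --values=metrics-keys.yaml -n default  # step 6
TURBOPUFFER_API_KEY=<your_api_key> scripts/sanity.sh  # step 7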

Using a custom registry for your turbopuffer cluster

By default turbopuffer will pull from one of several turbopuffer managed image registries, as configured in our included terraform. However, there are many reasons you may want to host our images in a registry you control. Our Helm chart fully supports this through the following settings:

image.registry: YOUR_REGISTRY_URL
control_plane.image.registry: YOUR_REGISTRY_URL

We expect to find two image repositories there: one called turbopuffer and one called tpuf-ctl-cluster, holding the images for turbopuffer and our control plane agent respectively.

For customers on AWS, we can configure ECR Replication to automatically push the latest images into your registry.
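
There are several ways to populate your registry; as a minimal sketch, assuming you have pull access to a turbopuffer-managed registry and push access to your own (TPUF_REGISTRY_URL, YOUR_REGISTRY_URL, and TAG are illustrative placeholders):

# Pull both images from the turbopuffer-managed registry.
docker pull TPUF_REGISTRY_URL/turbopuffer:TAG
docker pull TPUF_REGISTRY_URL/tpuf-ctl-cluster:TAG
# Re-tag and push them into your own registry.
docker tag TPUF_REGISTRY_URL/turbopuffer:TAG YOUR_REGISTRY_URL/turbopuffer:TAG
docker tag TPUF_REGISTRY_URL/tpuf-ctl-cluster:TAG YOUR_REGISTRY_URL/tpuf-ctl-cluster:TAG
docker push YOUR_REGISTRY_URL/turbopuffer:TAG
docker push YOUR_REGISTRY_URL/tpuf-ctl-cluster:TAG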

Using cloud provider managed TLS certificates

Our helm chart supports managing TLS termination inside your cluster using either cert-manager or the native Kubernetes APIs. Your organization may already manage its certificates through your cloud provider's managed certificates offering, in which case you will need to handle termination yourself.

Regardless of your cloud provider, you will want to deploy turbopuffer internally, by setting:

ingress.internal: true
certificates.enabled: false
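
If you would rather not edit values.yaml for these, the same keys can be supplied on the helm command line; a sketch mirroring the upgrade command from step 6 of the runlist:

helm upgrade turbopuffer tpuf \
  --values=values.yaml --values=values.secret.yaml --values=metrics-keys.yaml \
  --set ingress.internal=true \
  --set certificates.enabled=false \
  -n default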

On GCP

Adding Google Managed Certificates to your GKE cluster is as simple as deploying the following Kubernetes manifest alongside your turbopuffer helm deployment. All that is required is to insert the correct value for YOUR_DOMAIN.

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: ingress-nginx-svc-config
  namespace: ingress-nginx
spec:
  healthCheck:
    checkIntervalSec: 10
    timeoutSec: 10
    port: 80
    type: HTTP
    requestPath: /healthz
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-svc
  namespace: ingress-nginx
  annotations:
    cloud.google.com/backend-config: '{"default": "ingress-nginx-svc-config"}'
spec:
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: ClusterIP
---
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
  namespace: ingress-nginx
spec:
  domains:
    - YOUR_DOMAIN
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-ing
  namespace: ingress-nginx
  annotations:
    networking.gke.io/managed-certificates: managed-cert
spec:
  ingressClassName: "gce"
  defaultBackend:
    service:
      name: ingress-nginx-svc
      port:
        number: 80
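
Assuming you save the manifests above to a file named managed-cert.yaml (the name is arbitrary), deploying and checking them looks like:

kubectl apply -f managed-cert.yaml
# The ManagedCertificate reports Active once provisioning completes; this
# requires YOUR_DOMAIN to already resolve to the ingress's external IP.
kubectl get managedcertificate managed-cert -n ingress-nginx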

On AWS

Before configuring your AWS managed certificate, you will need to install the AWS Load Balancer Controller, which will allow you to provision Network Load Balancers to serve your cluster and handle TLS termination efficiently.

Additionally, you will need to provision your certificate externally to your cluster using the AWS console or CLI.

Once both of these are accomplished, you can use the following Kubernetes manifest to provision an NLB that targets our ingress controllers directly.

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: YOUR_CERTIFICATE_ARN
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: https
      port: 443
      targetPort: 80
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
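
Assuming you save the manifest above as nlb-service.yaml, apply it and point your DNS at the resulting NLB hostname:

kubectl apply -f nlb-service.yaml
# EXTERNAL-IP shows the NLB's DNS name once AWS finishes provisioning.
kubectl get svc nginx-ingress-lb -n ingress-nginx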

Networking

If you want to lock down outgoing connections from the cluster, you will need to allowlist the following IPs:

  • PolarSignals (CPU and Heap profiling)
    • 35.234.93.182 (api.polarsignals.com)
  • Control Plane (Cluster Heartbeats)
    • 76.76.21.0/24
  • Datadog (Telemetry)
    • curl -s https://ip-ranges.datadoghq.com/| jq -r '(.apm.prefixes_ipv4 + .global.prefixes_ipv4 + .logs.prefixes_ipv4 + .agents.prefixes_ipv4) | unique[]'
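
As an illustration only, an egress lockdown on GCP might pair a low-priority deny-all rule with higher-priority allows for the endpoints above (YOUR_NETWORK is a placeholder; each Datadog range would get its own rule, and the AWS equivalent would be security group or Network Firewall rules):

# Deny all egress at the lowest priority...
gcloud compute firewall-rules create tpuf-deny-egress \
  --network=YOUR_NETWORK --direction=EGRESS --action=DENY --rules=all --priority=65000
# ...then allow the specific endpoints, e.g. PolarSignals over HTTPS.
gcloud compute firewall-rules create tpuf-allow-polarsignals \
  --network=YOUR_NETWORK --direction=EGRESS --action=ALLOW --rules=tcp:443 \
  --destination-ranges=35.234.93.182/32 --priority=1000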

Upgrading turbopuffer versions

The turbopuffer team will provide you with a new image digest to use.

  • Update the image.digest value in your Helm values.yaml file
  • Run helm upgrade -n default turbopuffer tpuf --values=values.yaml --values=values.secret.yaml --values=metrics-keys.yaml
    • This will update the turbopuffer-index and turbopuffer-query StatefulSets with the new image.
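
Condensed into a sketch (the StatefulSet names are as above; kubectl rollout status lets you watch each one converge):

# After setting image.digest in values.yaml to the digest from turbopuffer:
helm upgrade -n default turbopuffer tpuf \
  --values=values.yaml --values=values.secret.yaml --values=metrics-keys.yaml
# Watch both StatefulSets roll over to the new image.
kubectl rollout status statefulset/turbopuffer-index -n default
kubectl rollout status statefulset/turbopuffer-query -n default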