On-Prem Deployment Runlist

For each cluster in your turbopuffer on-prem deployment you will be provided with an 'on-prem kit' containing all the files required to configure your cluster. This document provides guidance to successfully deploy a new turbopuffer on-prem cluster.

If you don't have your kit yet, you can get a sense of the Terraform and Kubernetes configuration files with this scrubbed example.

Your kit contents

onprem-kit
├── gcp                                   # cloud-specific terraform configurations
│   ├── main.tf
│   └── turbopuffer.tfvars
├── aws
│   ├── main.tf
│   └── turbopuffer.tfvars
├── azure
│   ├── main.tf
│   └── turbopuffer.tfvars
├── values.yaml (generated)              # configuration file generated by terraform
├── provision-internal-cluster-auth.sh   # generates an HMAC key used by turbopuffer to authenticate internal requests
├── tpuf                                 # helm chart for the main turbopuffer deployment
│   ├── Chart.yaml
│   ├── files
│   │   ├── 01_lua_router.yaml
│   │   └── cert-manager-1-13-2.yaml
│   ├── templates
│   │   ├── 00_config_map.yaml
│   │   ├── ...
│   │   └── 08_deployment.yaml
│   └── values.yaml
└── tpuf-ctl-cluster.yaml

Runlist

  1. Mise en place: Check you have all prerequisites:
    • Verify that terraform, kubectl, and helm are installed.
    • Provision a fresh sub-account for your new cluster.
  2. Cluster configuration: Apply the terraform configuration to set up the Kubernetes cluster and bucket
    • cd your-cloud-provider
    • Run terraform init to set up the required providers
    • Fill in the required values in turbopuffer.tfvars
    • Apply terraform configuration: terraform apply -var-file=turbopuffer.tfvars
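    For example, the whole step from the root of your kit (GCP shown; the same flow applies to aws and azure):
      cd gcp
      terraform init                                 # installs the providers required by main.tf
      $EDITOR turbopuffer.tfvars                     # fill in the required values
      terraform apply -var-file=turbopuffer.tfvars   # creates the cluster and bucket
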
  3. kubectl. Add your new cluster context to kubectl.
    • Apply provider specific instructions:
      • GCP: Run gcloud container clusters get-credentials CLUSTER_NAME --project PROJECT_ID --region REGION
      • AWS: Run aws eks update-kubeconfig --region REGION --name CLUSTER_NAME
      • Azure: Run az aks get-credentials --name=CLUSTER_NAME --resource-group=RESOURCE_GROUP
    • Run kubectl config get-contexts and confirm the cluster is correct.
    • Run kubectl get pods and confirm the command succeeds (it should report that no resources were found).
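    For example, on GCP (substitute your own cluster name, project, and region):
      gcloud container clusters get-credentials CLUSTER_NAME --project PROJECT_ID --region REGION
      kubectl config get-contexts   # the new cluster should be the current context
      kubectl get pods              # should succeed, reporting no resources yet
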
  4. Configure Helm: The terraform command will have output a values.yaml file in the onprem-kit directory, which contains values for Helm. Edit this file and set any other necessary values. Refer to values.yaml inside the Helm chart (tpuf/values.yaml) for a list of possible values and their default values. Of particular note:
    • image.registry determines the registry used for pulling images. This defaults to our secure turbopuffer-onprem registry containing only images approved for onprem deployments. You can change this to a private registry of your choosing to which you replicate our images.
    • image.digest determines the specific image digest that will be deployed. You will be notified by turbopuffer when a new image of interest to you is available; this provides an opportunity for you to perform all required security audits on top of our existing procedures.
    • certificates.enabled Enables or disables all certificate handling in turbopuffer, defaults to true. If you want to terminate TLS yourself, set this to false to allow turbopuffer to be accessed on HTTP port 80.
    • certificates.mode Determines the mode in which certificates are provisioned. Supports either manual, in which you provide the appropriately named certificate in the turbopuffer namespace, or letsencrypt, which uses cert-manager.io and LetsEncrypt to provision certificates. Defaults to manual.
    • ingress.internal exposes turbopuffer on an internal IP if true. This allows you to manage TLS and routing yourself. If this setting is true, you should also disable certificates (enabled: false) and set the following configuration:
    "tpuf_config""
      "server":
        "self_endpoint": http://INGRESS_INTERNAL_IP
        "self_endpoint_host_header":  CLUSTER_ENDPOINT
    
    The value for INGRESS_INTERNAL_IP can be obtained by running kubectl get ingress -n turbopuffer; CLUSTER_ENDPOINT is the same as the hostname in your configuration. Note: with an internal deployment you will not be able to access your cluster from outside the VPC subnet that the internal IP is allocated from. If you manually allocate an IP from a global internal subnet and configure it to be used, it can be accessed globally.
    • tpuf_config configuration values suggested by turbopuffer for your onprem deployment. You can find information about these settings and more in our onprem configuration documentation.
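    Pulling these settings together, a values.yaml override might look like the following sketch (the keys are the ones described above; all values are illustrative placeholders, and tpuf/values.yaml remains the authoritative reference):
      image:
        registry: registry.example.com/turbopuffer   # illustrative private registry
        digest: sha256:<IMAGE_DIGEST>                # digest provided by turbopuffer
      certificates:
        enabled: true
        mode: letsencrypt   # or "manual" (the default)
      ingress:
        internal: false
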
  5. Deploy turbopuffer: cd to the top of the onprem-kit directory. Run helm install turbopuffer tpuf --values=values.yaml --values=metrics-keys.yaml -n default to deploy turbopuffer to your cluster.
    • Enable pulling your image from our registries.
      • AWS: Provide us with your AWS account ID.
      • GCP: Provide the turbopuffer team with the service account email used for pulling images: either the default compute service account for the sub-account, or a custom service account (e.g. one used for replicating images into your own registry, or one configured in K8s).
    • At this point pods will not come online properly as we are still missing the bucket secret.
    • If you change anything in the values.yaml file, you will need to run helm upgrade --values=values.yaml --values=metrics-keys.yaml -n default turbopuffer tpuf to apply the changes.
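    The initial install and a follow-up check might look like:
      helm install turbopuffer tpuf --values=values.yaml --values=metrics-keys.yaml -n default
      kubectl get pods -n turbopuffer   # pods will not be Ready yet; the bucket secret is still missing
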
  6. Provision your intra-cluster authentication key using KUBE_CONTEXT=$(kubectl config current-context) ./provision-internal-cluster-auth.sh
    • Note: This creates a Secret turbopuffer-secrets in the turbopuffer namespace.
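    For example:
      KUBE_CONTEXT=$(kubectl config current-context) ./provision-internal-cluster-auth.sh
      kubectl get secret turbopuffer-secrets -n turbopuffer   # confirm the Secret now exists
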
  7. Set up your first API key
    • Run python3 scripts/apikey.py to generate an Org ID and API Key
    • Update the Helm values.yaml with tpuf_config.authentication.allowed_api_keys_sha256. Refer to values.yaml inside the Helm chart (tpuf/values.yaml) for an example of this.
    • Run helm upgrade -n default turbopuffer tpuf --values=values.yaml --values=metrics-keys.yaml --values=api-values.yaml to deploy the changes.
    • Restart turbopuffer to pick up the new configuration: kubectl rollout restart statefulsets -n turbopuffer.
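    As a sketch, the api-values.yaml used above might look like this (the key path comes from this step; the assumption that entries are hex-encoded SHA-256 hashes of the raw key is illustrative, so check the example in tpuf/values.yaml):
      # api-values.yaml (illustrative)
      tpuf_config:
        authentication:
          allowed_api_keys_sha256:
            # assumed hex SHA-256, e.g. from: echo -n "$TURBOPUFFER_API_KEY" | sha256sum
            - "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
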
  8. Deploy the cluster agent: Run kubectl apply -f tpuf-ctl-cluster.yaml to apply our Kubernetes configuration.
    • Confirm the controller is booting successfully by checking kubectl get pods -n tpuf-ctl-cluster; you should see a single pod.
    • Confirm with us that your cluster is correctly heartbeating our control plane.
  9. Run post-deployment sanity checks
    • TURBOPUFFER_API_KEY=<your_api_key> scripts/sanity.sh will query your turbopuffer cluster directly, verifying that core operations function. It will not verify certificates, and may encounter a 500 error if the nodes aren't routable yet.
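    If you also want to probe the API by hand, a minimal request might look like this (assuming the standard turbopuffer HTTP API is reachable at your cluster endpoint):
      # an empty list (or your test namespaces) indicates the API is serving requests
      curl -H "Authorization: Bearer $TURBOPUFFER_API_KEY" https://CLUSTER_ENDPOINT/v1/namespaces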

Networking

If you want to disable general outgoing connections for the cluster, allowlist the following IPs so these services keep working:

  • PolarSignals (CPU and Heap profiling)
    • 35.234.93.182 (api.polarsignals.com)
  • Control Plane (Cluster Heartbeats)
    • 76.76.21.0/24
  • Datadog (Telemetry)
    • curl -s https://ip-ranges.datadoghq.com/ | jq -r '(.apm.prefixes_ipv4 + .global.prefixes_ipv4 + .logs.prefixes_ipv4 + .agents.prefixes_ipv4) | unique[]'
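
One way to express this allowlist inside the cluster is a Kubernetes NetworkPolicy along these lines (a sketch only: it assumes a CNI that enforces NetworkPolicy, the name is hypothetical, the Datadog ranges from the command above still need to be appended, and egress to your object storage bucket and DNS must also remain open):

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: restrict-egress        # hypothetical name
    namespace: turbopuffer
  spec:
    podSelector: {}              # applies to all pods in the namespace
    policyTypes: ["Egress"]
    egress:
      - to:
          - ipBlock: { cidr: 35.234.93.182/32 }   # PolarSignals (api.polarsignals.com)
          - ipBlock: { cidr: 76.76.21.0/24 }      # turbopuffer control plane heartbeats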

Upgrading turbopuffer versions

The turbopuffer team will provide you with a new image digest to use.

  • Update the image.digest value in your Helm values.yaml file
  • Run helm upgrade -n default turbopuffer tpuf --values=values.yaml
    • This will update the turbopuffer-index and turbopuffer-query StatefulSets with the new image.
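  The full upgrade might look like:
    # after editing image.digest in values.yaml
    helm upgrade -n default turbopuffer tpuf --values=values.yaml
    kubectl rollout status statefulset/turbopuffer-index -n turbopuffer   # wait for the rollout
    kubectl rollout status statefulset/turbopuffer-query -n turbopuffer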