For each cluster in your turbopuffer on-prem deployment you will be provided with an 'on-prem kit' containing all the files required to configure your cluster. This document provides guidance to successfully deploy a new turbopuffer on-prem cluster.
If you don't have your kit yet, you can get a sense of the Terraform and Kubernetes configuration files with this scrubbed example.
onprem-kit
├── README.md
├── aws
│   ├── main.tf
│   └── turbopuffer.tfvars
├── cosign.pub
├── gcp
│   ├── main.tf
│   └── turbopuffer.tfvars
├── scripts
│   ├── generate_secrets.py
│   └── sanity.sh
├── values.yaml (generated)        # configuration file generated by terraform
├── values.secret.yaml (generated) # sensitive configuration file generated by terraform
├── metrics-keys.yaml              # configuration file provided by turbopuffer
└── tpuf
    ├── Chart.yaml
    ├── charts
    ├── files
    │   └── ...
    ├── templates
    │   ├── 00_config_map.yaml
    │   ├── ...
    │   └── 12_control_plane_agent.yaml
    ├── values.schema.json
    └── values.yaml
1. Ensure you have terraform, kubectl, and helm installed.
2. cd your-cloud-provider (aws or gcp).
3. Run terraform init to set up the required providers.
4. Review turbopuffer.tfvars and adjust it for your environment if needed.
5. Run terraform apply -var-file=turbopuffer.tfvars.
6. Configure kubectl. Add your new cluster context to kubectl:
   GKE: gcloud container clusters get-credentials CLUSTER_NAME --project PROJECT_ID --region REGION
   EKS: aws eks update-kubeconfig --region REGION --name CLUSTER_NAME
   AKS: az aks get-credentials --name=CLUSTER_NAME --resource-group=RESOURCE_GROUP
7. Run kubectl config get-contexts and confirm the current context points at the new cluster.
8. Run kubectl get pods and confirm the command succeeds (no output).
9. Terraform generates a values.yaml file in the onprem-kit directory, which contains values for Helm. Edit this file and set any other necessary values. Refer to values.schema.json for a description of valid configurations.
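   Because the chart ships a values.schema.json, Helm can validate your edits before anything is installed. A minimal, optional check from the top of the onprem-kit directory (assuming values.yaml is the only values file you have at this point):

   helm lint tpuf --values=values.yaml
   helm template turbopuffer tpuf --values=values.yaml > /dev/null   # render-only check; output discarded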
10. Review the tpuf_config configuration values suggested by turbopuffer for your onprem deployment. You can find information about these settings and more in our onprem configuration documentation.
11. Run ./scripts/generate_secrets.py to generate values.secret.yaml. This will generate an Org ID and API key, along with a token for intra-cluster communication.
12. cd to the top of the onprem-kit directory and run helm install turbopuffer tpuf --values=values.yaml --values=values.secret.yaml --values=metrics-keys.yaml -n default to deploy turbopuffer to your cluster.
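Once the install completes, two generic checks (standard Helm and kubectl commands, not specific to this kit) confirm the release and its pods came up:

helm list -n default          # the turbopuffer release should report STATUS: deployed
kubectl get pods -n default   # pods should reach Running and Ready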
If you later change the values.yaml file, you will need to run helm upgrade --values=values.yaml --values=values.secret.yaml --values=metrics-keys.yaml -n default turbopuffer tpuf to apply the changes.

TURBOPUFFER_API_KEY=<your_api_key> scripts/sanity.sh will query your turbopuffer cluster directly, verifying that core operations function. It will not verify certificates, and may encounter a 500 error if the nodes aren't routable yet.

By default turbopuffer will pull from one of several turbopuffer-managed image registries, as configured in our included Terraform. However, there are many reasons you may want to host our images in a registry you control. Our Helm chart fully supports this through the following settings:
image.registry: YOUR_REGISTRY_URL
control_plane.image.registry: YOUR_REGISTRY_URL
We expect two repositories in that registry: one called turbopuffer and one called tpuf-ctl-cluster, holding the images for turbopuffer and our control plane agent respectively.
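In values.yaml these dotted paths map onto the usual nested Helm layout; a minimal sketch with a placeholder registry URL:

image:
  registry: YOUR_REGISTRY_URL       # must contain the "turbopuffer" repository
control_plane:
  image:
    registry: YOUR_REGISTRY_URL     # must contain the "tpuf-ctl-cluster" repository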
For customers on AWS, we can configure ECR Replication to automatically push the latest images into your registry.
Our Helm chart allows managing TLS termination internally to your cluster using either cert-manager or the native Kubernetes APIs.
Your organization may already manage its certificates through your cloud provider's managed-certificate offering, in which case you will need to handle termination yourself.
Regardless of your cloud provider, you will want to deploy turbopuffer internally by setting:
ingress.internal: true
certificates.enabled: false
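As with the registry settings, these correspond to the nested values.yaml layout (a sketch, assuming the standard Helm dotted-path convention):

ingress:
  internal: true        # keep the turbopuffer ingress internal
certificates:
  enabled: false        # disable chart-managed TLS; termination is handled outside the chart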
Adding Google Managed Certificates to your GKE cluster is as simple as deploying the following Kubernetes manifest alongside your turbopuffer Helm deployment. All that is required is to insert the correct value for YOUR_DOMAIN.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: ingress-nginx-svc-config
  namespace: ingress-nginx
spec:
  healthCheck:
    checkIntervalSec: 10
    timeoutSec: 10
    port: 80
    type: HTTP
    requestPath: /healthz
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-svc
  namespace: ingress-nginx
  annotations:
    cloud.google.com/backend-config: '{"default": "ingress-nginx-svc-config"}'
spec:
  ports:
    - appProtocol: http
      name: http
      port: 80
      protocol: TCP
      targetPort: http
    - appProtocol: https
      name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: ClusterIP
---
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
  namespace: ingress-nginx
spec:
  domains:
    - YOUR_DOMAIN
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-ing
  namespace: ingress-nginx
  annotations:
    networking.gke.io/managed-certificates: managed-cert
spec:
  ingressClassName: "gce"
  defaultBackend:
    service:
      name: ingress-nginx-svc
      port:
        number: 80
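After applying this manifest (for example with kubectl apply -f; the filename below is illustrative), Google provisions the certificate asynchronously, which can take a while. Two generic checks while you wait:

kubectl apply -f gke-managed-cert.yaml
kubectl get managedcertificate -n ingress-nginx   # status typically moves from Provisioning to Active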
Before configuring your AWS managed certificate, you will need to install the AWS Load Balancer Controller, which will allow you to provision Network Load Balancers to serve your cluster and handle TLS termination efficiently.
Additionally, you will need to provision your certificate outside of your cluster using the AWS console or CLI.
Once both of these are done, you can use the following Kubernetes manifest to provision an NLB that directly targets our ingress controller.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: YOUR_CERTIFICATE_ARN
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: https
      port: 443
      targetPort: 80
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
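Once the Service is applied, the AWS Load Balancer Controller provisions the NLB in the background. A generic way to find the hostname your DNS record should point at:

kubectl get service nginx-ingress-lb -n ingress-nginx   # the EXTERNAL-IP column shows the NLB's DNS name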
If you want to disable outgoing connections for the cluster, you can allowlist the following IPs:

- 35.234.93.182 (api.polarsignals.com)
- 76.76.21.0/24
- The Datadog IP ranges, which can be listed with:
  curl -s https://ip-ranges.datadoghq.com/ | jq -r '(.apm.prefixes_ipv4 + .global.prefixes_ipv4 + .logs.prefixes_ipv4 + .agents.prefixes_ipv4) | unique[]'
The turbopuffer team will provide you with a new image digest to use. Set the image.digest value in your Helm values.yaml file, then run helm upgrade -n default turbopuffer tpuf --values=values.yaml to roll out the new image.
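In the nested values.yaml layout this corresponds to (the digest below is a placeholder, not a real image reference):

image:
  digest: sha256:<digest provided by turbopuffer>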