Notion
Notion needed a search engine capable of scaling beyond 10B vectors to power their next generation of search & AI features.
Millions of $ saved annually
10B+ vectors
100K+ write peaks
1M+ namespaces
turbopuffer makes it too easy to build state-of-the-art AI apps.

Mickey Liu, Data Engineering Lead
Why turbopuffer?
State-of-the-art AI apps are expected to make every byte searchable for LLMs and users. Notion needed a search engine capable of 100TB+ scale with zero ops as they pioneer what SaaS looks like in the AI era.
Results
Notion migrated to turbopuffer in October 2024 and experienced:
- Consistent read performance during 100,000+ writes/s peaks
- 80% reduction in cost, allowing Notion to remove per-user AI charges
- >= 99.99% uptime
- From concerned to excited about 10x'ing their data size
- Zero performance drops
- A turbopuffer team so responsive they felt part of the Notion engineering team
- A roadmap aligned with their anticipated needs
turbopuffer's economics have changed the way we think about building products that connect data to users and LLMs.

Akshay Kothari, Co-founder
turbopuffer in Notion
turbopuffer powers Q&A, research, and 3rd-party data search in Notion:
When a dialog that may query turbopuffer opens in Notion, they immediately fire a prewarm query to ensure the namespace is in cache.
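The prewarm pattern can be sketched as follows. This is a hypothetical illustration, not turbopuffer's actual client API: the `FakeNamespace` class simulates a namespace whose first query pays a cold-start cost, and the function names (`on_dialog_open`, `on_user_search`) are assumptions standing in for Notion's UI hooks.

```python
# Hypothetical sketch of the prewarm pattern described above.
# All names here are illustrative; this is not the turbopuffer client API.

class FakeNamespace:
    """Stand-in for a namespace where the first query pays a cold-start cost."""

    def __init__(self, name: str):
        self.name = name
        self.cached = False

    def query(self, vector, top_k: int = 10) -> dict:
        cold = not self.cached
        self.cached = True  # any query pulls the namespace into cache
        return {"cold": cold, "top_k": top_k}


def on_dialog_open(ns: FakeNamespace) -> None:
    # Prewarm: fire a cheap top_k=1 query and discard the results,
    # so the namespace is already warm when the user submits a search.
    ns.query(vector=[0.0], top_k=1)


def on_user_search(ns: FakeNamespace, embedding: list[float]) -> dict:
    # The real query, served from cache if the prewarm already ran.
    return ns.query(vector=embedding, top_k=10)


ns = FakeNamespace("workspace-123")
on_dialog_open(ns)                       # absorbs the cold start
result = on_user_search(ns, [0.1, 0.2])  # result["cold"] is now False
```

The benefit is that the cold-start cost is paid in the idle moment between the dialog opening and the user finishing their query, rather than being added to the first search's latency.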