<aside> <img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/6b34caf8-948e-4b31-96a8-07d0da1ebd86/a0aca5b0-eb58-4785-bd22-822625f9c18c/github-mark-white.png" alt="GitHub logo" width="40px" />
This is nutramap, the nutrition tracker of my dreams.
Nutramap uses natural language processing, so you can type what you had to eat yesterday just like you’re texting your mom about it.
It breaks down your meal into ingredients, estimates the amounts of each, then uses hybrid vector search to map each ingredient to the USDA's massive nutrition database to get hyper-accurate nutrient data.
My hybrid search algorithm combines FAISS vector search and BM25 keyword indexing with reciprocal rank fusion. With a series of performance optimizations, I was able to achieve 75ms average query latency over 2.7M records.
</aside>
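The fusion step mentioned in the aside can be sketched in a few lines, assuming the dense (FAISS) and sparse (BM25) sides each return a ranked list of document ids. The food names, function name, and the common k=60 constant below are illustrative, not nutramap's actual code:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal rank fusion: each ranked list contributes 1/(k + rank)
    to a doc's fused score, so agreement near the top wins."""
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results for the query "greek yogurt"
dense = ["greek_yogurt", "plain_yogurt", "kefir"]   # FAISS (vector) order
sparse = ["greek_yogurt", "skyr", "plain_yogurt"]   # BM25 (keyword) order
print(rrf_fuse([dense, sparse]))
# → ['greek_yogurt', 'plain_yogurt', 'skyr', 'kefir']
```

RRF needs no score normalization between the two retrievers, which is exactly why it pairs well with FAISS distances and BM25 scores that live on different scales.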
Performance metrics for the hybrid search feature:
• < 100ms end-to-end query latency even with reranking
• > 1000 QPS on commodity hardware without choking
• Linear scaling when adding more docs or users
• Failover-ready if vector store or sparse store crashes
• Memory efficient: quantized vectors, sparsity-aware storage
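On the quantized-vectors point: one common version of the trick, sketched here with illustrative numbers since I'm not showing nutramap's exact storage format, is storing float32 embeddings as int8 plus a per-vector scale, for roughly a 4x memory reduction:

```python
import numpy as np

def quantize_int8(vecs):
    """Per-vector symmetric quantization: float32 -> int8 plus a scale."""
    scale = np.abs(vecs).max(axis=1, keepdims=True) / 127.0
    q = np.round(vecs / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction; error is bounded by scale / 2 per value."""
    return q.astype(np.float32) * scale

vecs = np.random.rand(1000, 384).astype(np.float32)  # toy embeddings
q, scale = quantize_int8(vecs)
print(vecs.nbytes // q.nbytes)  # → 4 (quantized vectors use 1/4 the bytes)
```

The small per-value rounding error barely moves nearest-neighbor rankings, which is why quantization is a standard lever for fitting millions of vectors in RAM.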
Nutramap uses NLP to break down your meal into ingredients, estimates the amounts of each, and maps each ingredient to the most comprehensive nutrition database there is, the USDA’s.
There are thousands upon thousands of foods in this database: every version of every raw ingredient you can imagine, plus many branded foods, each with a full nutrition panel covering over 70 different nutrients.
For example, if I typed in yogurt, it could be any of these and more:
So parsing user input and mapping it as accurately as possible to a database entry was no minor task, especially because you want to keep the app as fluid and responsive as possible.
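To ground the keyword half of that mapping: BM25 is just term-frequency scoring with document-length normalization. A toy version makes it concrete (the corpus, whitespace tokenizer, and k1/b defaults here are illustrative; the real sparse index runs over 2.7M USDA records):

```python
import math

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each doc against the query with Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(d) for d in tokenized) / len(tokenized)
    n = len(docs)
    scores = [0.0] * n
    for term in query.lower().split():
        df = sum(1 for d in tokenized if term in d)  # document frequency
        if df == 0:
            continue
        idf = math.log((n - df + 0.5) / (df + 0.5) + 1)  # rare terms weigh more
        for i, d in enumerate(tokenized):
            tf = d.count(term)
            denom = tf + k1 * (1 - b + b * len(d) / avgdl)
            scores[i] += idf * (tf * (k1 + 1)) / denom
    return scores

docs = ["greek yogurt plain nonfat",
        "yogurt strawberry lowfat",
        "cheddar cheese block"]
scores = bm25_scores("greek yogurt", docs)
# Doc 0 matches both query terms, so it scores highest; doc 2 scores 0.
```

This keyword precision is what the sparse side contributes: "greek" is rare in the corpus, so BM25 rewards the exact match that a pure embedding search might blur.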