Analyze food, chat with AI, plan weekly diets, and receive real-time nutrition insights.
📱 Mobile App: Download / View Repository
NomAI is a powerful AI Agent that brings nutrition and food intelligence to life. Whether you're analyzing meals through images, chatting with an AI nutrition assistant, or generating personalized weekly diet plans — NomAI handles the heavy lifting with a sophisticated multi-step LLM pipeline backed by real-time web research.
| Feature | Description |
|---|---|
| 🧠 AI Nutrition Analysis | Analyze food from images or text descriptions with a 3-step pipeline: food extraction → web search → LLM synthesis |
| 💬 Conversational AI Chatbot | LangChain-powered agent that understands dietary preferences, allergies, and health goals |
| 🍽️ Weekly Diet Planner | Generate 7-day personalized meal plans with carb cycling, variety tracking, and macro targets |
| 🔄 Meal Alternatives | Get 5 AI-suggested alternative meals respecting your dietary profile |
| 📊 Nutrition Tracking | Mark meals as eaten, update plans on the fly, and track diet history |
| 🔗 Dual LLM Support | Seamlessly switch between Google Gemini and OpenRouter (Claude) providers |
| 🌐 Web-Grounded Analysis | Nutrition data enriched with web search results from Exa or DuckDuckGo |
| 🛢️ Firestore Persistence | Chat history and diet plans stored in Google Firestore |
```mermaid
graph TD
    Client["🖥️ Client (Mobile / Web)"]
    Main["main.py — FastAPI App"]
    Client --> Main
    Main --> NutritionRouter["/api/v1/nutrition"]
    Main --> ChatRouter["/api/v1/users"]
    Main --> AgentRouter["/api/v1/chat"]
    Main --> DietRouter["/api/v1/diet"]
    NutritionRouter --> NutritionServiceV2
    AgentRouter --> LangChainAgent["🤖 LangChain Agent"]
    LangChainAgent --> AgentTools["Tools: analyse_image\nanalyse_food_description"]
    AgentTools --> NutritionServiceV2
    ChatRouter --> ChatFirestore
    DietRouter --> DietService
    NutritionServiceV2 --> FoodExtractor["FoodExtractorService"]
    NutritionServiceV2 --> SearchService
    NutritionServiceV2 --> LLMProvider["LLM Provider\n(Gemini / OpenRouter)"]
    DietService --> LLMProvider
    DietService --> DietFirestoreDB["DietFirestore"]
    FoodExtractor --> LLMProvider
    SearchService --> ExaAPI["🔍 Exa / DuckDuckGo"]
    ChatFirestore --> Firestore["🔥 Firestore DB"]
    DietFirestoreDB --> Firestore
```
| Method | Path | Description |
|---|---|---|
| POST | `/analyze` | Analyze nutrition from a food image (URL required) |
| POST | `/analyze-by-description` | Analyze nutrition from a text description |
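As a rough illustration, a request body for `/analyze-by-description` might look like the snippet below. The field names are assumptions for illustration only; check the Pydantic models in `app/models/` for the actual schema.

```python
import json

# Hypothetical request body for POST /api/v1/nutrition/analyze-by-description.
# Field names are illustrative assumptions, not the verified schema.
payload = {
    "description": "grilled chicken breast with brown rice and broccoli",
    "user_id": "demo-user",
}

# With httpx installed, this could be posted as:
#   httpx.post("http://localhost:8000/api/v1/nutrition/analyze-by-description",
#              json=payload)
body = json.dumps(payload)
```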
| Method | Path | Description |
|---|---|---|
| POST | `/messages` | Send a message and receive AI nutrition analysis with tool calls |
| Method | Path | Description |
|---|---|---|
| GET | `/` | Get chat messages for a user (paginated) |
| PATCH | `/log-status` | Update the `isAddedToLogs` flag for a message |
| Method | Path | Description |
|---|---|---|
| POST | `/` | Generate a new weekly diet plan |
| GET | `/{user_id}` | Get the active weekly diet |
| GET | `/{user_id}/history` | Get diet history (paginated) |
| POST | `/{user_id}/suggest-alternatives` | Suggest 5 alternative meals |
| PUT | `/{user_id}/{day_index}/meals/{meal_type}` | Update a specific meal |
| PATCH | `/{user_id}/meals/eaten` | Mark a meal as eaten / not eaten |
| GET | `/{user_id}/diet/{diet_id}` | Get a specific diet by ID |
| POST | `/{user_id}/diet/{diet_id}/copy` | Copy a past diet as the new active diet |
NomAI's conversational brain operates as a ReAct Agent (Reasoning + Acting). It doesn't just respond; it thinks, selects tools, and iterates until it has the best answer.
```mermaid
graph TD
    User["👤 User Input\n(Chat/Image)"] --> Context["📋 Context Builder\n(Preferences + Allergies + Goals)"]
    Context --> Brain["🧠 LLM Controller\n(ReAct State Graph)"]
    Brain --> Decision{"Is this food-related?"}
    Decision -- "No / Simple Q&A" --> Direct["💬 Direct Friendly Answer"]
    Decision -- "Yes / Needs Analysis" --> ToolSelection["🛠️ Tool Selection"]
    ToolSelection -- "Image Provided" --> ToolA["📸 analyse_image"]
    ToolSelection -- "Text Description" --> ToolB["📝 analyse_food_description"]
    ToolA --> Pipe["🧪 Nutrition Pipeline"]
    ToolB --> Pipe
    Pipe --> Observation["🔍 Tool Observation\n(Structured Nutrition Data)"]
    Observation --> Brain
    Brain --> Final["🎁 Final Personalized Response\n(Friendly Answer + Data)"]
    Final --> Firestore["🔥 Sync to Firestore"]
```
NomAI maintains a Message State that evolves during a single request:
- State Init: User message + system prompt (personalized profile).
- Reasoning: The LLM analyzes the intent. If it sees food, it pauses and emits a `tool_call`.
- Acting: The system executes the `analyse_image` or `analyse_food_description` tool.
- Observation: The tool's output (nutrients, calories, fiber) is appended to the message list.
- Finalization: The LLM reads the tool's findings and crafts a warm, personalized message (e.g., "This burger looks delicious, Pavel! It has 650 calories, but keep an eye on the sodium...").
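The evolving message state can be sketched in plain Python. The tool names below are the agent's real tools, but the dispatch logic is a simplified stand-in for the actual LangChain state graph, and the tool body is a fake observation rather than the real 3-step pipeline.

```python
# Minimal sketch of one ReAct turn: init -> reasoning -> acting ->
# observation -> finalization. Not the actual agent code.

def analyse_food_description(text: str) -> dict:
    # Stand-in observation; the real tool runs the nutrition pipeline.
    return {"food": text, "calories": 650, "sodium_mg": 980}

def run_react_turn(user_message: str, system_prompt: str) -> list[dict]:
    messages = [                        # State Init
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]
    # Reasoning: a real LLM decides; here we hard-code "food detected".
    tool_call = {"name": "analyse_food_description",
                 "args": {"text": user_message}}
    messages.append({"role": "assistant", "tool_call": tool_call})
    # Acting + Observation: execute the tool, append its structured output.
    observation = analyse_food_description(**tool_call["args"])
    messages.append({"role": "tool", "content": observation})
    # Finalization: the LLM would now craft a friendly answer from the data.
    messages.append({"role": "assistant",
                     "content": f"That has {observation['calories']} calories."})
    return messages

history = run_react_turn("a cheeseburger", "You are a nutrition assistant.")
```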
NomAI supports two easy deployment options: Google Cloud Platform (GCP) or Railway.
NomAI is architected for the cloud, utilizing a fully automated CI/CD pipeline on Google Cloud Platform (GCP).
```mermaid
graph LR
    Dev["💻 Developer\nPush to GitHub"] --> CB["⚙️ Google Cloud Build"]
    subgraph "CI/CD Pipeline"
        CB --> Build["🐳 Docker Build\n(Dockerfile.api)"]
        Build --> AR["📦 Artifact Registry\n(Docker Image)"]
        AR --> Deploy["🚀 Cloud Run\n(Managed Serverless)"]
    end
    Deploy --> Public["🌐 Public API Endpoints\n(https://nomai-service-...)"]
    subgraph "Infrastructure"
        Firebase["🔥 Firestore\n(NoSQL DB)"]
        Secret["🔒 Secret Manager\n(API Keys)"]
    end
    Deploy -.-> Firebase
    Deploy -.-> Secret
```
One-click deployment to Railway with built-in environment variable management and template setup.
- Click the button above — This opens Railway with the NomAI template pre-loaded
- Connect your GitHub repository to enable automatic deployments
- Configure Environment Variables in the Railway dashboard:
| Variable | Description | Required |
|---|---|---|
| `FIREBASE_CREDENTIALS_JSON` | Full Firebase service account JSON string | ⬜ |
| `GOOGLE_API_KEY` | Google Gemini API key (can be skipped if OpenRouter is used) | ✅ |
| `EXA_API_KEY` | Exa search API key (optional when using DuckDuckGo) | ⬜ |
| `SEARCH_PROVIDER` | `exa` or `duckduckgo` | ✅ |
| `PROVIDER_TYPE` | `gemini` or `openrouter` | ✅ |
| `GEMINI_MODEL` | Gemini model name (auto-switches when `PROVIDER_TYPE=gemini`) | ⬜ |
| `OPENROUTER_MODEL` | OpenRouter model name (auto-switches when `PROVIDER_TYPE=openrouter`) | ⬜ |
| `OPENROUTER_API_KEY` | OpenRouter API key (required if using OpenRouter) | ⬜ |
| `AGENT_MODEL` | Agent model for LangChain — use a Gemini model when the provider is `gemini`, otherwise an OpenRouter model (default: `openai/gpt-4o-mini`) | ✅ |
Model Auto-Switching: When `PROVIDER_TYPE=gemini`, the system uses `GEMINI_MODEL` (default: `gemini-2.0-flash`). When `PROVIDER_TYPE=openrouter`, it uses `OPENROUTER_MODEL` (default: `google/gemini-3.1-flash-lite-preview`).
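The auto-switching rule can be sketched as a small lookup. This is assumed logic mirroring the rule above, not the actual settings code in `app/config/`.

```python
# Map each provider to its model env var and its documented default.
DEFAULTS = {
    "gemini": ("GEMINI_MODEL", "gemini-2.0-flash"),
    "openrouter": ("OPENROUTER_MODEL", "google/gemini-3.1-flash-lite-preview"),
}

def resolve_model(env: dict[str, str]) -> str:
    """Pick the model name based on PROVIDER_TYPE, falling back to defaults."""
    provider = env.get("PROVIDER_TYPE", "gemini")
    var, default = DEFAULTS[provider]
    return env.get(var, default)
```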
Firebase credentials are loaded in one of three ways: on a local machine via a service-account JSON file, on GCP via the default credentials (no file required), and on non-Google hosts like Railway via the `FIREBASE_CREDENTIALS_JSON` environment variable.
- Deploy — Railway auto-detects Python, installs dependencies via `uv`, and starts `uvicorn main:app`
- Custom Domain (optional) — Add a custom domain in Railway service settings
- ✅ Automatic HTTPS/SSL
- ✅ GitHub integration for auto-deploy on push
- ✅ Environment variable management
- ✅ Built-in logs and monitoring
- ✅ Free tier available
- Containerization: `uv`-optimized Python 3.13 slim image for fast builds and a minimal footprint.
- Orchestration: Managed via `cloudbuild.yaml` in `infra/cloudbuild/` (GCP).
- Hosting: Google Cloud Run for autoscaling serverless execution (GCP) or Railway for managed deployment.
- Registry: Google Artifact Registry for secure container storage (GCP).
NomAI uses a sophisticated 3-step pipeline for accurate, web-grounded nutrition analysis:
```mermaid
graph LR
    A["📸 Multi-modal Input\n(Image + Prompt)"] --> B["Step 1\nFood Identification"]
    B --> C["Step 2\nWeb Grounding"]
    C --> D["Step 3\nMultimodal Synthesis"]
    D --> E["📊 Structured\nNutrition Response"]
    B -.- B1["Lightweight LLM identifies\nfood items & generates\nenriched search queries"]
    C -.- C1["Exa / DuckDuckGo\nfinds USDA, FDA, or\nbrand-specific data"]
    D -.- D1["Powerful Multimodal LLM\ncombines Actual Image +\nWeb Data + User Prompt"]
```
| Step | Purpose | Service |
|---|---|---|
| 1. Food Identification | Lightweight LLM detects food items and generates enriched queries for search | FoodExtractorService |
| 2. Web Grounding | Executes targeted searches (Exa/DDG) to find authoritative nutritional facts | SearchService |
| 3. Multimodal Synthesis | Final LLM combines Actual Image + Web Results + User Prompt for the result | GeminiProvider / OpenRouterProvider |
| 4. Client Delivery | Returns highly accurate, fact-checked structured data to the user | FastAPI Response |
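The table above can be read as three composed functions. The service names match the architecture, but the function bodies below are illustrative stand-ins, not the real implementations.

```python
# Schematic of the 3-step pipeline; each step stands in for a real service.

def extract_foods(prompt: str) -> list[str]:
    # Step 1 — FoodExtractorService: a lightweight LLM names the foods
    # and produces enriched search queries.
    return [f"{prompt} nutrition facts USDA"]

def web_ground(queries: list[str]) -> list[dict]:
    # Step 2 — SearchService: Exa or DuckDuckGo fetches authoritative data.
    return [{"query": q, "snippet": "calories: 165 per 100 g"} for q in queries]

def synthesize(prompt: str, evidence: list[dict]) -> dict:
    # Step 3 — the multimodal LLM merges image + web data + prompt into
    # a structured response (here just summarized).
    return {"food": prompt, "sources": len(evidence)}

result = synthesize("chicken breast", web_ground(extract_foods("chicken breast")))
```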
Generating a weekly diet plan is a multi-stage process that balances nutritional targets, metabolic variety (carb cycling), and food diversity.
```mermaid
graph TD
    Input["📥 DietInput Payload\n(Macros + Preferences + Goals)"] --> Calc["⚖️ Target Calculator"]
    Calc --> Patterns["🔄 Carb Cycling Logic\n(Mon-Sun Pattern: 0, -15, -10, -15, +10, -10, +15)"]
    Patterns --> Loop["🔁 7-Day Generation Loop"]
    subgraph "Per-Day Iteration"
        DayPrompt["📝 Prompt Builder\n(Day Context + Used Foods)"] --> LLMCall["🤖 LLM Provider\n(Gemini / OpenRouter)"]
        LLMCall --> DailyClean["🧹 Data Cleaning\n(Remove concerns/alternatives)"]
        DailyClean --> Variety["🥗 Update Used Foods\n(Track variety for next days)"]
    end
    Loop --> DayPrompt
    Variety -- "Next Day" --> Loop
    Variety -- "End Loop" --> Aggregator["📊 Weekly Aggregator\n(Sum Macros & Totals)"]
    Aggregator --> Firestore["🔥 Firestore Storage\n(users/{userId}/diet/{dietId})"]
    Firestore --> Final["✅ WeeklyDietOutput\n(Sent to Client)"]
```
- Payload Intake: Receives targets (Calories, P/C/F), dietary restrictions (Vegan, Keto, etc.), and health goals.
- Carb Cycling Pattern: Instead of flat targets, the system applies a pattern (e.g., lower carbs on Mon/Thu, higher carbs on Fri/Sun) to prevent metabolic adaptation.
- Sequential Variety Loop: The system generates one day at a time, feeding the list of `used_foods` from previous days back into the next prompt to ensure you don't eat the same thing every day.
- Final Aggregation: Calculates the total weekly impact and persists the plan as "active" in Firestore.
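The carb-cycling step can be sketched as applying the Mon-Sun pattern to a baseline target. Treating the pattern values as percentage offsets is an assumption about the units; the real Target Calculator may work differently.

```python
# Mon..Sun offsets from the diagram, interpreted here as percentages.
CARB_PATTERN = [0, -15, -10, -15, +10, -10, +15]

def daily_carb_targets(base_carbs_g: float) -> list[float]:
    """Apply the weekly carb-cycling pattern to a baseline gram target."""
    return [round(base_carbs_g * (1 + pct / 100), 1) for pct in CARB_PATTERN]

targets = daily_carb_targets(200)  # e.g. a 200 g/day baseline
```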
NomAI utilizes a modular search architecture to ground AI responses in real-world data. It supports multiple search backends that can be toggled via environment variables.
```mermaid
graph TD
    Pipeline["🧪 Nutrition Pipeline"] --> Router["🔍 SearchService (Router)"]
    Router --> Env{"SEARCH_PROVIDER\nEnv Var"}
    Env -- "exa" --> Exa["🚀 ExaSearchProvider"]
    Env -- "duckduckgo" --> DDG["🦆 DuckDuckGoSearchProvider"]
    Exa --> ExaAPI["Exa AI API\n(Authoritative / Linked Data)"]
    DDG --> DDG_Library["DDGS Library\n(Global Web Search)"]
    ExaAPI --> Results["📊 Normalized SearchResults\n(Title, URL, Snippet, Score)"]
    DDG_Library --> Results
    Results --> Pipeline
```
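The router pattern above can be sketched as follows. The class names match the diagram, but the interface shape and stub results are assumptions; the real providers call the Exa API and the DDGS library.

```python
from abc import ABC, abstractmethod

class SearchProvider(ABC):
    """Common interface that normalizes results across backends."""
    @abstractmethod
    def search(self, query: str) -> list[dict]: ...

class ExaSearchProvider(SearchProvider):
    def search(self, query: str) -> list[dict]:
        # Real implementation calls the Exa AI API.
        return [{"title": "USDA entry", "url": "https://example.com", "score": 0.9}]

class DuckDuckGoSearchProvider(SearchProvider):
    def search(self, query: str) -> list[dict]:
        # Real implementation uses the DDGS library.
        return [{"title": "Web result", "url": "https://example.com", "score": None}]

def make_search_provider(name: str) -> SearchProvider:
    """Select the backend from the SEARCH_PROVIDER value."""
    providers = {"exa": ExaSearchProvider, "duckduckgo": DuckDuckGoSearchProvider}
    return providers[name]()
```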
NomAI supports multiple LLM backends via a strategy pattern:
```mermaid
classDiagram
    class LLMProvider {
        <<abstract>>
        +generate_from_text(prompt, schema)
        +generate_from_image(prompt, image_bytes, schema)
    }
    class GeminiProvider {
        +Google Gemini API
        +Structured output support
        +Native vision capabilities
    }
    class OpenRouterProvider {
        +OpenRouter API
        +Claude, GPT, etc.
        +JSON schema validation
    }
    LLMProvider <|-- GeminiProvider
    LLMProvider <|-- OpenRouterProvider
```
Switch providers with a single environment variable: `PROVIDER_TYPE=gemini` or `PROVIDER_TYPE=openrouter`.
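A minimal sketch of the strategy pattern from the class diagram. The method signatures mirror the diagram; the bodies are stand-ins, not the real API-calling code.

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    @abstractmethod
    def generate_from_text(self, prompt: str, schema: dict) -> dict: ...

    @abstractmethod
    def generate_from_image(self, prompt: str, image_bytes: bytes,
                            schema: dict) -> dict: ...

class GeminiProvider(LLMProvider):
    def generate_from_text(self, prompt, schema):
        return {"provider": "gemini", "prompt": prompt}       # stub
    def generate_from_image(self, prompt, image_bytes, schema):
        return {"provider": "gemini", "bytes": len(image_bytes)}  # stub

class OpenRouterProvider(LLMProvider):
    def generate_from_text(self, prompt, schema):
        return {"provider": "openrouter", "prompt": prompt}   # stub
    def generate_from_image(self, prompt, image_bytes, schema):
        return {"provider": "openrouter", "bytes": len(image_bytes)}  # stub

def get_provider(provider_type: str) -> LLMProvider:
    """Resolve the concrete provider from PROVIDER_TYPE."""
    return {"gemini": GeminiProvider, "openrouter": OpenRouterProvider}[provider_type]()
```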
```
nomai-backend/
├── app/
│   ├── agent/          # LangChain AI agent
│   ├── config/         # App settings
│   ├── endpoints/      # API route handlers
│   ├── exceptions/     # Custom exception hierarchy
│   ├── middleware/     # FastAPI middleware
│   ├── models/         # Pydantic data models
│   ├── services/       # Business logic layer
│   └── utils/          # Helpers and shared utilities
├── infra/              # Infrastructure as Code
│   ├── cloudbuild/     # GCP Cloud Build configs
│   └── docker/         # Dockerfile.api
├── main.py             # App entrypoint
├── pyproject.toml      # Dependencies (uv)
└── railway.json        # Railway deployment config
```
NomAI uses a comprehensive exception hierarchy with 18 standardized error codes:
```mermaid
graph TD
    Base["BaseNomAIException"]
    Base --> V["ValidationException (400)"]
    Base --> I["ImageProcessingException (400/413)"]
    Base --> N["NutritionAnalysisException (400/422)"]
    Base --> E["ExternalServiceException (502)"]
    Base --> C["ConfigurationException (503)"]
    Base --> R["RateLimitException (429)"]
    Base --> B["BusinessLogicException (400)"]
```
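A sketch of this hierarchy with two of the subclasses; the class names come from the diagram, while the `status_code` and `error_code` attributes are assumptions about how the real classes carry their HTTP status and error code.

```python
class BaseNomAIException(Exception):
    """Root of the hierarchy; subclasses override status_code."""
    status_code = 500

    def __init__(self, message: str, error_code: str = "INTERNAL_ERROR"):
        super().__init__(message)
        self.error_code = error_code

class ValidationException(BaseNomAIException):
    status_code = 400

class RateLimitException(BaseNomAIException):
    status_code = 429

# Catching the base class lets middleware map any NomAI error to a response.
try:
    raise ValidationException("image URL is required", error_code="MISSING_FIELD")
except BaseNomAIException as exc:
    caught = (exc.status_code, exc.error_code)
```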
```shell
git clone https://github.com/Pavel401/nomai-backend.git
cd nomai-backend
uv sync
uvicorn main:app --host 0.0.0.0 --port 8000 --reload
```

| Variable | Purpose | Default |
|---|---|---|
| `PROD` | Production mode toggle | `false` |
| `PROVIDER_TYPE` | LLM provider (`gemini` / `openrouter`) | `gemini` |
| `GOOGLE_API_KEY` | Google Gemini API key | — |
| `GEMINI_MODEL` | Gemini model name (auto-used when `PROVIDER_TYPE=gemini`) | `gemini-2.0-flash` |
| `OPENROUTER_API_KEY` | OpenRouter API key | — |
| `OPENROUTER_MODEL` | OpenRouter model (auto-used when `PROVIDER_TYPE=openrouter`) | `google/gemini-3.1-flash-lite-preview` |
| `AGENT_MODEL` | Agent model for LangChain | `openai/gpt-4o-mini` |
| `SEARCH_PROVIDER` | Search backend (`exa` / `duckduckgo`) | `exa` |
| `EXA_API_KEY` | Exa search API key | — |
| `FIREBASE_CREDENTIALS_PATH` | Firebase service account JSON path | — |
| `FIREBASE_CREDENTIALS_JSON` | Firebase service account JSON string (for Railway) | — |
| `FIRESTORE_DATABASE_ID` | Firestore database ID | `mealai` |
| `DEBUG_MODE` | Enable pipeline debug logging | `false` |
| `HOST` / `PORT` | Server bind address | `0.0.0.0` / `8000` |
| Tech | Use Case |
|---|---|
| FastAPI | API framework |
| LangChain | Agent orchestration & tool management |
| Google Gemini | Primary LLM for analysis & generation |
| OpenRouter | Alternative LLM provider (Claude, GPT, etc.) |
| Pydantic v2 | Data validation & structured output |
| Firestore | Chat & diet plan persistence |
| Exa / DuckDuckGo | Web search for nutrition data grounding |
| Python 3.13+ | Core backend language |
| uv | Package management |
Built with ❤️