This document provides a deep-dive into the technical architecture and rationale behind the Smart Search implementation using Next.js, MongoDB Atlas, OpenAI embeddings, and modern search UX patterns.
This project implements a modular, scalable, AI-augmented search system using:
- URL-based state — Shareable, bookmarkable search results via query params
- Hybrid Search — Combines traditional full-text and fuzzy search with vector semantic search
- Vector Embeddings — Search beyond keywords using meaning (via OpenAI + MongoDB vector index)
- Streaming UI — Server Components + Suspense for smooth UX without unnecessary hydration
- Auto-complete — Type-ahead experience with MongoDB Atlas' autocomplete pipeline
To set up MongoDB Atlas:
- Go to https://cloud.mongodb.com/
- Create an M0 cluster (free tier)
- Name your database (e.g., `mydb`) and create a collection (e.g., `recipes`)
- Go to Database Access and add a user with read/write access
- Put the resulting connection string in your `.env` as `DATABASE_URL` (see the example below)
- Whitelist your IP (`0.0.0.0/0` for local testing)
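A sketch of the resulting `.env` (all values are placeholders, not taken from this repository):

```bash
# .env — placeholder values; substitute your own credentials
DATABASE_URL="mongodb+srv://<user>:<password>@<cluster-host>/mydb?retryWrites=true&w=majority"
OPENAI_API_KEY="sk-..."
```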
Go to Search → Indexes in MongoDB Atlas:
- Create a Search Index on `recipes`
- Default mappings: `title` and `description` as strings
- Edit your default index and set `title` to Autocomplete
- Rebuild the index
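For reference, a JSON index definition matching these steps might look roughly like the following (the exact mappings are an assumption based on the fields above; adjust to your data):

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "title": [
        { "type": "string" },
        { "type": "autocomplete" }
      ],
      "description": { "type": "string" }
    }
  }
}
```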
Then create a new Vector Search Index:
- Collection: `recipes`
- Field: `embeddings`
- Dimensions: 1536 (for `text-embedding-3-small`)
- Similarity: Euclidean or Cosine
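A matching Vector Search index definition (for the `vector-index` name used by the `$vectorSearch` query later on this page) might look roughly like:

```json
{
  "fields": [
    {
      "type": "vector",
      "path": "embeddings",
      "numDimensions": 1536,
      "similarity": "cosine"
    }
  ]
}
```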
The end-to-end search flow:

```
User Input
    ↓
<searchBar.tsx> (Client Component)
    ↓
URL updates via useRouter().replace()
    ↓
<page.tsx> (Server Component)
    ↓
<recipeList.tsx> (Server Component w/ Suspense)
    ↓
MongoDB Search (Text, Fuzzy, or Vector via Prisma or $aggregate)
    ↓
Render Results
```
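To illustrate the client → URL step, here is a minimal sketch of the search bar's update logic. It assumes the query lives in a `query` search param and a `useDebounce(value, delay)` hook (sketched in the hooks section below); it is not the exact `searchBar.tsx` from this repository:

```tsx
"use client";

import { useRouter, useSearchParams } from "next/navigation";
import { useEffect, useState } from "react";
import { useDebounce } from "@/hooks/useDebounce";

export default function SearchBar() {
  const router = useRouter();
  const searchParams = useSearchParams();
  const [value, setValue] = useState(searchParams.get("query") ?? "");
  const debounced = useDebounce(value, 300);

  // Push the debounced query into the URL so the Server Components re-render.
  useEffect(() => {
    const params = new URLSearchParams(searchParams.toString());
    if (debounced) params.set("query", debounced);
    else params.delete("query");
    router.replace(`?${params.toString()}`);
  }, [debounced]);

  return (
    <input
      value={value}
      onChange={(e) => setValue(e.target.value)}
      placeholder="Search recipes..."
    />
  );
}
```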
Project structure:

```
src/
├── app/                            # App Router structure (Next.js 15+)
│   ├── api/autocomplete/route.ts   # Suggestion API
│   ├── globals.css                 # Global styles
│   ├── layout.tsx                  # Shared layout
│   ├── page.tsx                    # Root page – manages query param extraction
│
├── components/                     # Reusable UI components
│   ├── AutoCompleteBox.tsx
│   ├── loading.tsx                 # Suspense fallback
│   ├── recipeList.tsx              # Renders search results (Server Component)
│   └── searchBar.tsx               # Handles input, debounce, and query state
│
├── hooks/                          # Custom React hooks
│   ├── useClickOutside.ts
│   └── useDebounce.ts
│
├── lib/                            # Shared utilities
│   ├── db.ts                       # Prisma client instance
│   └── embeddings.ts               # OpenAI vector generation
│
├── prisma/
│   ├── schema.prisma               # MongoDB schema definition for Prisma
│   └── seed.ts                     # Initial recipe seeding
│
├── scripts/
│   └── generate-embeddings.ts      # Populate vector embeddings
```
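The last entry, `scripts/generate-embeddings.ts`, backfills the vector field. A hedged sketch, assuming `db` is exported from `lib/db.ts` and using the `getEmbedding` helper sketched later on this page (the repository's actual script may differ):

```ts
// scripts/generate-embeddings.ts — hedged sketch, not the repository's exact code
import { db } from "../src/lib/db";
import { getEmbedding } from "../src/lib/embeddings";

async function main() {
  // Fetch recipes that do not have embeddings yet.
  const recipes = await db.recipe.findMany({
    where: { embeddings: { isEmpty: true } },
  });

  for (const recipe of recipes) {
    // Embed the description; the title could be concatenated in as well.
    const embedding = await getEmbedding(recipe.description);
    await db.recipe.update({
      where: { id: recipe.id },
      data: { embeddings: embedding },
    });
  }
}

main()
  .then(() => process.exit(0))
  .catch((err) => {
    console.error(err);
    process.exit(1);
  });
```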
The search layer supports three query modes.

Text search (Prisma fallback) uses a case-insensitive string filter:

```ts
await db.recipe.findMany({
  where: {
    description: {
      contains: query,
      mode: "insensitive",
    },
  },
});
```

Fuzzy full-text search runs the Atlas `$search` stage through a raw aggregation (note the collection name `recipes` from the schema's `@@map`, and the `cursor` field required by the raw `aggregate` command):

```ts
await db.$runCommandRaw({
  aggregate: "recipes",
  pipeline: [
    {
      $search: {
        index: "default",
        text: {
          query,
          path: ["title", "description"],
          fuzzy: { maxEdits: 2 },
        },
      },
    },
  ],
  cursor: {},
});
```

Vector search embeds the query first, then runs `$vectorSearch` against the vector index:

```ts
const embedding = await getEmbedding(query);

await db.$runCommandRaw({
  aggregate: "recipes",
  pipeline: [
    {
      $vectorSearch: {
        index: "vector-index",
        path: "embeddings",
        queryVector: embedding,
        numCandidates: 100,
        limit: 10,
      },
    },
  ],
  cursor: {},
});
```

This project uses OpenAI's Embedding API to convert recipe descriptions and user queries into vectors.
The embedding workflow:
- Seed recipes (`pnpm seed`)
- Generate embeddings (`pnpm embed`)
- Each document gets an `embeddings: number[]` field, stored alongside the title and description
- On search:
  - The query is sent to OpenAI to generate a query vector
  - MongoDB runs `$vectorSearch` to retrieve the most relevant recipes
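A minimal sketch of the embedding helper, assuming the official `openai` Node SDK (the actual `lib/embeddings.ts` may differ):

```ts
// lib/embeddings.ts — hedged sketch using the openai SDK
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function getEmbedding(text: string): Promise<number[]> {
  const response = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: text,
  });
  // One embedding is returned per input; we pass a single string.
  return response.data[0].embedding;
}
```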
| Feature | Tech Used | Notes / Fallback |
|---|---|---|
| Full-text | `$search` on MongoDB | Prisma string filter (if needed) |
| Fuzzy Match | `$search` + `fuzzy` | Misspell-tolerant |
| Autocomplete | `$search` autocomplete | Debounced via `useDebounce` |
| Vector Search | `$vectorSearch` on `embeddings` | Optional fallback to text search |
Key hooks and utilities:
- `useDebounce.ts` — Prevents rapid-fire API calls on every keystroke (sketched below)
- `useClickOutside.ts` — Closes the autocomplete dropdown when clicking outside of it
- `db.ts` — Ensures the Prisma client is a singleton in development
- `embeddings.ts` — Thin wrapper around the OpenAI embeddings endpoint
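A minimal sketch of the debounce hook, assuming a `(value, delay)` signature (the repository's implementation may differ):

```ts
// hooks/useDebounce.ts — hedged sketch
import { useEffect, useState } from "react";

export function useDebounce<T>(value: T, delay = 300): T {
  const [debounced, setDebounced] = useState(value);

  useEffect(() => {
    // Wait `delay` ms after the last change before propagating the value.
    const timer = setTimeout(() => setDebounced(value), delay);
    return () => clearTimeout(timer);
  }, [value, delay]);

  return debounced;
}
```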
```prisma
model Recipe {
  id          String   @id @default(auto()) @map("_id") @db.ObjectId
  title       String
  description String
  embeddings  Float[]  // Store embeddings as an array of floats
  createdAt   DateTime @default(now())
  updatedAt   DateTime @updatedAt

  @@map("recipes")
}
```

The autocomplete API (`app/api/autocomplete/route.ts`) handles GET requests like `/api/autocomplete?q=chick` and uses the `$search` `autocomplete` stage to return recipe title suggestions based on partial input.
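A hedged sketch of such a route handler (the `q` param comes from the example above; the import paths and pipeline details are assumptions, not the repository's exact code):

```ts
// app/api/autocomplete/route.ts — hedged sketch
import { NextRequest, NextResponse } from "next/server";
import { db } from "@/lib/db";

export async function GET(request: NextRequest) {
  const q = request.nextUrl.searchParams.get("q") ?? "";
  if (!q) return NextResponse.json([]);

  const result = await db.$runCommandRaw({
    aggregate: "recipes",
    pipeline: [
      {
        $search: {
          index: "default",
          autocomplete: { query: q, path: "title" },
        },
      },
      { $limit: 5 },
      { $project: { title: 1 } },
    ],
    cursor: {},
  });

  // $runCommandRaw returns the raw command response; matches live in cursor.firstBatch.
  const suggestions = (result as any)?.cursor?.firstBatch ?? [];
  return NextResponse.json(suggestions);
}
```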
OpenAI embeddings:
- Model: `text-embedding-3-small`
- Used for:
  - Embedding descriptions on ingestion (`generate-embeddings.ts`)
  - Embedding user queries at runtime (in `recipeList.tsx`)
Atlas indexes:
- Search Index (`default`): full-text and autocomplete
- Vector Index (`vector-index`): built on the `embeddings` field
UX details:
- Clearing the input updates the URL and resets the results
- Debounced autocomplete avoids request flooding
- A Suspense fallback handles real-world loading times
- A `key` prop forces Server Components to re-render when the query changes (see the sketch below)
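For example, the page can key the Suspense boundary by the query so each new query remounts the boundary and re-triggers the server fetch. A minimal sketch, assuming Next.js 15's async `searchParams` and a `query` search param (component exports and props are assumptions):

```tsx
// app/page.tsx — hedged sketch of wiring the query param into the results
import { Suspense } from "react";
import RecipeList from "@/components/recipeList";
import Loading from "@/components/loading";

export default async function Page({
  searchParams,
}: {
  searchParams: Promise<{ query?: string }>;
}) {
  const { query = "" } = await searchParams;

  return (
    <Suspense key={query} fallback={<Loading />}>
      {/* Changing `key` remounts the boundary, so RecipeList re-fetches for the new query. */}
      <RecipeList query={query} />
    </Suspense>
  );
}
```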
| Benefit | Description |
|---|---|
| Shareable | Users can copy/paste exact search queries |
| Bookmarkable | Come back to same state instantly |
| Consistent SSR | Works naturally with server-side rendering |
| SEO-friendly | Each query is its own routeable resource |
Security notes:
- The MongoDB Atlas connection string must be secured in `.env`
- OpenAI API keys must be rate-limited and used only server-side
- Production deployments (e.g., Vercel) must set the necessary environment variables