13 changes: 7 additions & 6 deletions models.yml
@@ -8,35 +8,35 @@
 episodic:
   ollama: nomic-embed-text
   openai: text-embedding-3-small
-  gemini: models/text-embedding-004
+  gemini: models/gemini-embedding-001
   aws: amazon.titan-embed-text-v2:0
   local: all-MiniLM-L6-v2

 semantic:
   ollama: nomic-embed-text
   openai: text-embedding-3-small
-  gemini: models/text-embedding-004
+  gemini: models/gemini-embedding-001
   aws: amazon.titan-embed-text-v2:0
   local: all-MiniLM-L6-v2

 procedural:
   ollama: nomic-embed-text
   openai: text-embedding-3-small
-  gemini: models/text-embedding-004
+  gemini: models/gemini-embedding-001
   aws: amazon.titan-embed-text-v2:0
   local: all-MiniLM-L6-v2

 emotional:
   ollama: nomic-embed-text
   openai: text-embedding-3-small
-  gemini: models/text-embedding-004
+  gemini: models/gemini-embedding-001
   aws: amazon.titan-embed-text-v2:0
   local: all-MiniLM-L6-v2

 reflective:
   ollama: nomic-embed-text
   openai: text-embedding-3-large
-  gemini: models/text-embedding-004
+  gemini: models/gemini-embedding-001
   aws: amazon.titan-embed-text-v2:0
   local: all-mpnet-base-v2
 # Available Ollama models (pull with: ollama pull <model>)
@@ -50,7 +50,8 @@ reflective:
 # - text-embedding-3-large (3072d)

 # Gemini models:
-# - models/text-embedding-004 (768d) - latest
+# - models/gemini-embedding-001 (3072d native, configurable via outputDimensionality) - current GA
+# - models/text-embedding-004 (768d) - deprecated, returns 404
 # - models/embedding-001 (768d) - deprecated

 #AWS models:
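The new default noted in the comment above, gemini-embedding-001, returns 3072-dimensional vectors natively but accepts an outputDimensionality parameter per request. A minimal sketch of shaping a batchEmbedContents payload that pins the output to 768 dimensions, so new vectors stay size-compatible with the deprecated text-embedding-004; the buildBatchRequest helper is a hypothetical name, not part of this PR:

```typescript
// Hypothetical helper (not from this PR): build a batchEmbedContents
// payload for gemini-embedding-001, optionally requesting a reduced
// output dimensionality so new vectors match an existing 768d index.
type EmbedRequest = {
  model: string;
  content: { parts: { text: string }[] };
  outputDimensionality?: number;
};

function buildBatchRequest(
  texts: string[],
  dims?: number,
): { requests: EmbedRequest[] } {
  return {
    requests: texts.map((t) => ({
      model: "models/gemini-embedding-001",
      content: { parts: [{ text: t }] },
      // Only set the field when a reduced size is requested.
      ...(dims !== undefined ? { outputDimensionality: dims } : {}),
    })),
  };
}

const body = buildBatchRequest(["hello", "world"], 768);
console.log(body.requests.length); // 2
console.log(body.requests[0].outputDimensionality); // 768
```

The payload would then be POSTed to the v1beta batchEmbedContents endpoint shown in the embed.ts diff below; exact field names should be checked against the current Generative Language API reference.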
10 changes: 5 additions & 5 deletions packages/openmemory-js/src/core/models.ts
@@ -51,37 +51,37 @@ const get_defaults = (): model_cfg => ({
   episodic: {
     ollama: "nomic-embed-text",
     openai: "text-embedding-3-small",
-    gemini: "models/embedding-001",
+    gemini: "models/gemini-embedding-001",
     aws: "amazon.titan-embed-text-v2:0",
     siray: "text-embedding-3-small",
     local: "all-MiniLM-L6-v2",
   },
   semantic: {
     ollama: "nomic-embed-text",
     openai: "text-embedding-3-small",
-    gemini: "models/embedding-001",
+    gemini: "models/gemini-embedding-001",
     aws: "amazon.titan-embed-text-v2:0",
     siray: "text-embedding-3-small",
     local: "all-MiniLM-L6-v2",
   },
   procedural: {
     ollama: "nomic-embed-text",
     openai: "text-embedding-3-small",
-    gemini: "models/embedding-001",
+    gemini: "models/gemini-embedding-001",
     aws: "amazon.titan-embed-text-v2:0",
     local: "all-MiniLM-L6-v2",
   },
   emotional: {
     ollama: "nomic-embed-text",
     openai: "text-embedding-3-small",
-    gemini: "models/embedding-001",
+    gemini: "models/gemini-embedding-001",
     aws: "amazon.titan-embed-text-v2:0",
     local: "all-MiniLM-L6-v2",
   },
   reflective: {
     ollama: "nomic-embed-text",
     openai: "text-embedding-3-large",
-    gemini: "models/embedding-001",
+    gemini: "models/gemini-embedding-001",
     aws: "amazon.titan-embed-text-v2:0",
     local: "all-mpnet-base-v2",
   },
6 changes: 3 additions & 3 deletions packages/openmemory-js/src/memory/embed.ts
@@ -307,11 +307,11 @@ async function emb_gemini(
 ): Promise<Record<string, number[]>> {
   if (!env.gemini_key) throw new Error("Gemini key missing");
   const prom = gem_q.then(async () => {
-    const url = `https://generativelanguage.googleapis.com/v1beta/models/text-embedding-004:batchEmbedContents?key=${env.gemini_key}`;
+    const url = `https://generativelanguage.googleapis.com/v1beta/models/gemini-embedding-001:batchEmbedContents?key=${env.gemini_key}`;
Review comment (P1): Re-embed vectors when switching Gemini model

When upgrading a database that already contains Gemini vectors produced by text-embedding-004 or embedding-001, this line changes query and newly written embeddings to gemini-embedding-001 without invalidating the old stored vectors. The vector table stores only v/dim and no model identity (packages/openmemory-js/src/core/db.ts:161), while search compares the new query vector against every stored vector in the sector (packages/openmemory-js/src/core/vector/postgres.ts:58). Existing memories therefore remain in the old embedding space, and semantic ranking becomes unreliable until they are re-embedded. Please add a rebuild/migration path or a model-version guard before silently switching the default.
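The model-version guard suggested in this review comment could be sketched as follows. StoredVec, CURRENT_MODEL, and splitByModel are hypothetical names, not part of this codebase, and the sketch assumes a nullable model column is added alongside the existing v/dim fields:

```typescript
// Hypothetical guard (not from this PR): tag each stored vector with the
// model that produced it, and refuse to rank vectors from a different
// embedding space against a query embedded with the current model.
type StoredVec = { v: number[]; dim: number; model?: string };

const CURRENT_MODEL = "models/gemini-embedding-001";

// Partition rows into those safe to compare against a CURRENT_MODEL query
// and those that must be re-embedded first.
function splitByModel(rows: StoredVec[]): {
  usable: StoredVec[];
  stale: StoredVec[];
} {
  const usable: StoredVec[] = [];
  const stale: StoredVec[] = [];
  for (const r of rows) {
    // Untagged rows predate the model column; treat them as stale so they
    // are queued for re-embedding rather than silently mis-ranked.
    (r.model === CURRENT_MODEL ? usable : stale).push(r);
  }
  return { usable, stale };
}
```

At startup, the stale partition could be re-embedded in batches before the new default takes effect, which is one shape the requested rebuild/migration path might take.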

     for (let a = 0; a < 3; a++) {
       try {
         const reqs = Object.entries(txts).map(([s, t]) => ({
-          model: "models/text-embedding-004",
+          model: "models/gemini-embedding-001",
           content: { parts: [{ text: t }] },
           taskType: task_map[s] || task_map.semantic,
         }));
@@ -705,7 +705,7 @@ export const getEmbeddingInfo = () => {
   } else if (env.emb_kind === "gemini") {
     i.configured = !!env.gemini_key;
     i.batch_api = env.embed_mode === "simple";
-    i.model = "embedding-001";
+    i.model = "gemini-embedding-001";
   } else if (env.emb_kind === "aws") {
     i.configured =
       !!env.AWS_REGION &&