Persistent, cloud-backed shared memory for Claude Code • OpenClaw • Codex • Cursor • Hermes • pi agents.
One session ends, everything important disappears.
Hivemind finally fixes the "agent amnesia" problem.
Hivemind automatically captures every prompt, tool call, decision, and file operation, then turns them into searchable memory that is instantly available to every agent and teammate across sessions, machines, and time.
- 🧠 Captures every session's prompts, tool calls, and responses into a shared SQL table on Deeplake Cloud
- 🔍 Searches across all memory with lexical search (falls back to grep when index unavailable)
- 🔗 Shares memory across sessions, agents, teammates, and machines in real-time
- 📁 Intercepts file operations on `~/.deeplake/memory/` through a virtual filesystem backed by SQL
- 📝 Summarizes sessions into AI-generated wiki pages via a background worker at session end
One command, all your agents:
npm install -g @deeplake/hivemind && hivemind install

That's it. The installer detects every supported assistant on your machine (Claude Code, Codex, OpenClaw, Cursor, Hermes Agent, pi), wires up the hooks, and opens a browser once for login. Restart your assistants and they all share the same brain.
Install for a specific assistant only:
hivemind install --only claude
hivemind claude install # equivalent
hivemind codex install
hivemind claw install
hivemind cursor install
hivemind hermes install
hivemind pi install

Check what's wired up:

hivemind status

Supported assistants:
| Platform | Integration | Auto-capture | Auto-recall |
|---|---|---|---|
| Claude Code | Marketplace plugin | ✅ | ✅ |
| OpenClaw | Native extension | ✅ | ✅ |
| Codex | Hooks (hooks.json) | ✅ | ✅ |
| Cursor | Hooks (hooks.json, 1.7+) | ✅ | ✅ |
| Hermes Agent | Shell hooks (config.yaml) + skill + MCP server | ✅ | ✅ |
| pi | Extension API (pi.on(...)) + skill + AGENTS.md | ✅ | ✅ |
Claude Code plugin marketplace
If you prefer Claude Code's native plugin marketplace:
/plugin marketplace add activeloopai/hivemind
/plugin install hivemind
/reload-plugins
/hivemind:login
Auto-updates on each session start. Manual update: /hivemind:update.
OpenClaw ClawHub
openclaw plugins install clawhub:hivemind
Then type /hivemind_login in chat, click the auth link, and sign in.
| Command | Description |
|---|---|
| /hivemind_login | Sign in via device flow |
| /hivemind_capture | Toggle capture on/off |
| /hivemind_whoami | Show current org and workspace |
| /hivemind_orgs | List organizations |
| /hivemind_switch_org <name> | Switch organization |
| /hivemind_workspaces | List workspaces |
| /hivemind_switch_workspace <id> | Switch workspace |
| /hivemind_update | Check for plugin updates |
Auto-recall and auto-capture are enabled by default. Data is stored in the same sessions table as Claude Code and Codex.
Hivemind runs alongside OpenClaw's built-in memory-core plugin. It does not claim the memory slot, so memory-core's dreaming cron ("0 3 * * *") and other memory-slot-dependent jobs keep working. Hivemind captures session activity and exposes its own commands; memory-core keeps owning recall/promotion/dreaming.
- Hivemind seems slow or unresponsive. Check the agent model in `~/.openclaw/openclaw.json` under `agents.defaults.model`. Hivemind makes many small tool calls per turn; a large reasoning model like Opus will feel sluggish. Recommended default: `anthropic/claude-haiku-4-5-20251001`.
- `openclaw model <id>` says "plugins.allow excludes model". The `model` plugin CLI is disabled by default. Edit `~/.openclaw/openclaw.json` directly (key `agents.defaults.model`) and restart the gateway: `systemctl --user restart openclaw-gateway.service`.
- Model switch rejected as "not allowed". Use the exact dated, provider-prefixed ID (`anthropic/claude-haiku-4-5-20251001`, `anthropic/claude-sonnet-4-6`). Legacy IDs like `claude-3-5-haiku-latest` and unprefixed bare IDs are not on OpenClaw's allowlist.
- Self-update via Telegram fails with "elevated is not available". `tools.elevated.allowFrom` must include `telegram` before elevated commands work from that channel. Safer alternative: run the upgrade in a local shell with `openclaw plugins update hivemind`.
- `npm error EACCES` during self-update. OpenClaw was installed under a root-owned npm prefix (e.g. `/usr/lib/node_modules/openclaw`). Reinstall under a user-writable prefix, or run the update with appropriate privileges locally, not via a channel.
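Several of the fixes above come down to setting `agents.defaults.model` in `~/.openclaw/openclaw.json` by hand. A minimal sketch of that edit in Python, assuming only the key path documented above (the helper name and everything else is illustrative):

```python
import json
from pathlib import Path

def set_default_model(config_path: Path, model_id: str) -> None:
    """Set agents.defaults.model, creating intermediate objects as needed."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config.setdefault("agents", {}).setdefault("defaults", {})["model"] = model_id
    config_path.write_text(json.dumps(config, indent=2) + "\n")

# e.g. set_default_model(Path.home() / ".openclaw/openclaw.json",
#                        "anthropic/claude-haiku-4-5-20251001")
# then restart: systemctl --user restart openclaw-gateway.service
```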
Codex (manual)
Tell Codex to fetch and follow the install instructions:
Fetch and follow instructions from https://raw.githubusercontent.com/activeloopai/hivemind/main/codex/INSTALL.md
Or run the installer script directly:
git clone https://github.com/activeloopai/hivemind.git ~/.codex/hivemind
~/.codex/hivemind/codex/install.sh

Restart Codex to activate.
Cursor (1.7+)
The unified installer wires six lifecycle events in ~/.cursor/hooks.json — sessionStart, beforeSubmitPrompt, postToolUse, afterAgentResponse, stop, sessionEnd. Hooks fork a Node bundle at ~/.cursor/hivemind/bundle/ per event. Restart Cursor after install to load.
hivemind cursor install

Auto-capture is enabled the same way as Claude Code / Codex / OpenClaw.
Hermes Agent
Wires shell hooks into ~/.hermes/config.yaml (pre_llm_call / post_tool_call / post_llm_call / on_session_end) for auto-capture, drops the bundle at ~/.hermes/hivemind/bundle/, registers the shared MCP server (~/.hivemind/mcp/server.js) under mcp_servers.hivemind, and installs an agentskills.io-compatible skill at ~/.hermes/skills/hivemind-memory/ for recall.
hivemind hermes install

pi (badlogic/pi-mono coding-agent)
Upserts an idempotent BEGIN/END marker block into ~/.pi/agent/AGENTS.md (auto-loaded every turn) and drops a TypeScript extension at ~/.pi/agent/extensions/hivemind.ts. The extension subscribes to pi's lifecycle events (session_start / input / tool_result / message_end) for auto-capture and registers hivemind_search, hivemind_read, hivemind_index as first-class pi tools.
hivemind pi install

Note: no per-agent SKILL.md is dropped under `~/.pi/agent/skills/` — pi reads skills from both that directory AND the shared `~/.agents/skills/` location. If the codex installer has run on the same machine, pi picks up the hivemind skill from the shared `~/.agents/skills/hivemind-memory` symlink automatically. The AGENTS.md block plus the registered tools cover the action surface in either case.
hivemind uninstall # remove from every detected assistant
hivemind codex uninstall # remove from one

┌─────────────────────────────────────────────────────┐
│ Your Coding Agent │
└──────────────────────────┬──────────────────────────┘
│
┌──────────────────▼──────────────────┐
│ 📥 Capture (every turn) │
│ prompts · tool calls · responses │
└──────────────────┬──────────────────┘
│
┌──────────────────▼──────────────────┐
│ 🧠 Hivemind │
│ SQL tables · Virtual File System │
│ Search Memory · inject context │
└──────────────────┬──────────────────┘
│
┌──────────────────▼──────────────────┐
│ 🌊 Deeplake │
│ Shared across all agents │
│ Postgres · S3 │
└─────────────────────────────────────┘
Every session is captured. Every agent can recall. Teammates in the same org see the same memory.
Just ask Claude naturally:
"What was Emanuele working on?"
"Search memory for authentication bugs"
"What did we decide about the API design?"
After each session, a background worker generates a wiki summary: key decisions, code changes, next steps. Browse them at ~/.deeplake/memory/summaries/.
Invite teammates to your Deeplake org. Their agents see your memory, your agents see theirs. No setup, no sync, no merge conflicts.
Disable capture entirely:
HIVEMIND_CAPTURE=false claude

Enable debug logging:

HIVEMIND_DEBUG=1 claude

This plugin captures session activity and stores it in your Deeplake workspace:
| Data | What's captured |
|---|---|
| User prompts | Every message you send |
| Tool calls | Tool name + full input |
| Tool responses | Full tool output |
| Assistant responses | Claude's final response |
| Subagent activity | Subagent tool calls and responses |
All users in your Deeplake workspace can read this data. A DATA NOTICE is displayed at the start of every session.
| Variable | Default | Description |
|---|---|---|
| HIVEMIND_TOKEN | — | API token (auto-set by login) |
| HIVEMIND_ORG_ID | — | Organization ID (auto-set by login) |
| HIVEMIND_WORKSPACE_ID | default | Workspace name |
| HIVEMIND_API_URL | https://api.deeplake.ai | API endpoint |
| HIVEMIND_TABLE | memory | SQL table for summaries and virtual FS |
| HIVEMIND_SESSIONS_TABLE | sessions | SQL table for per-event session capture |
| HIVEMIND_MEMORY_PATH | ~/.deeplake/memory | Path that triggers interception |
| HIVEMIND_CAPTURE | true | Set to false to disable capture |
| HIVEMIND_EMBEDDINGS | true | Set to false to force lexical-only mode |
| HIVEMIND_DEBUG | — | Set to 1 for verbose hook debug logs |
Hivemind can run a local embedding daemon (nomic-embed-text-v1.5, ~130 MB)
so that Grep over ~/.deeplake/memory/ uses hybrid semantic + lexical
ranking instead of pure BM25. This is off by default — the daemon
depends on @huggingface/transformers, which pulls onnxruntime-node and
sharp (~600 MB total with native binaries). Shipping that with every agent
install would inflate the install size roughly 60× for a feature most users don't need.
To enable, run the bundled command:
hivemind embeddings install

This installs @huggingface/transformers once into a shared directory
(~/.hivemind/embed-deps/) and symlinks every detected agent's plugin to
it, so the 600 MB cost is paid one time regardless of how many agents you
have wired up. Re-run the same command after installing a new agent and
the new symlink is added (the npm install is skipped because it's cached).
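The share-once mechanism above boils down to an idempotent symlink per agent. A sketch in Python, assuming a `node_modules`-style link name inside each agent's plugin directory (the actual layout the installer uses may differ):

```python
from pathlib import Path

def link_shared_deps(shared: Path, agent_plugin_dir: Path) -> bool:
    """Point the agent's deps at the shared embed-deps dir; no-op on re-run.

    Returns True if a new symlink was created, False if already wired.
    """
    link = agent_plugin_dir / "node_modules"  # hypothetical link name
    if link.is_symlink() and link.resolve() == shared.resolve():
        return False  # already wired; re-running is free
    link.symlink_to(shared, target_is_directory=True)
    return True
```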
Or do it in one shot at install time:
hivemind install --with-embeddings # all detected agents
hivemind <agent> install --with-embeddings # a single agent

Other commands:
hivemind embeddings status # show shared deps + per-agent state
hivemind embeddings uninstall # remove the per-agent symlinks
hivemind embeddings uninstall --prune # also delete the shared dir (~600 MB)

Restart your agents after enabling. From the next session, captured
messages and AI-generated summaries will include a 768-dim embedding,
and semantic recall queries will route through the local daemon (the
nomic model is downloaded on first use and cached in ~/.cache/huggingface/).
If @huggingface/transformers is not present, Hivemind silently
degrades to lexical-only mode:
- ✅ Capture continues; rows still land in Deeplake.
- ✅ `Grep` still works via BM25 / `ILIKE` matching on text columns.
- ⚪ The `message_embedding` / `summary_embedding` columns stay `NULL`.
- ⚪ The hook log notes `embeddings: no-transformers` once at session start.
You can also force lexical-only mode explicitly with
HIVEMIND_EMBEDDINGS=false (useful for CI or air-gapped environments).
Hivemind doesn't just capture raw events — it also generates an
AI-written wiki summary for each session and stores it in the
memory table (alongside its 768-dim summary_embedding). The summary
is what shows up when you Grep for past sessions or follow links from
~/.deeplake/memory/index.md.
Each agent (Claude Code / Codex / Cursor / Hermes / pi) fires a wiki worker on two triggers:
| Trigger | When it fires |
|---|---|
| Final | At session end (Stop / SessionEnd / session_shutdown), once. |
| Periodic | Mid-session, when either threshold is hit since the last summary: messages-since-last-summary ≥ HIVEMIND_SUMMARY_EVERY_N_MSGS (default 50), or elapsed time ≥ HIVEMIND_SUMMARY_EVERY_HOURS (default 2). |
The first message after a long pause therefore triggers a fresh summary; long sessions naturally checkpoint every ~50 messages.
A per-session JSON sidecar at
~/.claude/hooks/summary-state/<sessionId>.json tracks
{lastSummaryAt, lastSummaryCount, totalCount}. The dir is shared
across all agents (session ids are UUIDs so no collisions). It is
never deleted, so resuming a session via --resume / --continue
picks up where it left off.
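The periodic trigger above can be sketched directly against the sidecar's `{lastSummaryAt, lastSummaryCount, totalCount}` fields. The thresholds are the documented defaults; the function name and millisecond timestamps are illustrative assumptions:

```python
EVERY_N_MSGS = 50   # HIVEMIND_SUMMARY_EVERY_N_MSGS default
EVERY_HOURS = 2     # HIVEMIND_SUMMARY_EVERY_HOURS default

def should_summarize(state: dict, now_ms: int) -> bool:
    """True when either periodic threshold is crossed since the last summary."""
    msgs_since = state["totalCount"] - state["lastSummaryCount"]
    if msgs_since < 1:
        return False  # no new messages, nothing to summarize
    hours_since = (now_ms - state["lastSummaryAt"]) / 3_600_000
    return msgs_since >= EVERY_N_MSGS or hours_since >= EVERY_HOURS
```

This is why the first message after a long pause fires immediately: the elapsed-time check passes as soon as one new message exists.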
- The wiki worker queries the `sessions` table for every event tied to that session.
- It builds a structured prompt asking the host agent's CLI to extract entities, decisions, files modified, open questions, etc.
- It shells out to that agent's CLI (`claude -p`, `codex exec`, `pi --print`, …) with the prompt — never a separate API key; the agent's existing credentials are used.
- The generated markdown is uploaded to the `memory` table at `/summaries/<user>/<sessionId>.md`. The shared embedding daemon produces the 768-dim `summary_embedding` so the summary is recallable via semantic search.
A lock file at ~/.claude/hooks/summary-state/<sessionId>.lock
prevents two workers from running concurrently for the same session.
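The lock-file exclusion described above is the classic create-exclusive pattern: creation with `O_CREAT | O_EXCL` is atomic, so exactly one worker wins. A sketch (function names are illustrative, not the shipped API):

```python
import os

def try_acquire_lock(lock_path: str) -> bool:
    """Atomically create the lock file; return False if another worker holds it."""
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # a concurrent worker already owns this session
    os.write(fd, str(os.getpid()).encode())  # record the owner for debugging
    os.close(fd)
    return True

def release_lock(lock_path: str) -> None:
    try:
        os.remove(lock_path)
    except FileNotFoundError:
        pass  # already released
```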
| Env var | Default | Effect |
|---|---|---|
| HIVEMIND_SUMMARY_EVERY_N_MSGS | 50 | Trigger periodic when messages-since-last ≥ this |
| HIVEMIND_SUMMARY_EVERY_HOURS | 2 | Trigger periodic after this many hours, with ≥1 msg |
| HIVEMIND_CURSOR_MODEL | auto | (cursor only) model passed to cursor-agent --print --model |
| HIVEMIND_HERMES_PROVIDER | openrouter | (hermes only) provider passed to hermes -z --provider |
| HIVEMIND_HERMES_MODEL | anthropic/claude-haiku-4-5 | (hermes only) model passed to hermes -z -m |
| HIVEMIND_PI_PROVIDER | google | (pi only) provider passed to pi --print --provider |
| HIVEMIND_PI_MODEL | gemini-2.5-flash | (pi only) model passed to pi --print --model |
| HIVEMIND_CAPTURE=false | unset | Disable both capture and summary generation |
For pi specifically, the wiki worker is bundled separately at
~/.pi/agent/hivemind/wiki-worker.js (deposited by hivemind pi install).
The other agents ship the wiki worker inside their per-agent plugin
bundle.
Hivemind also crystallises recurring patterns from your recent sessions
into reusable Claude Code skills, automatically. Same architecture as
the wiki worker: an async background process that fires on Stop /
SessionEnd, mines recent sessions in scope, asks Haiku whether the
activity contains something worth keeping, and writes a SKILL.md if so.
| Trigger | When it fires |
|---|---|
| Stop counter | Mid-session, after every HIVEMIND_SKILIFY_EVERY_N_TURNS (default 20) turns. |
| SessionEnd | Always at end-of-session, regardless of counter — catches tail-of-session knowledge. |
Per-project counter state lives at
~/.deeplake/state/skilify/<project-key>.json. Project key is the sha1
of git config remote.origin.url (with the absolute path as fallback for
non-git dirs).
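The project key derivation described above is small enough to sketch end to end: sha1 of the origin URL, with the directory path as the fallback key source (the function name is illustrative):

```python
import hashlib
import subprocess

def project_key(cwd: str) -> str:
    """sha1 of `git config remote.origin.url`, else sha1 of the path itself."""
    try:
        url = subprocess.check_output(
            ["git", "config", "remote.origin.url"],
            cwd=cwd, stderr=subprocess.DEVNULL, text=True,
        ).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        url = cwd  # non-git dir (or git absent): the absolute path is the key
    return hashlib.sha1(url.encode()).hexdigest()
```

Two clones of the same repo therefore share one counter state, while unrelated directories never collide.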
- The worker pulls the last 10 sessions in scope from the `sessions` Deeplake table — strictly newer than the watermark in the state file.
- It strips each session to prompt + assistant text only (tool calls and thinking blocks are dropped — they're noise for skill mining).
- It builds a gate prompt: existing project skill bodies + the 10 stripped exchanges + decision rules.
- It runs `claude -p haiku --permission-mode bypassPermissions` with the prompt. The model returns a JSON verdict:
  - `KEEP <name> <body>` — write a new skill.
  - `MERGE <existing-name> <merged-body>` — update an existing skill, bump version.
  - `SKIP <reason>` — pattern is one-off / generic / already covered.
- On KEEP/MERGE the skill is written to `<project>/.claude/skills/<name>/SKILL.md` (or `~/.claude/skills/...` if you've set `install` to `global`), with provenance frontmatter (`source_sessions`, `version`, `created_by_agent`, timestamps).
- A row is also inserted into the `skills` Deeplake table for org-wide provenance (append-only — never UPDATE, sidesteps the UPDATE-coalescing quirk).
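To make the three verdict shapes concrete, here is a sketch of parsing them. The worker's real payload is described above as JSON; this simplified version treats the verdict as a single `ACTION rest...` line, purely for illustration:

```python
def parse_verdict(raw: str):
    """Split a KEEP/MERGE/SKIP verdict into (action, name_or_reason, body)."""
    action, _, rest = raw.strip().partition(" ")
    action = action.upper()
    if action in ("KEEP", "MERGE"):
        name, _, body = rest.partition(" ")  # first token is the skill name
        return (action, name, body)
    if action == "SKIP":
        return ("SKIP", rest, "")  # everything after SKIP is the reason
    raise ValueError(f"unrecognized verdict: {raw!r}")
```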
The /skilify slash command (Claude Code, Codex) and the hivemind skilify CLI control mining behaviour.
hivemind skilify # show current scope, team, install, per-project state
hivemind skilify scope <me|team|org> # who counts as "in scope" for mining
hivemind skilify install <project|global> # where new skills are written
hivemind skilify promote <skill-name> # move a project skill to ~/.claude/skills/
hivemind skilify team add <username> # add to the team list (used when scope=team)
hivemind skilify team remove <username> # remove from team
hivemind skilify team list # list current team members

The team list flows into the worker's session-fetch SQL: scope=me
filters by your own username, scope=team filters by author IN (<team>), scope=org applies no author filter.
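The scope-to-predicate mapping above can be sketched as a small builder. The `author` column name comes from the text above; the quoting helper here is a hypothetical stand-in for the project's own escaping utilities:

```python
def author_filter(scope: str, me: str, team: list) -> str:
    """Build the author predicate for the session-fetch SQL."""
    def q(s):  # minimal SQL string escape: double embedded quotes
        return "'" + s.replace("'", "''") + "'"
    if scope == "me":
        return f"author = {q(me)}"
    if scope == "team":
        return "author IN (" + ", ".join(q(m) for m in team) + ")"
    return ""  # scope=org: no author filter at all
```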
Config persists at ~/.deeplake/state/skilify/config.json (one global
file shared across projects).
Once a teammate's skills are mined into the Deeplake skills table, you
can install them locally with pull. Layout written to disk:
<root>/<name>--<author>/SKILL.md ← pulled skills (e.g. deploy--alice/)
<root>/<name>/SKILL.md ← your locally-mined skills (flat, no suffix)
The --<author> suffix keeps cross-author entries with the same name
disjoint and lets Claude Code's single-depth skill loader find pulled
skills without any symlink trickery. <root> is ~/.claude/skills for
--to global and <cwd>/.claude/skills for --to project.
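The layout rule above reduces to one path computation: pulled skills get the `--<author>` suffix, locally-mined ones stay flat. A sketch (the function name is illustrative):

```python
from pathlib import Path
from typing import Optional

def skill_path(root: Path, name: str, author: Optional[str]) -> Path:
    """Where a skill's SKILL.md lands: suffixed dir for pulls, flat for local."""
    dirname = f"{name}--{author}" if author else name
    return root / dirname / "SKILL.md"
```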
hivemind skilify pull # all authors, install globally
hivemind skilify pull --user alice@example.com # only this author
hivemind skilify pull --users a@x.com,b@y.com # multiple authors (CSV)
hivemind skilify pull --all-users # explicit "no author filter" (default)
hivemind skilify pull --to project # install under <cwd>/.claude/skills
hivemind skilify pull --dry-run # preview, no disk writes
hivemind skilify pull --force # overwrite even when local version >= remote
hivemind skilify pull <skill-name> # pull only that skill (combinable with --user)

Every successful pull records an entry in
~/.deeplake/state/skilify/pulled.json. That manifest is the source of
truth for unpull — anything not in the manifest is never touched
by default, even if its directory follows the <name>--<author> shape
(this protects user-authored variant skills like deploy--blue-green).
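The manifest-as-source-of-truth rule can be sketched as a filter: only directories recorded in the manifest are ever candidates for removal. The `{"dir": ..., "author": ...}` entry shape here is an assumption for illustration, not the actual pulled.json schema:

```python
def dirs_to_remove(manifest_entries, on_disk_dirs, author=None):
    """Select directories unpull may delete.

    Anything not in the manifest is untouched, even if it looks like
    a <name>--<author> directory (protects user-authored variants).
    """
    wanted = {
        e["dir"] for e in manifest_entries
        if author is None or e["author"] == author
    }
    return sorted(d for d in on_disk_dirs if d in wanted)
```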
hivemind skilify unpull # remove every pulled entry under the install scope
hivemind skilify unpull --user alice@example.com # remove only this author's pulls
hivemind skilify unpull --users a@x.com,b@y.com # multiple authors
hivemind skilify unpull --not-mine # remove all pulls except your own
hivemind skilify unpull --dry-run # preview, no disk writes
hivemind skilify unpull --to project # operate on <cwd>/.claude/skills instead of global
hivemind skilify unpull --all # ALSO remove flat-layout (locally-mined) skills — destructive
hivemind skilify unpull --legacy-cleanup # ALSO remove pre-`--author`-layout `<projectkey>/` dirs from older skilify versions

Drift handling: if a manifest entry's directory was deleted out-of-band
(e.g. rm -rf by hand), the next unpull reports it as manifest-orphan
and prunes the entry from the manifest without errors.
Cross-project caveat: same (name, author) from two different projects
collides on disk under the new flat layout — the more recently pulled
row wins, and the prior SKILL.md is preserved as SKILL.md.bak. The
underlying row stays in the Deeplake skills table, so re-pulling from
the other project recovers it.
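The collision behaviour above (latest pull wins, prior file kept as `SKILL.md.bak`) can be sketched in a few lines, assuming a simple write-with-backup helper (name illustrative):

```python
from pathlib import Path

def write_with_backup(skill_dir: Path, content: str) -> None:
    """Write SKILL.md, preserving any existing copy as SKILL.md.bak."""
    skill_dir.mkdir(parents=True, exist_ok=True)
    target = skill_dir / "SKILL.md"
    if target.exists():
        target.replace(skill_dir / "SKILL.md.bak")  # atomic rename of the loser
    target.write_text(content)
```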
| Env var | Default | Effect |
|---|---|---|
| HIVEMIND_SKILIFY_EVERY_N_TURNS | 20 | Stop-counter threshold for mid-session worker fires |
| HIVEMIND_SKILLS_TABLE | skills | Deeplake table name for org-wide provenance |
| HIVEMIND_SKILIFY_WORKER=1 | unset | Recursion guard (set automatically inside the worker) |
| HIVEMIND_CURSOR_MODEL | auto | (cursor only) model passed to the cursor-agent gate call |
| HIVEMIND_HERMES_PROVIDER | openrouter | (hermes only) provider passed to the hermes gate call |
| HIVEMIND_HERMES_MODEL | anthropic/claude-haiku-4-5 | (hermes only) model passed to hermes |
The skilify worker calls each agent's own headless CLI for the gate
prompt — so a user who only has codex / cursor / hermes installed
never needs claude in their PATH:
| Agent | Gate command |
|---|---|
| claude_code | claude -p <prompt> --no-session-persistence --model haiku --permission-mode bypassPermissions |
| codex | codex exec --dangerously-bypass-approvals-and-sandbox <prompt> |
| cursor | cursor-agent --print --model <HIVEMIND_CURSOR_MODEL> --force --output-format text <prompt> |
| hermes | hermes -z <prompt> --provider <HIVEMIND_HERMES_PROVIDER> -m <HIVEMIND_HERMES_MODEL> --yolo --ignore-user-config |
For hermes via OpenRouter (the default), set OPENROUTER_API_KEY in
the environment; the worker inherits the parent process env. Other
providers (anthropic, openai, etc.) need their respective API keys.
Worker activity logs to ~/.claude/hooks/skilify.log. Each line shows
which session pool was mined, what the gate decided, and whether a file
was written.
| Agent | Mechanism | Hooks/tools wired |
|---|---|---|
| Claude Code | Marketplace plugin | SessionStart · UserPromptSubmit · PreToolUse · PostToolUse · Stop · SubagentStop · SessionEnd |
| Codex | ~/.codex/hooks.json | SessionStart · UserPromptSubmit · PreToolUse(Bash) · PostToolUse · Stop |
| OpenClaw | Native extension at ~/.openclaw/extensions/hivemind/ | agent_end capture · before_agent_start recall · contracted tools (hivemind_search/read/index) |
| Cursor (1.7+) | ~/.cursor/hooks.json | sessionStart · beforeSubmitPrompt · postToolUse · afterAgentResponse · stop · sessionEnd |
| Hermes | Skill at ~/.hermes/skills/hivemind-memory/ | recall via grep on ~/.deeplake/memory/ |
| pi | ~/.pi/agent/AGENTS.md + skill | recall via grep on ~/.deeplake/memory/ |
hivemind/
├── src/ ← shared core (API client, auth, config, SQL utils)
│ ├── hooks/ ← Claude Code hooks
│ ├── hooks/codex/ ← Codex hooks
│ ├── hooks/cursor/ ← Cursor hooks
│ ├── hooks/hermes/ ← Hermes shell hooks
│ ├── hooks/pi/ ← pi wiki-worker (extension lives in pi/extension-source/)
│ ├── embeddings/ ← nomic embed-daemon + protocol + SQL helpers
│ ├── mcp/ ← MCP server (used by Hermes; available to any future MCP-aware client)
│ ├── commands/ ← auth, auth-creds, auth-login, session-prune
│ └── cli/ ← unified `hivemind install` CLI + per-agent installers
├── claude-code/ ← Claude Code plugin source (marketplace-distributed)
├── codex/ ← Codex plugin build output (npm-distributed)
├── cursor/ ← Cursor plugin build output (npm-distributed)
├── hermes/ ← Hermes plugin build output (npm-distributed)
├── mcp/ ← MCP server build output (shared by Hermes + future MCP clients)
├── openclaw/ ← OpenClaw plugin source + build output (ClawHub-distributed)
├── pi/ ← pi extension source (ships raw .ts; pi compiles at load)
└── bundle/ ← unified `hivemind` CLI build output
- SQL values escaped with `sqlStr()`, `sqlLike()`, `sqlIdent()`
- ~70 allowlisted builtins run in the virtual FS; unrecognized commands are denied
- Credentials stored with mode `0600`, config dir with mode `0700`
- Device flow login: no tokens in environment or code
- `HIVEMIND_CAPTURE=false` fully disables data collection
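For context on the first bullet, string-literal escaping of this kind typically doubles embedded single quotes and, for LIKE patterns, additionally escapes the wildcards. A Python sketch of the idea; the shipped `sqlStr()` / `sqlLike()` helpers are TypeScript and may differ in detail:

```python
def sql_str(value: str) -> str:
    """Quote a string literal for SQL by doubling single quotes."""
    return "'" + value.replace("'", "''") + "'"

def sql_like(value: str) -> str:
    """Also escape LIKE wildcards so user input matches literally."""
    escaped = (value.replace("\\", "\\\\")
                    .replace("%", "\\%")
                    .replace("_", "\\_"))
    return sql_str(escaped)
```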
git clone https://github.com/activeloopai/hivemind.git
cd hivemind
npm install
npm run build # tsc + esbuild → claude-code/bundle/ + codex/bundle/ + cursor/bundle/ + openclaw/dist/ + mcp/bundle/ + bundle/cli.js
npm test # vitest

Test locally with Claude Code:

claude --plugin-dir claude-code

Interactive shell against Deeplake:

npm run shell

Apache License 2.0 — © Activeloop, Inc. See LICENSE for details.