2 changes: 2 additions & 0 deletions README.md
@@ -64,6 +64,7 @@ Skills work seamlessly with **Command Code** providing consistent, high-quality

### Content & Communication

- **[Memory Engine](skills/memory-engine/)** - Store and retrieve long-term knowledge across sessions with semantic naming and hybrid search. Use when you need to remember decisions, recall past context, or search what was discussed or built in a previous session.
- **[Content Research Writer](skills/content-research-writer/)** - Assists in writing high-quality content by conducting research, adding citations, improving hooks, and providing section-by-section feedback.
- **[Internal Comms](skills/internal-comms/)** - Helps write internal communications including 3P updates, company newsletters, FAQs, status reports, and project updates using company-specific formats.
- **[Meeting Insights Analyzer](skills/meeting-insights-analyzer/)** - Analyzes meeting transcripts to uncover behavioral patterns including conflict avoidance, speaking ratios, filler words, and leadership style.
@@ -98,6 +99,7 @@ Skills work seamlessly with **Command Code** providing consistent, high-quality

### Workflow Automation

- **[Site Automation](skills/site-automation/)** - Manage website content via automation APIs. Sync GitHub projects, generate changelogs, create posts, send newsletters, and manage comments. Use for any website content management task.
- **[File Organizer](skills/file-organizer/)** - Intelligently organizes files and folders by understanding context, finding duplicates, and suggesting better organizational structures.
- **[Invoice Organizer](skills/invoice-organizer/)** - Automatically organizes invoices and receipts for tax preparation by reading files, extracting information, and renaming consistently.
- **[Raffle Winner Picker](skills/raffle-winner-picker/)** - Randomly selects winners from lists, spreadsheets, or Google Sheets for giveaways and contests with cryptographically secure randomness.
129 changes: 129 additions & 0 deletions skills/memory-engine/SKILL.md
@@ -0,0 +1,129 @@
---
name: memory-engine
description: >
Store and retrieve long-term knowledge across sessions with semantic naming and hybrid search.
Use when you need to remember something for the future, recall past decisions, look up prior context,
or search what was discussed or built in a previous session.
user-invocable: true
---

# Memory Engine 🧠

Persistent, searchable memory across all sessions. Storage uses semantic file naming; retrieval uses hybrid search (semantic with keyword fallback).

## When to Use This Skill

- "Remember that…" / "Don't forget…" → store a memory
- "What did we decide about…" / "What have we talked about…" → search memory
- "What's the status of…" → search memory
- Start of any new session — search relevant topics before answering
- After completing important work — store a summary immediately

## How It Works

**Storage:** Memory files use a semantic naming convention:
```
YYYY-MM-DD__[CATEGORY]__[TOPIC]__[CONTEXT].md
```

**Search:** Hybrid approach:
1. Try semantic search first (embeddings-based)
2. Expand query with synonyms if sparse results
3. Fall back to keyword search if semantic unavailable
4. Return diagnostics showing which methods were used
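
The fallback chain above can be sketched as follows. This is an illustrative sketch only; `semantic_search`, `expand_query`, and `keyword_search` are hypothetical stand-ins for the engine's internals, passed in as callables:

```python
def hybrid_search(query, semantic_search, expand_query, keyword_search):
    """Illustrative hybrid search: semantic first, synonym expansion, keyword fallback."""
    diagnostics = []
    results = []
    try:
        results = semantic_search(query)
        diagnostics.append("semantic")
        if len(results) < 3:  # sparse results: expand the query with synonyms and retry
            results = semantic_search(expand_query(query))
            diagnostics.append("semantic+expanded")
    except RuntimeError:  # embeddings API unavailable
        pass
    if not results:
        results = keyword_search(query)  # offline keyword fallback
        diagnostics.append("keyword")
    return results, diagnostics
```

The returned `diagnostics` list mirrors step 4: callers can see exactly which methods ran before the final result came back.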

## Commands

### Store a Memory
```bash
memory_store "[fact or summary]" \
--category [briefing|strategy|audit|vision|code|project|bug|feature|research|decision|general] \
--topic [your-topic] \
--context [optional-descriptor]
```

### Search Memory (Enhanced)
```bash
memory_search "[query]" --limit 10 --diagnostics
```

### Search Memory (Basic Fallback)
```bash
memory_search_basic "[query]" --limit 10
```

### Check Memory Health
```bash
memory_health
```

## File Naming Convention

Memory files follow a semantic naming pattern:

```
YYYY-MM-DD__[CATEGORY]__[TOPIC]__[CONTEXT].md
```

**Categories:** briefing, strategy, audit, vision, code, project, bug, feature, research, decision, general

**Examples:**
- `2026-02-25__briefing__morning-signal__daily.md`
- `2026-02-24__strategy__architecture__design.md`
- `2026-02-24__audit__database__schema.md`
- `2026-02-20__decision__authentication__oauth.md`
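
Because the pattern is purely positional, filenames can be assembled and parsed mechanically. A minimal sketch (not the skill's actual implementation):

```python
from datetime import date

def memory_filename(category, topic, context, day=None):
    """Build a YYYY-MM-DD__CATEGORY__TOPIC__CONTEXT.md filename."""
    day = day or date.today().isoformat()
    return f"{day}__{category}__{topic}__{context}.md"

def parse_memory_filename(name):
    """Split a memory filename back into its four fields."""
    day, category, topic, context = name[:-3].split("__")  # strip ".md", split on "__"
    return {"date": day, "category": category, "topic": topic, "context": context}
```

Keeping topic and context free of `__` is what makes the round trip unambiguous.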

## Examples

```bash
# Store a decision
memory_store \
"Decided to use semantic file naming with YYYY-MM-DD__CATEGORY__TOPIC__CONTEXT pattern" \
--category decision --topic memory-engine --context naming

# Store a briefing summary
memory_store \
"Morning briefing sent successfully to 142 subscribers with 8 key insights" \
--category briefing --topic daily-briefing --context summary

# Search for past decisions
memory_search "authentication oauth" --limit 5

# Exploratory search with diagnostics
memory_search "what have we decided about architecture" --limit 5 --diagnostics
```

## Search Behavior

The enhanced search engine automatically:

1. **Detects query type** (factual / conceptual / exploratory) and adjusts relevance threshold
2. **Tries semantic search** first using embeddings
3. **Expands query** with synonyms if results are sparse
4. **Falls back to keyword search** if semantic search is unavailable or returns nothing
5. **Returns diagnostics** showing which methods were used and why
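
One plausible shape for step 1 is a keyword heuristic that maps query type to a relevance threshold. This is an illustrative guess at the behavior, not the engine's actual rules:

```python
def classify_query(query):
    """Rough heuristic: exploratory queries get looser thresholds than factual ones."""
    q = query.lower()
    if any(w in q for w in ("what have we", "anything about", "history of")):
        return "exploratory", 0.3   # cast a wide net
    if any(w in q for w in ("why ", "approach", "strategy")):
        return "conceptual", 0.5
    return "factual", 0.7           # precise match expected
```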

## Guardrails

- **Always store** important facts, decisions, and task completions immediately after they happen
- **Always search** before answering questions about past sessions — never assume context is absent
- Use `--diagnostics` when a search returns no results to understand why
- If enhanced search is unavailable, fall back to basic keyword search
- Store the **summary of significant changes**, not just raw data — make it human-readable for future retrieval
- Keep memory entries concise but complete (50-500 words)
- Use consistent category and topic names across sessions for better search results

## Integration

This skill works best when:
- Invoked at the start of each session to recall relevant context
- Used after completing major work to store summaries
- Combined with other skills that need historical context
- Integrated into agent workflows for persistent learning

## Technical Notes

- Semantic search requires embeddings API (e.g., Google Generative AI)
- Keyword search works offline without external dependencies
- Memory files are stored as markdown for human readability
- Search results are scored by relevance and recency
198 changes: 198 additions & 0 deletions skills/site-automation/SKILL.md
@@ -0,0 +1,198 @@
---
name: site-automation
description: >
Manage website content via automation APIs. Sync GitHub projects, generate changelogs,
create posts, send newsletters, manage comments, and check health. Use when any task
involves website content management or newsletter operations.
user-invocable: true
---

# Site Automation 🌐

Manage website content through automation APIs. Sync GitHub projects, generate changelogs, create posts, send newsletters, and manage comments.

## When to Use This Skill

- "Sync the latest GitHub projects"
- "Generate a changelog from recent commits"
- "Create a blog post"
- "Send a newsletter"
- "Check recent comments on posts"
- "Reply to a comment"
- "List newsletter subscribers"

## Prerequisites

Set up environment variables:
```bash
export SITE_BASE_URL="http://localhost:3000" # or your site URL
export CLAW_AUTOMATION_KEY="your-api-key" # Bearer token for API auth
export GITHUB_USERNAME="your-github-user" # GitHub user for project sync
```

Store these in your `.env` file for persistence.
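
Since every command depends on these variables, a fail-fast check before doing any work can save a confusing mid-operation error. A minimal sketch:

```python
import os

REQUIRED = ("SITE_BASE_URL", "CLAW_AUTOMATION_KEY", "GITHUB_USERNAME")

def check_env(env=None):
    """Return the list of missing or empty required variables (empty when configured)."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]
```

Run it once at startup and abort with the returned names if the list is non-empty.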

## Commands

### Health Check
```bash
site_automation health
```

### Full Sync (Projects + Changelog)
```bash
site_automation full-sync
```

Syncs GitHub projects to the database and generates a changelog from recent commits.

### Sync Projects Only
```bash
site_automation sync-projects
```

Pulls the latest GitHub repositories and updates the project database.

### Sync Changelog Only
```bash
site_automation sync-changelog
```

Generates changelog entries from recent git commits.

### Create a Blog Post
```bash
site_automation create-post \
--slug post-slug \
--title "Post Title" \
--content-file /path/to/content.md \
--type blog \
--tags tag1 tag2 tag3
```

### Create a Changelog Post
```bash
site_automation create-post \
--slug changelog-slug \
--title "Changelog Title" \
--content-file /path/to/content.md \
--type changelog
```

### Send Newsletter
```bash
site_automation send-newsletter \
--subject "Newsletter Subject" \
--html-file /path/to/email.html \
--idempotency-key unique-key-for-deduplication
```

### List Subscribers
```bash
site_automation list-subscribers
```

### List Recent Comments
```bash
site_automation list-comments --since 2026-02-20
```

### Reply to a Comment
```bash
site_automation reply-comment \
--post-id POST_UUID \
--body "Your reply text" \
--source-comment-id COMMENT_UUID
```

## API Endpoints (Direct)

If you need to call the API directly:

| Method | Endpoint | Purpose |
|--------|----------|---------|
| GET | `/api/automation/health` | Health check |
| POST | `/api/automation/projects/sync` | GitHub → projects DB |
| POST | `/api/automation/changelog/github` | Commits → changelog posts |
| POST | `/api/automation/posts` | Create blog/changelog post |
| GET | `/api/automation/comments` | Poll recent comments |
| POST | `/api/automation/comments/respond` | Reply to comment |
| GET | `/api/automation/newsletter/subscribers` | List subscribers |
| POST | `/api/automation/newsletter/send` | Send newsletter |

**Auth header:** `Authorization: Bearer <CLAW_AUTOMATION_KEY>`
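
Calling an endpoint directly is just a matter of attaching the bearer token. A minimal sketch using Python's standard library (the request is only built here, not sent, so it works without a live server):

```python
import os
import urllib.request

def build_request(path, method="GET", key=None, base=None):
    """Build an authenticated urllib request for an automation endpoint."""
    base = base or os.environ.get("SITE_BASE_URL", "http://localhost:3000")
    key = key or os.environ.get("CLAW_AUTOMATION_KEY", "")
    req = urllib.request.Request(base + path, method=method)
    req.add_header("Authorization", f"Bearer {key}")  # required on every endpoint
    return req

# e.g. urllib.request.urlopen(build_request("/api/automation/health"))
```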

## Workflow: Publish Article from Intelligence

1. Generate article content (from briefing, research, or manual writing)
2. Write content to a temporary `.md` file
3. Call `create-post` with the file
4. Optionally send newsletter linking to the new post
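
Steps 2-4 can be scripted. The sketch below only assembles the CLI invocations rather than running them, and passes the markdown path to `--html-file` purely as a placeholder (a real workflow would render HTML first); the `post-{slug}` idempotency key is likewise an assumed convention:

```python
import tempfile

def publish_commands(title, slug, markdown, announce=None):
    """Return the site_automation commands for the publish workflow (steps 2-4)."""
    with tempfile.NamedTemporaryFile("w", suffix=".md", delete=False) as f:
        f.write(markdown)                       # step 2: temporary .md file
        path = f.name
    cmds = [["site_automation", "create-post", "--slug", slug,
             "--title", title, "--content-file", path, "--type", "blog"]]
    if announce:                                # step 4: optional newsletter
        cmds.append(["site_automation", "send-newsletter",
                     "--subject", announce, "--html-file", path,
                     "--idempotency-key", f"post-{slug}"])
    return cmds
```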

## Examples

```bash
# Full sync before publishing
site_automation full-sync

# Create a blog post
site_automation create-post \
--slug "ai-trends-2026" \
--title "AI Trends in 2026" \
--content-file ./article.md \
--type blog \
--tags ai trends 2026

# Send newsletter
site_automation send-newsletter \
--subject "Weekly Digest — Feb 25, 2026" \
--html-file ./newsletter.html \
--idempotency-key "digest-2026-02-25"

# Check recent comments
site_automation list-comments --since 2026-02-20

# Reply to a comment
site_automation reply-comment \
--post-id "550e8400-e29b-41d4-a716-446655440000" \
--body "Thanks for the feedback!" \
--source-comment-id "6ba7b810-9dad-11d1-80b4-00c04fd430c8"
```

## Integration Patterns

### With Briefing Pipeline
The daily briefing automatically runs `full-sync` before synthesizing the newsletter. This ensures:
- New GitHub projects are synced to the database
- Recent commits become changelog entries
- The subscriber list is current before sending

### With Content Pipeline
Use in content creation workflows:
1. Research/write content
2. Store in temporary file
3. Call `create-post` to publish
4. Trigger newsletter send if needed

### With Comment Moderation
Monitor comments and respond:
1. Poll recent comments with `list-comments`
2. Review for moderation
3. Reply with `reply-comment` or flag for manual review
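
The review step can be partly automated with a triage pass. This is a hypothetical heuristic, not part of the API; the `banned` word list is an assumption for illustration:

```python
def triage_comments(comments, banned=("spam", "casino")):
    """Split polled comments into auto-reply candidates and manual-review flags."""
    reply, review = [], []
    for c in comments:
        body = c.get("body", "").lower()
        if any(word in body for word in banned):
            review.append(c)   # flag for manual moderation
        else:
            reply.append(c)    # safe to answer with reply-comment
    return reply, review
```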

## Response Format

After execution, report:
- Counts (synced, created, sent)
- Any errors or warnings
- Next steps if applicable

## Guardrails

- Always run `health` check before major operations
- Use idempotency keys for newsletter sends to prevent duplicates
- Validate markdown content before posting
- Store API key securely (never hardcode)
- Test with `--draft` flag before publishing
- Monitor rate limits on GitHub API calls
- Keep changelog entries concise and user-focused