Feature Request: Parameter Aliases for MCP Tools (Pydantic AliasChoices)
## Problem
AI models consistently fail on first use of BM MCP tools because parameter names don't match training-data conventions. The failure cascade is:

1. The model reaches for the training-predicted parameter name (e.g., `offset` instead of `page`)
2. A Pydantic validation error fires
3. The model distrusts the BM tool
4. The model falls back to filesystem reads, bypassing BM's knowledge graph entirely
This is the single highest-frequency error across sessions. Tested across Opus 4.6 and Sonnet 4.6, with and without extended thinking; every model hits the same parameters on first attempt.
## Affected Parameters

### `read_note`

| Canonical | Models try | Notes |
|---|---|---|
| `page` | `offset`, `page_number` | Universal pagination convention in training data |
| `page_size` | `limit`, `per_page` | Same |
### `edit_note`

| Canonical | Models try | Notes |
|---|---|---|
| `find_text` | `find`, `old_text`, `old_content`, `search` | Every instance reaches for the wrong name first |
| `content` (as replacement) | `new_content`, `replacement`, `replace_with` | `content` is ambiguous when used as "replacement text" |
| `section` | `section_heading`, `heading` | Models expect the more descriptive name |
**Other tools likely affected:** I haven't audited every tool, but any parameter whose canonical name diverges from common API conventions will hit the same pattern. A systematic audit across all MCP tool parameters would be valuable.
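For context, the failure mode is easy to reproduce with a strict Pydantic model. This is a minimal sketch, not BM's actual model — `extra='forbid'` here stands in for whatever strict validation the MCP layer applies:

```python
from pydantic import BaseModel, ConfigDict, ValidationError


class ReadNoteParams(BaseModel):
    # 'forbid' approximates strict MCP-side schema validation (assumption)
    model_config = ConfigDict(extra='forbid')

    page: int = 1
    page_size: int = 10


try:
    # The training-predicted name, instead of the canonical 'page'
    ReadNoteParams(offset=2)
except ValidationError as e:
    # The model sees a rejection like this on its very first call
    print(e.errors()[0]['type'])  # extra_forbidden
```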
## Proposed Fix

Pydantic v2 `AliasChoices` — accept common alternatives alongside the canonical name. Zero runtime cost, zero breaking changes, and it eliminates the entire error class.
```python
from typing import Optional

from pydantic import AliasChoices, BaseModel, Field


# read_note example
class ReadNoteParams(BaseModel):
    page: int = Field(
        default=1,
        validation_alias=AliasChoices('page', 'offset', 'page_number')
    )
    page_size: int = Field(
        default=10,
        validation_alias=AliasChoices('page_size', 'limit', 'per_page')
    )


# edit_note example
class EditNoteParams(BaseModel):
    find_text: Optional[str] = Field(
        default=None,
        validation_alias=AliasChoices('find_text', 'find', 'old_text', 'old_content', 'search')
    )
    content: str = Field(
        validation_alias=AliasChoices('content', 'new_content', 'replacement', 'replace_with')
    )
    section: Optional[str] = Field(
        default=None,
        validation_alias=AliasChoices('section', 'section_heading', 'heading')
    )
```

## Why This Matters
- **First-use experience:** Validation errors on initial tool calls poison model trust in BM tools for the rest of the session
- **Silent degradation:** Filesystem fallback bypasses the knowledge graph entirely — models read raw files instead of going through BM's index, search, and relation tracking
- **Token waste:** Models spend tokens on error recovery, retries, and workarounds instead of productive work
- **Documentation doesn't fix it:** The clash is in the model weights, not in the model's willingness to read docs. Even with extensive tool descriptions, the training-predicted name fires before the schema is consulted
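With `AliasChoices` in place, the canonical and the training-predicted spellings resolve to identical validated parameters. A quick sketch against the `read_note` model from this proposal (names as proposed here, not BM's shipped code):

```python
from pydantic import AliasChoices, BaseModel, Field


class ReadNoteParams(BaseModel):
    page: int = Field(
        default=1,
        validation_alias=AliasChoices('page', 'offset', 'page_number')
    )
    page_size: int = Field(
        default=10,
        validation_alias=AliasChoices('page_size', 'limit', 'per_page')
    )


# Canonical names and training-predicted aliases both validate
canonical = ReadNoteParams.model_validate({'page': 2, 'page_size': 25})
aliased = ReadNoteParams.model_validate({'offset': 2, 'limit': 25})
assert canonical == aliased  # same parsed parameters either way
```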
## Principle
MCP tools built for AI consumption should meet the model where it is. If every consumer guesses the same wrong name, the interface is wrong, not the consumer.
## Environment
- Basic Memory 0.19.0 (PyPI)
- Claude Code (Opus 4.6, Sonnet 4.6)
- Tested across multiple sessions over several weeks