docs(TSP-1164): Add documentation for 19 new MCP evaluation tools #598
Closed
claude[bot] wants to merge 1 commit into main from
Conversation
- Create new page: `programmatic-evals.mdx` covering the full evaluation lifecycle via MCP (test sets, test cases, evaluator rules, tool simulation, running evaluations, monitoring batch results)
- Add a Programmatic Access section to `evals.mdx` linking to the new page
- Update `agent-skills.mdx`: bump tool count 46 → 65, expand the Evals card
- Update `docs.json`: nest evals pages under an Evals group

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Contributor
Collaborator
Consolidated into #608 as part of trimming the docs-drafter queue. The branch is kept; this PR can be reopened if needed.
Summary
- `build/agents/build-your-agent/evals/programmatic-evals.mdx` — documents the 19 new MCP tools covering the full evaluation lifecycle: managing test sets and test cases, configuring evaluator rules and tool simulations, running evaluations, and monitoring batch results
- `build/agents/build-your-agent/evals.mdx` — added a Programmatic Access section near the top linking to the new page
- `integrations/mcp/programmatic-gtm/agent-skills.mdx` — bumped tool count from 46 → 65, expanded the Evals card description to reflect the comprehensive new capabilities
- `docs.json` — converted the flat `evals` nav entry into an Evals group containing both the main page and the new programmatic-evals sub-page

Context
These docs cover the 19 new MCP tools added in RelevanceAI/relevance-api-node#13459. The new tools fill a major gap by enabling programmatic access to evals, supporting CI/CD integration and automated testing frameworks.
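For CI/CD use, each of these tools would be invoked through MCP's standard `tools/call` JSON-RPC method. The sketch below builds such a request payload; the tool name `create_test_set` and its arguments are hypothetical placeholders — the actual tool names are defined in the linked relevance-api-node PR.

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, the standard MCP
    mechanism for invoking a server-exposed tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical invocation of one of the eval-lifecycle tools from a CI script.
payload = build_tool_call("create_test_set", {"name": "checkout-agent-regression"})
print(payload)
```

A CI pipeline would send this payload over the MCP transport (stdio or HTTP) and chain further calls for adding test cases, running the evaluation, and polling batch results.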
Linear issue: https://linear.app/relevance/issue/TSP-1164/
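The `docs.json` change from the summary (nesting the evals pages under an Evals group) would look roughly like the fragment below. This is a sketch of Mintlify's navigation-group shape; the exact schema and surrounding keys depend on the docs.json version in use.

```json
{
  "group": "Evals",
  "pages": [
    "build/agents/build-your-agent/evals",
    "build/agents/build-your-agent/evals/programmatic-evals"
  ]
}
```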
Test plan
`/build/agents/build-your-agent/evals/programmatic-evals`

🤖 Generated with Claude Code