
epic: Evolve into Python Code Quality MCP #40

@nnennandukwe

Description

Vision

Transform this project from an MCP workshop demo into a Python Code Quality MCP - a suite of deep semantic analysis tools optimized for Pythonic code, exposed via MCP for AI agent consumption.

Tagline: Deep semantic analysis for Pythonic code

Why This Direction

  1. Astroid infrastructure already exists - The performance profiler's semantic analysis (type inference, call resolution) can power additional tools
  2. MCP is the right interface - AI agents can run these tools, interpret results, and generate fixes
  3. Python-specific focus - Go deep on idioms and patterns rather than shallow multi-language support
  4. Differentiation - Ruff/pylint are fast but largely syntactic; this suite does deep semantic analysis that catches what those linters miss
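To illustrate the kind of semantic check this enables, here is a minimal sketch using only the stdlib `ast` module (the existing profiler uses astroid's richer inference; this simplified version is an assumption, not the actual implementation) that flags `time.sleep()` calls inside `async def` bodies - one of the blocking-I/O-in-async patterns `performance_check` targets:

```python
import ast

def find_blocking_sleep(source: str) -> list[int]:
    """Return line numbers of time.sleep() calls inside async functions."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.AsyncFunctionDef):
            for call in ast.walk(node):
                if (isinstance(call, ast.Call)
                        and isinstance(call.func, ast.Attribute)
                        and call.func.attr == "sleep"
                        and isinstance(call.func.value, ast.Name)
                        and call.func.value.id == "time"):
                    hits.append(call.lineno)
    return hits

code = """
import time

async def fetch():
    time.sleep(1)  # blocks the event loop
"""
print(find_blocking_sleep(code))  # [5]
```

Astroid's value over plain `ast` is inference: it can resolve `from time import sleep` aliases and calls routed through other names, which a purely syntactic match like the one above would miss.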

Planned Tool Suite

| Tool | Status | Description |
|------|--------|-------------|
| `performance_check` | ✅ Exists | N+1 queries, blocking I/O in async, inefficient loops |
| `pythonic_check` | 🔲 Planned | Idiomatic Python patterns and anti-patterns |
| `security_scan` | 🔲 Planned | SQL injection, `pickle`, `eval`, `shell=True`, hardcoded secrets |
| `complexity_analysis` | 🔲 Planned | Cyclomatic complexity, nesting depth, function length |
| `dead_code_detection` | 🔲 Planned | Unused imports, functions, unreachable code |
| `type_coverage` | 🔲 Planned | Untyped public APIs, missing annotations |
| `full_analysis` | 🔲 Planned | Unified tool that runs all checkers |
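As a rough sketch of what `security_scan` might look for (rule names and signatures here are illustrative assumptions, not the planned implementation), a stdlib-`ast` pass can already flag `eval()` calls and `shell=True` keyword arguments:

```python
import ast

def security_scan(source: str) -> list[tuple[int, str]]:
    """Flag eval() calls and any call passing shell=True (simplified sketch)."""
    tree = ast.parse(source)
    findings = []
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        # Direct use of the eval builtin
        if isinstance(node.func, ast.Name) and node.func.id == "eval":
            findings.append((node.lineno, "use of eval()"))
        # shell=True on any call (e.g. subprocess.run)
        for kw in node.keywords:
            if (kw.arg == "shell"
                    and isinstance(kw.value, ast.Constant)
                    and kw.value.value is True):
                findings.append((node.lineno, "shell=True"))
    return findings

sample = (
    "import subprocess\n"
    "subprocess.run(cmd, shell=True)\n"
    "eval(user_input)\n"
)
print(security_scan(sample))
```

The astroid-backed version could go further, e.g. resolving whether a tainted string actually reaches a SQL execution call rather than just matching call names.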

Architecture

```
Python Code Quality MCP
│
├── Core (shared)
│   ├── Astroid utilities (AST parsing, inference)
│   ├── Issue schema (unified output format)
│   └── Pattern matching helpers
│
├── Tools
│   ├── performance_check/
│   ├── pythonic_check/
│   ├── security_scan/
│   └── ...
│
└── MCP Server
    └── Tool registration and routing
```
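The MCP Server layer is, at its core, a name-to-checker dispatch table. A minimal sketch of the registration-and-routing idea (the real server would register tools through the MCP SDK; the helper names below are hypothetical):

```python
from typing import Callable

# Hypothetical registry mapping MCP tool names to checker callables.
TOOLS: dict[str, Callable[[str], list[dict]]] = {}

def register(name: str):
    """Decorator that adds a checker to the tool registry."""
    def deco(fn):
        TOOLS[name] = fn
        return fn
    return deco

@register("pythonic_check")
def pythonic_check(source: str) -> list[dict]:
    # Real checker logic would live in its own tools/ package.
    return []

def route(tool_name: str, source: str) -> list[dict]:
    """Dispatch an incoming MCP tool call to the matching checker."""
    return TOOLS[tool_name](source)

print(route("pythonic_check", "x = 1"))  # []
```

Keeping each checker behind the same `source -> list[issue]` signature is what makes a later `full_analysis` tool trivial: it just iterates the registry.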

Implementation Order

  1. Refactor shared Astroid utilities into core module
  2. Define unified issue output schema across tools
  3. Implement pythonic_check tool
  4. Implement additional tools (security, complexity, etc.)
  5. Add full_analysis unified tool
  6. Update project branding/README (optional)
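Step 2's unified issue schema could be as small as a shared dataclass that every tool emits; the field names below are illustrative assumptions, not the final schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class Issue:
    """Unified output record shared by all checkers (fields are illustrative)."""
    tool: str              # e.g. "performance_check"
    rule: str              # machine-readable rule id
    message: str           # explanation an AI agent can act on
    file: str
    line: int
    severity: str = "warning"

issue = Issue(
    tool="performance_check",
    rule="blocking-io-in-async",
    message="time.sleep() blocks the event loop; use asyncio.sleep()",
    file="app.py",
    line=42,
)
print(asdict(issue)["rule"])  # blocking-io-in-async
```

A flat, JSON-serializable record like this keeps the MCP responses uniform, so agents can parse results from any tool (or from `full_analysis`) with one code path.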

Success Criteria

  • At least 3 code quality tools implemented
  • Shared infrastructure used by all tools
  • Consistent output format across tools
  • AI agents can effectively use tools to improve code

Related Issues

Links will be added as child issues are created
