feat: add complexity analyzer tool #53
nnennandukwe wants to merge 1 commit into feature/41-core-module from
Conversation
Adds a new complexity_analysis MCP tool that calculates cyclomatic complexity, cognitive complexity, function length, parameter count, nesting depth, and class metrics for Python source code. Uses Astroid for AST analysis via the shared core module. Includes configurable thresholds and actionable suggestions for reducing complexity.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
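For orientation, the core counting idea behind cyclomatic complexity can be sketched with the stdlib `ast` module. The PR itself walks Astroid ASTs via the shared core module; the function below is a simplified, dependency-free illustration, not the PR's implementation:

```python
import ast

def cyclomatic_complexity(source: str, func_name: str) -> int:
    """Rough cyclomatic complexity: 1 plus the number of branch points.

    Counts if/for/while/except as branches, and each extra operand of a
    boolean expression (``and``/``or``) as an additional branch.
    """
    tree = ast.parse(source)
    func = next(
        n for n in ast.walk(tree)
        if isinstance(n, ast.FunctionDef) and n.name == func_name
    )
    score = 1
    for node in ast.walk(func):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
            score += 1
        elif isinstance(node, ast.BoolOp):
            score += len(node.values) - 1
    return score

src = """
def check(x, y):
    if x > 0 and y > 0:
        return "both"
    for i in range(x):
        if i % 2:
            return i
    return None
"""
print(cyclomatic_complexity(src, "check"))  # 1 + if + and + for + if = 5
```

Cognitive complexity differs mainly in that it also weights nesting depth, so a branch inside a loop costs more than one at the top level.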
Review Summary by Qodo

Add complexity analyzer MCP tool with configurable thresholds

Walkthrough

Description
• Adds new complexity_analysis MCP tool for measuring Python code complexity
• Implements cyclomatic and cognitive complexity calculators using Astroid AST
• Detects multiple complexity issues: high complexity, long functions, too many parameters, deep nesting, large classes, deep inheritance
• Integrates tool into MCP server with configurable thresholds and severity scaling
• Includes 44 comprehensive tests covering calculators, metrics, and server integration

Diagram

```mermaid
flowchart LR
    Source["Python Source Code"]
    Parse["Parse with Astroid"]
    Cyclo["CyclomaticCalculator"]
    Cog["CognitiveCalculator"]
    Metrics["analyze_complexity"]
    Issues["Complexity Issues"]
    Server["MCP Server"]
    Source --> Parse
    Parse --> Cyclo
    Parse --> Cog
    Cyclo --> Metrics
    Cog --> Metrics
    Metrics --> Issues
    Issues --> Server
```
File Changes

1. src/workshop_mcp/complexity_analysis/__init__.py
Code Review by Qodo

1. Module not under tools/

```python
"""Complexity analysis tools for measuring Python code complexity metrics."""

__version__ = "0.1.0"

from .calculator import CognitiveCalculator, CyclomaticCalculator
from .metrics import ClassMetrics, FileMetrics, FunctionMetrics, analyze_complexity
from .patterns import ComplexityCategory

__all__ = [
    "CyclomaticCalculator",
    "CognitiveCalculator",
    "FunctionMetrics",
    "ClassMetrics",
    "FileMetrics",
    "ComplexityCategory",
    "analyze_complexity",
]
```
1. Module not under tools/ 📎 Requirement gap ✓ Correctness

• The new complexity analyzer code is added under src/workshop_mcp/complexity_analysis/ instead of the required src/workshop_mcp/tools/complexity_analysis/ package path.
• This breaks the expected project directory structure for tools and can cause import/discovery inconsistencies for MCP tool modules.
Agent Prompt
## Issue description
The complexity analyzer module is not located in the required tool package path `src/workshop_mcp/tools/complexity_analysis/`.
## Issue Context
Compliance requires tool implementations to exist under the `src/workshop_mcp/tools/` directory structure. The current module lives at `src/workshop_mcp/complexity_analysis/`, which does not meet that requirement.
## Fix Focus Areas
- src/workshop_mcp/complexity_analysis/__init__.py[1-17]
- src/workshop_mcp/complexity_analysis/calculator.py[1-120]
- src/workshop_mcp/complexity_analysis/metrics.py[1-333]
- src/workshop_mcp/complexity_analysis/patterns.py[1-87]
- src/workshop_mcp/server.py[13-18]
ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
```python
if file_path:
    try:
        self.path_validator.validate_exists(file_path, must_be_file=True)
    except PathValidationError as e:
        return self._error_response(
            request_id,
            JsonRpcError(-32602, str(e)),
        )

cyclomatic_threshold = arguments.get("cyclomatic_threshold", 10)
cognitive_threshold = arguments.get("cognitive_threshold", 15)
max_function_length = arguments.get("max_function_length", 50)

try:
    logger.info(
        "Executing complexity analysis on %s",
        file_path or "source code",
    )

    # Read source from file if needed
    if file_path and not source_code:
        source_code = Path(file_path).read_text(encoding="utf-8")
```
2. Validated path not used 📘 Rule violation ⛨ Security

• The server validates file_path with self.path_validator.validate_exists(...) but then reads the file using Path(file_path).read_text(...) instead of the resolved/validated path returned by the validator.
• This undermines the "validate and resolve before use" requirement and can reintroduce path-handling risks (e.g., TOCTOU issues or unexpected resolution differences).
Agent Prompt
## Issue description
`file_path` is validated but the validated/resolved path is not used for the subsequent file read.
## Issue Context
`PathValidator.validate_exists()` returns a resolved `Path` that has been checked to be within allowed roots and to exist. The server currently discards that return value and reads from `Path(file_path)`.
## Fix Focus Areas
- src/workshop_mcp/server.py[651-673]
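The suggested fix is to reuse the resolved `Path` that validation returns, so the path that was checked is the path that gets read. A standalone sketch of the pattern, with a hypothetical validator standing in for the PR's `PathValidator` (the allowed root here is the system temp directory, purely for the demo):

```python
import tempfile
from pathlib import Path

# Assumption for this sketch: the allowed root is the temp directory.
ALLOWED_ROOT = Path(tempfile.gettempdir()).resolve()

def validate_exists(file_path: str, must_be_file: bool = True) -> Path:
    """Resolve and check the path, then return the resolved Path for reuse."""
    resolved = Path(file_path).resolve()
    if not resolved.is_relative_to(ALLOWED_ROOT):
        raise ValueError(f"{file_path} escapes the allowed root")
    if must_be_file and not resolved.is_file():
        raise ValueError(f"{file_path} is not a file")
    return resolved

def read_source(file_path: str) -> str:
    # Read from the *returned* validated Path, not from a second,
    # unvalidated resolution of the raw input string.
    validated = validate_exists(file_path, must_be_file=True)
    return validated.read_text(encoding="utf-8")
```

Reusing the returned object keeps validation and use on the same resolved path, closing the gap between `validate_exists(file_path)` and `Path(file_path).read_text(...)` that the review flags.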
```python
if method_count > DEFAULT_MAX_CLASS_METHODS:
    result.issues.append(
        ComplexityIssue(
            tool="complexity",
            category=ComplexityCategory.LARGE_CLASS.value,
            severity="warning",
            message=(
                f"Class '{class_info.name}' has {method_count} methods "
                f"(threshold: {DEFAULT_MAX_CLASS_METHODS})"
            ),
            line=class_info.line_number,
            function=None,
            suggestion="Consider splitting into smaller, focused classes",
        )
    )

if inheritance_depth > DEFAULT_MAX_INHERITANCE_DEPTH:
    result.issues.append(
        ComplexityIssue(
            tool="complexity",
            category=ComplexityCategory.DEEP_INHERITANCE.value,
            severity="warning",
            message=(
                f"Class '{class_info.name}' has inheritance depth of "
                f"{inheritance_depth} (threshold: {DEFAULT_MAX_INHERITANCE_DEPTH})"
            ),
            line=class_info.line_number,
            function=None,
            suggestion="Prefer composition over deep inheritance hierarchies",
        )
    )
```
3. Class issues lack metrics 📎 Requirement gap ✓ Correctness

• Issues emitted for class-level findings (e.g., large_class, deep_inheritance) do not include a metrics object, even though the compliance requirement expects issues to include structured metric context.
• This makes the output less actionable and less consistent for consumers expecting every issue to include structured metric data.
Agent Prompt
## Issue description
Class-level issues are emitted without a `metrics` object, violating the requirement that issues include structured metric context.
## Issue Context
`ComplexityIssue.metrics` exists but is not populated for `LARGE_CLASS` and `DEEP_INHERITANCE` issues.
## Fix Focus Areas
- src/workshop_mcp/complexity_analysis/metrics.py[264-294]
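A minimal sketch of the remediation, attaching a metrics payload to class-level issues. The dataclass below is a simplified stand-in for the PR's `ComplexityIssue` (most fields omitted), and the metric key names are illustrative, not taken from the PR:

```python
from dataclasses import dataclass, field

@dataclass
class ComplexityIssue:
    """Simplified stand-in for the PR's ComplexityIssue dataclass."""
    category: str
    message: str
    metrics: dict = field(default_factory=dict)

def large_class_issue(name: str, method_count: int, threshold: int) -> ComplexityIssue:
    # Attach the numbers that triggered the finding so consumers get
    # structured metric context, not just a formatted message.
    return ComplexityIssue(
        category="large_class",
        message=f"Class '{name}' has {method_count} methods (threshold: {threshold})",
        metrics={"method_count": method_count, "threshold": threshold},
    )
```

The same pattern would apply to `deep_inheritance` issues, with the measured depth and its threshold in the `metrics` dict.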
```python
cyclomatic_threshold = arguments.get("cyclomatic_threshold", 10)
cognitive_threshold = arguments.get("cognitive_threshold", 15)
max_function_length = arguments.get("max_function_length", 50)

try:
    logger.info(
        "Executing complexity analysis on %s",
        file_path or "source code",
    )

    # Read source from file if needed
    if file_path and not source_code:
        source_code = Path(file_path).read_text(encoding="utf-8")

    complexity_result = analyze_complexity(
        source_code,
        file_path=file_path,
        cyclomatic_threshold=cyclomatic_threshold,
        cognitive_threshold=cognitive_threshold,
        max_function_length=max_function_length,
    )
```
4. Threshold types unchecked 🐞 Bug ⛯ Reliability

• _execute_complexity_analysis forwards user-provided thresholds without validating that they are integers.
• If a client sends a non-int (e.g., a string), numeric comparisons in analyze_complexity will raise TypeError, which is not caught by the ValueError handler.
• This converts a client "invalid params" situation into a -32603 "Internal error", reducing reliability and debuggability.
Agent Prompt
### Issue description
`complexity_analysis` accepts numeric thresholds from JSON-RPC arguments but does not validate their types. Non-integer values can raise `TypeError` during numeric comparisons in `analyze_complexity`, which bypasses the current `except ValueError` handler and becomes an internal error.
### Issue Context
The MCP server should return `-32602 Invalid params` for client input errors, not `-32603 Internal error`.
### Fix Focus Areas
- src/workshop_mcp/server.py[660-707]
- src/workshop_mcp/complexity_analysis/metrics.py[152-174]
- src/workshop_mcp/complexity_analysis/patterns.py[51-67]
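One way to close this gap is to type-check each threshold before forwarding it, so bad input surfaces as `-32602 Invalid params` rather than leaking a `TypeError` into the internal-error path. A sketch, where `InvalidParams` is a hypothetical stand-in for raising `JsonRpcError(-32602, ...)`:

```python
class InvalidParams(ValueError):
    """Stand-in for the server's -32602 Invalid params error."""

def get_int_argument(arguments: dict, key: str, default: int) -> int:
    """Fetch a JSON-RPC integer argument, rejecting wrong types up front."""
    value = arguments.get(key, default)
    # bool is a subclass of int in Python, so reject it explicitly
    if isinstance(value, bool) or not isinstance(value, int):
        raise InvalidParams(f"{key} must be an integer, got {type(value).__name__}")
    if value < 1:
        raise InvalidParams(f"{key} must be a positive integer")
    return value

# Example: validate the three thresholds before calling analyze_complexity
arguments = {"cyclomatic_threshold": 8}
cyclomatic = get_int_argument(arguments, "cyclomatic_threshold", 10)
cognitive = get_int_argument(arguments, "cognitive_threshold", 15)
max_len = get_int_argument(arguments, "max_function_length", 50)
```

Catching `InvalidParams` alongside the existing `ValueError` handler keeps client input errors on the `-32602` path.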
Summary

Closes #44
Depends on: #52 (core module extraction)

Adds a new `complexity_analysis` MCP tool that calculates multiple complexity metrics for Python source code using Astroid AST analysis via the shared core module.

What changed

New `complexity_analysis/` module with four files:
• `calculator.py` — `CyclomaticCalculator` and `CognitiveCalculator` classes that walk Astroid ASTs to compute complexity scores
• `metrics.py` — `FunctionMetrics`, `ClassMetrics`, and `FileMetrics` dataclasses, the `ComplexityResult` container, and the main `analyze_complexity()` entry point
• `patterns.py` — `ComplexityCategory` enum, threshold constants, and severity mapping functions (`severity_for_cyclomatic`, `severity_for_cognitive`, `cyclomatic_label`)
• `__init__.py` — public API re-exports

`server.py` changes:
• `from dataclasses import asdict` and `from pathlib import Path` imports
• `from .complexity_analysis import analyze_complexity` import
• `complexity_analysis` tool in `_handle_list_tools` with full JSON Schema (including `cyclomatic_threshold`, `cognitive_threshold`, and `max_function_length` options)
• `elif name == "complexity_analysis"` dispatch in `_handle_call_tool`
• `_execute_complexity_analysis()` handler method with validation, file reading, and error handling

44 new tests in `tests/test_complexity_analysis.py` covering:
• `analyze_complexity()` integration: threshold configurability, file metrics aggregation, all issue categories (high cyclomatic, high cognitive, long function, too many parameters, deep nesting, large class)

Metrics implemented

Additions beyond the original issue
• `cyclomatic_label()` helper returning human-readable labels ("simple", "moderate", "high", "very high")

What was NOT included from the issue

Test plan
• `poetry run pytest` (161 from base + 44 new)
• `ruff check` and `ruff format --check` pass
poetry run pytest— 161 from base + 44 new)ruff checkandruff format --checkpass🤖 Generated with Claude Code