feat: add complexity analyzer tool #53

Open
nnennandukwe wants to merge 1 commit into feature/41-core-module from feature/44-complexity-analysis

Conversation

@nnennandukwe
Owner

Summary

Closes #44
Depends on: #52 (core module extraction)

Adds a new complexity_analysis MCP tool that calculates multiple complexity metrics for Python source code using Astroid AST analysis via the shared core module.

What changed

  • New complexity_analysis/ module with four files:

    • calculator.py — CyclomaticCalculator and CognitiveCalculator classes that walk Astroid ASTs to compute complexity scores
    • metrics.py — FunctionMetrics, ClassMetrics, FileMetrics dataclasses, ComplexityResult container, and the main analyze_complexity() entry point
    • patterns.py — ComplexityCategory enum, threshold constants, and severity mapping functions (severity_for_cyclomatic, severity_for_cognitive, cyclomatic_label)
    • __init__.py — public API re-exports
  • server.py changes:

    • Added from dataclasses import asdict and from pathlib import Path imports
    • Added from .complexity_analysis import analyze_complexity import
    • Registered complexity_analysis tool in _handle_list_tools with full JSON Schema (including cyclomatic_threshold, cognitive_threshold, max_function_length options)
    • Added elif name == "complexity_analysis" dispatch in _handle_call_tool
    • Added _execute_complexity_analysis() handler method with validation, file reading, and error handling
  • 44 new tests in tests/test_complexity_analysis.py covering:

    • Cyclomatic calculator: simple functions, if/elif/else, for/while loops, except handlers, with statements, asserts, boolean operators, comprehensions, ternary expressions, complex functions
    • Cognitive calculator: nesting penalties, loop+if combos, boolean operators, recursion detection, deeply nested code
    • analyze_complexity() integration: threshold configurability, file metrics aggregation, all issue categories (high cyclomatic, high cognitive, long function, too many parameters, deep nesting, large class)
    • MCP server integration: tool listing, schema validation, source_code and file_path invocation, error cases, custom thresholds
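For context, a tools/call request against this tool might look like the sketch below. Only the tool name and the argument keys (source_code, cyclomatic_threshold, cognitive_threshold, max_function_length) come from the PR description; the surrounding envelope is generic JSON-RPC.

```python
import json

# Hypothetical JSON-RPC payload; argument keys match the schema options
# listed above, everything else is a generic JSON-RPC 2.0 envelope.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "complexity_analysis",
        "arguments": {
            "source_code": "def f(x):\n    return x if x > 0 else -x\n",
            "cyclomatic_threshold": 10,
            "cognitive_threshold": 15,
            "max_function_length": 50,
        },
    },
}
print(json.dumps(request, indent=2))
```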

Metrics implemented

| Metric | Threshold | Description |
| --- | --- | --- |
| Cyclomatic complexity | 10 (default) | Linearly independent paths through code |
| Cognitive complexity | 15 (default) | How hard code is to understand (Sonar metric) |
| Function length | 50 lines (default) | Lines per function |
| Parameter count | 5 | Parameters per function |
| Nesting depth | 4 | Maximum nesting level |
| Class method count | 20 | Methods per class |
| Inheritance depth | 3 | Class inheritance chain length |
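The defaults in the table can be exercised with a small illustrative check. The shorthand keys in DEFAULTS are assumptions for this sketch, not the module's actual constant names.

```python
# Illustrative only: default thresholds from the table above, keyed by
# shorthand names (the real constant names are not shown in this PR).
DEFAULTS = {
    "cyclomatic": 10,
    "cognitive": 15,
    "function_length": 50,
    "parameters": 5,
    "nesting_depth": 4,
    "class_methods": 20,
    "inheritance_depth": 3,
}

def exceeded(metrics: dict) -> list:
    """Return the metric names whose values exceed their default threshold."""
    return [
        name for name, value in metrics.items()
        if value > DEFAULTS.get(name, float("inf"))
    ]

print(exceeded({"cyclomatic": 12, "cognitive": 9, "parameters": 7}))
# → ['cyclomatic', 'parameters']
```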

Additions beyond the original issue

  • Deep inheritance detection — the issue mentioned "inheritance depth" under class metrics but didn't specify detection behavior; this PR flags classes that exceed the depth threshold of 3
  • Severity scaling — cyclomatic and cognitive issues get severity levels (info → warning → error → critical) based on how far they exceed the threshold, not just a binary flag
  • cyclomatic_label() helper returning human-readable labels ("simple", "moderate", "high", "very high")
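The exact cut-points are not shown in the PR body, so the following is only an illustrative sketch of how severity scaling and labeling could work, reusing the function names and labels listed above; the multipliers and boundaries are assumptions.

```python
def severity_for_cyclomatic(score: int, threshold: int = 10) -> str:
    """Illustrative scaling: severity grows with distance above threshold.

    The actual cut-points live in patterns.py and are not shown in this
    PR body, so the multipliers here are assumptions.
    """
    if score <= threshold:
        return "info"
    if score <= threshold * 1.5:
        return "warning"
    if score <= threshold * 2:
        return "error"
    return "critical"

def cyclomatic_label(score: int) -> str:
    """Illustrative mapping to the human-readable labels named above."""
    if score <= 5:
        return "simple"
    if score <= 10:
        return "moderate"
    if score <= 20:
        return "high"
    return "very high"

print(severity_for_cyclomatic(21), cyclomatic_label(21))
```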

What was NOT included from the issue

  • Coupling between classes — the issue mentioned this under class metrics, but it requires cross-file analysis, which is out of scope for this single-file analyzer

Test plan

  • All 205 tests pass (poetry run pytest — 161 from base + 44 new)
  • ruff check and ruff format --check pass
  • Pre-commit hooks pass
  • MCP integration tests verify tool registration, schema, invocation with source_code, invocation with file_path, error cases, and custom thresholds

Note: This PR targets feature/41-core-module. Merge #52 first, then this PR.

main ← PR #52 (core) ← PR #53 (this) ← PR #54 (dead code)

🤖 Generated with Claude Code

Adds a new complexity_analysis MCP tool that calculates cyclomatic
complexity, cognitive complexity, function length, parameter count,
nesting depth, and class metrics for Python source code. Uses Astroid
for AST analysis via the shared core module. Includes configurable
thresholds and actionable suggestions for reducing complexity.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@qodo-code-review
Contributor

Review Summary by Qodo

Add complexity analyzer MCP tool with configurable thresholds

✨ Enhancement


Walkthroughs

Description
• Adds new complexity_analysis MCP tool for measuring Python code complexity
• Implements cyclomatic and cognitive complexity calculators using Astroid AST
• Detects multiple complexity issues: high complexity, long functions, too many parameters, deep
  nesting, large classes, deep inheritance
• Integrates tool into MCP server with configurable thresholds and severity scaling
• Includes 44 comprehensive tests covering calculators, metrics, and server integration
Diagram
flowchart LR
  Source["Python Source Code"]
  Parse["Parse with Astroid"]
  Cyclo["CyclomaticCalculator"]
  Cog["CognitiveCalculator"]
  Metrics["analyze_complexity"]
  Issues["Complexity Issues"]
  Server["MCP Server"]
  
  Source --> Parse
  Parse --> Cyclo
  Parse --> Cog
  Cyclo --> Metrics
  Cog --> Metrics
  Metrics --> Issues
  Issues --> Server


File Changes

1. src/workshop_mcp/complexity_analysis/__init__.py ✨ Enhancement +17/-0

Module initialization and public API exports

• Exports public API for complexity analysis module
• Re-exports calculator classes, metric dataclasses, and main analyze_complexity function
• Provides clean module interface with version info

src/workshop_mcp/complexity_analysis/__init__.py


2. src/workshop_mcp/complexity_analysis/calculator.py ✨ Enhancement +120/-0

Cyclomatic and cognitive complexity calculators

• Implements CyclomaticCalculator class that counts linearly independent code paths
• Counts branching constructs: if/elif/else, loops, exception handlers, with statements, asserts,
 ternary expressions, comprehensions
• Implements CognitiveCalculator class measuring code understandability with nesting penalties
• Detects recursion and boolean operators for cognitive complexity scoring

src/workshop_mcp/complexity_analysis/calculator.py
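The path-counting rule described above can be sketched with the stdlib ast module. This is a stand-in: the PR's actual calculator walks Astroid nodes, and the construct list here is taken from the walkthrough bullets, not from the real implementation.

```python
import ast

class CyclomaticSketch(ast.NodeVisitor):
    """Minimal cyclomatic counter using stdlib ast as a stand-in for
    Astroid (the PR's CyclomaticCalculator walks Astroid nodes)."""

    def __init__(self) -> None:
        self.score = 1  # one base path through the function

    def generic_visit(self, node: ast.AST) -> None:
        if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler,
                             ast.With, ast.Assert, ast.IfExp)):
            self.score += 1  # each branching construct adds one path
        elif isinstance(node, ast.BoolOp):
            self.score += len(node.values) - 1  # 'a and b' adds one decision
        super().generic_visit(node)

source = """
def classify(x):
    if x > 0 and x < 10:
        return "small"
    elif x >= 10:
        return "large"
    return "non-positive"
"""
visitor = CyclomaticSketch()
visitor.visit(ast.parse(source))
print(visitor.score)  # → 4 (base + if + elif + one 'and')
```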


3. src/workshop_mcp/complexity_analysis/metrics.py ✨ Enhancement +333/-0

Metrics collection and complexity analysis orchestration

• Defines dataclasses for FunctionMetrics, ClassMetrics, FileMetrics, and ComplexityIssue
• Implements main analyze_complexity() function that orchestrates AST analysis and issue detection
• Detects 7 issue categories: high cyclomatic/cognitive complexity, long functions, too many
 parameters, deep nesting, large classes, deep inheritance
• Generates actionable suggestions for each issue type
• Calculates file-level summary statistics (total functions, average/max complexity, complex
 function count)

src/workshop_mcp/complexity_analysis/metrics.py


4. src/workshop_mcp/complexity_analysis/patterns.py ✨ Enhancement +87/-0

Complexity categories, thresholds, and severity mapping

• Defines ComplexityCategory enum with 7 issue types
• Sets default thresholds for all metrics (cyclomatic: 10, cognitive: 15, function length: 50, etc.)
• Implements severity mapping functions that scale severity based on how far metrics exceed
 thresholds
• Provides cyclomatic_label() helper returning human-readable complexity labels

src/workshop_mcp/complexity_analysis/patterns.py


5. src/workshop_mcp/server.py ✨ Enhancement +164/-1

MCP server integration for complexity analysis tool

• Adds imports for asdict, Path, and analyze_complexity
• Registers complexity_analysis tool in _handle_list_tools() with full JSON Schema including
 configurable thresholds
• Adds elif branch in _handle_call_tool() to dispatch complexity_analysis requests
• Implements _execute_complexity_analysis() handler with validation, file reading, error handling,
 and result serialization

src/workshop_mcp/server.py


6. tests/test_complexity_analysis.py 🧪 Tests +703/-0

Comprehensive test suite for complexity analysis module

• Tests CyclomaticCalculator with 13 test cases covering simple functions, branching, loops,
 exception handlers, boolean operators, comprehensions, ternary expressions, and complex functions
• Tests CognitiveCalculator with 7 test cases covering nesting penalties, loop/if combinations,
 boolean operators, recursion detection, and deeply nested code
• Tests analyze_complexity() integration with 15 test cases covering threshold configurability,
 file metrics aggregation, all 7 issue categories, and error cases
• Tests MCP server integration with 6 test cases covering tool listing, schema validation,
 invocation with source_code/file_path, error handling, and custom thresholds
• Tests pattern helpers (labels and severity functions) with 3 test cases

tests/test_complexity_analysis.py




🛠️ Relevant configurations:


These are the relevant configurations for this tool:

[config]

is_auto_command: True
is_new_pr: True
model_reasoning: vertex_ai/gemini-2.5-pro
model: gpt-5.2-2025-12-11
model_turbo: anthropic/claude-haiku-4-5-20251001
fallback_models: ['anthropic/claude-sonnet-4-5-20250929', 'bedrock/us.anthropic.claude-sonnet-4-5-20250929-v1:0']
second_model_for_exhaustive_mode: o4-mini
git_provider: github
publish_output: True
publish_output_no_suggestions: True
publish_output_progress: True
verbosity_level: 0
publish_logs: False
debug_mode: False
use_wiki_settings_file: True
use_repo_settings_file: True
use_global_settings_file: True
use_global_wiki_settings_file: False
disable_auto_feedback: False
ai_timeout: 150
response_language: en-US
clone_repo_instead_of_fetch: True
always_clone: False
add_repo_metadata: True
clone_repo_time_limit: 300
publish_inline_comments_fallback_batch_size: 5
publish_inline_comments_fallback_sleep_time: 2
max_model_tokens: 32000
custom_model_max_tokens: -1
patch_extension_skip_types: ['.md', '.txt']
extra_allowed_extensions: []
allow_dynamic_context: True
allow_forward_dynamic_context: True
max_extra_lines_before_dynamic_context: 12
patch_extra_lines_before: 5
patch_extra_lines_after: 1
ai_handler: litellm
cli_mode: False
TRIAL_GIT_ORG_MAX_INVOKES_PER_MONTH: 30
TRIAL_RATIO_CLOSE_TO_LIMIT: 0.8
invite_only_mode: False
enable_request_access_msg_on_new_pr: False
check_also_invites_field: False
calculate_context: True
disable_checkboxes: False
output_relevant_configurations: True
large_patch_policy: clip
seed: -1
temperature: 0.2
allow_dynamic_context_ab_testing: False
choose_dynamic_context_ab_testing_ratio: 0.5
ignore_pr_title: ['^\\[Auto\\]', '^Auto']
ignore_pr_target_branches: []
ignore_pr_source_branches: []
ignore_pr_labels: []
ignore_ticket_labels: []
allow_only_specific_folders: []
ignore_pr_authors: []
ignore_repositories: []
ignore_language_framework: []
enable_ai_metadata: True
present_reasoning: True
max_tickets: 10
max_tickets_chars: 8000
prevent_any_approval: False
enable_comment_approval: False
enable_auto_approval: False
auto_approve_for_low_review_effort: -1
auto_approve_for_no_suggestions: False
ensure_ticket_compliance: False
new_diff_format: True
new_diff_format_add_external_references: True
tasks_queue_ttl_from_dequeue_in_seconds: 900
enable_custom_labels: False

[pr_description]

publish_labels: False
add_original_user_description: True
generate_ai_title: False
extra_instructions: 
enable_pr_type: True
final_update_message: True
enable_help_text: False
enable_help_comment: False
bring_latest_tag: False
enable_pr_diagram: True
publish_description_as_comment: False
publish_description_as_comment_persistent: True
enable_semantic_files_types: True
collapsible_file_list: adaptive
collapsible_file_list_threshold: 8
inline_file_summary: False
use_description_markers: False
include_generated_by_header: True
enable_large_pr_handling: True
max_ai_calls: 4
auto_create_ticket: False

@qodo-code-review
Contributor

Code Review by Qodo

🐞 Bugs (3) 📘 Rule violations (1) 📎 Requirement gaps (2)



Action required

1. Module not under tools/ 📎 Requirement gap ✓ Correctness
Description
• The new complexity analyzer code is added under src/workshop_mcp/complexity_analysis/ instead of
  the required src/workshop_mcp/tools/complexity_analysis/ package path.
• This breaks the expected project directory structure for tools and can cause import/discovery
  inconsistencies for MCP tool modules.
Code

src/workshop_mcp/complexity_analysis/__init__.py[R1-17]

+"""Complexity analysis tools for measuring Python code complexity metrics."""
+
+__version__ = "0.1.0"
+
+from .calculator import CognitiveCalculator, CyclomaticCalculator
+from .metrics import ClassMetrics, FileMetrics, FunctionMetrics, analyze_complexity
+from .patterns import ComplexityCategory
+
+__all__ = [
+    "CyclomaticCalculator",
+    "CognitiveCalculator",
+    "FunctionMetrics",
+    "ClassMetrics",
+    "FileMetrics",
+    "ComplexityCategory",
+    "analyze_complexity",
+]
Evidence
PR Compliance ID 8 requires the tool implementation to live under
src/workshop_mcp/tools/complexity_analysis/. The PR adds the module under
src/workshop_mcp/complexity_analysis/, which does not satisfy the required directory structure.

Create the tool module/package at src/workshop_mcp/tools/complexity_analysis/
src/workshop_mcp/complexity_analysis/__init__.py[1-17]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The complexity analyzer module is not located in the required tool package path `src/workshop_mcp/tools/complexity_analysis/`.

## Issue Context
Compliance requires tool implementations to exist under the `src/workshop_mcp/tools/` directory structure. The current module lives at `src/workshop_mcp/complexity_analysis/`, which does not meet that requirement.

## Fix Focus Areas
- src/workshop_mcp/complexity_analysis/__init__.py[1-17]
- src/workshop_mcp/complexity_analysis/calculator.py[1-120]
- src/workshop_mcp/complexity_analysis/metrics.py[1-333]
- src/workshop_mcp/complexity_analysis/patterns.py[1-87]
- src/workshop_mcp/server.py[13-18]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. Validated path not used 📘 Rule violation ⛨ Security
Description
• The server validates file_path with self.path_validator.validate_exists(...) but then reads
  the file using Path(file_path).read_text(...) instead of the resolved/validated path returned by
  the validator.
• This undermines the “validate and resolve before use” requirement and can reintroduce
  path-handling risks (e.g., TOCTOU issues or unexpected resolution differences).
Code

src/workshop_mcp/server.py[R651-673]

+        if file_path:
+            try:
+                self.path_validator.validate_exists(file_path, must_be_file=True)
+            except PathValidationError as e:
+                return self._error_response(
+                    request_id,
+                    JsonRpcError(-32602, str(e)),
+                )
+
+        cyclomatic_threshold = arguments.get("cyclomatic_threshold", 10)
+        cognitive_threshold = arguments.get("cognitive_threshold", 15)
+        max_function_length = arguments.get("max_function_length", 50)
+
+        try:
+            logger.info(
+                "Executing complexity analysis on %s",
+                file_path or "source code",
+            )
+
+            # Read source from file if needed
+            if file_path and not source_code:
+                source_code = Path(file_path).read_text(encoding="utf-8")
+
Evidence
PR Compliance ID 26 requires root paths to be validated/resolved before use. validate_exists()
returns a resolved Path, but the implementation discards it and performs file I/O using the
original user-provided file_path string.

AGENTS.md
src/workshop_mcp/server.py[651-673]
src/workshop_mcp/security/path_validator.py[154-171]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`file_path` is validated but the validated/resolved path is not used for the subsequent file read.

## Issue Context
`PathValidator.validate_exists()` returns a resolved `Path` that has been checked to be within allowed roots and to exist. The server currently discards that return value and reads from `Path(file_path)`.

## Fix Focus Areas
- src/workshop_mcp/server.py[651-673]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
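A hedged sketch of the remediation the reviewer describes: use the resolved Path returned by the validator for the read. StubValidator below is a stand-in, not the project's PathValidator; only the validate_exists name and must_be_file parameter are taken from the review.

```python
from pathlib import Path

# StubValidator stands in for the project's PathValidator; the point is
# that the resolved Path returned by validate_exists() is the one read.
class StubValidator:
    def validate_exists(self, file_path: str, must_be_file: bool = True) -> Path:
        resolved = Path(file_path).resolve()
        if must_be_file and not resolved.is_file():
            raise FileNotFoundError(resolved)
        return resolved

def read_validated(validator, file_path: str) -> str:
    resolved = validator.validate_exists(file_path, must_be_file=True)
    # Read from the resolved path so validation and use refer to the same
    # filesystem object, instead of re-wrapping the raw user string.
    return resolved.read_text(encoding="utf-8")
```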


3. Class issues lack metrics 📎 Requirement gap ✓ Correctness
Description
• Issues emitted for class-level findings (e.g., large_class, deep_inheritance) do not include a
  metrics object, even though the compliance requirement expects issues to include structured metric
  context.
• This makes the output less actionable/consistent for consumers expecting every issue to include
  structured metric data.
Code

src/workshop_mcp/complexity_analysis/metrics.py[R264-294]

+        if method_count > DEFAULT_MAX_CLASS_METHODS:
+            result.issues.append(
+                ComplexityIssue(
+                    tool="complexity",
+                    category=ComplexityCategory.LARGE_CLASS.value,
+                    severity="warning",
+                    message=(
+                        f"Class '{class_info.name}' has {method_count} methods "
+                        f"(threshold: {DEFAULT_MAX_CLASS_METHODS})"
+                    ),
+                    line=class_info.line_number,
+                    function=None,
+                    suggestion="Consider splitting into smaller, focused classes",
+                )
+            )
+
+        if inheritance_depth > DEFAULT_MAX_INHERITANCE_DEPTH:
+            result.issues.append(
+                ComplexityIssue(
+                    tool="complexity",
+                    category=ComplexityCategory.DEEP_INHERITANCE.value,
+                    severity="warning",
+                    message=(
+                        f"Class '{class_info.name}' has inheritance depth of "
+                        f"{inheritance_depth} (threshold: {DEFAULT_MAX_INHERITANCE_DEPTH})"
+                    ),
+                    line=class_info.line_number,
+                    function=None,
+                    suggestion="Prefer composition over deep inheritance hierarchies",
+                )
+            )
Evidence
PR Compliance ID 13 requires issue entries to include a metrics object with key metric context.
The PR creates class-related issues without setting metrics, leaving it None/omitted in the
serialized output.

Output includes issue details with function context and metrics
src/workshop_mcp/complexity_analysis/metrics.py[58-69]
src/workshop_mcp/complexity_analysis/metrics.py[264-294]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Class-level issues are emitted without a `metrics` object, violating the requirement that issues include structured metric context.

## Issue Context
`ComplexityIssue.metrics` exists but is not populated for `LARGE_CLASS` and `DEEP_INHERITANCE` issues.

## Fix Focus Areas
- src/workshop_mcp/complexity_analysis/metrics.py[264-294]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
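A sketch of the suggested fix: attach a structured metrics payload to class-level issues. IssueSketch is a simplified stand-in for the PR's ComplexityIssue dataclass, and the payload keys are assumptions.

```python
from dataclasses import dataclass, field

# IssueSketch is a simplified stand-in for the PR's ComplexityIssue;
# the key change is populating the metrics payload for class issues.
@dataclass
class IssueSketch:
    category: str
    message: str
    metrics: dict = field(default_factory=dict)

def large_class_issue(name: str, method_count: int, threshold: int = 20) -> IssueSketch:
    return IssueSketch(
        category="large_class",
        message=f"Class '{name}' has {method_count} methods (threshold: {threshold})",
        metrics={"method_count": method_count, "threshold": threshold},
    )

issue = large_class_issue("ReportBuilder", 27)
print(issue.metrics)  # → {'method_count': 27, 'threshold': 20}
```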


4. Threshold types unchecked 🐞 Bug ⛯ Reliability
Description
• _execute_complexity_analysis forwards user-provided thresholds without validating they are
  integers.
• If a client sends a non-int (e.g., string), numeric comparisons in analyze_complexity will raise
  TypeError, which is not caught by the ValueError handler.
• This converts a client “invalid params” situation into a -32603 “Internal error”, reducing
  reliability and debuggability.
Code

src/workshop_mcp/server.py[R660-680]

+        cyclomatic_threshold = arguments.get("cyclomatic_threshold", 10)
+        cognitive_threshold = arguments.get("cognitive_threshold", 15)
+        max_function_length = arguments.get("max_function_length", 50)
+
+        try:
+            logger.info(
+                "Executing complexity analysis on %s",
+                file_path or "source code",
+            )
+
+            # Read source from file if needed
+            if file_path and not source_code:
+                source_code = Path(file_path).read_text(encoding="utf-8")
+
+            complexity_result = analyze_complexity(
+                source_code,
+                file_path=file_path,
+                cyclomatic_threshold=cyclomatic_threshold,
+                cognitive_threshold=cognitive_threshold,
+                max_function_length=max_function_length,
+            )
Evidence
Server reads thresholds directly from JSON arguments and passes them through;
analyze_complexity/pattern helpers perform numeric comparisons assuming ints, which will throw if
given non-numeric types.

src/workshop_mcp/server.py[660-680]
src/workshop_mcp/complexity_analysis/metrics.py[152-174]
src/workshop_mcp/complexity_analysis/patterns.py[51-67]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`complexity_analysis` accepts numeric thresholds from JSON-RPC arguments but does not validate their types. Non-integer values can raise `TypeError` during numeric comparisons in `analyze_complexity`, which bypasses the current `except ValueError` handler and becomes an internal error.

### Issue Context
The MCP server should return `-32602 Invalid params` for client input errors, not `-32603 Internal error`.

### Fix Focus Areas
- src/workshop_mcp/server.py[660-707]
- src/workshop_mcp/complexity_analysis/metrics.py[152-174]
- src/workshop_mcp/complexity_analysis/patterns.py[51-67]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
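One way to implement the suggested validation, sketched with a hypothetical helper (coerce_threshold is not part of the PR): raise ValueError for bad input so the server's existing handler can map it to -32602 Invalid params instead of a -32603 internal error.

```python
# coerce_threshold is a hypothetical helper, not part of the PR.
def coerce_threshold(arguments: dict, key: str, default: int) -> int:
    value = arguments.get(key, default)
    # bool is a subclass of int, so reject it explicitly
    if isinstance(value, bool) or not isinstance(value, int):
        raise ValueError(f"{key} must be an integer, got {type(value).__name__}")
    if value < 1:
        raise ValueError(f"{key} must be positive, got {value}")
    return value

print(coerce_threshold({"cyclomatic_threshold": 12}, "cyclomatic_threshold", 10))
# → 12
```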



Remediation recommended

5. Nested recursion miscount 🐞 Bug ✓ Correctness
Description
• CognitiveCalculator uses func_name to detect recursion (“function calls itself”).
• When walking a nested function definition, it passes the *outer* function’s func_name into the
  nested walk.
• Result: recursion inside nested functions can be undercounted because calls are compared against
  the wrong function name, skewing cognitive complexity results.
Code

src/workshop_mcp/complexity_analysis/calculator.py[R73-111]

+        for child in node.get_children():
+            if isinstance(child, (astroid.FunctionDef, astroid.AsyncFunctionDef)):
+                # Nested function definitions increase nesting
+                total += self._walk(child, nesting + 1, func_name)
+                continue
+
+            # Increment for breaks in linear flow + nesting penalty
+            if isinstance(child, astroid.If):
+                total += 1 + nesting  # +1 for if + nesting penalty
+                total += self._walk(child, nesting + 1, func_name)
+                continue
+            elif isinstance(child, (astroid.For, astroid.While)):
+                total += 1 + nesting
+                total += self._walk(child, nesting + 1, func_name)
+                continue
+            elif isinstance(child, astroid.ExceptHandler):
+                total += 1 + nesting
+                total += self._walk(child, nesting + 1, func_name)
+                continue
+            elif isinstance(child, astroid.With):
+                total += 1 + nesting
+                total += self._walk(child, nesting + 1, func_name)
+                continue
+            elif isinstance(child, astroid.IfExp):
+                total += 1 + nesting
+                total += self._walk(child, nesting, func_name)
+                continue
+
+            # Boolean operators: +1 for each sequence
+            if isinstance(child, astroid.BoolOp):
+                total += 1
+
+            # Recursion: +1 when function calls itself
+            if isinstance(child, astroid.Call):
+                call_name = self._get_call_name(child)
+                if call_name == func_name:
+                    total += 1
+
+            total += self._walk(child, nesting, func_name)
Evidence
The implementation defines recursion as a call whose name matches func_name, but nested functions
are walked with the parent func_name, not the nested function’s name.

src/workshop_mcp/complexity_analysis/calculator.py[73-77]
src/workshop_mcp/complexity_analysis/calculator.py[105-111]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Cognitive recursion detection is keyed off `func_name`, but nested functions are analyzed using the parent function’s name. This can undercount recursion within nested functions.

### Issue Context
The code comments explicitly define recursion as “function calls itself,” which should be evaluated per-function.

### Fix Focus Areas
- src/workshop_mcp/complexity_analysis/calculator.py[73-111]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
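The fix amounts to passing the nested function's own name into the nested walk. A sketch using stdlib ast rather than Astroid (the real calculator's node types and walk differ):

```python
import ast

def count_recursion(node: ast.AST, func_name: str) -> int:
    """Count self-calls; stdlib-ast sketch of the per-function fix."""
    total = 0
    for child in ast.iter_child_nodes(node):
        if isinstance(child, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Key change: walk the nested function with its own name,
            # not the parent's func_name.
            total += count_recursion(child, child.name)
            continue
        if (isinstance(child, ast.Call)
                and isinstance(child.func, ast.Name)
                and child.func.id == func_name):
            total += 1
        total += count_recursion(child, func_name)
    return total

source = """
def outer():
    def inner(n):
        return inner(n - 1)
    return inner(3)
"""
tree = ast.parse(source)
# inner's self-call is detected because inner is walked under its own name
print(count_recursion(tree.body[0], "outer"))  # → 1
```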


6. Bad inheritance fallback 🐞 Bug ✓ Correctness
Description
• When astroid cannot infer ancestors, _inheritance_depth falls back to len(node.bases).
• len(node.bases) measures number of direct base classes, not inheritance depth; for multiple
  inheritance it can inflate “depth” and trigger false DEEP_INHERITANCE warnings.
• This can reduce trust in reported class complexity metrics.
Code

src/workshop_mcp/complexity_analysis/metrics.py[R327-333]

+def _inheritance_depth(node: astroid.ClassDef) -> int:
+    """Calculate the inheritance depth of a class."""
+    try:
+        ancestors = list(node.ancestors())
+        return len(ancestors) if ancestors else 0
+    except (astroid.InferenceError, StopIteration, RecursionError):
+        return len(node.bases)
Evidence
The analyzer uses _inheritance_depth to decide whether to emit a DEEP_INHERITANCE issue; the
fallback returns base-class count, which is not a depth metric.

src/workshop_mcp/complexity_analysis/metrics.py[327-333]
src/workshop_mcp/complexity_analysis/metrics.py[280-289]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`_inheritance_depth` uses `len(node.bases)` when inference fails, but that value is not an inheritance *depth* and can produce incorrect DEEP_INHERITANCE warnings.

### Issue Context
`analyze_complexity` uses the returned `inheritance_depth` to decide whether to emit `deep_inheritance` issues.

### Fix Focus Areas
- src/workshop_mcp/complexity_analysis/metrics.py[251-295]
- src/workshop_mcp/complexity_analysis/metrics.py[327-333]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
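A sketch of a depth-preserving fallback, using a plain name-to-direct-bases mapping in place of Astroid class nodes (illustrative only): depth is the longest chain through the bases, not the count of direct bases.

```python
# Illustrative fallback: depth = longest chain, not len(node.bases).
def inheritance_depth(cls: str, bases: dict) -> int:
    direct = bases.get(cls, [])
    if not direct:
        return 0
    return 1 + max(inheritance_depth(b, bases) for b in direct)

hierarchy = {"C": ["A", "B"], "A": [], "B": []}
# A len(bases)-style fallback would report 2 for C under multiple
# inheritance; the actual chain depth is only 1.
print(inheritance_depth("C", hierarchy))  # → 1
```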


