
feat: add new performance anti-pattern detections#51

Open
nnennandukwe wants to merge 4 commits into main from feat/new-perf-patterns-29

Conversation

@nnennandukwe
Owner

Summary

Adds 3 new performance anti-pattern detections to the performance profiler (closes #29).

New Patterns

| Pattern | Severity | Description |
| --- | --- | --- |
| EXCEPTION_IN_LOOP | MEDIUM | Try/except inside loops incurs overhead each iteration |
| TYPE_CONVERSION_IN_LOOP | MEDIUM | int(), str(), float(), etc. creating objects repeatedly |
| GLOBAL_MUTATION | MEDIUM | Functions using the global keyword to modify state |
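For illustration, a hypothetical snippet containing all three shapes the checks target (none of these names come from the PR):

```python
# Hypothetical code exhibiting the three new anti-patterns.

TOTAL = 0  # module-level state mutated below


def parse_values(raw_items: list[str]) -> list[int]:
    results: list[int] = []
    for item in raw_items:
        try:  # EXCEPTION_IN_LOOP: handler is set up on every iteration
            value = int(item)  # TYPE_CONVERSION_IN_LOOP: conversion inside the loop
        except ValueError:
            value = 0
        results.append(value)
    return results


def accumulate(value: int) -> int:
    global TOTAL  # GLOBAL_MUTATION: mutates module-level state
    TOTAL += value
    return TOTAL
```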

Changes

  • patterns.py: Added 3 new IssueCategory values and TYPE_CONVERSION_FUNCTIONS set
  • ast_analyzer.py: Added TryExceptInfo and GlobalInfo dataclasses with extraction methods
  • performance_checker.py: Added 3 new check methods integrated into check_all()
  • test_performance_checker.py: Added 14 new tests (29 total now)

Test plan

  • All 29 performance checker tests pass
  • Exception-in-loop detection works for for/while loops
  • Type conversion detection covers all builtin types
  • Global mutation only flags inside functions, not module level

🤖 Generated with Claude Code

@qodo-code-review
Contributor

Review Summary by Qodo

Add three new performance anti-pattern detections

✨ Enhancement


Walkthrough

Description
• Add 3 new performance anti-pattern detections: exception handling in loops, type conversions in
  loops, and global variable mutations
• Extend ASTAnalyzer with TryExceptInfo and GlobalInfo dataclasses for extracting try/except
  and global statements
• Implement 3 new check methods in PerformanceChecker integrated into check_all()
• Add 14 new comprehensive tests covering all new detection patterns
Diagram
flowchart LR
  A["patterns.py<br/>New IssueCategory values<br/>TYPE_CONVERSION_FUNCTIONS set"] --> B["ast_analyzer.py<br/>TryExceptInfo & GlobalInfo<br/>dataclasses"]
  B --> C["performance_checker.py<br/>3 new check methods"]
  C --> D["test_performance_checker.py<br/>14 new tests"]


File Changes

1. src/workshop_mcp/performance_profiler/patterns.py ✨ Enhancement +44/-0

Add new issue categories and type conversion detection

• Added 3 new IssueCategory enum values: EXCEPTION_IN_LOOP, TYPE_CONVERSION_IN_LOOP,
 GLOBAL_MUTATION
• Added TYPE_CONVERSION_FUNCTIONS set containing builtin type conversion functions with both short
 and fully qualified names
• Added is_type_conversion() helper function to check if a function call is a type conversion

src/workshop_mcp/performance_profiler/patterns.py


2. src/workshop_mcp/performance_profiler/ast_analyzer.py ✨ Enhancement +107/-0

Add try/except and global statement extraction

• Added TryExceptInfo dataclass to store try/except statement metadata including line numbers,
 parent function, and loop context
• Added GlobalInfo dataclass to store global statement metadata including variable names, line
 number, and parent function
• Added _try_excepts and _globals instance variables for caching extracted information
• Implemented get_try_except_statements() method to extract all try/except blocks from code
• Implemented get_global_statements() method to extract all global statements from code
• Added _extract_try_excepts_recursive() helper to recursively traverse AST and identify
 try/except blocks within loops
• Added _extract_globals_recursive() helper to recursively traverse AST and identify global
 statements within functions

src/workshop_mcp/performance_profiler/ast_analyzer.py


3. src/workshop_mcp/performance_profiler/performance_checker.py ✨ Enhancement +104/-0

Add three new performance issue detection methods

• Imported is_type_conversion helper function from patterns module
• Integrated 3 new check methods into check_all() method
• Implemented check_exception_in_loops() to detect try/except blocks inside loops with MEDIUM
 severity
• Implemented check_type_conversions_in_loops() to detect type conversion calls inside loops with
 MEDIUM severity
• Implemented check_global_mutations() to detect functions using global keyword to modify state
 with MEDIUM severity

src/workshop_mcp/performance_profiler/performance_checker.py
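A check method of this shape could turn the extracted metadata into issues. The description and suggestion strings are quoted from the review diff; the PerformanceIssue fields here are a trimmed-down assumption:

```python
from dataclasses import dataclass
from typing import Iterable


@dataclass
class PerformanceIssue:  # trimmed sketch; the real class has more fields
    category: str
    severity: str
    line_number: int
    description: str
    suggestion: str


def check_exception_in_loops(try_excepts: Iterable) -> list[PerformanceIssue]:
    """Flag every try/except block that sits inside a loop."""
    issues: list[PerformanceIssue] = []
    for te in try_excepts:
        if not te.is_in_loop:
            continue
        issues.append(PerformanceIssue(
            category="exception_in_loop",
            severity="MEDIUM",
            line_number=te.line_number,
            description=(
                "Try/except block inside loop incurs exception handling "
                "overhead on each iteration"
            ),
            suggestion=(
                "Move try/except outside the loop, or use conditional "
                "checks (if/else) for expected cases"
            ),
        ))
    return issues
```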


4. tests/test_performance_checker.py 🧪 Tests +212/-0

Add 14 new tests for anti-pattern detections

• Added TestExceptionInLoops class with 4 tests covering for loops, while loops, nested loops, and
 false positives
• Added TestTypeConversionsInLoops class with 4 tests covering int/float/str conversions, multiple
 conversions, and container types
• Added TestGlobalMutations class with 5 tests covering single/multiple globals, module-level
 globals, nested functions, and suggestions
• Tests verify correct detection, severity levels, descriptions, and suggestions for all new
 patterns

tests/test_performance_checker.py
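The module-level behaviour the tests verify, that only global statements inside functions are flagged, can be sketched with stdlib ast (the real tests drive PerformanceChecker; this standalone helper is an assumption and does not handle nested functions precisely):

```python
import ast


def functions_mutating_globals(source: str) -> list[str]:
    """Names of functions containing a `global` statement.

    Module-level `global` statements are ignored, matching the
    behaviour the new tests check.
    """
    names: list[str] = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if any(isinstance(child, ast.Global) for child in ast.walk(node)):
                names.append(node.name)
    return names
```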




🛠️ Relevant configurations:


These are the relevant configurations for this tool:

[config]

is_auto_command: True
is_new_pr: True
model_reasoning: vertex_ai/gemini-2.5-pro
model: gpt-5.2-2025-12-11
model_turbo: anthropic/claude-haiku-4-5-20251001
fallback_models: ['anthropic/claude-sonnet-4-5-20250929', 'bedrock/us.anthropic.claude-sonnet-4-5-20250929-v1:0']
second_model_for_exhaustive_mode: o4-mini
git_provider: github
publish_output: True
publish_output_no_suggestions: True
publish_output_progress: True
verbosity_level: 0
publish_logs: False
debug_mode: False
use_wiki_settings_file: True
use_repo_settings_file: True
use_global_settings_file: True
use_global_wiki_settings_file: False
disable_auto_feedback: False
ai_timeout: 150
response_language: en-US
clone_repo_instead_of_fetch: True
always_clone: False
add_repo_metadata: True
clone_repo_time_limit: 300
publish_inline_comments_fallback_batch_size: 5
publish_inline_comments_fallback_sleep_time: 2
max_model_tokens: 32000
custom_model_max_tokens: -1
patch_extension_skip_types: ['.md', '.txt']
extra_allowed_extensions: []
allow_dynamic_context: True
allow_forward_dynamic_context: True
max_extra_lines_before_dynamic_context: 12
patch_extra_lines_before: 5
patch_extra_lines_after: 1
ai_handler: litellm
cli_mode: False
TRIAL_GIT_ORG_MAX_INVOKES_PER_MONTH: 75
TRIAL_RATIO_CLOSE_TO_LIMIT: 0.8
invite_only_mode: False
enable_request_access_msg_on_new_pr: False
check_also_invites_field: False
calculate_context: True
disable_checkboxes: False
output_relevant_configurations: True
large_patch_policy: clip
seed: -1
temperature: 0.2
allow_dynamic_context_ab_testing: False
choose_dynamic_context_ab_testing_ratio: 0.5
ignore_pr_title: ['^\\[Auto\\]', '^Auto']
ignore_pr_target_branches: []
ignore_pr_source_branches: []
ignore_pr_labels: []
ignore_ticket_labels: []
allow_only_specific_folders: []
ignore_pr_authors: []
ignore_repositories: []
ignore_language_framework: []
enable_ai_metadata: True
present_reasoning: True
max_tickets: 10
max_tickets_chars: 8000
prevent_any_approval: False
enable_comment_approval: False
enable_auto_approval: False
auto_approve_for_low_review_effort: -1
auto_approve_for_no_suggestions: False
ensure_ticket_compliance: False
new_diff_format: True
new_diff_format_add_external_references: True
enable_custom_labels: False

[pr_description]

publish_labels: False
add_original_user_description: True
generate_ai_title: False
extra_instructions: 
enable_pr_type: True
final_update_message: True
enable_help_text: False
enable_help_comment: False
bring_latest_tag: False
enable_pr_diagram: True
publish_description_as_comment: False
publish_description_as_comment_persistent: True
enable_semantic_files_types: True
collapsible_file_list: adaptive
collapsible_file_list_threshold: 8
inline_file_summary: False
use_description_markers: False
include_generated_by_header: True
enable_large_pr_handling: True
max_ai_calls: 4
auto_create_ticket: False

@qodo-code-review
Contributor

qodo-code-review bot commented Feb 2, 2026

Code Review by Qodo

🐞 Bugs (2) 📘 Rule violations (2) 📎 Requirement gaps (1)



Action required

1. Docs missing new patterns 📎 Requirement gap ✧ Quality
Suggestion Impact: Instead of updating docs/performance-profiler.md, the commit added inline descriptive comments to the three new IssueCategory entries in patterns.py, partially addressing the "describe them" aspect but not the documentation requirement.

code diff:

-    EXCEPTION_IN_LOOP = "exception_in_loop"
-    TYPE_CONVERSION_IN_LOOP = "type_conversion_in_loop"
-    GLOBAL_MUTATION = "global_mutation"
+    EXCEPTION_IN_LOOP = "exception_in_loop"  # try/except inside loops (per-iteration overhead)
+    TYPE_CONVERSION_IN_LOOP = "type_conversion_in_loop"  # repeated int()/str()/float()/... in loops
+    GLOBAL_MUTATION = "global_mutation"  # functions using `global` to mutate module-level state

Description
• The PR adds three new anti-pattern categories, but the performance profiler documentation does not
  describe them.
• This violates the requirement to keep user-facing docs accurate, leaving users unaware of the new
  detections and their meaning.
• Update the performance-profiler guide to include descriptions and examples for
  exception_in_loop, type_conversion_in_loop, and global_mutation.
Code

src/workshop_mcp/performance_profiler/patterns.py[R25-27]

+    EXCEPTION_IN_LOOP = "exception_in_loop"
+    TYPE_CONVERSION_IN_LOOP = "type_conversion_in_loop"
+    GLOBAL_MUTATION = "global_mutation"
Evidence
patterns.py introduces the new issue categories, while the dedicated performance profiler
documentation lists existing categories but does not include the newly added ones, indicating
documentation was not updated for the new patterns.

Update documentation with new anti-pattern descriptions
src/workshop_mcp/performance_profiler/patterns.py[16-27]
docs/performance-profiler.md[37-161]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The performance-profiler documentation does not include the three newly added anti-patterns (`exception_in_loop`, `type_conversion_in_loop`, `global_mutation`), so users cannot discover or understand these detections.

## Issue Context
The codebase now supports these new categories (added to `IssueCategory`), but `docs/performance-profiler.md` only documents the pre-existing categories.

## Fix Focus Areas
- docs/performance-profiler.md[37-161]
- src/workshop_mcp/performance_profiler/patterns.py[16-27]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. New 100+ char lines 📘 Rule violation ✓ Correctness
Suggestion Impact: Broke up long description and suggestion strings in PerformanceIssue constructions into parenthesized, multi-line implicit concatenations (including f-strings) to keep lines under 100 characters.

code diff:

@@ -309,8 +309,14 @@
                     severity=Severity.MEDIUM,
                     line_number=try_except.line_number,
                     end_line_number=try_except.end_line_number,
-                    description="Try/except block inside loop incurs exception handling overhead on each iteration",
-                    suggestion="Move try/except outside the loop, or use conditional checks (if/else) for expected cases",
+                    description=(
+                        "Try/except block inside loop incurs exception handling "
+                        "overhead on each iteration"
+                    ),
+                    suggestion=(
+                        "Move try/except outside the loop, or use conditional "
+                        "checks (if/else) for expected cases"
+                    ),
                     code_snippet=code_snippet,
                     function_name=try_except.parent_function,
                 )
@@ -341,8 +347,14 @@
                     severity=Severity.MEDIUM,
                     line_number=call.line_number,
                     end_line_number=call.line_number,
-                    description=f"Type conversion '{call.function_name}()' called inside loop creates new objects each iteration",
-                    suggestion="If converting the same value repeatedly, move the conversion outside the loop",
+                    description=(
+                        f"Type conversion '{call.function_name}()' called inside "
+                        f"loop creates new objects each iteration"
+                    ),
+                    suggestion=(
+                        "If converting the same value repeatedly, move the "
+                        "conversion outside the loop"
+                    ),
                     code_snippet=code_snippet,
                     function_name=call.parent_function,
                 )
@@ -376,7 +388,9 @@
                     line_number=global_stmt.line_number,
                     end_line_number=global_stmt.line_number,
                     description=f"Function modifies global variable(s): {names_str}",
-                    suggestion="Pass values as parameters and return results instead of using global state",
+                    suggestion=(
+                        "Pass values as parameters and return results instead of using global state"
+                    ),
                     code_snippet=code_snippet,
                     function_name=global_stmt.parent_function,
                 )

Description
• Newly added code in performance_checker.py contains multiple lines over 100 characters, which
  will violate Ruff formatting/linting.
• This can break CI/style gates and makes the code harder to review and maintain.
• Reformat long conditional expressions and long description/suggestion strings using
  parentheses and implicit string concatenation.
Code

src/workshop_mcp/performance_profiler/performance_checker.py[R335-345]

+        for call in calls:
+            if call.is_in_loop and is_type_conversion(call.function_name, call.inferred_callable):
+                code_snippet = self.analyzer.get_source_segment(call.line_number, call.line_number)
+
+                issue = PerformanceIssue(
+                    category=IssueCategory.TYPE_CONVERSION_IN_LOOP,
+                    severity=Severity.MEDIUM,
+                    line_number=call.line_number,
+                    end_line_number=call.line_number,
+                    description=f"Type conversion '{call.function_name}()' called inside loop creates new objects each iteration",
+                    suggestion="If converting the same value repeatedly, move the conversion outside the loop",
Evidence
The Ruff compliance rule requires changed code to conform to a 100-character line length. The cited
lines in performance_checker.py are over 100 characters (long conditionals and long string
literals), indicating new formatting violations were introduced in this PR.

CLAUDE.md, AGENTS.md
src/workshop_mcp/performance_profiler/performance_checker.py[335-346]
src/workshop_mcp/performance_profiler/performance_checker.py[307-315]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Several newly added lines in `performance_checker.py` exceed the 100-character line length expectation, which can cause Ruff formatting/lint failures.

## Issue Context
Long lines occur in the new check methods (notably the `if` condition and `description`/`suggestion` strings).

## Fix Focus Areas
- src/workshop_mcp/performance_profiler/performance_checker.py[287-386]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


3. Tests lack type hints 📘 Rule violation ✓ Correctness
Description
• Newly added test functions in tests/test_performance_checker.py do not include parameter and
  return type annotations.
• This violates the requirement that all functions in src/ and tests/ must have type hints and
  remain mypy-compatible.
• Add -> None return annotations (and any needed parameter annotations) to the new test methods.
Code

tests/test_performance_checker.py[236]

+    def test_detect_try_except_in_for_loop(self):
Evidence
The mypy/type-hints compliance rule requires all functions to be annotated. The new test methods are
defined without return type annotations (e.g., missing -> None), which is a direct violation of
the rule.

CLAUDE.md, AGENTS.md
tests/test_performance_checker.py[233-240]
tests/test_performance_checker.py[298-306]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Newly added test methods are missing type hints (notably return type annotations like `-> None`), which violates the project requirement that all functions are type hinted and mypy-compatible.

## Issue Context
The new tests introduced for the three new performance patterns define multiple methods without annotations.

## Fix Focus Areas
- tests/test_performance_checker.py[233-444]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


4. Async for loops missed 🐞 Bug ✓ Correctness
Description
• Try/except-in-loop detection only treats For/While as loops, so try/except inside `async
  for` won’t be reported.
• Type-conversion-in-loop detection has the same limitation because loop context for calls is also
  tracked only for For/While.
• This creates false negatives in async-heavy code (a primary target for performance tooling).
Code

src/workshop_mcp/performance_profiler/ast_analyzer.py[R487-488]

+        if isinstance(node, (astroid.For, astroid.While)):
+            current_in_loop = True
Evidence
The new try/except extraction sets is_in_loop only when encountering
astroid.For/astroid.While, excluding astroid.AsyncFor. Separately, call extraction uses the
same loop-type check, so the new TYPE_CONVERSION_IN_LOOP check will also miss conversions inside
async for loops.

src/workshop_mcp/performance_profiler/ast_analyzer.py[473-489]
src/workshop_mcp/performance_profiler/ast_analyzer.py[406-427]
src/workshop_mcp/performance_profiler/performance_checker.py[287-336]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Async loops (`async for`) are not treated as loops, causing false negatives for the newly added `EXCEPTION_IN_LOOP` and `TYPE_CONVERSION_IN_LOOP` checks.

### Issue Context
The analyzer already recognizes `astroid.AsyncFunctionDef`, so async constructs are within scope.

### Fix Focus Areas
- src/workshop_mcp/performance_profiler/ast_analyzer.py[473-505]
- src/workshop_mcp/performance_profiler/ast_analyzer.py[406-449]
- src/workshop_mcp/performance_profiler/ast_analyzer.py[364-405]
- tests/test_performance_checker.py[233-360]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
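The false negative is easy to demonstrate with the stdlib ast module (a stand-in for astroid; node class names differ but the loop-type check is the same shape):

```python
import ast

SOURCE = """
async def consume(stream):
    async for chunk in stream:
        try:
            process(chunk)
        except ValueError:
            pass
"""


def count_try_in_loops(source: str, include_async: bool) -> int:
    """Count try/except blocks inside loops, optionally treating async for as a loop."""
    loop_types: tuple = (ast.For, ast.While)
    if include_async:
        loop_types += (ast.AsyncFor,)
    count = 0

    def walk(node: ast.AST, in_loop: bool) -> None:
        nonlocal count
        if isinstance(node, loop_types):
            in_loop = True
        if isinstance(node, ast.Try) and in_loop:
            count += 1
        for child in ast.iter_child_nodes(node):
            walk(child, in_loop)

    walk(ast.parse(source), False)
    return count
```

With the For/While-only check the try/except inside `async for` goes unreported; widening the isinstance tuple restores the detection.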



Remediation recommended

5. Type conversion false positives 🐞 Bug ✓ Correctness
Description
• is_type_conversion() falls back to function_name even when Astroid successfully inferred a
  non-builtin callable name.
• This can incorrectly flag shadowed/aliased callables named int/str/etc. (noise in results,
  misleading suggestions).
• The problem is most acute in codebases that intentionally shadow builtins (or where inference
  resolves to a different qualified name).
Code

src/workshop_mcp/performance_profiler/patterns.py[R381-383]

+    if inferred_callable and inferred_callable in TYPE_CONVERSION_FUNCTIONS:
+        return True
+    return function_name in TYPE_CONVERSION_FUNCTIONS
Evidence
PerformanceChecker relies on CallInfo.inferred_callable (from Astroid inference) and passes it
to is_type_conversion(), but is_type_conversion() ignores a present-but-non-matching
inferred_callable and still flags purely by the bare function_name. That means a
successfully-inferred non-builtin target can still be classified as a builtin conversion just due to
naming.

src/workshop_mcp/performance_profiler/patterns.py[370-383]
src/workshop_mcp/performance_profiler/performance_checker.py[321-349]
src/workshop_mcp/performance_profiler/ast_analyzer.py[459-471]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`is_type_conversion()` can raise false positives because it checks `function_name` even when `inferred_callable` is present but indicates a non-builtin callable.

### Issue Context
`PerformanceChecker.check_type_conversions_in_loops()` passes both `function_name` and `inferred_callable`. When Astroid resolves a callable to a qualified name, that result should be the source of truth.

### Fix Focus Areas
- src/workshop_mcp/performance_profiler/patterns.py[370-383]
- tests/test_performance_checker.py[298-360]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools



ⓘ The new review experience is currently in Beta.


Comment thread src/workshop_mcp/performance_profiler/patterns.py Outdated
Comment thread src/workshop_mcp/performance_profiler/performance_checker.py Outdated
class TestExceptionInLoops:
"""Test exception-in-loop detection."""

def test_detect_try_except_in_for_loop(self):

nnennandukwe and others added 4 commits February 15, 2026 10:56
Added 3 new pattern detections to the performance profiler:

1. Exception handling in loops (EXCEPTION_IN_LOOP)
   - Detects try/except blocks inside loops
   - Severity: MEDIUM

2. Type conversions in loops (TYPE_CONVERSION_IN_LOOP)
   - Detects int(), str(), float(), list(), dict(), etc. in loops
   - Severity: MEDIUM

3. Global variable mutation (GLOBAL_MUTATION)
   - Detects functions using 'global' keyword
   - Severity: MEDIUM

Changes:
- patterns.py: Added new IssueCategory values and TYPE_CONVERSION_FUNCTIONS set
- ast_analyzer.py: Added TryExceptInfo, GlobalInfo dataclasses and extraction methods
- performance_checker.py: Added check_exception_in_loops(), check_type_conversions_in_loops(), check_global_mutations()
- test_performance_checker.py: Added 14 new tests

Closes #29

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: qodo-code-review[bot] <151058649+qodo-code-review[bot]@users.noreply.github.com>
Break long strings in description and suggestion fields to comply
with 100-character line length limit.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
PathValidator.validate() and validate_exists() return resolved Path
objects after symlink resolution, but the returned values were being
discarded. The code then used the original user-provided strings for
file operations, creating a Time-of-Check-Time-of-Use vulnerability.

Changes:
- keyword_search: capture validated_paths from validate_multiple()
  and use validated_path_strings for tool execution
- performance_check: capture validated_path from validate_exists()
  and use validated_file_path for PerformanceChecker and result output

Addresses PR #49 review feedback.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@nnennandukwe force-pushed the feat/new-perf-patterns-29 branch from 6b99a77 to bcb605d on February 15, 2026 16:05

Development

Successfully merging this pull request may close these issues.

feat: Add new performance anti-pattern detections