
feat: add rate limiting, concurrency control, and timeout utilities with IP filtering support#278

Open
frontegg-david wants to merge 2 commits into main from rate-limit-concurrency-timeouts

Conversation


@frontegg-david frontegg-david commented Mar 16, 2026

Summary by CodeRabbit

  • New Features

    • Added Guard system for production-grade traffic controls: per-tool and global rate limiting, concurrency control, execution timeouts, and IP filtering.
    • Configure guards via metadata on tools and agents with support for custom partition strategies.
    • Support for multiple storage backends: in-memory (development), Redis, and serverless platforms (Vercel KV, Upstash).
  • Documentation

    • Added comprehensive rate-limiting and guards guide with configuration examples.
    • Added Guard SDK reference with API details and usage patterns.
  • Tests

    • Added E2E test suites covering rate limiting, concurrency, timeouts, IP filtering, and browser-based scenarios.
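
Based on the metadata fields referenced by this PR's schema changes (maxRequests, maxConcurrent, executeMs), a per-tool guard configuration might look like the sketch below; the interface shape and the specific values are illustrative, not the library's exact API:

```typescript
// Hypothetical shape inferred from the metadata schema fields mentioned in
// this PR (rateLimit.maxRequests, concurrency.maxConcurrent, timeout.executeMs).
// Additional options likely exist; this is only a minimal sketch.
interface GuardMetadata {
  rateLimit?: { maxRequests: number };
  concurrency?: { maxConcurrent: number };
  timeout?: { executeMs: number };
}

const slowToolGuards: GuardMetadata = {
  rateLimit: { maxRequests: 5 },     // at most 5 calls per rate-limit window
  concurrency: { maxConcurrent: 1 }, // mutex-style: one execution at a time
  timeout: { executeMs: 2_000 },     // abort execution after 2 seconds
};

console.log(slowToolGuards);
```

Per the walkthrough, the same three properties are accepted on both tool and agent metadata, with a global/default tier resolved by the GuardManager.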


coderabbitai bot commented Mar 16, 2026

📝 Walkthrough

This PR introduces the @frontmcp/guard library and integrates it into the SDK, enabling production-grade traffic controls: rate limiting, concurrency control, execution timeouts, and IP filtering for tools and agents. It also includes comprehensive E2E test coverage (browser, CLI, combined guards, timeout/concurrency scenarios) and detailed documentation.

Changes

  • Guard Library Core (libs/guard/src/rate-limit/*, libs/guard/src/concurrency/*, libs/guard/src/timeout/*, libs/guard/src/ip-filter/*, libs/guard/src/partition-key/*, libs/guard/src/errors/*, libs/guard/src/manager/*, libs/guard/src/schemas/*, libs/guard/src/index.ts): Complete guard library implementation with sliding-window rate limiter, distributed semaphore for concurrency, timeout wrapper, CIDR-based IP filtering, partition key resolution, error hierarchy, and centralized GuardManager orchestrating all features.
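The sliding-window rate limiter named here is commonly approximated from two fixed-window counters, which matches the mget(currentWindow, previousWindow) storage call in the sequence diagram further down: the previous window's count is weighted by how much of it still overlaps the sliding window. A minimal sketch of that estimate (the function name and parameters are illustrative, not the library's API):

```typescript
// Sliding-window estimate from two fixed-window counters.
// prevCount/currCount: request counts in the previous and current windows.
// elapsedMs: time elapsed inside the current window; windowMs: window length.
function slidingWindowCount(
  prevCount: number,
  currCount: number,
  elapsedMs: number,
  windowMs: number,
): number {
  // Fraction of the previous window that still falls inside the sliding window.
  const overlap = (windowMs - elapsedMs) / windowMs;
  return prevCount * overlap + currCount;
}

// Halfway through the current window, half of the previous window still counts:
slidingWindowCount(10, 3, 500, 1_000); // 10 * 0.5 + 3 = 8
```

A request would then be allowed when this estimate stays below maxRequests; the two-counter scheme avoids storing a timestamp per request while smoothing the burst-at-window-boundary problem of plain fixed windows.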
  • Guard Library Configuration & Package (libs/guard/package.json, libs/guard/project.json, libs/guard/tsconfig*, libs/guard/jest.config.ts, libs/guard/README.md): Package metadata, build targets (CJS/ESM), Jest test configuration with 90-95% coverage thresholds, TypeScript configuration, and comprehensive README documenting all guard features.
  • Guard Library Tests (libs/guard/src/**/__tests__/*, libs/guard/src/**/README.md): Extensive test suites covering rate limiting (basic, window reset, per-tool isolation), concurrency (mutex, queued, queueing behavior), timeout handling, IP filtering (IPv4/IPv6/CIDR, precedence rules), partition key resolution, and error classes with full coverage.
  • Guard E2E Tests - Core Suites (apps/e2e/demo-e2e-guard/e2e/guard-*.e2e.spec.ts): Multiple E2E test specs validating global and per-tool rate limiting, concurrency control (mutex/queued), timeout behavior, combined guards, IP filtering placeholders, and CLI execution without enforcement.
  • Guard E2E Tests - Browser (apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts): Playwright browser-based E2E tests exercising MCP JSON-RPC over HTTP, validating rate limiting, timeouts, and per-tool isolation via direct HTTP requests to the /mcp endpoint.
  • Guard E2E Infrastructure (apps/e2e/demo-e2e-guard/e2e/helpers/exec-cli.ts, apps/e2e/demo-e2e-guard/jest.*.config.ts, apps/e2e/demo-e2e-guard/playwright.config.ts): Helper utilities for CLI execution, Jest configurations for E2E and CLI tests with module aliases, and Playwright configuration for browser testing with webServer startup.
  • Guard Demo Application (apps/e2e/demo-e2e-guard/src/main.ts, apps/e2e/demo-e2e-guard/src/apps/guard/*, apps/e2e/demo-e2e-guard/fixture/src/main.ts, apps/e2e/demo-e2e-guard/fixture/frontmcp.config.js): Demo server implementations with guard-enabled tools (rate-limited, timeout, concurrency mutex/queued, combined guard, slow-tool, unguarded) and a CLI entry point with guard configurations.
  • Guard Demo Configuration (apps/e2e/demo-e2e-guard/tsconfig*, apps/e2e/demo-e2e-guard/webpack.config.js, apps/e2e/demo-e2e-guard/project.json): TypeScript and webpack configurations for building the demo application, plus NX project configuration with build, serve, test, and Playwright targets.
  • SDK Integration - Rate Limit & Guard (libs/sdk/src/rate-limit/index.ts, libs/sdk/package.json, libs/sdk/project.json, libs/sdk/src/index.ts, libs/sdk/tsconfig.lib.json): SDK package updates adding the @frontmcp/guard dependency, re-exports of the guard API, a backward-compatible ThrottleConfig alias, and TypeScript path aliases for the guard module.
  • SDK Integration - Metadata & Tokens (libs/sdk/src/common/metadata/tool.metadata.ts, libs/sdk/src/common/metadata/agent.metadata.ts, libs/sdk/src/common/metadata/front-mcp.metadata.ts, libs/sdk/src/common/tokens/tool.tokens.ts, libs/sdk/src/common/tokens/agent.tokens.ts, libs/sdk/src/common/tokens/front-mcp.tokens.ts): Added rateLimit, concurrency, and timeout properties to tool and agent metadata; added throttle to base metadata; corresponding token declarations and Zod schema validations.
  • SDK Integration - Flow Changes (libs/sdk/src/tool/flows/call-tool.flow.ts, libs/sdk/src/agent/flows/call-agent.flow.ts, libs/sdk/src/scope/flows/http.request.flow.ts, libs/sdk/src/scope/scope.instance.ts): Integrated guard checks into tool/agent execution flows: acquireQuota (rate limiting), acquireSemaphore (concurrency), timeout-wrapped execution, and releaseSemaphore. Scope now initializes GuardManager and provides rateLimitManager for HTTP global rate limits.
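The acquire/execute/release sequence described here generally needs a try/finally so the semaphore is released even when execution throws or times out. A hedged sketch using the operation names from this summary (the GuardManagerLike interface and guardedExecute helper are illustrative, not the SDK's actual types):

```typescript
// Illustrative interfaces — the real GuardManager API may differ.
interface GuardManagerLike {
  checkRateLimit(key: string): Promise<{ allowed: boolean; retryAfterMs?: number }>;
  acquireSemaphore(key: string): Promise<{ release(): Promise<void> }>;
}

async function guardedExecute<T>(
  manager: GuardManagerLike,
  key: string,
  execute: () => Promise<T>,
): Promise<T> {
  // 1. Rate-limit check (acquireQuota stage).
  const quota = await manager.checkRateLimit(key);
  if (!quota.allowed) {
    throw new Error(`Rate limit exceeded; retry after ${quota.retryAfterMs ?? 0}ms`);
  }
  // 2. Concurrency slot (acquireSemaphore stage).
  const ticket = await manager.acquireSemaphore(key);
  try {
    // 3. In the real flow this call is additionally wrapped with a timeout.
    return await execute();
  } finally {
    // 4. releaseSemaphore stage — must run even on error or timeout.
    await ticket.release();
  }
}
```

The try/finally ordering is what keeps the Storage decr(activeCount) in the sequence diagram reliable: without it, a thrown ExecutionTimeoutError would leak a semaphore slot.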
  • Documentation (docs/frontmcp/servers/guard.mdx, docs/frontmcp/guides/rate-limiting-and-guards.mdx, docs/frontmcp/sdk-reference/guard.mdx, docs/docs.json, libs/guard/README.md): Comprehensive guides covering Guard architecture, quick starts, rate limiting strategies, concurrency control, timeouts, IP filtering, storage backends (in-memory/Redis/Vercel KV), full API reference, configuration precedence, and integration patterns.
  • Testing Infrastructure (libs/testing/src/server/port-registry.ts, eslint.config.mjs, tsconfig.base.json): Port registry entry for demo-e2e-guard (50400-50409), a path alias for @frontmcp/guard, and ESLint ignores for fixture directories.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant HTTP Request Flow
    participant Tool Call Flow
    participant GuardManager
    participant Storage
    
    Client->>HTTP Request Flow: HTTP POST /mcp
    HTTP Request Flow->>GuardManager: checkGlobalRateLimit()
    GuardManager->>Storage: mget(currentWindow, previousWindow)
    Storage-->>GuardManager: counts
    GuardManager-->>HTTP Request Flow: RateLimitResult
    
    alt Rate Limit Exceeded
        HTTP Request Flow-->>Client: 429 with Retry-After
    else Allowed
        HTTP Request Flow->>Tool Call Flow: acquireQuota()
        Tool Call Flow->>GuardManager: checkRateLimit(tool)
        GuardManager->>Storage: mget(currentWindow, previousWindow)
        Storage-->>GuardManager: counts
        GuardManager-->>Tool Call Flow: RateLimitResult
        
        alt Per-Tool Limit Exceeded
            Tool Call Flow-->>Client: 429 Error
        else Allowed
            Tool Call Flow->>GuardManager: acquireSemaphore(tool)
            GuardManager->>Storage: incr(activeCount)
            Storage-->>GuardManager: SemaphoreTicket
            GuardManager-->>Tool Call Flow: SemaphoreTicket
            
            Tool Call Flow->>Tool Call Flow: execute() with timeout
            alt Timeout Exceeded
                Tool Call Flow-->>Client: 408 ExecutionTimeoutError
            else Success
                Tool Call Flow-->>Client: Result
            end
            
            Tool Call Flow->>GuardManager: release(ticket)
            GuardManager->>Storage: decr(activeCount)
        end
    end

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

  • PR #120 – Modifies FrontMcpContext and flow/scope integrations for context-aware features used by Guard/throttle changes
  • PR #6 – Introduces the foundational flow/runner refactor that the Guard changes build upon
  • PR #57 – Modifies call-tool.flow.ts for early tool lookup; overlaps with Guard's rate-limit/concurrency logic in the same file

Poem

🐰 A guard so wise, with limits set,
Rate-limits, timeouts, no regret,
Semaphores queue with gentle care,
IP filters screen who may dare,
From chaos tamed to order blessed,
The FrontMCP flows now rest. 🛡️



github-actions bot commented Mar 16, 2026

Performance Test Results

Status: ✅ All tests passed

Summary

| Project | Tests | Passed | Warnings | Failed | Leaks |
| --- | --- | --- | --- | --- | --- |
| ✅ demo-e2e-agents | 4 | 4 | 0 | 0 | 0 |
| ✅ demo-e2e-cache | 11 | 11 | 0 | 0 | 0 |
| ✅ demo-e2e-codecall | 4 | 4 | 0 | 0 | 0 |
| ✅ demo-e2e-config | 4 | 4 | 0 | 0 | 0 |
| ✅ demo-e2e-direct | 3 | 3 | 0 | 0 | 0 |
| ✅ demo-e2e-elicitation | 1 | 1 | 0 | 0 | 0 |
| ✅ demo-e2e-errors | 4 | 4 | 0 | 0 | 0 |
| ✅ demo-e2e-hooks | 3 | 3 | 0 | 0 | 0 |
| ✅ demo-e2e-multiapp | 4 | 4 | 0 | 0 | 0 |
| ✅ demo-e2e-notifications | 3 | 3 | 0 | 0 | 0 |
| ✅ demo-e2e-openapi | 2 | 2 | 0 | 0 | 0 |
| ✅ demo-e2e-providers | 4 | 4 | 0 | 0 | 0 |
| ✅ demo-e2e-public | 4 | 4 | 0 | 0 | 0 |
| ✅ demo-e2e-redis | 14 | 14 | 0 | 0 | 0 |
| ✅ demo-e2e-remember | 4 | 4 | 0 | 0 | 0 |
| ✅ demo-e2e-remote | 5 | 5 | 0 | 0 | 0 |
| ✅ demo-e2e-serverless | 2 | 2 | 0 | 0 | 0 |
| ✅ demo-e2e-skills | 15 | 15 | 0 | 0 | 0 |
| ✅ demo-e2e-standalone | 2 | 2 | 0 | 0 | 0 |
| ✅ demo-e2e-transport-recreation | 3 | 3 | 0 | 0 | 0 |
| ✅ demo-e2e-ui | 4 | 4 | 0 | 0 | 0 |

Total: 100 tests across 21 projects

📊 View full report in workflow run


Generated at: 2026-03-16T02:29:31.911Z
Commit: 45764670


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

🟠 Major comments (16)
libs/guard/src/partition-key/partition-key.resolver.ts-27-28 (1)

27-28: ⚠️ Potential issue | 🟠 Major

Avoid a shared 'unknown-ip' partition.

Unlike the userId branch on Line 32, Line 28 collapses every caller without clientIp into the same key. That turns per-client guards into a shared throttle. Falling back to sessionId when available—or rejecting partitionBy: 'ip' when the IP is missing—preserves isolation.

🔧 Minimal fallback
     case 'ip':
-      return ctx.clientIp ?? 'unknown-ip';
+      return ctx.clientIp ?? ctx.sessionId;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/guard/src/partition-key/partition-key.resolver.ts` around lines 27 - 28,
The 'ip' branch currently collapses missing IPs into a shared 'unknown-ip' key;
update the case handling in partition-key.resolver.ts (the switch branch for
"ip") to preserve per-client isolation by returning ctx.clientIp ??
ctx.sessionId and, if neither exists, fail fast (throw a clear error like
'partitionBy "ip" requires clientIp or sessionId') instead of returning a shared
constant; this keeps behavior consistent with the userId branch and avoids a
shared throttle across clients.
docs/frontmcp/servers/guard.mdx-249-259 (1)

249-259: ⚠️ Potential issue | 🟠 Major

Add a spoofing warning around trustProxy.

This section normalizes trustProxy: true without calling out that it is only safe behind infrastructure you control. Otherwise clients can forge X-Forwarded-For and bypass IP filtering or any IP-partitioned rate limits.

Also applies to: 286-294

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/frontmcp/servers/guard.mdx` around lines 249 - 259, The docs show
ipFilter with trustProxy: true but omit a spoofing warning; update the
FrontMcp/ipFilter documentation to call out that setting trustProxy: true is only
safe when running behind infrastructure you control (load balancers/proxies that
properly sanitize X-Forwarded-For), otherwise clients can forge X-Forwarded-For
and bypass IP filters or rate limits; mention trustedProxyDepth must be
configured to match the number of trusted hops and recommend defaulting to false
or making trustProxy configurable, and add a short advisory next to the
trustProxy and trustedProxyDepth fields referencing security implications and
deployment guidance.
libs/guard/jest.config.ts-28-35 (1)

28-35: ⚠️ Potential issue | 🟠 Major

The coverage gate is still below the repo bar.

branches is set to 90, and without collectCoverageFrom unimported guard modules can sit outside coverage entirely. That weakens the 95% requirement for a brand-new library surface.

Suggested config tightening
 module.exports = {
   displayName: 'guard',
   preset: '../../jest.preset.js',
   testEnvironment: 'node',
@@
   },
   moduleFileExtensions: ['ts', 'js', 'html'],
+  collectCoverageFrom: [
+    '<rootDir>/src/**/*.ts',
+    '!<rootDir>/src/**/__tests__/**',
+  ],
   coverageDirectory: '../../coverage/unit/guard',
   coverageThreshold: {
     global: {
       statements: 95,
-      branches: 90,
+      branches: 95,
       functions: 95,
       lines: 95,
     },
   },
 };
As per coding guidelines, "Maintain 95%+ test coverage across all metrics (statements, branches, functions, lines) in Jest tests."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/guard/jest.config.ts` around lines 28 - 35, The Jest config's
coverageThreshold currently sets branches to 90 and lacks collectCoverageFrom,
allowing unimported modules to avoid coverage; update the coverageThreshold so
branches: 95 (matching statements/functions/lines) and add a collectCoverageFrom
array (e.g., "src/**/*.ts" and exclude any generated/index barrels as needed) to
ensure all guard modules are included in coverage collection; modify the jest
config entries coverageThreshold and collectCoverageFrom accordingly to enforce
the 95% repo bar for statements, branches, functions, and lines.
apps/e2e/demo-e2e-guard/e2e/guard-ip-filter.e2e.spec.ts-4-7 (1)

4-7: ⚠️ Potential issue | 🟠 Major

Tests remain unimplemented as placeholder test.todo() calls.

Lines 4–6 confirm IP filter is not yet wired into the SDK flow stages (call-tool.flow.ts, http.request.flow.ts), and lines 17–20 contain only test.todo() placeholders. Implement concrete test cases once the SDK integration is complete, or update the comment to clarify whether this feature is deferred to a follow-up PR.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/e2e/demo-e2e-guard/e2e/guard-ip-filter.e2e.spec.ts` around lines 4 - 7,
The placeholder tests in guard-ip-filter.e2e.spec.ts currently use test.todo()
because GuardManager.checkIpFilter isn’t wired into the SDK flows
(call-tool.flow.ts, http.request.flow.ts); either implement real E2E tests after
wiring by replacing test.todo() with concrete scenarios that exercise
GuardManager.checkIpFilter through the SDK (mock or start flows in
call-tool.flow.ts and http.request.flow.ts and assert allow/deny behavior), or
update the file comment to explicitly state this is deferred to a follow-up PR
and add a TODO referencing GuardManager.checkIpFilter and the two flow files so
reviewers know where to hook tests later.
libs/sdk/src/scope/flows/http.request.flow.ts-172-184 (1)

172-184: ⚠️ Potential issue | 🟠 Major

Short-circuit the flow on global-rate-limit rejection and include JSON-RPC id.

After sending the 429 response, the flow should terminate immediately. Also, include the request id in the JSON-RPC error payload for correct client correlation.

Suggested fix
   const result = await manager.checkGlobalRateLimit(partitionCtx);
   if (!result.allowed) {
+    const requestId = (this.rawInput.request.body as { id?: string | number | null } | undefined)?.id ?? null;
     const retryAfter = Math.ceil((result.retryAfterMs ?? 60_000) / 1000);
     this.respond(
       httpRespond.json(
         {
           jsonrpc: '2.0',
+          id: requestId,
           error: { code: -32029, message: `Rate limit exceeded. Retry after ${retryAfter} seconds` },
         },
         { status: 429, headers: { 'Retry-After': String(retryAfter) } },
       ),
     );
+    return this.handled();
   }

As per coding guidelines: MCP Response Types should use strictly typed MCP protocol types (e.g., Promise<GetPromptResult>, Promise<ReadResourceResult>), never use unknown for MCP protocol types.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/scope/flows/http.request.flow.ts` around lines 172 - 184, The
global rate-limit branch in manager.checkGlobalRateLimit currently calls
this.respond(httpRespond.json(...)) but does not short-circuit the flow or
include the JSON-RPC request id; fix by immediately returning after the respond
(add return) and include the incoming JSON-RPC request id in the error payload
(include "id": <incoming request id>, e.g., from partitionCtx.request.id or the
flow's RPC request variable) inside the error object passed to httpRespond.json;
keep the 429 status and Retry-After header and ensure the message stays as is.
apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts-16-57 (1)

16-57: ⚠️ Potential issue | 🟠 Major

Replace unsafe request parameter typing with APIRequestContext.

The complex conditional type and unknown cast remove TypeScript safety and can mask API misuse. Import and use Playwright's APIRequestContext directly for both helper functions.

Suggested fix
-import { test, expect } from '@playwright/test';
+import { test, expect, type APIRequestContext } from '@playwright/test';
@@
-async function initializeSession(request: ReturnType<typeof test.extend>['request'] extends infer R ? R : never) {
-  const response = await (request as { post: Function }).post(MCP_ENDPOINT, {
+async function initializeSession(request: APIRequestContext) {
+  const response = await request.post(MCP_ENDPOINT, {
@@
 async function callTool(
-  request: unknown,
+  request: APIRequestContext,
   sessionId: string,
   toolName: string,
   args: Record<string, unknown>,
   id: string | number = 'call-1',
 ) {
-  return (request as { post: Function }).post(MCP_ENDPOINT, {
+  return request.post(MCP_ENDPOINT, {

Per coding guidelines: Use strict TypeScript type checking with no any types without justification; use unknown instead of any for generic type defaults.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts` around lines 16
- 57, The helper functions initializeSession and callTool use unsafe/complex
typings and unknown casts for the request parameter; import Playwright's
APIRequestContext and change the request parameter type to APIRequestContext in
both functions, remove the manual casts like (request as { post: Function }) and
call request.post(...) directly, keeping the same MCP_ENDPOINT payload and
headers; this ensures proper typing for request.post and response.headers() and
satisfies strict TypeScript checks.
libs/guard/src/manager/guard.factory.ts-38-46 (1)

38-46: ⚠️ Potential issue | 🟠 Major

Remove the unnecessary as unknown as string cast on line 46.

The GuardLogger.info signature is info(msg: string, ...args: unknown[]): void;, which already accepts the metadata object as a rest argument of type unknown. The double cast serves no purpose, violates strict TypeScript typing guidelines, and misleads readers into thinking the object needs string conversion when it doesn't.

Fix
  logger?.info('GuardManager initialized', {
    keyPrefix,
    hasGlobalRateLimit: !!config.global,
    hasGlobalConcurrency: !!config.globalConcurrency,
    hasDefaultRateLimit: !!config.defaultRateLimit,
    hasDefaultConcurrency: !!config.defaultConcurrency,
    hasDefaultTimeout: !!config.defaultTimeout,
    hasIpFilter: !!config.ipFilter,
- } as unknown as string);
+ });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/guard/src/manager/guard.factory.ts` around lines 38 - 46, The log call
in GuardManager uses an unnecessary double-cast on the metadata object; remove
the "as unknown as string" from the logger?.info invocation so the object is
passed directly as the second argument (the rest/unknown[] param) to
GuardLogger.info; update the call in guard.factory.ts where
logger?.info('GuardManager initialized', { keyPrefix, hasGlobalRateLimit:
!!config.global, hasGlobalConcurrency: !!config.globalConcurrency,
hasDefaultRateLimit: !!config.defaultRateLimit, hasDefaultConcurrency:
!!config.defaultConcurrency, hasDefaultTimeout: !!config.defaultTimeout,
hasIpFilter: !!config.ipFilter }) to pass the object without any casting.
libs/guard/src/timeout/timeout.ts-12-24 (1)

12-24: ⚠️ Potential issue | 🟠 Major

Background promise execution continues after timeout, allowing side effects even when caller receives error.

The withTimeout function rejects the wrapper promise when timeout expires via Promise.race, but the underlying fn() promise continues executing. In call-tool.flow.ts:584 and call-agent.flow.ts:445, when doExecute calls toolContext.execute(), the operation can complete after the caller receives ExecutionTimeoutError, resulting in silently completed work despite the timeout.

To prevent background execution, pass the AbortSignal to fn() so callers can respect cancellation:

Proposed fix
-export async function withTimeout<T>(fn: () => Promise<T>, timeoutMs: number, entityName: string): Promise<T> {
+export async function withTimeout<T>(
+  fn: (signal: AbortSignal) => Promise<T>,
+  timeoutMs: number,
+  entityName: string,
+): Promise<T> {
   const controller = new AbortController();
   const timer = setTimeout(() => controller.abort(), timeoutMs);

   try {
     return await Promise.race([
-      fn(),
+      fn(controller.signal),
       new Promise<never>((_, reject) => {
         controller.signal.addEventListener('abort', () => {
           reject(new ExecutionTimeoutError(entityName, timeoutMs));
-        });
+        }, { once: true });
       }),
     ]);
   } finally {
     clearTimeout(timer);
   }
 }

Call sites must be updated to accept the signal and respect abort (e.g., pass to fetch/http calls, check signal.aborted in loops, or propagate to nested operations).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/guard/src/timeout/timeout.ts` around lines 12 - 24, withTimeout
currently races fn() against an abort promise but does not cancel the underlying
work; change withTimeout<T> to accept fn: (signal: AbortSignal) => Promise<T>,
create an AbortController, call fn(controller.signal) inside the race, and still
reject with new ExecutionTimeoutError(entityName, timeoutMs) when
controller.abort(); ensure you clear the timer and remove the abort event
listener in a finally block to avoid leaks. Update all call sites (e.g.,
doExecute / toolContext.execute in call-tool.flow.ts and call-agent.flow.ts) to
accept and propagate the AbortSignal and to abort/respect it in fetch/http
calls, loops, or nested operations so background execution stops after timeout.
apps/e2e/demo-e2e-guard/e2e/guard-concurrency.e2e.spec.ts-40-41 (1)

40-41: ⚠️ Potential issue | 🟠 Major

Assert payload success, not only settled status, for “should succeed” cases.

At Lines 40-41 and 111-112, fulfilled alone is insufficient; these tests should also assert toBeSuccessful() on fulfilled values.

💡 Proposed fix
       // First should succeed
       expect(result1.status).toBe('fulfilled');
+      if (result1.status === 'fulfilled') {
+        expect(result1.value).toBeSuccessful();
+      }
...
       // First should succeed (eventually)
       expect(result1.status).toBe('fulfilled');
+      if (result1.status === 'fulfilled') {
+        expect(result1.value).toBeSuccessful();
+      }

Also applies to: 111-112

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/e2e/demo-e2e-guard/e2e/guard-concurrency.e2e.spec.ts` around lines 40 -
41, The test currently only asserts PromiseSettledResult.status === 'fulfilled'
for the "should succeed" cases (e.g., result1 at the top of the file and the
similar check around result2 at the later block); update each fulfilled
assertion to also assert that the fulfilled value is successful by calling
toBeSuccessful() on the fulfilled value (e.g., after
expect(result1.status).toBe('fulfilled') add expect((result1 as
PromiseFulfilledResult<any>).value).toBeSuccessful(), and do the same for the
other fulfilled result in the later block) so the test checks payload success,
not just settled status.
apps/e2e/demo-e2e-guard/e2e/guard-combined.e2e.spec.ts-70-73 (1)

70-73: ⚠️ Potential issue | 🟠 Major

Strengthen success assertions for the first two calls.

At Lines 70-73, checking only fulfilled does not guarantee business success; a fulfilled MCP response can still be an error payload.

💡 Proposed fix
       // First two should succeed
       expect(results[0].status).toBe('fulfilled');
       expect(results[1].status).toBe('fulfilled');
+      if (results[0].status === 'fulfilled') {
+        expect(results[0].value).toBeSuccessful();
+      }
+      if (results[1].status === 'fulfilled') {
+        expect(results[1].value).toBeSuccessful();
+      }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/e2e/demo-e2e-guard/e2e/guard-combined.e2e.spec.ts` around lines 70 - 73,
The assertions only check PromiseSettledResults are 'fulfilled' but not that the
MCP responses indicate business success; update the assertions for results[0]
and results[1] to inspect their value payloads (e.g., results[0].value and
results[1].value) and assert expected success indicators such as a 200 status
code and/or a success flag (for example assert value.statusCode === 200 and
value.body.error is undefined, or value.success === true), rather than only
checking .status === 'fulfilled'.
libs/sdk/src/agent/flows/call-agent.flow.ts-367-373 (1)

367-373: ⚠️ Potential issue | 🟠 Major

This bypasses top-level semaphore defaults for agents without local config.

Returning here when agent.metadata.concurrency is absent means defaultConcurrency and globalConcurrency never apply unless every agent opts in individually. The flow should still enter the guard manager, or add a separate path for global/default semaphore acquisition.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/agent/flows/call-agent.flow.ts` around lines 367 - 373, The
early return when agent.metadata.concurrency is falsy skips applying
defaultConcurrency/globalConcurrency; modify the logic in call-agent.flow.ts
(around the block using this.state.required.agent and
agent.metadata.concurrency) so that instead of returning you route through the
guard/semaphore manager: if agent.metadata.concurrency is missing, resolve to
the top-level concurrency config (globalConcurrency or defaultConcurrency) and
invoke the same guard acquisition path (the guard manager / acquireSemaphore
mark and semaphore acquisition code) so global defaults are applied; ensure you
still call this.state.agentContext?.mark('acquireSemaphore') and use the same
semaphore acquisition functions used for agents with local config.
libs/sdk/src/tool/flows/call-tool.flow.ts-487-493 (1)

487-493: ⚠️ Potential issue | 🟠 Major

Use a scope-qualified guard key.

tool.metadata.name can collide across apps/scopes. With the current key, same-named tools share rate-limit buckets and semaphore slots, so one app can throttle or block another. Use tool.fullName or another guaranteed-unique identifier for the guard manager calls.

Also applies to: 528-534

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/tool/flows/call-tool.flow.ts` around lines 487 - 493, The
rate-limiting key currently uses tool.metadata.name which can collide across
scopes; update both calls that interact with the guard manager
(manager.checkRateLimit and the other manager.acquire/guard-related call around
the second block) to use a scope-qualified identifier such as tool.fullName (or
another guaranteed-unique id) instead of tool.metadata.name so each tool's
rate-limit and semaphore buckets are unique per app/scope; replace all uses of
tool.metadata.name in these guard-manager calls (including the logged context)
with tool.fullName and keep the existing retry/log/RateLimitError behavior
unchanged.
libs/sdk/src/agent/flows/call-agent.flow.ts-344-346 (1)

344-346: ⚠️ Potential issue | 🟠 Major

Use a scope-qualified guard key for agents.

agent.metadata.name is not unique across apps/scopes, so same-named agents will share rate-limit buckets and semaphore slots. That lets one app throttle or block another's agent traffic. Use agent.fullName or another guaranteed-unique identifier for the guard manager calls.

Also applies to: 384-386

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/agent/flows/call-agent.flow.ts` around lines 344 - 346, The
rate-limit guard is using a non-unique key (agent.metadata.name) so replace that
with the scope-qualified unique identifier (agent.fullName) wherever the guard
manager is called (e.g., manager.checkRateLimit(...) and the related
semaphore/reservation calls around the other referenced lines such as
manager.reserveSemaphore(...)/manager.releaseSemaphore(...)); update the
checkRateLimit call to pass agent.fullName instead of agent.metadata.name and do
the same substitution for all other guard manager invocations so each agent’s
quota/semaphore is partitioned by fullName.
libs/sdk/src/tool/flows/call-tool.flow.ts-511-517 (1)

511-517: ⚠️ Potential issue | 🟠 Major

This makes top-level semaphore defaults unreachable for tools without local config.

Returning here when tool.metadata.concurrency is absent means defaultConcurrency and globalConcurrency never apply to those tools. The flow should still enter the guard manager and let it resolve the effective semaphore policy.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/tool/flows/call-tool.flow.ts` around lines 511 - 517, The early
return when tool.metadata.concurrency is falsy prevents fallback to top-level
defaults; remove the return so the flow continues into the guard manager
(keeping the existing this.state.toolContext?.mark('acquireSemaphore') and
this.logger.verbose call) and pass the possibly-undefined config through so the
guard manager can resolve effective semaphore policy using
defaultConcurrency/globalConcurrency; update code around the check of const
config = tool.metadata.concurrency in call-tool.flow (the acquireSemaphore
logic) to avoid short-circuiting when config is missing.
libs/guard/src/ip-filter/ip-filter.ts-107-115 (1)

107-115: ⚠️ Potential issue | 🟠 Major

Validate octet and group format before parsing to prevent truncation bypasses.

parseInt('1foo', 10) returns 1, allowing malformed IPs like 10.0.0.1foo to be treated as valid 10.0.0.1. On an access-control boundary, this is a security bypass. Validate the exact format (1–3 decimal digits for IPv4 octets, 1–4 hex digits for IPv6 groups) before conversion:

  • IPv4: Add if (!/^\d{1,3}$/.test(part)) return null; before parseInt
  • IPv6: Add if (!/^[0-9a-fA-F]{1,4}$/.test(group)) return null; before parseInt

This applies to both parseIpv4 (lines 107–115) and parseIpv6 (lines 146–149).
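
The truncation behavior behind this finding is easy to reproduce, and the fix is a strict format check before conversion. A sketch of the IPv4 side (strictParseIpv4 is an illustrative helper, not the library's parseIpv4):

```typescript
// parseInt silently truncates trailing garbage:
parseInt('1foo', 10); // → 1, so '10.0.0.1foo' would parse as 10.0.0.1

// Strict variant: validate the exact octet format before converting.
function strictParseIpv4(ip: string): number[] | null {
  const parts = ip.split('.');
  if (parts.length !== 4) return null;
  const octets: number[] = [];
  for (const part of parts) {
    if (!/^\d{1,3}$/.test(part)) return null; // reject '1foo', '', '1e2'
    const n = parseInt(part, 10);
    if (n > 255) return null; // keep the numeric-range check after the regex
    octets.push(n);
  }
  return octets;
}

strictParseIpv4('10.0.0.1');    // [10, 0, 0, 1]
strictParseIpv4('10.0.0.1foo'); // null — rejected instead of truncated
```

The IPv6 side is the same idea with /^[0-9A-Fa-f]{1,4}$/ per group, as the comment above suggests.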

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/guard/src/ip-filter/ip-filter.ts` around lines 107 - 115, The parsing
functions currently allow truncated numeric parsing (e.g., "1foo" -> 1), so
update parseIpv4 and parseIpv6 to validate each octet/group with strict regex
before calling parseInt/parseInt/BigInt conversion: in parseIpv4 add a check
like /^\d{1,3}$/ on each part and return null if it fails, and in parseIpv6 add
a check like /^[0-9A-Fa-f]{1,4}$/ on each group and return null if it fails;
keep the existing numeric-range checks after the regex validation and then
perform the conversion as before in the functions parseIpv4 and parseIpv6.
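A standalone sketch of the validated parse (parseIpv4Strict is an illustrative name, not the library's internal function; it only demonstrates the regex-before-parseInt ordering the finding asks for):

```typescript
// Hypothetical strict IPv4 parser: each octet must be exactly 1-3 decimal
// digits before parseInt runs, so "1foo" can no longer truncate-parse to 1.
function parseIpv4Strict(ip: string): number | null {
  const parts = ip.split('.');
  if (parts.length !== 4) return null;
  let value = 0;
  for (const part of parts) {
    // Reject anything that is not purely decimal digits.
    if (!/^\d{1,3}$/.test(part)) return null;
    const octet = parseInt(part, 10);
    if (octet > 255) return null;
    value = value * 256 + octet;
  }
  return value >>> 0;
}

// parseIpv4Strict('10.0.0.1')    → 167772161
// parseIpv4Strict('10.0.0.1foo') → null (rejected by the regex)
```

The same shape applies to parseIpv6, with `/^[0-9a-fA-F]{1,4}$/` guarding each group before the hex conversion.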
libs/sdk/src/common/metadata/agent.metadata.ts-491-501 (1)

491-501: ⚠️ Potential issue | 🟠 Major

Align runtime schema validation with declared guard configuration types.

The rateLimit, concurrency, and timeout fields declare full type compatibility with RateLimitConfig, ConcurrencyConfig, and TimeoutConfig, but the runtime schemas only validate required fields and allow arbitrary extra properties via z.looseObject(). This lets invalid configurations pass validation and fail later during execution. Use the shared guard schemas as tool.metadata.ts already does.

Suggested fix
+import type { RateLimitConfig, ConcurrencyConfig, TimeoutConfig } from '@frontmcp/guard';
+import { rateLimitConfigSchema, concurrencyConfigSchema, timeoutConfigSchema } from '@frontmcp/guard';
...
-  rateLimit?: import('@frontmcp/guard').RateLimitConfig;
+  rateLimit?: RateLimitConfig;
...
-  concurrency?: import('@frontmcp/guard').ConcurrencyConfig;
+  concurrency?: ConcurrencyConfig;
...
-  timeout?: import('@frontmcp/guard').TimeoutConfig;
+  timeout?: TimeoutConfig;
...
-    rateLimit: z.looseObject({ maxRequests: z.number() }).optional(),
-    concurrency: z.looseObject({ maxConcurrent: z.number() }).optional(),
-    timeout: z.looseObject({ executeMs: z.number() }).optional(),
+    rateLimit: rateLimitConfigSchema.optional(),
+    concurrency: concurrencyConfigSchema.optional(),
+    timeout: timeoutConfigSchema.optional(),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/common/metadata/agent.metadata.ts` around lines 491 - 501, The
runtime schemas for the agent metadata fields rateLimit, concurrency, and
timeout currently use z.looseObject() and therefore allow extra/invalid
properties; replace those loose validators with the shared guard schemas used in
tool.metadata.ts so the runtime validation matches the declared types
(RateLimitConfig, ConcurrencyConfig, TimeoutConfig). Import the corresponding
guard schema exports from the shared guard module (the same names used in
tool.metadata.ts) and use them for the rateLimit, concurrency, and timeout
schema entries inside agent.metadata.ts, removing z.looseObject() wrappers so
invalid shapes are rejected at validation time.
🟡 Minor comments (4)
apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts-23-23 (1)

23-23: ⚠️ Potential issue | 🟡 Minor

Use the testing library's protocol version constant instead of hardcoding.

The test hardcodes '2025-03-26', but @frontmcp/testing provides DEFAULT_PROTOCOL_VERSION = '2025-06-18'. Either import and reuse this constant, or configure it via test client options to maintain consistency across the codebase and avoid version drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts` at line 23,
Replace the hardcoded protocol version string with the testing library constant:
import { DEFAULT_PROTOCOL_VERSION } from "@frontmcp/testing" (or reference it from
the test client options) and use DEFAULT_PROTOCOL_VERSION in place of the
literal '2025-03-26' in the guard-browser.pw.spec.ts test configuration so the
test uses the canonical protocol version constant.
libs/sdk/src/scope/flows/http.request.flow.ts-160-160 (1)

160-160: ⚠️ Potential issue | 🟡 Minor

Use a single type cast instead of the double cast for scope access.

(this.scope as unknown as Scope) bypasses type safety unnecessarily. Since Scope extends ScopeEntry, a single cast (this.scope as Scope) is both type-safe and consistent with the pattern used throughout the codebase in call-tool.flow.ts and call-agent.flow.ts.

Current code
const manager = (this.scope as unknown as Scope).rateLimitManager;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/scope/flows/http.request.flow.ts` at line 160, Replace the
double type cast on this.scope with a single, direct cast to Scope to avoid
bypassing type safety: change uses of (this.scope as unknown as Scope) where you
access rateLimitManager to (this.scope as Scope).rateLimitManager so it matches
the pattern used in call-tool.flow.ts and call-agent.flow.ts; ensure any other
occurrences in http.request.flow.ts follow the same single-cast approach.
apps/e2e/demo-e2e-guard/src/main.ts-4-4 (1)

4-4: ⚠️ Potential issue | 🟡 Minor

Validate PORT before wiring it into http.port to prevent NaN failures during server bootstrap.

Line 4 parses PORT without NaN validation. A non-numeric value (e.g., PORT=abc) produces NaN, causing startup to fail. Other similar demo-e2e files implement validation using patterns like Number.isNaN() or Number.isFinite() checks; this file should match that pattern for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/e2e/demo-e2e-guard/src/main.ts` at line 4, The PORT parsing in main.ts
currently assigns const port = parseInt(process.env['PORT'] ?? '50400', 10)
without validating the result; update the code that constructs the port value
(the const port variable used for http.port) to validate that parseInt returned
a finite number (e.g., using Number.isFinite or !Number.isNaN) and if invalid
fall back to the default numeric port 50400 or exit with a clear error; ensure
the validated numeric port is what gets passed into http.port so non-numeric
PORT values do not produce NaN at server bootstrap.
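A validated parse along the lines used by the other demo-e2e apps might look like this (a sketch; resolvePort is a hypothetical helper name, and 50400 mirrors the demo's registered default):

```typescript
// Hypothetical PORT resolver: rejects NaN and out-of-range values instead
// of passing them straight into http.port.
function resolvePort(raw: string | undefined, fallback = 50400): number {
  const parsed = Number.parseInt(raw ?? '', 10);
  // Number.isFinite filters out NaN; the range check rejects 0 and >65535.
  if (!Number.isFinite(parsed) || parsed < 1 || parsed > 65535) {
    return fallback;
  }
  return parsed;
}

// resolvePort('8080') → 8080
// resolvePort('abc')  → 50400
```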
apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts-48-50 (1)

48-50: ⚠️ Potential issue | 🟡 Minor

Assert warm-up calls succeed before validating isolation.

On Line 48–Line 50, the three setup calls are not asserted. A failed warm-up call can still let the “blocked” assertion pass and mask a regression.

Proposed fix
-    for (let i = 0; i < 3; i++) {
-      await mcp.tools.call('rate-limited', { message: `req-${i}` });
-    }
+    for (let i = 0; i < 3; i++) {
+      const result = await mcp.tools.call('rate-limited', { message: `req-${i}` });
+      expect(result).toBeSuccessful();
+      expect(result).toHaveTextContent(`req-${i}`);
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts` around lines 48 -
50, The three warm-up calls using mcp.tools.call('rate-limited', { message:
`req-${i}` }) must be asserted before the isolation/block assertions: capture
each call's result (the array/response from mcp.tools.call invoked in the for
loop) and assert it indicates success (e.g., expected body/message or success
flag) inside the loop so a failing warm-up will fail the test immediately;
update the for loop that calls mcp.tools.call('rate-limited', ...) to store each
response and add an assertion (using the test framework's expect) that the
response equals the expected successful value before proceeding to the
blocked-request assertions.
🧹 Nitpick comments (8)
libs/testing/src/server/port-registry.ts (1)

54-59: Update the section range comment to match the new allocation.

demo-e2e-guard uses port 50400, but the section label still says 50300-50399. Keeping this comment accurate will prevent future range collisions/misclassification.

Proposed small cleanup
-  // Infrastructure E2E tests (50300-50399)
+  // Infrastructure E2E tests (50300-50499)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/testing/src/server/port-registry.ts` around lines 54 - 59, The section
comment "Infrastructure E2E tests (50300-50399)" is now incorrect because
entries like 'demo-e2e-guard' allocate ports starting at 50400; update the
section range comment to reflect the actual allocation (e.g., "Infrastructure
E2E tests (50300-50499)") so the block that includes 'demo-e2e-redis',
'demo-e2e-serverless', 'demo-e2e-uipack', 'demo-e2e-agent-adapters', and
'demo-e2e-guard' accurately describes the port span and avoids future
collisions.
apps/e2e/demo-e2e-guard/src/apps/guard/tools/timeout.tool.js (1)

1-42: Consider whether compiled JS files should be committed.

This file appears to be TypeScript compilation output (contains __decorate helper from decorator compilation). The corresponding TypeScript source exists at timeout.tool.ts.

The ESLint no-var errors are expected in compiled output and not actionable here. However, committing both .ts source and compiled .js output can lead to drift if the JS isn't regenerated after TS changes.

If these JS files are intentionally committed for runtime use (e.g., the demo runs without a build step), consider adding them to .eslintignore to suppress the false positive warnings. Otherwise, consider generating them at build time instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/e2e/demo-e2e-guard/src/apps/guard/tools/timeout.tool.js` around lines 1
- 42, The committed timeout.tool.js is compiled TypeScript output (contains
__decorate) and duplicates the TypeScript source (timeout.tool.ts), which can
cause drift; either remove the compiled file from the repo and ensure JS is
generated during the build (update CI/build scripts so TimeoutTool is compiled
from timeout.tool.ts), or if the JS must remain for runtime, add timeout.tool.js
(or the compiled output folder) to .eslintignore to suppress linting of no-var
and other compiler artifacts and document that compiled artifacts must be
regenerated when timeout.tool.ts / the TimeoutTool class changes.
libs/sdk/src/scope/scope.instance.ts (2)

104-106: Rename _rateLimitManager to follow private field naming guideline.

Line 105 introduces an underscore-prefixed private field, which conflicts with the project’s private-field convention.

Suggested rename
-  private _rateLimitManager?: GuardManager;
+  private guardManager?: GuardManager;
@@
-            this._rateLimitManager = mgr;
+            this.guardManager = mgr;
@@
   get rateLimitManager(): GuardManager | undefined {
-    return this._rateLimitManager;
+    return this.guardManager;
   }

As per coding guidelines: "Use private keyword without underscore prefix for private fields in classes, expose via getter methods instead of exposing private fields directly".

Also applies to: 713-715

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/scope/scope.instance.ts` around lines 104 - 106, The private
field _rateLimitManager violates the no-underscore private-field guideline;
rename the field to rateLimitManager (type GuardManager) and update all usages
(e.g., references in methods and any assignments) to the new name, and if
external access is required expose it via a getter (get rateLimitManager())
instead of making the field public; apply the same renaming pattern to the other
private fields mentioned (the similar fields around the other occurrences) so
all underscore-prefixed private properties are converted and references updated
consistently.

185-205: Consider extracting guard initialization from initialize() into a helper.

The guard bootstrap block is feature-specific orchestration in scope.instance.ts; moving it to a helper keeps this initializer focused and consistent with existing capability-registration patterns.

Based on learnings: "Applies to **/scope.instance.ts : Use helper functions from skill module (like registerSkillCapabilities and detectSkillsOnlyMode) instead of inline feature-specific registration logic in scope.instance.ts".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/scope/scope.instance.ts` around lines 185 - 205, The guard
bootstrap logic (throttleConfig, rateLimitPromise, createGuardManager, and
assignment to this._rateLimitManager) should be extracted out of initialize()
into a dedicated helper (e.g., a new initializeGuards or
registerGuardCapabilities function) to mirror existing patterns like
registerSkillCapabilities and detectSkillsOnlyMode; move the conditional
creation of rateLimitPromise and its then-handler into that helper, accept
needed inputs (this.metadata.throttle, this.cliMode, this.logger) and return the
Promise<void> or undefined so initialize() can simply call the helper and push
its result into batch1, keeping scope.instance.initialize focused and consistent
with other capability registration code.
apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts (1)

80-82: Use condition-based retry instead of fixed sleep to reduce flakiness.

Line 81’s fixed wait (5500ms) can be brittle around timing boundaries. Polling until success (with an overall timeout) is usually more stable for window-reset assertions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts` around lines 80 -
82, Replace the fixed sleep (await new Promise(resolve => setTimeout(resolve,
5500))) with a condition-based retry loop that polls the rate-limit reset
condition until success or a max timeout; repeatedly perform the same
request/assertion inside a small-interval loop (e.g., 200–500ms) and break when
the window-reset assertion passes, otherwise fail after the overall timeout
(e.g., 10s). Update the test in guard-rate-limit.e2e.spec.ts so the polling loop
wraps the existing window-reset assertion and uses the same request/response
logic instead of the hardcoded 5500ms sleep to reduce flakiness.
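A small polling helper along those lines (illustrative; the interval and timeout defaults are arbitrary choices, not values from the test suite):

```typescript
// Hypothetical condition-polling helper: retries a check on a short
// interval until it passes or an overall deadline expires.
async function pollUntil(
  check: () => Promise<boolean>,
  { intervalMs = 250, timeoutMs = 10_000 } = {},
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await check()) return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`condition not met within ${timeoutMs}ms`);
}
```

The window-reset test would then wrap its request-plus-assertion in the `check` callback instead of sleeping a fixed 5500ms.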
libs/guard/src/errors/errors.ts (1)

71-93: Consider masking IP addresses in error messages for production logging.

The IpBlockedError and IpNotAllowedError include the full client IP in the error message. While useful for debugging, logging full IPs could raise privacy concerns (GDPR/CCPA). Consider whether these messages might be logged and whether partial masking (e.g., "192.168.x.x") would be appropriate for production.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/guard/src/errors/errors.ts` around lines 71 - 93, The error classes
IpBlockedError and IpNotAllowedError currently embed the full client IP in the
error message; change them to avoid including the raw IP in the public message
by constructing a maskedIp (e.g., replace last octets or characters with "x" or
asterisks) and pass that masked value to super for the human/log message while
still storing the original value in the readonly clientIp property for internal
use; update the constructors of IpBlockedError and IpNotAllowedError to compute
maskedIp before calling super (or accept an optional flag to includeFullIp) so
logs/messages use maskedIp but clientIp retains the full address.
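One possible shape for the masking (an illustrative sketch, not the guard library's code; the error constructors would pass the masked value to super while keeping the full address in clientIp):

```typescript
// Hypothetical masking helper: keeps the network portion for debugging
// while hiding the host portion in log-visible messages.
function maskIp(ip: string): string {
  if (ip.includes('.')) {
    // IPv4: keep the first two octets, mask the host bits.
    const parts = ip.split('.');
    return parts.length === 4 ? `${parts[0]}.${parts[1]}.x.x` : 'x.x.x.x';
  }
  // IPv6: keep the first two groups, mask the remainder.
  const groups = ip.split(':');
  return `${groups.slice(0, 2).join(':')}:xxxx`;
}

// maskIp('192.168.1.55') → '192.168.x.x'
// maskIp('2001:db8::1')  → '2001:db8:xxxx'
```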
apps/e2e/demo-e2e-guard/e2e/helpers/exec-cli.ts (1)

8-8: Module-level build cache may cause issues in parallel test runs.

The buildDone flag is module-level state. If tests run in parallel across different workers that import this module separately, each worker will rebuild. This is likely acceptable for E2E tests (which typically run serially), but worth noting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/e2e/demo-e2e-guard/e2e/helpers/exec-cli.ts` at line 8, Module-level
buildDone flag can cause duplicate builds when workers import this module;
replace the module-level boolean with a process-shared mechanism (e.g.,
check/create a temp lock file in os.tmpdir or use an env var like
process.env.E2E_BUILD_DONE) so concurrent workers see the same state. Locate the
buildDone symbol in exec-cli.ts and change its usage to: atomically check for
the lock (fs.existsSync or fs.promises with proper error handling), perform the
build only if the lock is absent, then create the lock (write a small file or
set the env var) after a successful build; also ensure any cleanup/reset helper
(exported if needed) is available for CI teardown. Keep build-start/finish
logging and error handling intact and make the lock name unique to this project
to avoid cross-suite collisions.
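A minimal sketch of the lock-file variant under those assumptions (the file name and buildOnce helper are hypothetical; a real implementation would also want error handling around the build itself):

```typescript
import { existsSync, rmSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// A lock file in the OS temp dir is visible to every worker process,
// unlike a module-level boolean. The name is project-specific to avoid
// cross-suite collisions.
const LOCK_FILE = join(tmpdir(), 'demo-e2e-guard-build.lock');

function buildOnce(runBuild: () => void): void {
  if (existsSync(LOCK_FILE)) return; // another worker already built
  runBuild();
  // Mark only after a successful build so a failed build retries.
  writeFileSync(LOCK_FILE, String(Date.now()));
}
```

CI teardown (or a pretest step) would remove the lock file so stale locks from a previous run don't skip a needed rebuild.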
libs/guard/src/concurrency/__tests__/semaphore.spec.ts (1)

185-196: Modernize the test pattern to use Jest's expect().rejects syntax instead of try/catch with fail().

While fail() currently works, it's a legacy Jasmine global and not part of Jest's modern documented API. Using expect().rejects.toThrow() is more idiomatic and aligns with Jest best practices.

♻️ Proposed fix
     it('should include entity name and timeout in QueueTimeoutError', async () => {
       await semaphore.acquire('test', 1, 0, 'my-tool');

-      try {
-        await semaphore.acquire('test', 1, 150, 'my-tool');
-        fail('should have thrown');
-      } catch (error) {
-        expect(error).toBeInstanceOf(QueueTimeoutError);
-        expect((error as QueueTimeoutError).entityName).toBe('my-tool');
-        expect((error as QueueTimeoutError).queueTimeoutMs).toBe(150);
-      }
+      const acquirePromise = semaphore.acquire('test', 1, 150, 'my-tool');
+      await expect(acquirePromise).rejects.toBeInstanceOf(QueueTimeoutError);
+      await expect(acquirePromise).rejects.toMatchObject({
+        entityName: 'my-tool',
+        queueTimeoutMs: 150,
+      });
     }, 10_000);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/guard/src/concurrency/__tests__/semaphore.spec.ts` around lines 185 -
196, Replace the try/catch + fail pattern in the test that calls
semaphore.acquire with Jest's promise rejection matchers: keep the first
acquire, store the second acquire call's promise in a variable, assert the
rejection type with await
expect(promise).rejects.toBeInstanceOf(QueueTimeoutError), and assert its
entityName and queueTimeoutMs properties with await
expect(promise).rejects.toMatchObject({ entityName: 'my-tool', queueTimeoutMs:
150 }); remove fail() and the try/catch entirely so the test uses Jest's
expect().rejects pattern.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 2c800be3-9058-4f43-bc13-202c6c4180e0

📥 Commits

Reviewing files that changed from the base of the PR and between efb37c5 and 9db24a8.

📒 Files selected for processing (104)
  • apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts
  • apps/e2e/demo-e2e-guard/e2e/guard-cli.e2e.spec.ts
  • apps/e2e/demo-e2e-guard/e2e/guard-combined.e2e.spec.ts
  • apps/e2e/demo-e2e-guard/e2e/guard-concurrency.e2e.spec.ts
  • apps/e2e/demo-e2e-guard/e2e/guard-global.e2e.spec.ts
  • apps/e2e/demo-e2e-guard/e2e/guard-ip-filter.e2e.spec.ts
  • apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts
  • apps/e2e/demo-e2e-guard/e2e/guard-timeout.e2e.spec.ts
  • apps/e2e/demo-e2e-guard/e2e/helpers/exec-cli.ts
  • apps/e2e/demo-e2e-guard/fixture/frontmcp.config.js
  • apps/e2e/demo-e2e-guard/fixture/src/main.ts
  • apps/e2e/demo-e2e-guard/jest.cli.config.ts
  • apps/e2e/demo-e2e-guard/jest.e2e.config.ts
  • apps/e2e/demo-e2e-guard/playwright.config.ts
  • apps/e2e/demo-e2e-guard/project.json
  • apps/e2e/demo-e2e-guard/src/apps/guard/index.js
  • apps/e2e/demo-e2e-guard/src/apps/guard/index.ts
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/combined-guard.tool.js
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/combined-guard.tool.ts
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/concurrency-mutex.tool.js
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/concurrency-mutex.tool.ts
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/concurrency-queued.tool.js
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/concurrency-queued.tool.ts
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/rate-limited.tool.js
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/rate-limited.tool.ts
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/slow.tool.js
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/slow.tool.ts
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/timeout.tool.js
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/timeout.tool.ts
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/unguarded.tool.js
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/unguarded.tool.ts
  • apps/e2e/demo-e2e-guard/src/main.ts
  • apps/e2e/demo-e2e-guard/tsconfig.app.json
  • apps/e2e/demo-e2e-guard/tsconfig.json
  • apps/e2e/demo-e2e-guard/webpack.config.js
  • docs/docs.json
  • docs/frontmcp/guides/rate-limiting-and-guards.mdx
  • docs/frontmcp/sdk-reference/guard.mdx
  • docs/frontmcp/servers/guard.mdx
  • libs/guard/README.md
  • libs/guard/jest.config.ts
  • libs/guard/package.json
  • libs/guard/project.json
  • libs/guard/src/__tests__/exports.spec.ts
  • libs/guard/src/concurrency/README.md
  • libs/guard/src/concurrency/__tests__/semaphore.spec.ts
  • libs/guard/src/concurrency/index.ts
  • libs/guard/src/concurrency/semaphore.ts
  • libs/guard/src/concurrency/types.ts
  • libs/guard/src/errors/README.md
  • libs/guard/src/errors/__tests__/errors.spec.ts
  • libs/guard/src/errors/errors.ts
  • libs/guard/src/errors/index.ts
  • libs/guard/src/index.ts
  • libs/guard/src/ip-filter/README.md
  • libs/guard/src/ip-filter/__tests__/ip-filter.spec.ts
  • libs/guard/src/ip-filter/index.ts
  • libs/guard/src/ip-filter/ip-filter.ts
  • libs/guard/src/ip-filter/types.ts
  • libs/guard/src/manager/README.md
  • libs/guard/src/manager/__tests__/guard.factory.spec.ts
  • libs/guard/src/manager/__tests__/guard.manager.spec.ts
  • libs/guard/src/manager/guard.factory.ts
  • libs/guard/src/manager/guard.manager.ts
  • libs/guard/src/manager/index.ts
  • libs/guard/src/manager/types.ts
  • libs/guard/src/partition-key/README.md
  • libs/guard/src/partition-key/__tests__/partition-key.resolver.spec.ts
  • libs/guard/src/partition-key/index.ts
  • libs/guard/src/partition-key/partition-key.resolver.ts
  • libs/guard/src/partition-key/types.ts
  • libs/guard/src/rate-limit/README.md
  • libs/guard/src/rate-limit/__tests__/rate-limiter.spec.ts
  • libs/guard/src/rate-limit/index.ts
  • libs/guard/src/rate-limit/rate-limiter.ts
  • libs/guard/src/rate-limit/types.ts
  • libs/guard/src/schemas/README.md
  • libs/guard/src/schemas/__tests__/schemas.spec.ts
  • libs/guard/src/schemas/index.ts
  • libs/guard/src/schemas/schemas.ts
  • libs/guard/src/timeout/README.md
  • libs/guard/src/timeout/__tests__/timeout.spec.ts
  • libs/guard/src/timeout/index.ts
  • libs/guard/src/timeout/timeout.ts
  • libs/guard/src/timeout/types.ts
  • libs/guard/tsconfig.json
  • libs/guard/tsconfig.lib.json
  • libs/sdk/package.json
  • libs/sdk/project.json
  • libs/sdk/src/agent/flows/call-agent.flow.ts
  • libs/sdk/src/common/metadata/agent.metadata.ts
  • libs/sdk/src/common/metadata/front-mcp.metadata.ts
  • libs/sdk/src/common/metadata/tool.metadata.ts
  • libs/sdk/src/common/tokens/agent.tokens.ts
  • libs/sdk/src/common/tokens/front-mcp.tokens.ts
  • libs/sdk/src/common/tokens/tool.tokens.ts
  • libs/sdk/src/index.ts
  • libs/sdk/src/rate-limit/index.ts
  • libs/sdk/src/scope/flows/http.request.flow.ts
  • libs/sdk/src/scope/scope.instance.ts
  • libs/sdk/src/tool/flows/call-tool.flow.ts
  • libs/sdk/tsconfig.lib.json
  • libs/testing/src/server/port-registry.ts
  • tsconfig.base.json

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (6)
eslint.config.mjs (1)

8-8: Minor redundancy: **/fixture/dist is already covered by **/dist.

The glob **/dist already matches any dist directory at any depth, including **/fixture/dist. You can safely remove '**/fixture/dist' from the ignores list. The '**/fixture/libs' addition is appropriate for excluding fixture library artifacts.

🧹 Suggested simplification
-    ignores: ['**/dist', '**/*.d.ts', '**/*.d.ts.map', '**/fixture/dist', '**/fixture/libs'],
+    ignores: ['**/dist', '**/*.d.ts', '**/*.d.ts.map', '**/fixture/libs'],
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@eslint.config.mjs` at line 8, The ignores array in eslint.config.mjs contains
a redundant entry; remove the specific '**/fixture/dist' string from the ignores
list since the existing '**/dist' already covers all dist directories, leaving
'**/dist', '**/*.d.ts', '**/*.d.ts.map', '**/fixture/libs' (and other entries)
intact; update the ignores array declaration to drop '**/fixture/dist'.
apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts (1)

16-33: Add response validation before extracting session ID.

The initializeSession helper extracts the session ID without verifying that the initialization request succeeded. If the server returns an error, subsequent tests will proceed with an undefined sessionId, leading to confusing failures.

🛡️ Proposed defensive check
 async function initializeSession(request: APIRequestContext) {
   const response = await request.post(MCP_ENDPOINT, {
     data: {
       jsonrpc: '2.0',
       id: 'init-1',
       method: 'initialize',
       params: {
         protocolVersion: '2025-03-26',
         capabilities: {},
         clientInfo: { name: 'playwright-test', version: '1.0.0' },
       },
     },
     headers: { 'Content-Type': 'application/json' },
   });

+  if (!response.ok()) {
+    throw new Error(`Session initialization failed: ${response.status()}`);
+  }
+
   const sessionId = response.headers()['mcp-session-id'];
+  if (!sessionId) {
+    throw new Error('Missing mcp-session-id header in response');
+  }
+
   return { response, sessionId };
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts` around lines 16
- 33, The initializeSession helper currently extracts sessionId from
response.headers() without validating the HTTP status or JSON-RPC payload;
update initializeSession to first assert response.ok (or check response.status),
parse the JSON body (await response.json()), verify there is no JSON-RPC error
and that a valid result exists, and only then read the 'mcp-session-id' header
into sessionId; if the HTTP status, JSON-RPC error, or missing header occur,
throw a descriptive error including response.status and the parsed body to fail
fast and avoid downstream undefined sessionId usage.
apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts (1)

72-75: Add success assertions to warmup calls for consistent validation.

Unlike the "Basic" test which asserts success for each warmup request (lines 23-27), this test silently ignores potential failures during the limit exhaustion phase. If the server behaves unexpectedly, the test could pass for wrong reasons.

💡 Proposed fix
     // Exhaust the limit (3 requests in 5s window)
     for (let i = 0; i < 3; i++) {
-      await mcp.tools.call('rate-limited', { message: `req-${i}` });
+      const warmup = await mcp.tools.call('rate-limited', { message: `req-${i}` });
+      expect(warmup).toBeSuccessful();
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts` around lines 72 -
75, The warmup loop that exhausts the rate limit using
mcp.tools.call('rate-limited', { message: `req-${i}` }) lacks assertions; update
the loop in guard-rate-limit.e2e.spec.ts to assert each warmup call succeeded
(e.g., check returned status/success flag or that no error was thrown) so
failures during the three warmup requests are detected, keeping the same loop
structure but adding an expect/assert for each iteration referencing
mcp.tools.call and the 'rate-limited' endpoint.
apps/e2e/demo-e2e-guard/e2e/guard-concurrency.e2e.spec.ts (1)

46-54: Consider asserting explicitly for the rejected case.

The comment notes that transport-level rejection is acceptable, but there's no assertion verifying this. If result2.status is 'rejected', the test passes silently without confirming the rejection reason relates to concurrency.

💡 Proposed explicit assertion
       // Second should have an error (concurrency limit, queue:0 = immediate reject)
       if (result2.status === 'fulfilled') {
         expect(result2.value).toBeError();
+      } else {
+        // Transport-level rejection is also acceptable for concurrency limit
+        expect(result2.reason).toBeDefined();
       }
-      // If rejected at transport level, that's also acceptable
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/e2e/demo-e2e-guard/e2e/guard-concurrency.e2e.spec.ts` around lines 46 -
54, Add an explicit assertion for the rejected branch of the PromiseResult:
after checking result2.status === 'fulfilled', add an else-if for result2.status
=== 'rejected' and assert the rejection is related to concurrency by verifying
result2.reason is defined and its message/string contains concurrency-related
text (e.g., match /concurr|queue/i); reference result2.status and result2.reason
when adding these expects so the test fails if the transport rejection is
unrelated.
libs/guard/src/rate-limit/__tests__/rate-limiter.spec.ts (2)

18-29: Make mock storage batch/introspection methods stateful.

mset, keys, and count are static/no-op right now; this can mask regressions if the limiter starts relying on them. Align them with the same in-memory Map behavior as get/set/incr.

Proposed test-fixture hardening
-    mset: jest.fn().mockResolvedValue(undefined),
+    mset: jest.fn().mockImplementation(async (entries: Record<string, string>) => {
+      for (const [k, v] of Object.entries(entries)) {
+        data.set(k, v);
+      }
+    }),
@@
-    keys: jest.fn().mockResolvedValue([]),
-    count: jest.fn().mockResolvedValue(0),
+    keys: jest.fn().mockImplementation(async (pattern?: string) => {
+      const all = [...data.keys()];
+      if (!pattern) return all;
+      const regex = new RegExp(`^${pattern.replace(/\*/g, '.*')}$`);
+      return all.filter((k) => regex.test(k));
+    }),
+    count: jest.fn().mockImplementation(async () => data.size),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/guard/src/rate-limit/__tests__/rate-limiter.spec.ts` around lines 18 -
29, The mock storage's batch/introspection methods (mset, keys, count) are
currently no-ops and should be made stateful to mirror the in-memory Map used by
get/set/incr; update the mock implementations so mset writes multiple key/value
pairs into the same in-memory Map (respecting the value shape used by set), keys
returns the current keys (optionally filtered by a pattern) from that Map, and
count returns the Map.size (or number of matching keys) so tests accurately
reflect real storage behavior and catch regressions in functions that rely on
these methods.

114-148: Add exact-boundary interpolation tests to catch off-by-one logic.

Mid-window interpolation is covered, but exact edges (window start/end) and exact-threshold acceptance/rejection are not explicitly asserted. Adding those cases will harden branch behavior.

As per coding guidelines: "Maintain 95%+ test coverage across all metrics (statements, branches, functions, lines) in Jest tests".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/guard/src/rate-limit/__tests__/rate-limiter.spec.ts` around lines 114 -
148, Add tests that assert interpolation behavior at exact window boundaries to
catch off-by-one errors: after pre-populating storage via
storage.set(previousKey, '<count>') with previousWindowStart and
currentWindowStart defined as in existing tests, add one test where
nowSpy.mockReturnValue(currentWindowStart) (exact start of current window) and
assert limiter.check('test-key', limit, 60_000) yields the expected
allowed/remaining when previous weight should be 0, and another test where
nowSpy.mockReturnValue(currentWindowStart - 1) (exact end of previous window)
and assert limiter.check(...) yields expected allowed/remaining when previous
weight should be 1; use the same symbols limiter.check, nowSpy, storage.set,
previousKey, previousWindowStart, and currentWindowStart so the new specs mirror
the existing tests and verify exact-threshold acceptance/rejection.
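For context on the boundary cases the prompt describes: the usual sliding-window approximation weights the previous window's count by how much of that window still overlaps the interval ending at `now`. A minimal sketch of that formula (hypothetical — the library's exact weighting convention may differ):

```typescript
// Hypothetical helper, not the library's actual implementation: a
// Cloudflare-style sliding-window estimate where the previous window's
// count decays linearly as the current window fills up.
function slidingWindowCount(
  prevCount: number,
  currCount: number,
  now: number,
  currentWindowStart: number,
  windowMs: number,
): number {
  const elapsed = now - currentWindowStart;
  const prevWeight = Math.max(0, 1 - elapsed / windowMs);
  return prevCount * prevWeight + currCount;
}

// Edge behavior under this convention: the previous window is fully
// weighted at the exact start of the current window, and its weight
// reaches zero exactly one window later.
console.log(slidingWindowCount(10, 0, 60_000, 60_000, 60_000)); // 10 (weight 1)
console.log(slidingWindowCount(10, 5, 90_000, 60_000, 60_000)); // 10 (10 * 0.5 + 5)
console.log(slidingWindowCount(10, 0, 120_000, 60_000, 60_000)); // 0 (weight 0)
```

Whichever convention the limiter uses, these are the points where an off-by-one in `elapsed` or the weight calculation shows up, which is what the proposed boundary tests would pin down.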
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@libs/guard/src/ip-filter/__tests__/ip-filter.spec.ts`:
- Around line 249-259: The test description and the CIDR in the IpFilter setup
are inconsistent: the it() text says "IPv6 /64 range" while the allowList uses
'2001:db8::/32'; fix by making them match—either change the test description to
"IPv6 /32 range" or change the allowList CIDR to '2001:db8::/64' in the test
that constructs new IpFilter({ allowList: [...], defaultAction: 'deny' }) and
adjust the expectations if you change the CIDR.
- Around line 499-509: The test currently only asserts result is defined; update
it to assert the actual allow/deny outcome and validity: call
IpFilter.check('::ffff:192.168.1.0') and assert that the returned object
indicates an allowed decision (e.g., result.action === 'allow' or result.allowed
=== true) and that result.valid === true to ensure the IPv4-mapped IPv6 address
is both recognized and permitted by the allowList/defaultAction logic.

---

Nitpick comments:
In `@apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts`:
- Around line 16-33: The initializeSession helper currently extracts sessionId
from response.headers() without validating the HTTP status or JSON-RPC payload;
update initializeSession to first assert response.ok (or check response.status),
parse the JSON body (await response.json()), verify there is no JSON-RPC error
and that a valid result exists, and only then read the 'mcp-session-id' header
into sessionId; if the HTTP status, JSON-RPC error, or missing header occur,
throw a descriptive error including response.status and the parsed body to fail
fast and avoid downstream undefined sessionId usage.

In `@apps/e2e/demo-e2e-guard/e2e/guard-concurrency.e2e.spec.ts`:
- Around line 46-54: Add an explicit assertion for the rejected branch of the
PromiseResult: after checking result2.status === 'fulfilled', add an else-if for
result2.status === 'rejected' and assert the rejection is related to concurrency
by verifying result2.reason is defined and its message/string contains
concurrency-related text (e.g., match /concurr|queue/i); reference
result2.status and result2.reason when adding these expects so the test fails if
the transport rejection is unrelated.

In `@apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts`:
- Around line 72-75: The warmup loop that exhausts the rate limit using
mcp.tools.call('rate-limited', { message: `req-${i}` }) lacks assertions; update
the loop in guard-rate-limit.e2e.spec.ts to assert each warmup call succeeded
(e.g., check returned status/success flag or that no error was thrown) so
failures during the three warmup requests are detected, keeping the same loop
structure but adding an expect/assert for each iteration referencing
mcp.tools.call and the 'rate-limited' endpoint.

In `@eslint.config.mjs`:
- Line 8: The ignores array in eslint.config.mjs contains a redundant entry;
remove the specific '**/fixture/dist' string from the ignores list since the
existing '**/dist' already covers all dist directories, leaving '**/dist',
'**/*.d.ts', '**/*.d.ts.map', '**/fixture/libs' (and other entries) intact;
update the ignores array declaration to drop '**/fixture/dist'.

In `@libs/guard/src/rate-limit/__tests__/rate-limiter.spec.ts`:
- Around line 18-29: The mock storage's batch/introspection methods (mset, keys,
count) are currently no-ops and should be made stateful to mirror the in-memory
Map used by get/set/incr; update the mock implementations so mset writes
multiple key/value pairs into the same in-memory Map (respecting the value shape
used by set), keys returns the current keys (optionally filtered by a pattern)
from that Map, and count returns the Map.size (or number of matching keys) so
tests accurately reflect real storage behavior and catch regressions in
functions that rely on these methods.
- Around line 114-148: Add tests that assert interpolation behavior at exact
window boundaries to catch off-by-one errors: after pre-populating storage via
storage.set(previousKey, '<count>') with previousWindowStart and
currentWindowStart defined as in existing tests, add one test where
nowSpy.mockReturnValue(currentWindowStart) (exact start of current window) and
assert limiter.check('test-key', limit, 60_000) yields the expected
allowed/remaining when previous weight should be 0, and another test where
nowSpy.mockReturnValue(currentWindowStart - 1) (exact end of previous window)
and assert limiter.check(...) yields expected allowed/remaining when previous
weight should be 1; use the same symbols limiter.check, nowSpy, storage.set,
previousKey, previousWindowStart, and currentWindowStart so the new specs mirror
the existing tests and verify exact-threshold acceptance/rejection.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: d483570d-d3e5-4a25-9836-41a3f4b98abf

📥 Commits

Reviewing files that changed from the base of the PR and between 9db24a8 and b681a9f.

📒 Files selected for processing (11)
  • apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts
  • apps/e2e/demo-e2e-guard/e2e/guard-combined.e2e.spec.ts
  • apps/e2e/demo-e2e-guard/e2e/guard-concurrency.e2e.spec.ts
  • apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts
  • eslint.config.mjs
  • libs/guard/src/ip-filter/__tests__/ip-filter.spec.ts
  • libs/guard/src/ip-filter/ip-filter.ts
  • libs/guard/src/manager/guard.factory.ts
  • libs/guard/src/rate-limit/__tests__/rate-limiter.spec.ts
  • libs/sdk/src/scope/flows/http.request.flow.ts
  • libs/testing/src/server/port-registry.ts
🚧 Files skipped from review as they are similar to previous changes (4)
  • libs/sdk/src/scope/flows/http.request.flow.ts
  • libs/guard/src/manager/guard.factory.ts
  • libs/testing/src/server/port-registry.ts
  • apps/e2e/demo-e2e-guard/e2e/guard-combined.e2e.spec.ts

Comment on lines +249 to +259
  describe('IPv6 CIDR ranges', () => {
    it('should match IPs in an IPv6 /64 range', () => {
      const filter = new IpFilter({
        allowList: ['2001:db8::/32'],
        defaultAction: 'deny',
      });

      expect(filter.check('2001:0db8::1').allowed).toBe(true);
      expect(filter.check('2001:0db8:ffff::1').allowed).toBe(true);
      expect(filter.check('2001:0db9::1').allowed).toBe(false);
    });

⚠️ Potential issue | 🟡 Minor

Test description mismatch: says "/64 range" but tests /32.

The test description at line 250 mentions "IPv6 /64 range" but the allowList uses '2001:db8::/32'.

Suggested fix
-    it('should match IPs in an IPv6 /64 range', () => {
+    it('should match IPs in an IPv6 /32 range', () => {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
  describe('IPv6 CIDR ranges', () => {
-   it('should match IPs in an IPv6 /64 range', () => {
+   it('should match IPs in an IPv6 /32 range', () => {
      const filter = new IpFilter({
        allowList: ['2001:db8::/32'],
        defaultAction: 'deny',
      });
      expect(filter.check('2001:0db8::1').allowed).toBe(true);
      expect(filter.check('2001:0db8:ffff::1').allowed).toBe(true);
      expect(filter.check('2001:0db9::1').allowed).toBe(false);
    });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/guard/src/ip-filter/__tests__/ip-filter.spec.ts` around lines 249 - 259,
The test description and the CIDR in the IpFilter setup are inconsistent: the
it() text says "IPv6 /64 range" while the allowList uses '2001:db8::/32'; fix by
making them match—either change the test description to "IPv6 /32 range" or
change the allowList CIDR to '2001:db8::/64' in the test that constructs new
IpFilter({ allowList: [...], defaultAction: 'deny' }) and adjust the
expectations if you change the CIDR.
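As background on the /32-vs-/64 distinction above: a CIDR check compares only the leading `prefix` bits of the 128-bit address, so `2001:0db8:ffff::1` falls inside `2001:db8::/32` but outside `2001:db8::/64`. A self-contained sketch reproducing those expectations (hypothetical, not the actual `IpFilter` code; it assumes well-formed addresses with at most one `::`):

```typescript
// Hypothetical prefix check, not the IpFilter implementation.
// Expands a (possibly '::'-compressed) IPv6 address to a 128-bit BigInt.
function ipv6ToBigInt(ip: string): bigint {
  const [head, tail = ''] = ip.split('::');
  const headParts = head ? head.split(':') : [];
  const tailParts = tail ? tail.split(':') : [];
  const missing = 8 - headParts.length - tailParts.length;
  const groups: string[] = [...headParts, ...Array(missing).fill('0'), ...tailParts];
  return groups.reduce((acc: bigint, g: string) => (acc << 16n) | BigInt(parseInt(g || '0', 16)), 0n);
}

// An address is in the CIDR when its top `prefix` bits equal the network's.
function inCidr(ip: string, cidr: string): boolean {
  const [net, prefixStr] = cidr.split('/');
  const shift = 128n - BigInt(parseInt(prefixStr, 10));
  return (ipv6ToBigInt(ip) >> shift) === (ipv6ToBigInt(net) >> shift);
}

console.log(inCidr('2001:0db8:ffff::1', '2001:db8::/32')); // true
console.log(inCidr('2001:0db8:ffff::1', '2001:db8::/64')); // false
```

This is also why the alternative fix of swapping the allowList to `/64` would require updating the test's expectations, as the prompt notes.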

Comment on lines +499 to +509
    it('should handle IPv4-mapped IPv6 addresses', () => {
      const filter = new IpFilter({
        allowList: ['::ffff:192.168.1.0/128'],
        defaultAction: 'deny',
      });

      // Note: exact match required for /128
      // ::ffff:192.168.1.0 is the IPv4-mapped representation
      const result = filter.check('::ffff:192.168.1.0');
      expect(result).toBeDefined();
    });

⚠️ Potential issue | 🟡 Minor

Strengthen assertion: toBeDefined() does not verify allow/deny behavior.

This test only asserts that result is defined, which doesn't validate the actual filtering logic. Since the implementation now correctly handles IPv4-mapped IPv6 with the valid flag, this test should explicitly verify the expected outcome.

Suggested fix
       const result = filter.check('::ffff:192.168.1.0');
-      expect(result).toBeDefined();
+      expect(result.allowed).toBe(true);
+      expect(result.reason).toBe('allowlisted');
+      expect(result.matchedRule).toBe('::ffff:192.168.1.0/128');
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/guard/src/ip-filter/__tests__/ip-filter.spec.ts` around lines 499 - 509,
The test currently only asserts result is defined; update it to assert the
actual allow/deny outcome and validity: call
IpFilter.check('::ffff:192.168.1.0') and assert that the returned object
indicates an allowed decision (e.g., result.action === 'allow' or result.allowed
=== true) and that result.valid === true to ensure the IPv4-mapped IPv6 address
is both recognized and permitted by the allowList/defaultAction logic.
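For background: an IPv4-mapped address sits under the `::ffff:0:0/96` prefix with the IPv4 value in the low 32 bits, so `::ffff:192.168.1.0` denotes a single 128-bit value that a `/128` rule must match exactly. A hypothetical normalization sketch (not the `IpFilter` implementation):

```typescript
// Hypothetical sketch, not the IpFilter code: fold an IPv4 dotted quad into
// its IPv4-mapped IPv6 value, i.e. ::ffff:a.b.c.d in the low 48 bits.
function ipv4ToMappedBigInt(ipv4: string): bigint {
  const v4 = ipv4
    .split('.')
    .reduce((acc, octet) => (acc << 8n) | BigInt(parseInt(octet, 10)), 0n);
  return (0xffffn << 32n) | v4;
}

// Both spellings normalize to the same 128-bit value, which is what an
// exact /128 comparison has to see.
console.log(ipv4ToMappedBigInt('192.168.1.0').toString(16)); // "ffffc0a80100"
```

Once both the rule and the checked address are normalized like this, the stronger assertions the reviewer suggests (`result.allowed === true`, matched rule) follow directly from the exact-match comparison.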
