feat: add rate limiting, concurrency control, and timeout utilities with IP filtering support#278
frontegg-david wants to merge 2 commits into main from …
Conversation
📝 Walkthrough

This PR introduces the …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Client
  participant HTTP Request Flow
  participant Tool Call Flow
  participant GuardManager
  participant Storage

  Client->>HTTP Request Flow: HTTP POST /mcp
  HTTP Request Flow->>GuardManager: checkGlobalRateLimit()
  GuardManager->>Storage: mget(currentWindow, previousWindow)
  Storage-->>GuardManager: counts
  GuardManager-->>HTTP Request Flow: RateLimitResult
  alt Rate Limit Exceeded
    HTTP Request Flow-->>Client: 429 with Retry-After
  else Allowed
    HTTP Request Flow->>Tool Call Flow: acquireQuota()
    Tool Call Flow->>GuardManager: checkRateLimit(tool)
    GuardManager->>Storage: mget(currentWindow, previousWindow)
    Storage-->>GuardManager: counts
    GuardManager-->>Tool Call Flow: RateLimitResult
    alt Per-Tool Limit Exceeded
      Tool Call Flow-->>Client: 429 Error
    else Allowed
      Tool Call Flow->>GuardManager: acquireSemaphore(tool)
      GuardManager->>Storage: incr(activeCount)
      Storage-->>GuardManager: SemaphoreTicket
      GuardManager-->>Tool Call Flow: SemaphoreTicket
      Tool Call Flow->>Tool Call Flow: execute() with timeout
      alt Timeout Exceeded
        Tool Call Flow-->>Client: 408 ExecutionTimeoutError
      else Success
        Tool Call Flow-->>Client: Result
      end
      Tool Call Flow->>GuardManager: release(ticket)
      GuardManager->>Storage: decr(activeCount)
    end
  end
```
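The `mget(currentWindow, previousWindow)` reads in the diagram suggest a sliding-window counter: the previous window's count is weighted by how much of it still overlaps the window ending now. A minimal in-memory sketch of that idea (the class, method names, and Map-backed storage are illustrative assumptions, not the library's actual API):

```typescript
// Sliding-window rate limiter sketch. A Map stands in for the real
// storage backend (e.g. Redis with mget/incr).
interface RateLimitResult {
  allowed: boolean;
  retryAfterMs?: number;
}

class SlidingWindowLimiter {
  private counts = new Map<string, number>();

  constructor(
    private maxRequests: number,
    private windowMs: number,
  ) {}

  check(partitionKey: string, now: number = Date.now()): RateLimitResult {
    const windowId = Math.floor(now / this.windowMs);
    const current = this.counts.get(`${partitionKey}:${windowId}`) ?? 0;
    const previous = this.counts.get(`${partitionKey}:${windowId - 1}`) ?? 0;
    // Weight the previous window by its remaining overlap with the
    // sliding window that ends at `now`.
    const elapsedFraction = (now % this.windowMs) / this.windowMs;
    const estimated = previous * (1 - elapsedFraction) + current;
    if (estimated >= this.maxRequests) {
      return { allowed: false, retryAfterMs: this.windowMs - (now % this.windowMs) };
    }
    this.counts.set(`${partitionKey}:${windowId}`, current + 1);
    return { allowed: true };
  }
}
```

On rejection, `retryAfterMs` maps naturally onto the `Retry-After` header shown in the 429 branch.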
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
Performance Test Results

Status: ✅ All tests passed

Summary: 100 tests across 21 projects

Generated at: 2026-03-16T02:29:31.911Z
Actionable comments posted: 8
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (16)
libs/guard/src/partition-key/partition-key.resolver.ts-27-28 (1)
27-28: ⚠️ Potential issue | 🟠 Major

Avoid a shared `'unknown-ip'` partition.

Unlike the `userId` branch on line 32, line 28 collapses every caller without `clientIp` into the same key. That turns per-client guards into a shared throttle. Falling back to `sessionId` when available, or rejecting `partitionBy: 'ip'` when the IP is missing, preserves isolation.

🔧 Minimal fallback
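For context, here is the fail-fast variant sketched end to end; the `PartitionContext` shape is an assumption for illustration, not the SDK's real type:

```typescript
// Hypothetical context shape; the real PartitionContext may differ.
interface PartitionContext {
  clientIp?: string;
  sessionId?: string;
  userId?: string;
}

type PartitionBy = 'ip' | 'userId' | 'sessionId';

function resolvePartitionKey(by: PartitionBy, ctx: PartitionContext): string {
  switch (by) {
    case 'ip': {
      // Fall back to sessionId so callers without an IP stay isolated;
      // fail fast rather than collapsing everyone into one bucket.
      const key = ctx.clientIp ?? ctx.sessionId;
      if (!key) throw new Error('partitionBy "ip" requires clientIp or sessionId');
      return key;
    }
    case 'userId':
      if (!ctx.userId) throw new Error('partitionBy "userId" requires userId');
      return ctx.userId;
    case 'sessionId':
      if (!ctx.sessionId) throw new Error('partitionBy "sessionId" requires sessionId');
      return ctx.sessionId;
    default: {
      const exhausted: never = by;
      throw new Error(`Unknown partition strategy: ${exhausted}`);
    }
  }
}
```

The one-line diff below keeps existing callers working by falling back to `sessionId`; the throw covers the case where neither value is present.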
```diff
   case 'ip':
-    return ctx.clientIp ?? 'unknown-ip';
+    return ctx.clientIp ?? ctx.sessionId;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/guard/src/partition-key/partition-key.resolver.ts` around lines 27-28, the 'ip' branch currently collapses missing IPs into a shared 'unknown-ip' key; update the case handling in partition-key.resolver.ts (the switch branch for "ip") to preserve per-client isolation by returning ctx.clientIp ?? ctx.sessionId and, if neither exists, fail fast (throw a clear error like 'partitionBy "ip" requires clientIp or sessionId') instead of returning a shared constant; this keeps behavior consistent with the userId branch and avoids a shared throttle across clients.

docs/frontmcp/servers/guard.mdx-249-259 (1)
249-259: ⚠️ Potential issue | 🟠 Major

Add a spoofing warning around `trustProxy`.

This section normalizes `trustProxy: true` without calling out that it is only safe behind infrastructure you control. Otherwise clients can forge `X-Forwarded-For` and bypass IP filtering or any IP-partitioned rate limits.

Also applies to: 286-294
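To make the spoofing risk concrete, a sketch of depth-aware X-Forwarded-For resolution; the helper and its semantics are assumptions for illustration, not the guard library's implementation:

```typescript
// Resolve the client IP from X-Forwarded-For, trusting only
// `trustedProxyDepth` proxy hops. With trustProxy off the header is
// untrusted input, so the socket peer address is used instead.
function resolveClientIp(
  socketAddr: string,
  xForwardedFor: string | undefined,
  trustProxy: boolean,
  trustedProxyDepth: number,
): string {
  if (!trustProxy || !xForwardedFor) return socketAddr;
  const hops = xForwardedFor.split(',').map((h) => h.trim()).filter(Boolean);
  // Each trusted proxy appends the address it received the connection
  // from, so the entry `trustedProxyDepth` positions from the end is
  // the real client; anything before it is client-supplied and ignored.
  if (hops.length < trustedProxyDepth) return socketAddr;
  return hops[hops.length - trustedProxyDepth];
}
```

A client-supplied prefix (e.g. `'6.6.6.6, …'`) is discarded because only the hops appended by trusted proxies are counted from the right; this is why `trustedProxyDepth` must match the actual number of controlled hops.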
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/frontmcp/servers/guard.mdx` around lines 249-259, the docs show ipFilter with trustProxy: true but omit a spoofing warning; update the FrontMcp/ipFilter documentation to call out that setting trustProxy: true is only safe when running behind infrastructure you control (load balancers/proxies that properly sanitize X-Forwarded-For), otherwise clients can forge X-Forwarded-For and bypass IP filters or rate limits; mention that trustedProxyDepth must be configured to match the number of trusted hops and recommend defaulting to false or making trustProxy configurable, and add a short advisory next to the trustProxy and trustedProxyDepth fields referencing security implications and deployment guidance.

libs/guard/jest.config.ts-28-35 (1)
28-35: ⚠️ Potential issue | 🟠 Major

The coverage gate is still below the repo bar.

`branches` is set to 90, and without `collectCoverageFrom` unimported guard modules can sit outside coverage entirely. That weakens the 95% requirement for a brand-new library surface.

As per coding guidelines, "Maintain 95%+ test coverage across all metrics (statements, branches, functions, lines) in Jest tests."

Suggested config tightening

```diff
 module.exports = {
   displayName: 'guard',
   preset: '../../jest.preset.js',
   testEnvironment: 'node',
@@
   },
   moduleFileExtensions: ['ts', 'js', 'html'],
+  collectCoverageFrom: [
+    '<rootDir>/src/**/*.ts',
+    '!<rootDir>/src/**/__tests__/**',
+  ],
   coverageDirectory: '../../coverage/unit/guard',
   coverageThreshold: {
     global: {
       statements: 95,
-      branches: 90,
+      branches: 95,
       functions: 95,
       lines: 95,
     },
   },
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/guard/jest.config.ts` around lines 28-35, the Jest config's coverageThreshold currently sets branches to 90 and lacks collectCoverageFrom, allowing unimported modules to avoid coverage; update the coverageThreshold so branches: 95 (matching statements/functions/lines) and add a collectCoverageFrom array (e.g., "src/**/*.ts" and exclude any generated/index barrels as needed) to ensure all guard modules are included in coverage collection; modify the jest config entries coverageThreshold and collectCoverageFrom accordingly to enforce the 95% repo bar for statements, branches, functions, and lines.

apps/e2e/demo-e2e-guard/e2e/guard-ip-filter.e2e.spec.ts-4-7 (1)
4-7: ⚠️ Potential issue | 🟠 Major

Tests remain unimplemented as placeholder test.todo() calls.
Lines 4–6 confirm IP filter is not yet wired into the SDK flow stages (call-tool.flow.ts, http.request.flow.ts), and lines 17–20 contain only test.todo() placeholders. Implement concrete test cases once the SDK integration is complete, or update the comment to clarify whether this feature is deferred to a follow-up PR.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/e2e/demo-e2e-guard/e2e/guard-ip-filter.e2e.spec.ts` around lines 4-7, the placeholder tests in guard-ip-filter.e2e.spec.ts currently use test.todo() because GuardManager.checkIpFilter isn't wired into the SDK flows (call-tool.flow.ts, http.request.flow.ts); either implement real E2E tests after wiring by replacing test.todo() with concrete scenarios that exercise GuardManager.checkIpFilter through the SDK (mock or start flows in call-tool.flow.ts and http.request.flow.ts and assert allow/deny behavior), or update the file comment to explicitly state this is deferred to a follow-up PR and add a TODO referencing GuardManager.checkIpFilter and the two flow files so reviewers know where to hook tests later.

libs/sdk/src/scope/flows/http.request.flow.ts-172-184 (1)
172-184: ⚠️ Potential issue | 🟠 Major

Short-circuit the flow on global-rate-limit rejection and include the JSON-RPC `id`.

After sending the 429 response, the flow should terminate immediately. Also, include the request `id` in the JSON-RPC error payload for correct client correlation.

Suggested fix

```diff
 const result = await manager.checkGlobalRateLimit(partitionCtx);
 if (!result.allowed) {
+  const requestId = (this.rawInput.request.body as { id?: string | number | null } | undefined)?.id ?? null;
   const retryAfter = Math.ceil((result.retryAfterMs ?? 60_000) / 1000);
   this.respond(
     httpRespond.json(
       {
         jsonrpc: '2.0',
+        id: requestId,
         error: { code: -32029, message: `Rate limit exceeded. Retry after ${retryAfter} seconds` },
       },
       { status: 429, headers: { 'Retry-After': String(retryAfter) } },
     ),
   );
+  return this.handled();
 }
```

As per coding guidelines: MCP Response Types should use strictly typed MCP protocol types (e.g., `Promise<GetPromptResult>`, `Promise<ReadResourceResult>`); never use `unknown` for MCP protocol types.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/sdk/src/scope/flows/http.request.flow.ts` around lines 172-184, the global rate-limit branch in manager.checkGlobalRateLimit currently calls this.respond(httpRespond.json(...)) but does not short-circuit the flow or include the JSON-RPC request id; fix by immediately returning after the respond (add return) and include the incoming JSON-RPC request id in the error payload (include "id": <incoming request id>, e.g., from partitionCtx.request.id or the flow's RPC request variable) inside the error object passed to httpRespond.json; keep the 429 status and Retry-After header and ensure the message stays as is.

apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts-16-57 (1)
16-57: ⚠️ Potential issue | 🟠 Major

Replace unsafe request parameter typing with `APIRequestContext`.

The complex conditional type and `unknown` cast remove TypeScript safety and can mask API misuse. Import and use Playwright's `APIRequestContext` directly for both helper functions.

Suggested fix

```diff
-import { test, expect } from '@playwright/test';
+import { test, expect, type APIRequestContext } from '@playwright/test';
@@
-async function initializeSession(request: ReturnType<typeof test.extend>['request'] extends infer R ? R : never) {
-  const response = await (request as { post: Function }).post(MCP_ENDPOINT, {
+async function initializeSession(request: APIRequestContext) {
+  const response = await request.post(MCP_ENDPOINT, {
@@
 async function callTool(
-  request: unknown,
+  request: APIRequestContext,
   sessionId: string,
   toolName: string,
   args: Record<string, unknown>,
   id: string | number = 'call-1',
 ) {
-  return (request as { post: Function }).post(MCP_ENDPOINT, {
+  return request.post(MCP_ENDPOINT, {
```

Per coding guidelines: Use strict TypeScript type checking with no `any` types without justification; use `unknown` instead of `any` for generic type defaults.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts` around lines 16-57, the helper functions initializeSession and callTool use unsafe/complex typings and unknown casts for the request parameter; import Playwright's APIRequestContext and change the request parameter type to APIRequestContext in both functions, remove the manual casts like (request as { post: Function }) and call request.post(...) directly, keeping the same MCP_ENDPOINT payload and headers; this ensures proper typing for request.post and response.headers() and satisfies strict TypeScript checks.

libs/guard/src/manager/guard.factory.ts-38-46 (1)
38-46: ⚠️ Potential issue | 🟠 Major

Remove the unnecessary `as unknown as string` cast on line 46.

The `GuardLogger.info` signature is `info(msg: string, ...args: unknown[]): void;`, which already accepts the metadata object as a rest argument of type `unknown`. The double cast serves no purpose, violates strict TypeScript typing guidelines, and misleads readers into thinking the object needs string conversion when it doesn't.

Fix

```diff
 logger?.info('GuardManager initialized', {
   keyPrefix,
   hasGlobalRateLimit: !!config.global,
   hasGlobalConcurrency: !!config.globalConcurrency,
   hasDefaultRateLimit: !!config.defaultRateLimit,
   hasDefaultConcurrency: !!config.defaultConcurrency,
   hasDefaultTimeout: !!config.defaultTimeout,
   hasIpFilter: !!config.ipFilter,
-} as unknown as string);
+});
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/guard/src/manager/guard.factory.ts` around lines 38-46, the log call in GuardManager uses an unnecessary double-cast on the metadata object; remove the "as unknown as string" from the logger?.info invocation so the object is passed directly as the second argument (the rest/unknown[] param) to GuardLogger.info; update the call in guard.factory.ts where logger?.info('GuardManager initialized', { keyPrefix, hasGlobalRateLimit: !!config.global, hasGlobalConcurrency: !!config.globalConcurrency, hasDefaultRateLimit: !!config.defaultRateLimit, hasDefaultConcurrency: !!config.defaultConcurrency, hasDefaultTimeout: !!config.defaultTimeout, hasIpFilter: !!config.ipFilter }) to pass the object without any casting.

libs/guard/src/timeout/timeout.ts-12-24 (1)
12-24: ⚠️ Potential issue | 🟠 Major

Background promise execution continues after the timeout, allowing side effects even when the caller receives an error.

The `withTimeout` function rejects the wrapper promise when the timeout expires via `Promise.race`, but the underlying `fn()` promise continues executing. In call-tool.flow.ts:584 and call-agent.flow.ts:445, when `doExecute` calls `toolContext.execute()`, the operation can complete after the caller receives `ExecutionTimeoutError`, resulting in silently completed work despite the timeout.

To prevent background execution, pass the `AbortSignal` to `fn()` so callers can respect cancellation:

Proposed fix

```diff
-export async function withTimeout<T>(fn: () => Promise<T>, timeoutMs: number, entityName: string): Promise<T> {
+export async function withTimeout<T>(
+  fn: (signal: AbortSignal) => Promise<T>,
+  timeoutMs: number,
+  entityName: string,
+): Promise<T> {
   const controller = new AbortController();
   const timer = setTimeout(() => controller.abort(), timeoutMs);
   try {
     return await Promise.race([
-      fn(),
+      fn(controller.signal),
       new Promise<never>((_, reject) => {
         controller.signal.addEventListener('abort', () => {
           reject(new ExecutionTimeoutError(entityName, timeoutMs));
-        });
+        }, { once: true });
       }),
     ]);
   } finally {
     clearTimeout(timer);
   }
 }
```
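Assembled from the proposed diff, a self-contained sketch for reference (the `ExecutionTimeoutError` shape is simplified here); note that a callback which ignores the signal still runs to completion in the background, which is why the call-site updates matter:

```typescript
// Simplified error type; the real class in the guard library may differ.
class ExecutionTimeoutError extends Error {
  constructor(entityName: string, timeoutMs: number) {
    super(`${entityName} timed out after ${timeoutMs}ms`);
    this.name = 'ExecutionTimeoutError';
  }
}

// Cancellable timeout wrapper: the callback receives an AbortSignal so
// the underlying work can actually stop instead of racing on silently.
async function withTimeout<T>(
  fn: (signal: AbortSignal) => Promise<T>,
  timeoutMs: number,
  entityName: string,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await Promise.race([
      fn(controller.signal),
      new Promise<never>((_, reject) => {
        controller.signal.addEventListener(
          'abort',
          () => reject(new ExecutionTimeoutError(entityName, timeoutMs)),
          { once: true },
        );
      }),
    ]);
  } finally {
    clearTimeout(timer);
  }
}
```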
signal.abortedin loops, or propagate to nested operations).🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/guard/src/timeout/timeout.ts` around lines 12-24, withTimeout currently races fn() against an abort promise but does not cancel the underlying work; change withTimeout<T> to accept fn: (signal: AbortSignal) => Promise<T>, create an AbortController, call fn(controller.signal) inside the race, and still reject with new ExecutionTimeoutError(entityName, timeoutMs) when controller.abort(); ensure you clear the timer and remove the abort event listener in a finally block to avoid leaks. Update all call sites (e.g., doExecute / toolContext.execute in call-tool.flow.ts and call-agent.flow.ts) to accept and propagate the AbortSignal and to abort/respect it in fetch/http calls, loops, or nested operations so background execution stops after timeout.

apps/e2e/demo-e2e-guard/e2e/guard-concurrency.e2e.spec.ts-40-41 (1)
40-41: ⚠️ Potential issue | 🟠 Major

Assert payload success, not only settled status, for "should succeed" cases.

At lines 40-41 and 111-112, `fulfilled` alone is insufficient; these tests should also assert `toBeSuccessful()` on fulfilled values.

💡 Proposed fix

```diff
 // First should succeed
 expect(result1.status).toBe('fulfilled');
+if (result1.status === 'fulfilled') {
+  expect(result1.value).toBeSuccessful();
+}
 ...
 // First should succeed (eventually)
 expect(result1.status).toBe('fulfilled');
+if (result1.status === 'fulfilled') {
+  expect(result1.value).toBeSuccessful();
+}
```

Also applies to: 111-112
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/e2e/demo-e2e-guard/e2e/guard-concurrency.e2e.spec.ts` around lines 40-41, the test currently only asserts PromiseSettledResult.status === 'fulfilled' for the "should succeed" cases (e.g., result1 at the top of the file and the similar check around result2 at the later block); update each fulfilled assertion to also assert that the fulfilled value is successful by calling toBeSuccessful() on the fulfilled value (e.g., after expect(result1.status).toBe('fulfilled') add expect((result1 as PromiseFulfilledResult<any>).value).toBeSuccessful(), and do the same for the other fulfilled result in the later block) so the test checks payload success, not just settled status.

apps/e2e/demo-e2e-guard/e2e/guard-combined.e2e.spec.ts-70-73 (1)
70-73: ⚠️ Potential issue | 🟠 Major

Strengthen success assertions for the first two calls.

At lines 70-73, checking only `fulfilled` does not guarantee business success; a fulfilled MCP response can still be an error payload.

💡 Proposed fix

```diff
 // First two should succeed
 expect(results[0].status).toBe('fulfilled');
 expect(results[1].status).toBe('fulfilled');
+if (results[0].status === 'fulfilled') {
+  expect(results[0].value).toBeSuccessful();
+}
+if (results[1].status === 'fulfilled') {
+  expect(results[1].value).toBeSuccessful();
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/e2e/demo-e2e-guard/e2e/guard-combined.e2e.spec.ts` around lines 70-73, the assertions only check PromiseSettledResults are 'fulfilled' but not that the MCP responses indicate business success; update the assertions for results[0] and results[1] to inspect their value payloads (e.g., results[0].value and results[1].value) and assert expected success indicators such as a 200 status code and/or a success flag (for example assert value.statusCode === 200 and value.body.error is undefined, or value.success === true), rather than only checking .status === 'fulfilled'.

libs/sdk/src/agent/flows/call-agent.flow.ts-367-373 (1)
367-373: ⚠️ Potential issue | 🟠 Major

This bypasses top-level semaphore defaults for agents without local config.

Returning here when `agent.metadata.concurrency` is absent means `defaultConcurrency` and `globalConcurrency` never apply unless every agent opts in individually. The flow should still enter the guard manager, or add a separate path for global/default semaphore acquisition.
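The intended fallback chain can be sketched as (config shape illustrative, not the SDK's real types):

```typescript
// Resolve the effective concurrency policy: per-entity config wins,
// otherwise fall through to scope-level defaults instead of skipping
// the guard entirely.
interface ConcurrencyConfig {
  maxConcurrent: number;
}

function effectiveConcurrency(
  local: ConcurrencyConfig | undefined,
  defaults: ConcurrencyConfig | undefined,
  global: ConcurrencyConfig | undefined,
): ConcurrencyConfig | undefined {
  return local ?? defaults ?? global;
}
```

The point of the finding is that the flow should pass the possibly-undefined local config into the guard manager and let it resolve against defaults like this, rather than returning before the guard is consulted.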
Verify each finding against the current code and only fix it if needed. In `@libs/sdk/src/agent/flows/call-agent.flow.ts` around lines 367-373, the early return when agent.metadata.concurrency is falsy skips applying defaultConcurrency/globalConcurrency; modify the logic in call-agent.flow.ts (around the block using this.state.required.agent and agent.metadata.concurrency) so that instead of returning you route through the guard/semaphore manager: if agent.metadata.concurrency is missing, resolve to the top-level concurrency config (globalConcurrency or defaultConcurrency) and invoke the same guard acquisition path (the guard manager / acquireSemaphore mark and semaphore acquisition code) so global defaults are applied; ensure you still call this.state.agentContext?.mark('acquireSemaphore') and use the same semaphore acquisition functions used for agents with local config.

libs/sdk/src/tool/flows/call-tool.flow.ts-487-493 (1)
487-493: ⚠️ Potential issue | 🟠 Major

Use a scope-qualified guard key.

`tool.metadata.name` can collide across apps/scopes. With the current key, same-named tools share rate-limit buckets and semaphore slots, so one app can throttle or block another. Use `tool.fullName` or another guaranteed-unique identifier for the guard manager calls.

Also applies to: 528-534
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/sdk/src/tool/flows/call-tool.flow.ts` around lines 487-493, the rate-limiting key currently uses tool.metadata.name which can collide across scopes; update both calls that interact with the guard manager (manager.checkRateLimit and the other manager.acquire/guard-related call around the second block) to use a scope-qualified identifier such as tool.fullName (or another guaranteed-unique id) instead of tool.metadata.name so each tool's rate-limit and semaphore buckets are unique per app/scope; replace all uses of tool.metadata.name in these guard-manager calls (including the logged context) with tool.fullName and keep the existing retry/log/RateLimitError behavior unchanged.

libs/sdk/src/agent/flows/call-agent.flow.ts-344-346 (1)
344-346: ⚠️ Potential issue | 🟠 Major

Use a scope-qualified guard key for agents.

`agent.metadata.name` is not unique across apps/scopes, so same-named agents will share rate-limit buckets and semaphore slots. That lets one app throttle or block another's agent traffic. Use `agent.fullName` or another guaranteed-unique identifier for the guard manager calls.

Also applies to: 384-386
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/sdk/src/agent/flows/call-agent.flow.ts` around lines 344-346, the rate-limit guard is using a non-unique key (agent.metadata.name), so replace that with the scope-qualified unique identifier (agent.fullName) wherever the guard manager is called (e.g., manager.checkRateLimit(...) and the related semaphore/reservation calls around the other referenced lines such as manager.reserveSemaphore(...)/manager.releaseSemaphore(...)); update the checkRateLimit call to pass agent.fullName instead of agent.metadata.name and do the same substitution for all other guard manager invocations so each agent's quota/semaphore is partitioned by fullName.

libs/sdk/src/tool/flows/call-tool.flow.ts-511-517 (1)
511-517: ⚠️ Potential issue | 🟠 Major

This makes top-level semaphore defaults unreachable for tools without local config.

Returning here when `tool.metadata.concurrency` is absent means `defaultConcurrency` and `globalConcurrency` never apply to those tools. The flow should still enter the guard manager and let it resolve the effective semaphore policy.
Verify each finding against the current code and only fix it if needed. In `@libs/sdk/src/tool/flows/call-tool.flow.ts` around lines 511-517, the early return when tool.metadata.concurrency is falsy prevents fallback to top-level defaults; remove the return so the flow continues into the guard manager (keeping the existing this.state.toolContext?.mark('acquireSemaphore') and this.logger.verbose call) and pass the possibly-undefined config through so the guard manager can resolve effective semaphore policy using defaultConcurrency/globalConcurrency; update code around the check of const config = tool.metadata.concurrency in call-tool.flow (the acquireSemaphore logic) to avoid short-circuiting when config is missing.

libs/guard/src/ip-filter/ip-filter.ts-107-115 (1)
107-115: ⚠️ Potential issue | 🟠 Major

Validate octet and group format before parsing to prevent truncation bypasses.

`parseInt('1foo', 10)` returns `1`, allowing malformed IPs like `10.0.0.1foo` to be treated as valid `10.0.0.1`. On an access-control boundary, this is a security bypass. Validate the exact format (1-3 decimal digits for IPv4 octets, 1-4 hex digits for IPv6 groups) before conversion:

- IPv4: add `if (!/^\d{1,3}$/.test(part)) return null;` before `parseInt`
- IPv6: add `if (!/^[0-9a-fA-F]{1,4}$/.test(group)) return null;` before `parseInt`

This applies to both `parseIpv4` (lines 107-115) and `parseIpv6` (lines 146-149).
Verify each finding against the current code and only fix it if needed. In `@libs/guard/src/ip-filter/ip-filter.ts` around lines 107-115, the parsing functions currently allow truncated numeric parsing (e.g., "1foo" -> 1), so update parseIpv4 and parseIpv6 to validate each octet/group with a strict regex before numeric conversion: in parseIpv4 add a check like /^\d{1,3}$/ on each part and return null if it fails, and in parseIpv6 add a check like /^[0-9A-Fa-f]{1,4}$/ on each group and return null if it fails; keep the existing numeric-range checks after the regex validation and then perform the conversion as before in the functions parseIpv4 and parseIpv6.

libs/sdk/src/common/metadata/agent.metadata.ts-491-501 (1)
491-501: ⚠️ Potential issue | 🟠 Major

Align runtime schema validation with declared guard configuration types.

The `rateLimit`, `concurrency`, and `timeout` fields declare full type compatibility with `RateLimitConfig`, `ConcurrencyConfig`, and `TimeoutConfig`, but the runtime schemas only validate required fields and allow arbitrary extra properties via `z.looseObject()`. This lets invalid configurations pass validation and fail later during execution. Use the shared guard schemas as tool.metadata.ts already does.

Suggested fix

```diff
+import type { RateLimitConfig, ConcurrencyConfig, TimeoutConfig } from '@frontmcp/guard';
+import { rateLimitConfigSchema, concurrencyConfigSchema, timeoutConfigSchema } from '@frontmcp/guard';
 ...
-  rateLimit?: import('@frontmcp/guard').RateLimitConfig;
+  rateLimit?: RateLimitConfig;
 ...
-  concurrency?: import('@frontmcp/guard').ConcurrencyConfig;
+  concurrency?: ConcurrencyConfig;
 ...
-  timeout?: import('@frontmcp/guard').TimeoutConfig;
+  timeout?: TimeoutConfig;
 ...
-  rateLimit: z.looseObject({ maxRequests: z.number() }).optional(),
-  concurrency: z.looseObject({ maxConcurrent: z.number() }).optional(),
-  timeout: z.looseObject({ executeMs: z.number() }).optional(),
+  rateLimit: rateLimitConfigSchema.optional(),
+  concurrency: concurrencyConfigSchema.optional(),
+  timeout: timeoutConfigSchema.optional(),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/sdk/src/common/metadata/agent.metadata.ts` around lines 491 - 501, The runtime schemas for the agent metadata fields rateLimit, concurrency, and timeout currently use z.looseObject() and therefore allow extra/invalid properties; replace those loose validators with the shared guard schemas used in tool.metadata.ts so the runtime validation matches the declared types (RateLimitConfig, ConcurrencyConfig, TimeoutConfig). Import the corresponding guard schema exports from the shared guard module (the same names used in tool.metadata.ts) and use them for the rateLimit, concurrency, and timeout schema entries inside agent.metadata.ts, removing z.looseObject() wrappers so invalid shapes are rejected at validation time.
🟡 Minor comments (4)
apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts-23-23 (1)
23-23: ⚠️ Potential issue | 🟡 Minor

Use the testing library's protocol version constant instead of hardcoding.

The test hardcodes `'2025-03-26'`, but `@frontmcp/testing` provides `DEFAULT_PROTOCOL_VERSION = '2025-06-18'`. Either import and reuse this constant, or configure it via test client options to maintain consistency across the codebase and avoid version drift.
Verify each finding against the current code and only fix it if needed. In `@apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts` at line 23, replace the hardcoded protocol version string with the testing library constant: import DEFAULT_PROTOCOL_VERSION from "@frontmcp/testing" (or reference it from the test client options) and use DEFAULT_PROTOCOL_VERSION in place of the literal '2025-03-26' in the guard-browser.pw.spec.ts test configuration so the test uses the canonical protocol version constant.

libs/sdk/src/scope/flows/http.request.flow.ts-160-160 (1)
160-160: ⚠️ Potential issue | 🟡 Minor

Use a single type cast instead of the double cast for scope access.

`(this.scope as unknown as Scope)` bypasses type safety unnecessarily. Since `Scope` extends `ScopeEntry`, a single cast `(this.scope as Scope)` is both type-safe and consistent with the pattern used throughout the codebase in call-tool.flow.ts and call-agent.flow.ts.

Current code

```ts
const manager = (this.scope as unknown as Scope).rateLimitManager;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/sdk/src/scope/flows/http.request.flow.ts` at line 160, replace the double type cast on this.scope with a single, direct cast to Scope to avoid bypassing type safety: change uses of (this.scope as unknown as Scope) where you access rateLimitManager to (this.scope as Scope).rateLimitManager so it matches the pattern used in call-tool.flow.ts and call-agent.flow.ts; ensure any other occurrences in http.request.flow.ts follow the same single-cast approach.

apps/e2e/demo-e2e-guard/src/main.ts-4-4 (1)
4-4: ⚠️ Potential issue | 🟡 Minor

Validate `PORT` before wiring it into `http.port` to prevent NaN failures during server bootstrap.

Line 4 parses `PORT` without NaN validation. A non-numeric value (e.g., `PORT=abc`) produces `NaN`, causing startup to fail. Other similar demo-e2e files implement validation using patterns like `Number.isNaN()` or `Number.isFinite()` checks; this file should match that pattern for consistency.
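A defensive parse along those lines (helper name hypothetical):

```typescript
// Parse PORT defensively: fall back to the default when the value is
// missing, non-numeric (PORT=abc would otherwise yield NaN), or out of
// the valid TCP port range.
function resolvePort(raw: string | undefined, fallback: number): number {
  const parsed = Number.parseInt(raw ?? '', 10);
  return Number.isFinite(parsed) && parsed > 0 && parsed <= 65535 ? parsed : fallback;
}
```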
Verify each finding against the current code and only fix it if needed. In `@apps/e2e/demo-e2e-guard/src/main.ts` at line 4, the PORT parsing in main.ts currently assigns const port = parseInt(process.env['PORT'] ?? '50400', 10) without validating the result; update the code that constructs the port value (the const port variable used for http.port) to validate that parseInt returned a finite number (e.g., using Number.isFinite or !Number.isNaN) and if invalid fall back to the default numeric port 50400 or exit with a clear error; ensure the validated numeric port is what gets passed into http.port so non-numeric PORT values do not produce NaN at server bootstrap.

apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts-48-50 (1)
48-50: ⚠️ Potential issue | 🟡 Minor

Assert warm-up calls succeed before validating isolation.

On lines 48-50, the three setup calls are not asserted. A failed warm-up call can still let the "blocked" assertion pass and mask a regression.

Proposed fix

```diff
-for (let i = 0; i < 3; i++) {
-  await mcp.tools.call('rate-limited', { message: `req-${i}` });
-}
+for (let i = 0; i < 3; i++) {
+  const result = await mcp.tools.call('rate-limited', { message: `req-${i}` });
+  expect(result).toBeSuccessful();
+  expect(result).toHaveTextContent(`req-${i}`);
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts` around lines 48 - 50, The three warm-up calls using mcp.tools.call('rate-limited', { message: `req-${i}` }) must be asserted before the isolation/block assertions: capture each call's result (the array/response from mcp.tools.call invoked in the for loop) and assert it indicates success (e.g., expected body/message or success flag) inside the loop so a failing warm-up will fail the test immediately; update the for loop that calls mcp.tools.call('rate-limited', ...) to store each response and add an assertion (using the test framework's expect) that the response equals the expected successful value before proceeding to the blocked-request assertions.
🧹 Nitpick comments (8)
libs/testing/src/server/port-registry.ts (1)
54-59: Update the section range comment to match the new allocation.

demo-e2e-guard uses port 50400, but the section label still says 50300-50399. Keeping this comment accurate will prevent future range collisions/misclassification.

Proposed small cleanup

```diff
-// Infrastructure E2E tests (50300-50399)
+// Infrastructure E2E tests (50300-50499)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/testing/src/server/port-registry.ts` around lines 54-59, the section comment "Infrastructure E2E tests (50300-50399)" is now incorrect because entries like 'demo-e2e-guard' allocate ports starting at 50400; update the section range comment to reflect the actual allocation (e.g., "Infrastructure E2E tests (50300-50409)") so the block that includes 'demo-e2e-redis', 'demo-e2e-serverless', 'demo-e2e-uipack', 'demo-e2e-agent-adapters', and 'demo-e2e-guard' accurately describes the port span and avoids future collisions.

apps/e2e/demo-e2e-guard/src/apps/guard/tools/timeout.tool.js (1)
1-42: Consider whether compiled JS files should be committed.

This file appears to be TypeScript compilation output (contains the __decorate helper from decorator compilation). The corresponding TypeScript source exists at timeout.tool.ts.
no-varerrors are expected in compiled output and not actionable here. However, committing both.tssource and compiled.jsoutput can lead to drift if the JS isn't regenerated after TS changes.If these JS files are intentionally committed for runtime use (e.g., the demo runs without a build step), consider adding them to
.eslintignoreto suppress the false positive warnings. Otherwise, consider generating them at build time instead.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/e2e/demo-e2e-guard/src/apps/guard/tools/timeout.tool.js` around lines 1 - 42, The committed timeout.tool.js is compiled TypeScript output (contains __decorate) and duplicates the TypeScript source (timeout.tool.ts), which can cause drift; either remove the compiled file from the repo and ensure JS is generated during the build (update CI/build scripts so TimeoutTool is compiled from timeout.tool.ts), or if the JS must remain for runtime, add timeout.tool.js (or the compiled output folder) to .eslintignore to suppress linting of no-var and other compiler artifacts and document that compiled artifacts must be regenerated when timeout.tool.ts / the TimeoutTool class changes.

libs/sdk/src/scope/scope.instance.ts (2)
104-106: Rename `_rateLimitManager` to follow the private-field naming guideline.

Line 105 introduces an underscore-prefixed private field, which conflicts with the project's private-field convention.

Suggested rename

```diff
-  private _rateLimitManager?: GuardManager;
+  private guardManager?: GuardManager;
@@
-    this._rateLimitManager = mgr;
+    this.guardManager = mgr;
@@
   get rateLimitManager(): GuardManager | undefined {
-    return this._rateLimitManager;
+    return this.guardManager;
   }
```

As per coding guidelines: "Use `private` keyword without underscore prefix for private fields in classes, expose via getter methods instead of exposing private fields directly".

Also applies to: 713-715
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/sdk/src/scope/scope.instance.ts` around lines 104 - 106, The private field _rateLimitManager violates the no-underscore private-field guideline; rename the field to rateLimitManager (type GuardManager) and update all usages (e.g., references in methods and any assignments) to the new name, and if external access is required expose it via a getter (get rateLimitManager()) instead of making the field public; apply the same renaming pattern to the other private fields mentioned (the similar fields around the other occurrences) so all underscore-prefixed private properties are converted and references updated consistently.
185-205: Consider extracting guard initialization from `initialize()` into a helper.

The guard bootstrap block is feature-specific orchestration in `scope.instance.ts`; moving it to a helper keeps this initializer focused and consistent with existing capability-registration patterns.

Based on learnings: "Applies to **/scope.instance.ts : Use helper functions from skill module (like `registerSkillCapabilities` and `detectSkillsOnlyMode`) instead of inline feature-specific registration logic in scope.instance.ts".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/sdk/src/scope/scope.instance.ts` around lines 185 - 205, The guard bootstrap logic (throttleConfig, rateLimitPromise, createGuardManager, and assignment to this._rateLimitManager) should be extracted out of initialize() into a dedicated helper (e.g., a new initializeGuards or registerGuardCapabilities function) to mirror existing patterns like registerSkillCapabilities and detectSkillsOnlyMode; move the conditional creation of rateLimitPromise and its then-handler into that helper, accept needed inputs (this.metadata.throttle, this.cliMode, this.logger) and return the Promise<void> or undefined so initialize() can simply call the helper and push its result into batch1, keeping scope.instance.initialize focused and consistent with other capability registration code.

apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts (1)
80-82: Use condition-based retry instead of fixed sleep to reduce flakiness.

Line 81's fixed wait (`5500ms`) can be brittle around timing boundaries. Polling until success (with an overall timeout) is usually more stable for window-reset assertions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts` around lines 80 - 82, Replace the fixed sleep (await new Promise(resolve => setTimeout(resolve, 5500))) with a condition-based retry loop that polls the rate-limit reset condition until success or a max timeout; repeatedly perform the same request/assertion inside a small-interval loop (e.g., 200–500ms) and break when the window-reset assertion passes, otherwise fail after the overall timeout (e.g., 10s). Update the test in guard-rate-limit.e2e.spec.ts so the polling loop wraps the existing window-reset assertion and uses the same request/response logic instead of the hardcoded 5500ms sleep to reduce flakiness.

libs/guard/src/errors/errors.ts (1)
71-93: Consider masking IP addresses in error messages for production logging.

The `IpBlockedError` and `IpNotAllowedError` messages include the full client IP. While useful for debugging, logging full IPs could raise privacy concerns (GDPR/CCPA). Consider whether these messages might be logged and whether partial masking (e.g., `"192.168.x.x"`) would be appropriate for production.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/guard/src/errors/errors.ts` around lines 71 - 93, The error classes IpBlockedError and IpNotAllowedError currently embed the full client IP in the error message; change them to avoid including the raw IP in the public message by constructing a maskedIp (e.g., replace last octets or characters with "x" or asterisks) and pass that masked value to super for the human/log message while still storing the original value in the readonly clientIp property for internal use; update the constructors of IpBlockedError and IpNotAllowedError to compute maskedIp before calling super (or accept an optional flag to includeFullIp) so logs/messages use maskedIp but clientIp retains the full address.

apps/e2e/demo-e2e-guard/e2e/helpers/exec-cli.ts (1)
8-8: Module-level build cache may cause issues in parallel test runs.

The `buildDone` flag is module-level state. If tests run in parallel across different workers that import this module separately, each worker will rebuild. This is likely acceptable for E2E tests (which typically run serially), but worth noting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/e2e/demo-e2e-guard/e2e/helpers/exec-cli.ts` at line 8, Module-level buildDone flag can cause duplicate builds when workers import this module; replace the module-level boolean with a process-shared mechanism (e.g., check/create a temp lock file in os.tmpdir or use an env var like process.env.E2E_BUILD_DONE) so concurrent workers see the same state. Locate the buildDone symbol in exec-cli.ts and change its usage to: atomically check for the lock (fs.existsSync or fs.promises with proper error handling), perform the build only if the lock is absent, then create the lock (write a small file or set the env var) after a successful build; also ensure any cleanup/reset helper (exported if needed) is available for CI teardown. Keep build-start/finish logging and error handling intact and make the lock name unique to this project to avoid cross-suite collisions.

libs/guard/src/concurrency/__tests__/semaphore.spec.ts (1)
185-196: Modernize the test pattern to replace try/catch with `fail()`.

While `fail()` currently works, it's a legacy Jasmine global and not part of Jest's modern documented API. Capturing the rejection directly is more idiomatic and aligns with Jest best practices. Note that `await expect(p).rejects.toThrow()` resolves to `undefined`, so the rejection must be captured separately when its properties need to be inspected.

♻️ Proposed fix

```diff
 it('should include entity name and timeout in QueueTimeoutError', async () => {
   await semaphore.acquire('test', 1, 0, 'my-tool');
-  try {
-    await semaphore.acquire('test', 1, 150, 'my-tool');
-    fail('should have thrown');
-  } catch (error) {
-    expect(error).toBeInstanceOf(QueueTimeoutError);
-    expect((error as QueueTimeoutError).entityName).toBe('my-tool');
-    expect((error as QueueTimeoutError).queueTimeoutMs).toBe(150);
-  }
+  const error = await semaphore.acquire('test', 1, 150, 'my-tool').then(
+    () => Promise.reject(new Error('should have thrown')),
+    (err) => err,
+  );
+  expect(error).toBeInstanceOf(QueueTimeoutError);
+  expect((error as QueueTimeoutError).entityName).toBe('my-tool');
+  expect((error as QueueTimeoutError).queueTimeoutMs).toBe(150);
 }, 10_000);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/guard/src/concurrency/__tests__/semaphore.spec.ts` around lines 185 - 196, Replace the try/catch + fail pattern in the test that calls semaphore.acquire: capture the rejection value directly (e.g., const error = await semaphore.acquire('test', 1, 150, 'my-tool').then(() => Promise.reject(new Error('should have thrown')), (err) => err)), then assert that error is an instance of QueueTimeoutError and that its entityName and queueTimeoutMs properties equal 'my-tool' and 150; remove fail() and the try/catch. Note that await expect(...).rejects.toThrow() resolves to undefined, so it cannot be used to capture the error for property assertions.
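A minimal, self-contained sketch of that capture pattern (`QueueTimeoutErrorExample` is an illustrative stand-in for the library's `QueueTimeoutError`, not its actual definition):

```typescript
// Illustrative error class — a stand-in for the library's QueueTimeoutError,
// not its actual definition.
class QueueTimeoutErrorExample extends Error {
  constructor(
    readonly entityName: string,
    readonly queueTimeoutMs: number,
  ) {
    super(`queue timeout for ${entityName} after ${queueTimeoutMs}ms`);
  }
}

// Capture a rejection so its properties can be inspected. This avoids both
// try/catch + fail() and the pitfall that `await expect(p).rejects.toThrow()`
// resolves to undefined in Jest.
async function captureRejection<T = unknown>(p: Promise<unknown>): Promise<T> {
  return p.then(
    () => {
      throw new Error('expected promise to reject');
    },
    (err) => err as T,
  );
}
```

The same helper works for any rejection whose fields a test needs to assert on, not just timeout errors.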
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2c800be3-9058-4f43-bc13-202c6c4180e0
📒 Files selected for processing (104)
- apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts
- apps/e2e/demo-e2e-guard/e2e/guard-cli.e2e.spec.ts
- apps/e2e/demo-e2e-guard/e2e/guard-combined.e2e.spec.ts
- apps/e2e/demo-e2e-guard/e2e/guard-concurrency.e2e.spec.ts
- apps/e2e/demo-e2e-guard/e2e/guard-global.e2e.spec.ts
- apps/e2e/demo-e2e-guard/e2e/guard-ip-filter.e2e.spec.ts
- apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts
- apps/e2e/demo-e2e-guard/e2e/guard-timeout.e2e.spec.ts
- apps/e2e/demo-e2e-guard/e2e/helpers/exec-cli.ts
- apps/e2e/demo-e2e-guard/fixture/frontmcp.config.js
- apps/e2e/demo-e2e-guard/fixture/src/main.ts
- apps/e2e/demo-e2e-guard/jest.cli.config.ts
- apps/e2e/demo-e2e-guard/jest.e2e.config.ts
- apps/e2e/demo-e2e-guard/playwright.config.ts
- apps/e2e/demo-e2e-guard/project.json
- apps/e2e/demo-e2e-guard/src/apps/guard/index.js
- apps/e2e/demo-e2e-guard/src/apps/guard/index.ts
- apps/e2e/demo-e2e-guard/src/apps/guard/tools/combined-guard.tool.js
- apps/e2e/demo-e2e-guard/src/apps/guard/tools/combined-guard.tool.ts
- apps/e2e/demo-e2e-guard/src/apps/guard/tools/concurrency-mutex.tool.js
- apps/e2e/demo-e2e-guard/src/apps/guard/tools/concurrency-mutex.tool.ts
- apps/e2e/demo-e2e-guard/src/apps/guard/tools/concurrency-queued.tool.js
- apps/e2e/demo-e2e-guard/src/apps/guard/tools/concurrency-queued.tool.ts
- apps/e2e/demo-e2e-guard/src/apps/guard/tools/rate-limited.tool.js
- apps/e2e/demo-e2e-guard/src/apps/guard/tools/rate-limited.tool.ts
- apps/e2e/demo-e2e-guard/src/apps/guard/tools/slow.tool.js
- apps/e2e/demo-e2e-guard/src/apps/guard/tools/slow.tool.ts
- apps/e2e/demo-e2e-guard/src/apps/guard/tools/timeout.tool.js
- apps/e2e/demo-e2e-guard/src/apps/guard/tools/timeout.tool.ts
- apps/e2e/demo-e2e-guard/src/apps/guard/tools/unguarded.tool.js
- apps/e2e/demo-e2e-guard/src/apps/guard/tools/unguarded.tool.ts
- apps/e2e/demo-e2e-guard/src/main.ts
- apps/e2e/demo-e2e-guard/tsconfig.app.json
- apps/e2e/demo-e2e-guard/tsconfig.json
- apps/e2e/demo-e2e-guard/webpack.config.js
- docs/docs.json
- docs/frontmcp/guides/rate-limiting-and-guards.mdx
- docs/frontmcp/sdk-reference/guard.mdx
- docs/frontmcp/servers/guard.mdx
- libs/guard/README.md
- libs/guard/jest.config.ts
- libs/guard/package.json
- libs/guard/project.json
- libs/guard/src/__tests__/exports.spec.ts
- libs/guard/src/concurrency/README.md
- libs/guard/src/concurrency/__tests__/semaphore.spec.ts
- libs/guard/src/concurrency/index.ts
- libs/guard/src/concurrency/semaphore.ts
- libs/guard/src/concurrency/types.ts
- libs/guard/src/errors/README.md
- libs/guard/src/errors/__tests__/errors.spec.ts
- libs/guard/src/errors/errors.ts
- libs/guard/src/errors/index.ts
- libs/guard/src/index.ts
- libs/guard/src/ip-filter/README.md
- libs/guard/src/ip-filter/__tests__/ip-filter.spec.ts
- libs/guard/src/ip-filter/index.ts
- libs/guard/src/ip-filter/ip-filter.ts
- libs/guard/src/ip-filter/types.ts
- libs/guard/src/manager/README.md
- libs/guard/src/manager/__tests__/guard.factory.spec.ts
- libs/guard/src/manager/__tests__/guard.manager.spec.ts
- libs/guard/src/manager/guard.factory.ts
- libs/guard/src/manager/guard.manager.ts
- libs/guard/src/manager/index.ts
- libs/guard/src/manager/types.ts
- libs/guard/src/partition-key/README.md
- libs/guard/src/partition-key/__tests__/partition-key.resolver.spec.ts
- libs/guard/src/partition-key/index.ts
- libs/guard/src/partition-key/partition-key.resolver.ts
- libs/guard/src/partition-key/types.ts
- libs/guard/src/rate-limit/README.md
- libs/guard/src/rate-limit/__tests__/rate-limiter.spec.ts
- libs/guard/src/rate-limit/index.ts
- libs/guard/src/rate-limit/rate-limiter.ts
- libs/guard/src/rate-limit/types.ts
- libs/guard/src/schemas/README.md
- libs/guard/src/schemas/__tests__/schemas.spec.ts
- libs/guard/src/schemas/index.ts
- libs/guard/src/schemas/schemas.ts
- libs/guard/src/timeout/README.md
- libs/guard/src/timeout/__tests__/timeout.spec.ts
- libs/guard/src/timeout/index.ts
- libs/guard/src/timeout/timeout.ts
- libs/guard/src/timeout/types.ts
- libs/guard/tsconfig.json
- libs/guard/tsconfig.lib.json
- libs/sdk/package.json
- libs/sdk/project.json
- libs/sdk/src/agent/flows/call-agent.flow.ts
- libs/sdk/src/common/metadata/agent.metadata.ts
- libs/sdk/src/common/metadata/front-mcp.metadata.ts
- libs/sdk/src/common/metadata/tool.metadata.ts
- libs/sdk/src/common/tokens/agent.tokens.ts
- libs/sdk/src/common/tokens/front-mcp.tokens.ts
- libs/sdk/src/common/tokens/tool.tokens.ts
- libs/sdk/src/index.ts
- libs/sdk/src/rate-limit/index.ts
- libs/sdk/src/scope/flows/http.request.flow.ts
- libs/sdk/src/scope/scope.instance.ts
- libs/sdk/src/tool/flows/call-tool.flow.ts
- libs/sdk/tsconfig.lib.json
- libs/testing/src/server/port-registry.ts
- tsconfig.base.json
Actionable comments posted: 2
🧹 Nitpick comments (6)
eslint.config.mjs (1)
8-8: Minor redundancy: `**/fixture/dist` is already covered by `**/dist`.

The glob `**/dist` already matches any `dist` directory at any depth, including `**/fixture/dist`. You can safely remove `'**/fixture/dist'` from the ignores list. The `'**/fixture/libs'` addition is appropriate for excluding fixture library artifacts.

🧹 Suggested simplification

```diff
-  ignores: ['**/dist', '**/*.d.ts', '**/*.d.ts.map', '**/fixture/dist', '**/fixture/libs'],
+  ignores: ['**/dist', '**/*.d.ts', '**/*.d.ts.map', '**/fixture/libs'],
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@eslint.config.mjs` at line 8, The ignores array in eslint.config.mjs contains a redundant entry; remove the specific '**/fixture/dist' string from the ignores list since the existing '**/dist' already covers all dist directories, leaving '**/dist', '**/*.d.ts', '**/*.d.ts.map', '**/fixture/libs' (and other entries) intact; update the ignores array declaration to drop '**/fixture/dist'.

apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts (1)
16-33: Add response validation before extracting session ID.

The `initializeSession` helper extracts the session ID without verifying that the initialization request succeeded. If the server returns an error, subsequent tests will proceed with an undefined `sessionId`, leading to confusing failures.

🛡️ Proposed defensive check

```diff
 async function initializeSession(request: APIRequestContext) {
   const response = await request.post(MCP_ENDPOINT, {
     data: {
       jsonrpc: '2.0',
       id: 'init-1',
       method: 'initialize',
       params: {
         protocolVersion: '2025-03-26',
         capabilities: {},
         clientInfo: { name: 'playwright-test', version: '1.0.0' },
       },
     },
     headers: { 'Content-Type': 'application/json' },
   });
+  if (!response.ok()) {
+    throw new Error(`Session initialization failed: ${response.status()}`);
+  }
+  const sessionId = response.headers()['mcp-session-id'];
+  if (!sessionId) {
+    throw new Error('Missing mcp-session-id header in response');
+  }
+  return { response, sessionId };
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts` around lines 16 - 33, The initializeSession helper currently extracts sessionId from response.headers() without validating the HTTP status or JSON-RPC payload; update initializeSession to first assert response.ok (or check response.status), parse the JSON body (await response.json()), verify there is no JSON-RPC error and that a valid result exists, and only then read the 'mcp-session-id' header into sessionId; if the HTTP status, JSON-RPC error, or missing header occur, throw a descriptive error including response.status and the parsed body to fail fast and avoid downstream undefined sessionId usage.

apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts (1)
72-75: Add success assertions to warmup calls for consistent validation.

Unlike the "Basic" test which asserts success for each warmup request (lines 23-27), this test silently ignores potential failures during the limit exhaustion phase. If the server behaves unexpectedly, the test could pass for wrong reasons.

💡 Proposed fix

```diff
 // Exhaust the limit (3 requests in 5s window)
 for (let i = 0; i < 3; i++) {
-  await mcp.tools.call('rate-limited', { message: `req-${i}` });
+  const warmup = await mcp.tools.call('rate-limited', { message: `req-${i}` });
+  expect(warmup).toBeSuccessful();
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts` around lines 72 - 75, The warmup loop that exhausts the rate limit using mcp.tools.call('rate-limited', { message: `req-${i}` }) lacks assertions; update the loop in guard-rate-limit.e2e.spec.ts to assert each warmup call succeeded (e.g., check returned status/success flag or that no error was thrown) so failures during the three warmup requests are detected, keeping the same loop structure but adding an expect/assert for each iteration referencing mcp.tools.call and the 'rate-limited' endpoint.

apps/e2e/demo-e2e-guard/e2e/guard-concurrency.e2e.spec.ts (1)
46-54: Consider asserting explicitly for the rejected case.

The comment notes that transport-level rejection is acceptable, but there's no assertion verifying this. If `result2.status` is `'rejected'`, the test passes silently without confirming the rejection reason relates to concurrency.

💡 Proposed explicit assertion

```diff
 // Second should have an error (concurrency limit, queue:0 = immediate reject)
 if (result2.status === 'fulfilled') {
   expect(result2.value).toBeError();
+} else {
+  // Transport-level rejection is also acceptable for concurrency limit
+  expect(result2.reason).toBeDefined();
 }
-// If rejected at transport level, that's also acceptable
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/e2e/demo-e2e-guard/e2e/guard-concurrency.e2e.spec.ts` around lines 46 - 54, Add an explicit assertion for the rejected branch of the PromiseResult: after checking result2.status === 'fulfilled', add an else-if for result2.status === 'rejected' and assert the rejection is related to concurrency by verifying result2.reason is defined and its message/string contains concurrency-related text (e.g., match /concurr|queue/i); reference result2.status and result2.reason when adding these expects so the test fails if the transport rejection is unrelated.

libs/guard/src/rate-limit/__tests__/rate-limiter.spec.ts (2)
18-29: Make mock storage batch/introspection methods stateful.
`mset`, `keys`, and `count` are static/no-op right now; this can mask regressions if the limiter starts relying on them. Align them with the same in-memory `Map` behavior as `get`/`set`/`incr`.

Proposed test-fixture hardening

```diff
-  mset: jest.fn().mockResolvedValue(undefined),
+  mset: jest.fn().mockImplementation(async (entries: Record<string, string>) => {
+    for (const [k, v] of Object.entries(entries)) {
+      data.set(k, v);
+    }
+  }),
@@
-  keys: jest.fn().mockResolvedValue([]),
-  count: jest.fn().mockResolvedValue(0),
+  keys: jest.fn().mockImplementation(async (pattern?: string) => {
+    const all = [...data.keys()];
+    if (!pattern) return all;
+    const regex = new RegExp(`^${pattern.replace(/\*/g, '.*')}$`);
+    return all.filter((k) => regex.test(k));
+  }),
+  count: jest.fn().mockImplementation(async () => data.size),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/guard/src/rate-limit/__tests__/rate-limiter.spec.ts` around lines 18 - 29, The mock storage's batch/introspection methods (mset, keys, count) are currently no-ops and should be made stateful to mirror the in-memory Map used by get/set/incr; update the mock implementations so mset writes multiple key/value pairs into the same in-memory Map (respecting the value shape used by set), keys returns the current keys (optionally filtered by a pattern) from that Map, and count returns the Map.size (or number of matching keys) so tests accurately reflect real storage behavior and catch regressions in functions that rely on these methods.
114-148: Add exact-boundary interpolation tests to catch off-by-one logic.

Mid-window interpolation is covered, but exact edges (window start/end) and exact-threshold acceptance/rejection are not explicitly asserted. Adding those cases will harden branch behavior.
As per coding guidelines: "Maintain 95%+ test coverage across all metrics (statements, branches, functions, lines) in Jest tests".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/guard/src/rate-limit/__tests__/rate-limiter.spec.ts` around lines 114 - 148, Add tests that assert interpolation behavior at exact window boundaries to catch off-by-one errors: after pre-populating storage via storage.set(previousKey, '<count>') with previousWindowStart and currentWindowStart defined as in existing tests, add one test where nowSpy.mockReturnValue(currentWindowStart) (exact start of current window) and assert limiter.check('test-key', limit, 60_000) yields the expected allowed/remaining when previous weight should be 0, and another test where nowSpy.mockReturnValue(currentWindowStart - 1) (exact end of previous window) and assert limiter.check(...) yields expected allowed/remaining when previous weight should be 1; use the same symbols limiter.check, nowSpy, storage.set, previousKey, previousWindowStart, and currentWindowStart so the new specs mirror the existing tests and verify exact-threshold acceptance/rejection.
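For reference, the boundary cases above can be reasoned about with the common sliding-window approximation (this assumes the usual convention, where the previous window carries full weight at the exact start of the current window and zero weight once a full window has elapsed; the repo's limiter may differ in details):

```typescript
// Sliding-window estimate: the previous window's count is weighted by how much
// of it still overlaps the trailing window.
// estimated = current + previous * max(0, 1 - elapsedInWindow / windowMs)
function estimateCount(
  previousCount: number,
  currentCount: number,
  currentWindowStart: number,
  windowMs: number,
  now: number,
): number {
  const elapsed = now - currentWindowStart;
  const previousWeight = Math.max(0, 1 - elapsed / windowMs);
  return currentCount + previousCount * previousWeight;
}
```

Pinning tests at `elapsed === 0` and `elapsed === windowMs` exercises exactly the weight-1 and weight-0 branches the comment asks for.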
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@libs/guard/src/ip-filter/__tests__/ip-filter.spec.ts`:
- Around line 249-259: The test description and the CIDR in the IpFilter setup
are inconsistent: the it() text says "IPv6 /64 range" while the allowList uses
'2001:db8::/32'; fix by making them match—either change the test description to
"IPv6 /32 range" or change the allowList CIDR to '2001:db8::/64' in the test
that constructs new IpFilter({ allowList: [...], defaultAction: 'deny' }) and
adjust the expectations if you change the CIDR.
- Around line 499-509: The test currently only asserts result is defined; update
it to assert the actual allow/deny outcome and validity: call
IpFilter.check('::ffff:192.168.1.0') and assert that the returned object
indicates an allowed decision (e.g., result.action === 'allow' or result.allowed
=== true) and that result.valid === true to ensure the IPv4-mapped IPv6 address
is both recognized and permitted by the allowList/defaultAction logic.
---
Nitpick comments:
In `@apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts`:
- Around line 16-33: The initializeSession helper currently extracts sessionId
from response.headers() without validating the HTTP status or JSON-RPC payload;
update initializeSession to first assert response.ok (or check response.status),
parse the JSON body (await response.json()), verify there is no JSON-RPC error
and that a valid result exists, and only then read the 'mcp-session-id' header
into sessionId; if the HTTP status, JSON-RPC error, or missing header occur,
throw a descriptive error including response.status and the parsed body to fail
fast and avoid downstream undefined sessionId usage.
In `@apps/e2e/demo-e2e-guard/e2e/guard-concurrency.e2e.spec.ts`:
- Around line 46-54: Add an explicit assertion for the rejected branch of the
PromiseResult: after checking result2.status === 'fulfilled', add an else-if for
result2.status === 'rejected' and assert the rejection is related to concurrency
by verifying result2.reason is defined and its message/string contains
concurrency-related text (e.g., match /concurr|queue/i); reference
result2.status and result2.reason when adding these expects so the test fails if
the transport rejection is unrelated.
In `@apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts`:
- Around line 72-75: The warmup loop that exhausts the rate limit using
mcp.tools.call('rate-limited', { message: `req-${i}` }) lacks assertions; update
the loop in guard-rate-limit.e2e.spec.ts to assert each warmup call succeeded
(e.g., check returned status/success flag or that no error was thrown) so
failures during the three warmup requests are detected, keeping the same loop
structure but adding an expect/assert for each iteration referencing
mcp.tools.call and the 'rate-limited' endpoint.
In `@eslint.config.mjs`:
- Line 8: The ignores array in eslint.config.mjs contains a redundant entry;
remove the specific '**/fixture/dist' string from the ignores list since the
existing '**/dist' already covers all dist directories, leaving '**/dist',
'**/*.d.ts', '**/*.d.ts.map', '**/fixture/libs' (and other entries) intact;
update the ignores array declaration to drop '**/fixture/dist'.
In `@libs/guard/src/rate-limit/__tests__/rate-limiter.spec.ts`:
- Around line 18-29: The mock storage's batch/introspection methods (mset, keys,
count) are currently no-ops and should be made stateful to mirror the in-memory
Map used by get/set/incr; update the mock implementations so mset writes
multiple key/value pairs into the same in-memory Map (respecting the value shape
used by set), keys returns the current keys (optionally filtered by a pattern)
from that Map, and count returns the Map.size (or number of matching keys) so
tests accurately reflect real storage behavior and catch regressions in
functions that rely on these methods.
- Around line 114-148: Add tests that assert interpolation behavior at exact
window boundaries to catch off-by-one errors: after pre-populating storage via
storage.set(previousKey, '<count>') with previousWindowStart and
currentWindowStart defined as in existing tests, add one test where
nowSpy.mockReturnValue(currentWindowStart) (exact start of current window) and
assert limiter.check('test-key', limit, 60_000) yields the expected
allowed/remaining when previous weight should be 0, and another test where
nowSpy.mockReturnValue(currentWindowStart - 1) (exact end of previous window)
and assert limiter.check(...) yields expected allowed/remaining when previous
weight should be 1; use the same symbols limiter.check, nowSpy, storage.set,
previousKey, previousWindowStart, and currentWindowStart so the new specs mirror
the existing tests and verify exact-threshold acceptance/rejection.
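Several of the prompts above recommend condition-based polling over fixed sleeps; a generic helper might look like this (names are illustrative, not from the repo):

```typescript
// Generic condition poller — replaces a fixed setTimeout sleep with a bounded
// retry loop that re-checks the condition on a small interval.
async function waitFor(
  condition: () => boolean | Promise<boolean>,
  { timeoutMs = 10_000, intervalMs = 250 }: { timeoutMs?: number; intervalMs?: number } = {},
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return; // condition met — stop polling
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`condition not met within ${timeoutMs}ms`);
}
```

In the window-reset test, `condition` would re-issue the rate-limited request and return whether it succeeded, so the test passes as soon as the window actually resets rather than after an arbitrary 5500ms.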
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: d483570d-d3e5-4a25-9836-41a3f4b98abf
📒 Files selected for processing (11)
- apps/e2e/demo-e2e-guard/e2e/browser/guard-browser.pw.spec.ts
- apps/e2e/demo-e2e-guard/e2e/guard-combined.e2e.spec.ts
- apps/e2e/demo-e2e-guard/e2e/guard-concurrency.e2e.spec.ts
- apps/e2e/demo-e2e-guard/e2e/guard-rate-limit.e2e.spec.ts
- eslint.config.mjs
- libs/guard/src/ip-filter/__tests__/ip-filter.spec.ts
- libs/guard/src/ip-filter/ip-filter.ts
- libs/guard/src/manager/guard.factory.ts
- libs/guard/src/rate-limit/__tests__/rate-limiter.spec.ts
- libs/sdk/src/scope/flows/http.request.flow.ts
- libs/testing/src/server/port-registry.ts
🚧 Files skipped from review as they are similar to previous changes (4)
- libs/sdk/src/scope/flows/http.request.flow.ts
- libs/guard/src/manager/guard.factory.ts
- libs/testing/src/server/port-registry.ts
- apps/e2e/demo-e2e-guard/e2e/guard-combined.e2e.spec.ts
```ts
describe('IPv6 CIDR ranges', () => {
  it('should match IPs in an IPv6 /64 range', () => {
    const filter = new IpFilter({
      allowList: ['2001:db8::/32'],
      defaultAction: 'deny',
    });

    expect(filter.check('2001:0db8::1').allowed).toBe(true);
    expect(filter.check('2001:0db8:ffff::1').allowed).toBe(true);
    expect(filter.check('2001:0db9::1').allowed).toBe(false);
  });
```
Test description mismatch: says "/64 range" but tests /32.
The test description at line 250 mentions "IPv6 /64 range" but the allowList uses '2001:db8::/32'.
Suggested fix
```diff
-  it('should match IPs in an IPv6 /64 range', () => {
+  it('should match IPs in an IPv6 /32 range', () => {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 describe('IPv6 CIDR ranges', () => {
-  it('should match IPs in an IPv6 /64 range', () => {
+  it('should match IPs in an IPv6 /32 range', () => {
     const filter = new IpFilter({
       allowList: ['2001:db8::/32'],
       defaultAction: 'deny',
     });

     expect(filter.check('2001:0db8::1').allowed).toBe(true);
     expect(filter.check('2001:0db8:ffff::1').allowed).toBe(true);
     expect(filter.check('2001:0db9::1').allowed).toBe(false);
   });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@libs/guard/src/ip-filter/__tests__/ip-filter.spec.ts` around lines 249 - 259,
The test description and the CIDR in the IpFilter setup are inconsistent: the
it() text says "IPv6 /64 range" while the allowList uses '2001:db8::/32'; fix by
making them match—either change the test description to "IPv6 /32 range" or
change the allowList CIDR to '2001:db8::/64' in the test that constructs new
IpFilter({ allowList: [...], defaultAction: 'deny' }) and adjust the
expectations if you change the CIDR.
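For context on the CIDR semantics discussed here, a minimal IPv4-only prefix matcher can be sketched as follows (an illustration, not the `IpFilter` implementation — the real filter also handles IPv6 ranges like `2001:db8::/32` and IPv4-mapped addresses):

```typescript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipv4ToInt(ip: string): number {
  return ip.split('.').reduce((acc, octet) => (acc << 8) | (parseInt(octet, 10) & 0xff), 0) >>> 0;
}

// An address matches a CIDR when its top `prefix` bits equal the base's.
// A /32 prefix therefore requires an exact match.
function inCidr(ip: string, cidr: string): boolean {
  const [base, prefixStr] = cidr.split('/');
  const prefix = Number(prefixStr);
  if (prefix === 0) return true; // /0 matches everything (avoids a 32-bit shift)
  const mask = (~0 << (32 - prefix)) >>> 0;
  return (ipv4ToInt(ip) & mask) === (ipv4ToInt(base) & mask);
}
```

The prefix-bit comparison is the same operation for IPv6, just over 128 bits instead of 32, which is why a description like "/64 range" versus a rule of `/32` is a meaningful mismatch.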
```ts
it('should handle IPv4-mapped IPv6 addresses', () => {
  const filter = new IpFilter({
    allowList: ['::ffff:192.168.1.0/128'],
    defaultAction: 'deny',
  });

  // Note: exact match required for /128
  // ::ffff:192.168.1.0 is the IPv4-mapped representation
  const result = filter.check('::ffff:192.168.1.0');
  expect(result).toBeDefined();
});
```
Strengthen assertion: toBeDefined() does not verify allow/deny behavior.
This test only asserts that result is defined, which doesn't validate the actual filtering logic. Since the implementation now correctly handles IPv4-mapped IPv6 with the valid flag, this test should explicitly verify the expected outcome.
Suggested fix
```diff
   const result = filter.check('::ffff:192.168.1.0');
-  expect(result).toBeDefined();
+  expect(result.allowed).toBe(true);
+  expect(result.reason).toBe('allowlisted');
+  expect(result.matchedRule).toBe('::ffff:192.168.1.0/128');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@libs/guard/src/ip-filter/__tests__/ip-filter.spec.ts` around lines 499 - 509,
The test currently only asserts result is defined; update it to assert the
actual allow/deny outcome and validity: call
IpFilter.check('::ffff:192.168.1.0') and assert that the returned object
indicates an allowed decision (e.g., result.action === 'allow' or result.allowed
=== true) and that result.valid === true to ensure the IPv4-mapped IPv6 address
is both recognized and permitted by the allowList/defaultAction logic.
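A sketch of the IPv4-mapped normalization and validity reporting the prompt describes (heuristic and illustrative — the actual `IpFilter.check` logic may differ):

```typescript
// Normalize an IPv4-mapped IPv6 address (::ffff:a.b.c.d) to its IPv4 form and
// report a validity flag, so rules like ::ffff:192.168.1.0/128 and plain IPv4
// rules can be compared uniformly. Heuristic only — no full address validation.
function normalizeIp(ip: string): { ip: string; valid: boolean } {
  const mapped = /^::ffff:(\d{1,3}(?:\.\d{1,3}){3})$/i.exec(ip);
  if (mapped) {
    return { ip: mapped[1], valid: true };
  }
  const isIpv4 = /^\d{1,3}(?:\.\d{1,3}){3}$/.test(ip);
  const isIpv6 = ip.includes(':');
  return { ip, valid: isIpv4 || isIpv6 };
}
```

With this shape, a test can assert both the allow/deny decision and `valid === true` for `::ffff:192.168.1.0` rather than only `toBeDefined()`.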
Summary by CodeRabbit
New Features
Documentation
Tests