feature/jwks validator #1103
base: main
Conversation
Walkthrough

Adds two new core packages: JWKS fetch/validate and Device ID (CSR/CA/cert/sign/verify). Refactors keybag APIs and storage schemas (V1/V2 migrations, fingerprinted keys, JWT/device-id storage), updates CRDT watcher/notify paths, introduces base64 utils, adjusts gateways and use-fireproof to JWT-based token storage, and adds broad test coverage. Includes dependency bumps.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor App
  participant Client as DeviceIdClient
  participant Keybag as KeyBagIf
  participant CSR as DeviceIdCSR
  participant Transport as DeviceIdProtocol
  participant CA as DeviceIdCA
  participant Verify as DeviceIdVerifyMsg
  App->>Client: ensureDeviceId()
  Client->>Keybag: getDeviceId()
  alt No certificate
    Client->>CSR: createCSR(subject, extensions)
    CSR-->>Client: csrJWT (Result)
    Client->>Transport: issueCertificate(csrJWT)
    Transport->>CA: processCSR(csrJWT)
    CA-->>Transport: IssueCertificateResult (Result)
    Transport-->>Client: IssueCertificateResult (Result)
    Client->>Keybag: setDeviceId(deviceId JWK, cert)
    Client-->>App: MsgSigner
  else Has certificate
    Client-->>App: MsgSigner
  end
  Note over App,Verify: Verification flow (later)
  App->>Verify: verifyWithCertificate(jwt)
  Verify->>Verify: extract x5c/x5t, pick cert
  Verify->>CA: findTrustedCA()
  Verify-->>App: Success or structured error
```
```mermaid
sequenceDiagram
  autonumber
  actor Service
  participant JWKS as JWKS Module
  Service->>JWKS: fetchAndValidateJWKS(config, vOpts, fOpts)
  JWKS->>JWKS: buildJWKSUrl(config)
  JWKS->>JWKS: fetchJWKS(url, retries/timeout)
  alt fetch ok
    JWKS->>JWKS: validateJWKS(jwks, opts)
    JWKS-->>Service: Ok({jwks, validation})
  else fetch error
    JWKS-->>Service: Err(JWKSFetchError)
  end
```
```mermaid
sequenceDiagram
  autonumber
  participant Remote as Remote Sync
  participant CRDT as CRDTImpl
  participant Clock as CRDTClockImpl
  participant Subs as watchers/noPayloadWatchers
  Remote->>CRDT: applyMeta(newHead)
  CRDT->>Clock: applyHead(newHead, localUpdates=false)
  Clock->>Clock: int_applyHead(...)
  alt localUpdates
    Clock->>Subs: notifyWatchers(changes)
  else not localUpdates
    Clock->>Clock: compute clockChangesSince
    alt changes exist
      Clock->>Subs: notifyWatchers(changes)
    else no changes
      Clock->>Subs: invoke noPayloadWatchers
    end
  end
```
```mermaid
sequenceDiagram
  autonumber
  actor UI
  participant Hook as use-attach
  participant KB as KeyBagIf
  UI->>Hook: ready()/token()
  Hook->>KB: getKeyBag()
  Hook->>KB: getJwt(name)
  alt Ok
    KB-->>Hook: { key, jwt, claims? }
    Hook-->>UI: token + claims
  else Err
    Hook-->>UI: no token
  end
  UI->>Hook: setToken(token)
  Hook->>KB: setJwt(name, token)
  UI->>Hook: resetToken()
  Hook->>KB: delete(name)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120–180 minutes
Actionable comments posted: 9
🧹 Nitpick comments (17)
core/jwks/vitest.config.ts (1)
3-8: Solid base; consider increasing the timeout for live HTTP and verifying fetch availability in CI (Node ≥18).

Integration tests hit live endpoints; the default Vitest timeout can be tight, and Node <18 won’t have a global fetch in a node environment. Recommend bumping the timeout and, if your CI isn’t on Node ≥18, adding a small setup file to polyfill fetch.

Proposed tweak:

```diff
 export default defineConfig({
   test: {
     globals: true,
     environment: 'node',
+    testTimeout: 15000,
   },
 });
```

If CI runs on Node <18, add a setup file and reference it:

```diff
 export default defineConfig({
   test: {
     globals: true,
     environment: 'node',
+    setupFiles: ['./tests/setup-fetch.ts'],
+    testTimeout: 15000,
   },
 });
```

tests/setup-fetch.ts (new file, optional if Node <18):

```typescript
// Only needed on Node <18
import { fetch, Headers, Request, Response } from 'undici';
Object.assign(globalThis, { fetch, Headers, Request, Response });
```

core/runtime/jwks-fetcher.ts (2)
1-9: Enrich error context and type the return as a JWKS to improve DX.

Current errors include only the status code, and JSON parse errors bubble up opaquely. Typing the return to a minimal JWKS shape and adding statusText/body snippets makes production issues diagnosable without extra logging. Also set an Accept header by default.

```diff
-export async function fetchJwks(url: string) {
-  const response = await fetch(url);
-
-  if (!response.ok) {
-    throw new Error(`Failed to fetch JWKS: ${response.status}`);
-  }
-
-  return await response.json();
-}
+export type Jwk = { kty: string; kid?: string; use?: string; [prop: string]: unknown };
+export type Jwks = { keys: Jwk[] };
+
+export async function fetchJwks(
+  url: string,
+  init?: RequestInit
+): Promise<Jwks> {
+  const response = await fetch(url, {
+    // Default to JSON; allow caller overrides/merges
+    headers: { Accept: 'application/json', ...(init?.headers ?? {}) },
+    ...init,
+  });
+
+  if (!response.ok) {
+    let snippet = '';
+    try {
+      const text = await response.text();
+      snippet = text ? ` — ${text.slice(0, 200)}` : '';
+    } catch {}
+    throw new Error(
+      `Failed to fetch JWKS (${url}): ${response.status} ${response.statusText}${snippet}`
+    );
+  }
+
+  try {
+    const json = (await response.json()) as unknown;
+    if (!json || typeof json !== 'object' || !Array.isArray((json as any).keys)) {
+      throw new Error('Response JSON is not a JWKS object with a keys array');
+    }
+    return json as Jwks;
+  } catch (e: any) {
+    throw new Error(`Failed to parse JWKS JSON from ${url}: ${e?.message ?? e}`);
+  }
+}
```

1-1: Potential name collision/ambiguity with core/jwks/src/fetcher.ts’s fetchJwks.

Two exported functions named fetchJwks in different packages can be confusing for consumers. Consider renaming this one to fetchJwksRaw, or namespacing it in the index barrels, to avoid accidental imports.
core/jwks/tsconfig.json (1)

3-13: ESM resolution alignment: consider NodeNext/Bundler resolution if the package ships ESM.

You’re emitting ES modules ("module": "ES2020"). If the monorepo standard is ESM and TS ≥5, switching moduleResolution to "bundler" (or "NodeNext") often avoids CJS/ESM edge cases and improves type resolution with modern tooling. Optional but recommended.

```diff
 "compilerOptions": {
   "target": "ES2020",
   "module": "ES2020",
   "outDir": "./dist",
   "rootDir": "./src",
   "declaration": true,
   "declarationMap": true,
   "sourceMap": true,
-  "moduleResolution": "node",
+  "moduleResolution": "bundler",
   "skipLibCheck": true
 },
```

If the repo targets Node-style ESM rather than bundler semantics, prefer:

```diff
-  "moduleResolution": "node",
+  "moduleResolution": "NodeNext",
```

Please confirm the monorepo’s TS settings so we stay consistent with prior work (e.g., the “core-cli tsc/tsgo” flow from PR #1016).
core/runtime/jwks-fetcher.test.ts (2)

4-13: Fix the unused-type lint error by actually using the interface (or removing it).

ESLint flags Jwks as unused. Either remove it or use it to type the result. Apply one of the following:

A) Use the type:

```diff
 interface Jwks {
   keys: {
     use: string;
     kty: string;
     kid: string;
     x5c: string[];
     n: string;
     e: string;
   }[];
 }
-  const result = await fetchJwks(url);
+  const result: Jwks = await fetchJwks(url);
```

B) Or remove the interface entirely if not needed.

22-27: Avoid brittle index-based assertions; assert over “some” key and add a longer per-test timeout.

The first key’s position is not guaranteed. Use Array.prototype.some and raise the timeout to reduce flakiness for live HTTP.

```diff
-    expect(result.keys[0]).toHaveProperty("use", "sig");
-    expect(result.keys[0]).toHaveProperty("kty", "RSA");
-    expect(result.keys[0]).toHaveProperty("kid");
-  });
+    expect(result.keys.length).toBeGreaterThan(0);
+    const hasValidKey = result.keys.some(
+      (k) => k.use === "sig" && k.kty === "RSA" && typeof k.kid === "string"
+    );
+    expect(hasValidKey).toBe(true);
+  }, 20000);
```

core/jwks/package.json (2)
5-7: ESM entry points look fine, but consider adding exports conditions for Node and type safety.

Right now you only expose "import" and "types". If consumers ever use require (even indirectly via tooling), resolution can break. Also, since you’re compiling TS in place, make sure src/*.js actually exists before publish.

Suggested tweaks:

- Add "default" in exports for broader tooling compatibility.
- Add "files" and "prepare"/"prepublishOnly" to ensure built JS is published.
- Optionally add "sideEffects": false to aid tree-shaking.

```diff
 "exports": {
   ".": {
-    "import": "./src/index.js",
-    "types": "./src/index.d.ts"
+    "import": "./src/index.js",
+    "default": "./src/index.js",
+    "types": "./src/index.d.ts"
   }
 },
+"files": [
+  "src/**/*.{js,d.ts,js.map}"
+],
+"sideEffects": false,
+"engines": {
+  "node": ">=18"
+},
+"publishConfig": {
+  "access": "public"
+}
```

Also applies to: 30-35

8-12: Add type-check and lint scripts; wire build-before-test to avoid stale outputs.

This package uses TS and runs tests via Vitest. Add a typecheck script and ensure tests run with type checking in CI to catch issues early.

```diff
 "scripts": {
-  "build": "tsc",
-  "test": "vitest",
-  "test:run": "vitest run"
+  "build": "tsc",
+  "typecheck": "tsc --noEmit",
+  "test": "vitest",
+  "test:run": "vitest run",
+  "ci": "npm run typecheck && npm run test:run"
 },
```

core/jwks/src/validator.ts (5)
68-81: Robustify buildJWKSUrl: trim input and tighten Clerk detection.

The current logic treats any string containing “clerk” as a Clerk host and doesn’t handle surrounding whitespace. Trim and validate.

```diff
 export function buildJWKSUrl(config: string): string {
-  if (config.startsWith("http://") || config.startsWith("https://")) {
-    return config;
+  const input = config.trim();
+  if (input.startsWith("http://") || input.startsWith("https://")) {
+    return input;
   }
   // Handle Clerk-style strings (both "trusted-glowworm-5" and "*.clerk.accounts.dev")
-  if (config.includes("clerk") || (!config.includes(".") && config.length > 0)) {
-    const domain = config.includes(".") ? config : `${config}.clerk.accounts.dev`;
+  if (input.endsWith(".clerk.accounts.dev") || (!input.includes(".") && input.length > 0)) {
+    const domain = input.includes(".") ? input : `${input}.clerk.accounts.dev`;
     return `https://${domain}/.well-known/jwks.json`;
   }
-  throw new JWKSValidationError("Invalid JWKS configuration", "INVALID_CONFIG", { config });
+  throw new JWKSValidationError("Invalid JWKS configuration", "INVALID_CONFIG", { config: input });
 }
```
232-236: Treat maxKeys overflow as a warning, not a hard error.

Providers sometimes publish more than 10 keys, and rejecting the whole JWKS can be disruptive. Consider either clipping to the first maxKeys keys and warning, or returning a non-fatal warning.

```diff
-  if (jwks.keys.length > maxKeys) {
-    result.validationErrors.push(`Too many keys: ${jwks.keys.length} (max: ${maxKeys})`);
-    return result;
-  }
+  if (jwks.keys.length > maxKeys) {
+    result.warningMessages.push(`Too many keys: ${jwks.keys.length} (max: ${maxKeys})`);
+  }
```
247-249: Validity threshold: accept empty validationErrors OR at least one valid key?

You currently require zero top-level validation errors and at least one valid key. If maxKeys becomes a warning (above), the isValid logic should reflect that. Adjust as needed based on product semantics.

251-270: Optional: escalate an invalid JWKS to an error via an option.

fetchAndValidateJWKS returns Ok even when validation fails. Consider a failOnInvalid?: boolean in validationOptions (or a method param) that returns Err(new JWKSValidationError(...)) when !validation.isValid. I can wire this in and update tests if you want.
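The two points above can be made concrete with a small sketch. All names here (`ValidationSummary`, `computeIsValid`, `assertValid`, `failOnInvalid`) are illustrative stand-ins for whatever shape the package settles on, not its actual API:

```typescript
// Hypothetical result shape mirroring the fields discussed in this review.
interface ValidationSummary {
  validationErrors: string[];
  warningMessages: string[];
  validKeysCount: number;
}

// If a maxKeys overflow is demoted to a warning, validity hinges on
// "no hard errors and at least one usable key".
function computeIsValid(r: ValidationSummary): boolean {
  return r.validationErrors.length === 0 && r.validKeysCount > 0;
}

// Opt-in hard failure: callers who prefer an exception over a soft result
// can wrap the validation outcome like this.
function assertValid(r: ValidationSummary, failOnInvalid: boolean): ValidationSummary {
  if (failOnInvalid && !computeIsValid(r)) {
    throw new Error(`JWKS invalid: ${r.validationErrors.join("; ")}`);
  }
  return r;
}
```

Under this rule, a result carrying only warnings (e.g. a maxKeys overflow) stays valid, while an empty key set fails.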
91-156: Add unit tests for retry/backoff and abort timeout behavior.

Given the non-trivial retry/timeout behavior, add tests that:

- assert a single attempt on 4xx (except 408/429),
- assert the retry count on 5xx/AbortError,
- confirm abort after timeout using a mock clock.

I can provide Vitest tests with fake timers and a mocked fetch.

core/jwks/tests/basic.test.ts (1)
1-19: Nice coverage of the URL builder’s happy/invalid paths; add whitespace and full-domain cases.

Add a case for trimmed input and for full Clerk domains without a protocol (already covered in integration). Minor, but it prevents regressions if we add trim() to the builder.

```diff
 it("should throw on invalid config", () => {
   expect(() => buildJWKSUrl("")).toThrow(JWKSValidationError);
 });
+
+it("trims whitespace", () => {
+  const result = buildJWKSUrl(" trusted-glowworm-5 ");
+  expect(result).toBe("https://trusted-glowworm-5.clerk.accounts.dev/.well-known/jwks.json");
+});
```

core/jwks/src/fetcher.ts (3)
8-14: Add an explicit return type and keep types local to avoid accidental any.

Make the legacy API’s return type explicit and import the JWKS type. This improves DX and prevents regressions if unwrap() typing changes.

```diff
-import { fetchJWKS } from "./validator.js";
+import { fetchJWKS } from "./validator.js";
+import type { JWKS } from "./validator.js";
@@
-export async function fetchJwks(url: string) {
+export async function fetchJwks(url: string): Promise<JWKS> {
   const result = await fetchJWKS(url);
   if (result.is_err()) {
     throw result.unwrap_err();
   }
   return result.unwrap();
 }
```

8-14: Optional: pass through fetch options without breaking callers.

To aid migration while keeping backward compatibility, accept an optional second param and forward it to fetchJWKS.

```diff
-export async function fetchJwks(url: string): Promise<JWKS> {
-  const result = await fetchJWKS(url);
+export async function fetchJwks(
+  url: string,
+  options?: Parameters<typeof fetchJWKS>[1]
+): Promise<JWKS> {
+  const result = await fetchJWKS(url, options);
   if (result.is_err()) {
     throw result.unwrap_err();
   }
   return result.unwrap();
 }
```
2-2: Ensure ESM-compatible module resolution for .js imports in core/jwks.

The local tsconfig in core/jwks/tsconfig.json currently uses "module": "ES2020" with "moduleResolution": "node", and the package.json declares "type": "module". While this may work in many setups, TypeScript’s classic node resolution doesn’t strictly follow Node.js ESM rules for .js extensions. To avoid surprises in editors and at runtime, switch to a Node-compatible ESM resolver.

core/jwks/tsconfig.json (lines 5 and 11):

```diff
 "compilerOptions": {
   "module": "ES2020",
-  "moduleResolution": "node",
+  "moduleResolution": "nodenext",
   // …
 }
```

This tells TS to apply Node.js’s ESM resolution algorithm (available since TS 4.7), ensuring that `import {…} from "./validator.js"` in a .ts file correctly maps to validator.ts at compile time and to validator.js at runtime.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (12)
- core/jwks/README.md (1 hunks)
- core/jwks/package.json (1 hunks)
- core/jwks/src/fetcher.ts (1 hunks)
- core/jwks/src/index.ts (1 hunks)
- core/jwks/src/validator.ts (1 hunks)
- core/jwks/test-all.sh (1 hunks)
- core/jwks/tests/basic.test.ts (1 hunks)
- core/jwks/tests/integration.test.ts (1 hunks)
- core/jwks/tsconfig.json (1 hunks)
- core/jwks/vitest.config.ts (1 hunks)
- core/runtime/jwks-fetcher.test.ts (1 hunks)
- core/runtime/jwks-fetcher.ts (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-05T07:28:12.473Z
Learnt from: mabels
PR: fireproof-storage/fireproof#1016
File: cli/package.json:8-12
Timestamp: 2025-08-05T07:28:12.473Z
Learning: In PR #1016 "update tsc -> tsgo", mabels is implementing global control over TypeScript compilation across the fireproof monorepo by replacing direct "tsc" calls with "core-cli tsc" in package.json build scripts. This allows centralized management of which TypeScript compiler to use (tsc vs tsgo) through the core-cli tool.
Applied to files:
core/jwks/tsconfig.json
🧬 Code graph analysis (4)
core/jwks/tests/basic.test.ts (1)
- core/jwks/src/validator.ts (2)
  - buildJWKSUrl (69-81)
  - JWKSValidationError (49-54)

core/jwks/src/fetcher.ts (2)
- core/jwks/src/index.ts (1)
  - fetchJwks (5-5)
- core/jwks/src/validator.ts (1)
  - fetchJWKS (84-156)

core/runtime/jwks-fetcher.test.ts (1)
- core/runtime/jwks-fetcher.ts (1)
  - fetchJwks (1-9)

core/jwks/tests/integration.test.ts (1)
- core/jwks/src/validator.ts (2)
  - buildJWKSUrl (69-81)
  - fetchAndValidateJWKS (252-270)
🪛 ESLint
core/runtime/jwks-fetcher.test.ts
[error] 4-4: 'Jwks' is defined but never used. Allowed unused vars must match /^_/u.
(@typescript-eslint/no-unused-vars)
core/jwks/src/validator.ts
[error] 1-1: '@adviser/cement' imported multiple times.
(import/no-duplicates)
[error] 2-2: '@adviser/cement' imported multiple times.
(import/no-duplicates)
[error] 50-50: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
🪛 LanguageTool
core/jwks/README.md
[grammar] ~55-~55: There might be a mistake here.
Context: ...ration Supports multiple input formats: - Direct URLs: `"https://example.com/.well...
(QB_NEW_EN)
[grammar] ~56-~56: There might be a mistake here.
Context: ... multiple input formats: - Direct URLs: "https://example.com/.well-known/jwks.json" - Clerk shortcuts: "trusted-glowworm-5" ...
(QB_NEW_EN)
[grammar] ~57-~57: There might be a mistake here.
Context: ...ll-known/jwks.json"- Clerk shortcuts:"trusted-glowworm-5" - Clerk domains:"trusted-glowworm-5.cler...
(QB_NEW_EN)
🪛 markdownlint-cli2 (0.17.2)
core/jwks/README.md
83-83: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (6)
core/jwks/src/validator.ts (2)
279-282: LGTM: ergonomic findKeyById with Option.

The use of Option here is clean and helps avoid sentinel values.
83-156: Confirm the runtime environment and adjust fetchJWKS accordingly.

I’ve reviewed core/jwks/src/validator.ts and verified the points below:

- The current code unconditionally sets forbidden headers (User-Agent, Cache-Control) on every fetch, which will throw in browsers and trigger CORS preflight requests.
- The timeout created by setTimeout is only cleared on successful responses; if fetch throws, the timer may leak.
- All 4xx errors are treated as non-retriable today, including 408/429, but it’s generally advisable to retry “Request Timeout” (408) and “Too Many Requests” (429) with backoff + jitter.
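The policy in the last bullet can be captured in a tiny predicate. This is a sketch for discussion only; the package currently inlines the status check in its catch block rather than exposing a helper like this:

```typescript
// Retry on server errors (5xx); among client errors, only 408 (Request
// Timeout) and 429 (Too Many Requests) are worth retrying with backoff.
function isRetriableStatus(status: number): boolean {
  if (status >= 500 && status < 600) return true;
  return status === 408 || status === 429;
}
```

Isolating the decision makes the retry loop trivial to unit-test without any network mocking.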
Please confirm whether this library must run in browser contexts. If browser support is required, apply the following essential refactor; if it’s Node-only, you can retain the unconditional User-Agent header and omit the browser guard.

core/jwks/src/validator.ts (around lines 100–140):

```diff
 export async function fetchJWKS(
   config: string,
   options?: { timeout?: number; retries?: number; userAgent?: string }
 ): Promise<Result<JWKS, JWKSFetchError>> {
   // …
   for (let attempt = 0; attempt <= retries; attempt++) {
+    const controller = new AbortController();
+    const timeoutId = setTimeout(() => controller.abort(), timeout);
     try {
-      const controller = new AbortController();
-      const timeoutId = setTimeout(() => controller.abort(), timeout);
-
-      const response = await fetch(url, {
-        signal: controller.signal,
-        headers: {
-          "User-Agent": userAgent,
-          "Accept": "application/json",
-          "Cache-Control": "no-cache"
-        }
-      });
-      clearTimeout(timeoutId);
+      const headers: Record<string, string> = { Accept: "application/json" };
+      // Only set User-Agent in Node; avoid forbidden/preflight headers in browsers.
+      if (typeof window === "undefined" && userAgent) {
+        headers["User-Agent"] = userAgent;
+      }
+      const response = await fetch(url, { signal: controller.signal, headers });

       if (!response.ok) {
         throw new JWKSFetchError(
           `HTTP ${response.status}: ${response.statusText}`,
           response.status,
           url
         );
       }
       @@
     } catch (error) {
       lastError = error instanceof Error ? error : new Error(String(error));
-      // Don't retry on client errors
-      if (error instanceof JWKSFetchError && error.statusCode && error.statusCode >= 400 && error.statusCode < 500) {
-        throw error;
-      }
+      // Don't retry most client errors (but retry 408/429)
+      if (
+        error instanceof JWKSFetchError &&
+        error.statusCode >= 400 &&
+        error.statusCode < 500 &&
+        ![408, 429].includes(error.statusCode)
+      ) {
+        throw error;
+      }

-      // Wait before retry
-      if (attempt < retries) {
-        await new Promise(resolve => setTimeout(resolve, Math.pow(2, attempt) * 1000));
-      }
+      // backoff + jitter before next retry
+      if (attempt < retries) {
+        const base = Math.pow(2, attempt) * 1000;
+        const jitter = Math.floor(Math.random() * 250);
+        await new Promise(resolve => setTimeout(resolve, base + jitter));
+      }
+    } finally {
+      clearTimeout(timeoutId);
     }
   }
```

- If browser support is required: merge this diff.
- If Node-only, you may omit the browser guard around User-Agent, but you should still clear the timeout in finally and add 408/429 retries.

core/jwks/tests/integration.test.ts (1)
1-18: LGTM: URL construction assertions match builder logic.

Tests align with buildJWKSUrl behavior, including direct URL passthrough and Clerk slug resolution.

core/jwks/src/index.ts (1)

1-5: Barrel export looks good; ensure .js paths exist post-build.

Re-exporting from compiled .js paths is correct for ESM. Just ensure tsc outputs .js next to .ts (the default when there is no outDir), or adjust the paths if you later move build output to dist/. If you plan to use outDir: "dist" in tsconfig.json, update these paths and package.json accordingly.

core/jwks/src/fetcher.ts (1)

4-7: Deprecation notice is clear and actionable — nice.

The JSDoc deprecation points users to fetchJWKS and fetchAndValidateJWKS. Good guidance for migration.

core/jwks/README.md (1)

16-33: Usage sample is clear and idiomatic.

Good demonstration of Result handling (is_ok/unwrap). No changes needed.
core/jwks/test-all.sh (outdated)

```bash
#!/bin/bash

echo "🧪 Running JWKS Validator Test Suite"
echo "===================================="

echo "📋 1. Basic unit tests..."
npx vitest run tests/basic.test.ts --reporter=verbose

echo -e "\n🌐 2. Integration tests (with live Clerk endpoint)..."
npx vitest run tests/integration.test.ts --reporter=verbose

echo -e "\n📊 3. All tests..."
npx vitest run tests/ --reporter=verbose

echo -e "\n✅ Test suite completed!"
```
Script fails when launched from repo root; make it cwd-agnostic and fail-fast.
Relative test paths are resolved against the current shell’s CWD. Running this as core/jwks/test-all.sh from the repo root will fail to find tests. Also, the script should stop on the first error.
```diff
 #!/bin/bash
+set -euo pipefail
+
+# Resolve to script directory so relative paths work from anywhere
+cd "$(dirname "${BASH_SOURCE[0]}")"

-echo "🧪 Running JWKS Validator Test Suite"
-echo "===================================="
+printf "🧪 Running JWKS Validator Test Suite\n"
+printf "====================================\n"

-echo "📋 1. Basic unit tests..."
-npx vitest run tests/basic.test.ts --reporter=verbose
+printf "📋 1. Basic unit tests...\n"
+npx vitest run -c vitest.config.ts tests/basic.test.ts --reporter=verbose

-echo -e "\n🌐 2. Integration tests (with live Clerk endpoint)..."
-npx vitest run tests/integration.test.ts --reporter=verbose
+printf "\n🌐 2. Integration tests (with live Clerk endpoint)...\n"
+npx vitest run -c vitest.config.ts tests/integration.test.ts --reporter=verbose

-echo -e "\n📊 3. All tests..."
-npx vitest run tests/ --reporter=verbose
+printf "\n📊 3. All tests...\n"
+npx vitest run -c vitest.config.ts tests/ --reporter=verbose

-echo -e "\n✅ Test suite completed!"
+printf "\n✅ Test suite completed!\n"
```
🤖 Prompt for AI Agents
In core/jwks/test-all.sh around lines 1 to 15, the script uses relative paths
and doesn't stop on errors; make it cwd-agnostic and fail-fast by setting strict
shell options (e.g. set -euo pipefail) at the top and changing to the script
directory (cd "$(dirname "$0")" || exit 1) before running the npx vitest
commands so tests run regardless of caller CWD and the script exits immediately
on the first failure.
…r support

- Add fetchAndValidateJWKS with Result-based error handling
- Support multiple config formats (URLs, Clerk shortcuts, domains)
- Implement retry logic with exponential backoff and jitter
- Add comprehensive key validation with jose integration
- Include browser compatibility fixes for fetch headers
- Add legacy fetchJwks wrapper for backward compatibility
- Comprehensive test suite with unit and integration tests
- Address all CodeRabbit nitpicks: ESM resolution, type safety, error handling

Resolves #1103
Actionable comments posted: 0
🧹 Nitpick comments (3)
core/jwks/src/validator.ts (3)
72-84: Consider adding URL validation for security.

The buildJWKSUrl function correctly handles different URL formats, but it could benefit from additional validation to prevent potential security issues such as open-redirect vulnerabilities.

Consider adding URL validation to ensure the constructed URLs are safe:

```diff
 export function buildJWKSUrl(config: string): string {
   if (config.startsWith("http://") || config.startsWith("https://")) {
+    // Validate URL to prevent open redirect vulnerabilities
+    try {
+      const url = new URL(config);
+      // Optionally, check against a whitelist of allowed domains
+      if (!url.pathname.includes("..") && !url.pathname.includes("//")) {
+        return config;
+      }
+    } catch {
+      throw new JWKSValidationError("Invalid URL format", "INVALID_CONFIG", { config });
+    }
     return config;
   }
```
86-159: Consider adding jitter to the retry backoff.

The retry mechanism with exponential backoff is well implemented, but adding jitter can help prevent the thundering-herd problem when multiple clients retry simultaneously.

Add jitter to the exponential backoff:

```diff
   // Wait before retry
   if (attempt < retries) {
-    await new Promise((resolve) => setTimeout(resolve, Math.pow(2, attempt) * 1000));
+    const baseDelay = Math.pow(2, attempt) * 1000;
+    const jitter = Math.random() * baseDelay * 0.1; // 10% jitter
+    await new Promise((resolve) => setTimeout(resolve, baseDelay + jitter));
   }
```
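For testability, the delay schedule this suggests can be pulled out into a pure helper. The 10% jitter fraction mirrors the suggestion above; the injectable `rand` parameter is an addition of this sketch so tests can pin the randomness:

```typescript
// Exponential backoff with proportional jitter; attempt is zero-based.
// rand is injectable so tests can make the schedule deterministic.
function backoffDelayMs(
  attempt: number,
  baseMs = 1000,
  jitterFraction = 0.1,
  rand: () => number = Math.random,
): number {
  const base = Math.pow(2, attempt) * baseMs;
  return base + rand() * base * jitterFraction;
}
```

The retry loop then reduces to `await sleep(backoffDelayMs(attempt))`, and the schedule itself can be asserted without timers or mocks.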
212-246: Consider adding duplicate key ID detection.

The JWKS validation could benefit from checking for duplicate key IDs, which might indicate configuration issues.

Add duplicate key ID detection:

```diff
 export async function validateJWKS(jwks: JWKS, options: JWKSValidationOptions = {}): Promise<JWKSValidationResult> {
   const result: JWKSValidationResult = {
     isValid: false,
     validKeysCount: 0,
     currentKeysCount: 0,
     totalKeysCount: jwks.keys.length,
     validationErrors: [],
     warningMessages: [],
     keyResults: [],
   };

   if (jwks.keys.length === 0) {
     result.validationErrors.push("JWKS contains no keys");
     return result;
   }

   const maxKeys = options.maxKeys ?? 10;
   if (jwks.keys.length > maxKeys) {
     result.validationErrors.push(`Too many keys: ${jwks.keys.length} (max: ${maxKeys})`);
     return result;
   }

+  // Check for duplicate key IDs
+  const keyIds = jwks.keys.filter(k => k.kid).map(k => k.kid);
+  const duplicateIds = keyIds.filter((kid, index) => keyIds.indexOf(kid) !== index);
+  if (duplicateIds.length > 0) {
+    result.warningMessages.push(`Duplicate key IDs found: ${[...new Set(duplicateIds)].join(', ')}`);
+  }

   // Validate each key
   for (const key of jwks.keys) {
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (4)
- core/jwks/README.md (1 hunks)
- core/jwks/src/validator.ts (1 hunks)
- core/jwks/test-all.sh (1 hunks)
- core/jwks/tests/integration.test.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- core/jwks/tests/integration.test.ts
- core/jwks/README.md
- core/jwks/test-all.sh
🔇 Additional comments (4)
core/jwks/src/validator.ts (4)
1-2: LGTM!

The imports are correctly structured, with both Result and Option imported from @adviser/cement in a single statement, and importJWK from jose appropriately imported for key validation.

48-57: LGTM!

The JWKSValidationError class properly uses the unknown type for the details property, avoiding any. The error structure is well designed, with appropriate fields for error tracking.

161-210: LGTM!

The key validation logic is comprehensive and correctly handles the algorithm hint for importJWK. The isCurrent flag is appropriately set only when the key passes all validations and imports successfully.

270-272: LGTM!

The getCurrentKeys function correctly filters for keys that are both current AND valid, ensuring only fully validated keys are returned. This prevents returning keys that might have imported successfully but failed other validation checks.
- Fetches JWKS from well-known endpoints
- Converts to environment strings for STS
- Includes comprehensive tests
Force-pushed: 3d9cd88 to 751f0ba
Actionable comments posted: 47
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (11)
core/base/crdt-clock.ts (3)
81-96: notifyWatchers() comment and behavior diverge; always notify noPayloadWatchers even on empty updates.

The comment says "Always notify both types…" but an early return prevents any notification on empty updates. If the intent is to tick noPayload watchers regardless of payload, move that call before the early return.
```diff
 notifyWatchers(updates: DocUpdate<DocTypes>[]) {
   updates = updates.filter((update) => update.id !== PARAM.GENESIS_CID);
-  if (!updates.length) {
-    return;
-  }
+  // Always tick "no payload" watchers, even if there are no document updates.
+  this.noPayloadWatchers.forEach((fn) => fn());
+  if (!updates.length) return;
-  this.logger
+  this.logger
     .Debug()
     .Int("updatesCount", updates.length)
     .Int("watchersCount", this.watchers.size)
     .Int("noPayloadWatchersCount", this.noPayloadWatchers.size)
     .Msg("NOTIFY_WATCHERS: Triggering subscriptions");
-  // Always notify both types of watchers - subscription systems need notifications
-  // regardless of whether there are document updates
-  this.noPayloadWatchers.forEach((fn) => fn());
   this.watchers.forEach((fn) => fn(updates || []));
 }
```
214-222: validateBlocks() is not awaiting the async checks; validation is effectively a no-op.

`map(async ...)` without awaiting the resulting promises means missing blocks won't be detected before proceeding.
```diff
 async function validateBlocks(logger: Logger, newHead: ClockHead, blockstore?: BaseBlockstore) {
   if (!blockstore) throw logger.Error().Msg("missing blockstore");
-  newHead.map(async (cid) => {
-    const got = await blockstore.get(cid);
-    if (!got) {
-      throw logger.Error().Str("cid", cid.toString()).Msg("int_applyHead missing block").AsError();
-    }
-  });
+  for (const cid of newHead) {
+    const got = await blockstore.get(cid);
+    if (!got) {
+      throw logger.Error().Str("cid", cid.toString()).Msg("int_applyHead missing block").AsError();
+    }
+  }
 }
```
127-160: Ensure remote (noLoader=true) transactions are always committed

The current `applyHead` implementation in core/base/crdt-clock.ts opens a transaction on the first remote update (when `localUpdates=false`) but never commits it until a local update arrives. This unbounded, long-lived transaction can accumulate blocks indefinitely and prevent compaction.

Key locations to address:
- core/base/crdt-clock.ts (≈ lines 155–161):
```ts
if (!this.transaction) {
  this.transaction = this.blockstore.openTransaction({ noLoader, add: false });
}
const tblocks = this.transaction;
```
`noLoader` is `true` for remote updates, so `openTransaction` is called here.
- core/base/crdt-clock.ts (≈ lines 176–181):
```ts
if (!noLoader) {
  await this.blockstore.commitTransaction(tblocks, { head: advancedHead }, { add: false, noLoader });
  this.transaction = undefined;
}
```
This commit only runs when `noLoader=false` (i.e. local updates), leaving the remote-path transaction open.
- core/blockstore/loader.ts (≈ lines 228–231):
```ts
await (this.blockstoreParent as BaseBlockstore).commitTransaction(
  new CarTransactionImpl(...),
  { head: this.blockstoreParent?.crdtParent?.clock.head },
  { add: false, noLoader: false },
);
```
Commits a new transaction, not the one from crdt-clock.ts, so it doesn't flush the pending remote transaction.

To resolve, implement a commit policy for remote transactions, for example:
- Commit whenever the `applyHead` queue drains.
- Commit after N remote updates or on a periodic timer.
- Or immediately call `commitTransaction` in the `noLoader=true` branch (perhaps with `{ noLoader: true }`), then clear `this.transaction`.

Once the desired semantics are confirmed, we can codify a concrete strategy.
core/gateways/base/meta-key-hack.ts (4)
137-143: Bug: lastDecodedMetas keeps 3 items, not 2.

Because the check runs before push, the array ends up length 3 after each decode. Clamp after push or pre-check using >=.
Apply:
```diff
-    // we only want to keep the last 2 metas
-    if (lastDecodedMetas.length > 2) {
-      lastDecodedMetas.shift();
-    }
-    lastDecodedMetas.push(r.Ok());
+    lastDecodedMetas.push(r.Ok());
+    // keep only the last 2 metas
+    while (lastDecodedMetas.length > 2) {
+      lastDecodedMetas.shift();
+    }
```
111-117: Error propagation returns a Result inside Err; unwrap the actual error.

Returning `Result.Err(rJsObj)` nests a Result as the error payload. Use the underlying error.
Apply:
```diff
-  if (rJsObj.isErr()) {
-    return Result.Err(rJsObj);
-  }
+  if (rJsObj.isErr()) {
+    return Result.Err(rJsObj.Err());
+  }
```
134-136: Same nested-Result issue when decode fails; unwrap before returning.

Apply:
```diff
-  if (r.isErr()) {
-    return Promise.resolve(Result.Err(r));
-  }
+  if (r.isErr()) {
+    return Promise.resolve(Result.Err(r.Err()));
+  }
```
207-209: Same nested-Result issue in encoder; unwrap Err.

Apply:
```diff
-  if (obj.isErr()) {
-    return Promise.resolve(Result.Err(obj));
-  }
+  if (obj.isErr()) {
+    return Promise.resolve(Result.Err(obj.Err()));
+  }
```
core/gateways/file/key-bag-file.ts (1)
20-21: Avoid id collisions when deriving filenames; use a reversible, collision-resistant mapping.

Replacing all non-alphanumerics with "_" can collide: e.g., "a/b" and "a_b" both map to "a_b.json". With URLs or arbitrary ids, this risks overwriting different bags.
Apply this diff to use encodeURIComponent (simple, reversible, filesystem-safe) and keep names unique:
```diff
-      fName: this.sthis.pathOps.join(dirName, `${id.replace(/[^a-zA-Z0-9]/g, "_")}.json`),
+      fName: this.sthis.pathOps.join(dirName, `${encodeURIComponent(id)}.json`),
```

If you prefer truly fixed-length names, hash id to hex:
```ts
// import { createHash } from "node:crypto";
const safe = encodeURIComponent(id); // or createHash("sha256").update(id).digest("hex");
```
core/keybag/key-bag-memory.ts (1)
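The collision the comment warns about is easy to reproduce. A standalone sketch contrasting the lossy `_` mapping with the reversible `encodeURIComponent` mapping (these are free-standing helpers, not the actual key-bag-file.ts code):

```typescript
// Lossy: distinct ids can map to the same filename.
const lossyName = (id: string): string => `${id.replace(/[^a-zA-Z0-9]/g, "_")}.json`;

// Reversible and collision-free: decodeURIComponent recovers the original id.
const safeName = (id: string): string => `${encodeURIComponent(id)}.json`;

lossyName("a/b"); // "a_b.json"
lossyName("a_b"); // "a_b.json"  <- collides with the line above
safeName("a/b");  // "a%2Fb.json"
safeName("a_b");  // "a_b.json"  <- "_" is unreserved, so it passes through unchanged
```

Because `encodeURIComponent` escapes every character that could clash with another id's encoding, the mapping is injective, and the original id can always be recovered for diagnostics.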
14-16: Use the same id encoding scheme as the file provider to prevent key collisions and keep parity.

Currently, keys use the raw id; if the file provider encodes ids, the same logical id might map differently between providers. Encoding also prevents separators from polluting the in-memory namespace.
Apply this diff:
```diff
 key(id: string): string {
-  return `${this.url.pathname}/${id}`;
+  return `${this.url.pathname}/${encodeURIComponent(id)}`;
 }
```
core/runtime/keyed-crypto.ts (1)
195-199: Update internal error message to use the new method name

The renaming of `asV2KeysItem` → `asV2StorageKeyItem` left a stale reference inside the error message. Please update the string literal so it reflects the new API name.

• File: core/runtime/keyed-crypto.ts
• Location: inside the `asV2StorageKeyItem` implementation (around line 196)

Suggested diff:
```diff
- throw this.logger.Error().Msg("noCrypto.asV2KeysItem not implemented").AsError();
+ throw this.logger.Error().Msg("noCrypto.asV2StorageKeyItem not implemented").AsError();
```
core/device-id/device-id-verify-msg.ts (1)
342-343: Unsafe Result access in asV2StorageKeyItem.

Line 343 accesses `.value.Ok().Ok().kfp` without checking if the Results contain errors, which could cause runtime errors.

```diff
 async asV2StorageKeyItem(): Promise<V2StorageKeyItem> {
   const kis = await Promise.all(
     this.lookUp
       .values()
       .filter((i) => i.key !== "*")
+      .filter((i) => i.value?.isOk())
       .map((i) => i.value.Ok().Ok().kfp.asKeysItem()),
```

However, this still leaves a double `.Ok().Ok()` chain. The method should handle the nested Result properly:

```diff
 async asV2StorageKeyItem(): Promise<V2StorageKeyItem> {
   const kis = await Promise.all(
     this.lookUp
       .values()
       .filter((i) => i.key !== "*")
-      .map((i) => i.value.Ok().Ok().kfp.asKeysItem()),
+      .filter((i) => i.value?.isOk() && i.value.Ok().isOk())
+      .map((i) => i.value.Ok().Ok().kfp.asKeysItem()),
```
♻️ Duplicate comments (3)
core/jwks/tests/integration.test.ts (1)
21-64: Gate the live network test behind LIVE_JWKS (missed one place)

You gated the first live test, but this second one still runs unconditionally and will hit the network in CI.
Apply this diff to gate it:
```diff
- // Integration test with real Clerk endpoint (may fail in CI/testing environments)
- it("should fetch and validate real Clerk JWKS (integration test)", async () => {
+ // Integration test with real Clerk endpoint (opt-in via LIVE_JWKS=1)
+ it.runIf(runLive)("should fetch and validate real Clerk JWKS (integration test)", async () => {
@@
- }, 10000); // 10 second timeout
+ }, 10000); // 10 second timeout
```

Optionally mute console output during the live run with `vi.spyOn(console, ...).mockImplementation(() => {})` to keep logs clean.
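The gating predicate itself can be isolated and unit-tested without vitest. The `LIVE_JWKS` name comes from the comment above; the helper and its exact-match semantics are illustrative:

```typescript
// True only when the opt-in flag is exactly "1"; anything else keeps CI offline.
const isLiveRun = (env: Record<string, string | undefined>): boolean => env.LIVE_JWKS === "1";

isLiveRun({ LIVE_JWKS: "1" }); // enabled
isLiveRun({});                 // disabled by default
```

Requiring an explicit opt-in keeps network-dependent tests from flaking CI while still letting developers run them locally with `LIVE_JWKS=1`.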
core/keybag/internal-keys-by-fingerprint.ts (2)
250-256: Critical: get() method always returns undefined.

The `get()` method has the same critical issue as in the previous file: it always returns `undefined` regardless of whether a key is found. Additionally, it unsafely accesses nested Results in the warning log.

This issue was already identified in the `device-id-verify-msg.ts` file review. The same fix should be applied here.
310-336: Unsafe nested Result access in upsertNoStore.

Similar to the previous issues, line 313 unsafely accesses `rI.Ok().kfp` without checking if `rI` is actually Ok.

This issue was already identified in the `device-id-verify-msg.ts` file review. The same fix should be applied here.
```ts
if (needsManualNotification) {
  const changes = await clockChangesSince<DocTypes>(this.blockstore, advancedHead, prevHead, {}, this.logger);
  const triggerReason =
    this.watchers.size > 0 && this.noPayloadWatchers.size > 0
      ? "both"
      : this.watchers.size > 0
        ? "watchers"
        : "noPayloadWatchers";
  this.logger
    .Debug()
    .Int("changesCount", changes.result.length)
    .Str("triggerReason", triggerReason)
    .Int("watchersCount", this.watchers.size)
    .Int("noPayloadWatchersCount", this.noPayloadWatchers.size)
    .Msg("MANUAL_NOTIFICATION: Checking for changes");
  if (changes.result.length > 0) {
    this.logger.Debug().Msg("MANUAL_NOTIFICATION: Calling notifyWatchers with changes");
    this.notifyWatchers(changes.result);
    this.noPayloadWatchers.forEach((fn) => fn());
  } else {
    this.logger.Debug().Msg("MANUAL_NOTIFICATION: Calling noPayloadWatchers directly");
    this.noPayloadWatchers.forEach((fn) => fn());
  }
}
```
Avoid duplicate noPayloadWatcher invocations when changes > 0.
In the manual path you call notifyWatchers(changes.result), which already triggers noPayloadWatchers, and then you invoke noPayloadWatchers again — leading to duplicate tock callbacks.
if (changes.result.length > 0) {
this.logger.Debug().Msg("MANUAL_NOTIFICATION: Calling notifyWatchers with changes");
- this.notifyWatchers(changes.result);
- this.noPayloadWatchers.forEach((fn) => fn());
+ this.notifyWatchers(changes.result);
} else {
this.logger.Debug().Msg("MANUAL_NOTIFICATION: Calling noPayloadWatchers directly");
this.noPayloadWatchers.forEach((fn) => fn());
}
🤖 Prompt for AI Agents
In core/base/crdt-clock.ts around lines 182 to 205, the manual-notification
branch calls notifyWatchers(changes.result) which already triggers the
noPayloadWatchers, and then calls noPayloadWatchers.forEach(...) again causing
duplicate invocations; remove the second noPayloadWatchers.forEach(...) from the
branch where changes.result.length > 0 so that noPayloadWatchers are only
invoked once (keep the noPayloadWatchers.forEach(...) in the else branch for the
no-changes case).
```ts
this.logger
  .Debug()
  .Str("newHead", crdtMeta.head.map((h) => h.toString()).join(","))
  .Int("subscribers", this.clock.watchers.size + this.clock.noPayloadWatchers.size)
  .Int("headLength", crdtMeta.head.length)
  .Int("currentHeadLength", this.clock.head.length)
  .Str("dbName", this.opts.name || "unnamed")
  .Msg("APPLY_META: Calling applyHead for REMOTE sync");
await this.clock.applyHead(crdtMeta.head, []);
```
🛠️ Refactor suggestion
Pass the current head as prevHead in applyMeta; avoid using an empty prevHead.
Using [] as prevHead causes downstream code (clockChangesSince) to treat the change as “since genesis,” potentially flooding subscribers with all historical updates on each remote sync. Pass a snapshot of the current head instead.
- await this.clock.applyHead(crdtMeta.head, []);
+ const prevHead = [...this.clock.head];
+ await this.clock.applyHead(crdtMeta.head, prevHead);
🤖 Prompt for AI Agents
In core/base/crdt.ts around lines 109 to 117, the call to this.clock.applyHead
uses an empty array for prevHead which causes downstream code to treat the
change as "since genesis" and flood subscribers; replace the empty prevHead with
a snapshot of the current head (e.g., a shallow copy of this.clock.head) so
applyHead receives the actual previous head state, ensuring clockChangesSince
computes diffs correctly and avoids replaying the entire history.
```diff
   },
   "dependencies": {
-    "@adviser/cement": "^0.4.25",
+    "@adviser/cement": "^0.4.26",
```
💡 Verification agent
🧩 Analysis chain
Verify repo-wide version alignment and lockfile refresh.
Ensure all packages (deps, devDeps, peerDeps) consistently reference @adviser/cement ^0.4.26 and refresh the workspace lockfile to avoid mixed resolutions.
Run this script from repo root to surface any mismatches (e.g., deps vs peerDeps):
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo -e "File\tdep\tdevDep\tpeerDep"
fd -a -H package.json | while read -r f; do
jq -r --arg PKG "@adviser/cement" '
. as $root
| [
input_filename,
($root.dependencies[$PKG] // ""),
($root.devDependencies[$PKG] // ""),
($root.peerDependencies[$PKG] // "")
] | @tsv
' "$f" 2>/dev/null
done | sort
echo
echo "Mismatches (dependency != peerDependency):"
fd -a -H package.json | while read -r f; do
jq -r --arg PKG "@adviser/cement" '
. as $root
| ($root.dependencies[$PKG] // "") as $dep
| ($root.peerDependencies[$PKG] // "") as $peer
| if ($dep != "" and $peer != "" and $dep != $peer) then
(input_filename + "\tdep=" + $dep + "\tpeer=" + $peer)
else empty end
' "$f" 2>/dev/null
doneLength of output: 11352
Align @adviser/cement versions across all packages & refresh lockfile
It looks like two consumer packages are still on older versions of @adviser/cement, which will lead to mixed resolutions in the workspace. Please update them to ^0.4.26 and then regenerate the lockfile.
Files to update:
- core/jwks/package.json (currently ^0.4.25)
- examples/react-router/package.json (currently ^0.4.21)
Suggested diffs:
--- core/jwks/package.json
+++ core/jwks/package.json
@@ -12,7 +12,7 @@
"dependencies": {
- "@adviser/cement": "^0.4.25",
+ "@adviser/cement": "^0.4.26",
/* … */
}
}--- examples/react-router/package.json
+++ examples/react-router/package.json
@@ -8,7 +8,7 @@
"dependencies": {
- "@adviser/cement": "^0.4.21",
+ "@adviser/cement": "^0.4.26",
/* … */
}
}Once these are updated, run your package manager’s install command (e.g. yarn install or npm install) from the repo root to refresh the lockfile and ensure a single, consistent resolution of @adviser/cement across the monorepo.
🤖 Prompt for AI Agents
In core/base/package.json around line 39, the workspace is mixing
@adviser/cement versions; update the consumers core/jwks/package.json and
examples/react-router/package.json to use "^0.4.26" (replacing "^0.4.25" and
"^0.4.21" respectively) and then run your package manager install (e.g. yarn
install or npm install) from the repo root to regenerate the lockfile so the
workspace resolves a single consistent @adviser/cement version.
```ts
ensureDeviceId() {
  return onceDeviceId.once(async () => {
    const kBag = await getKeyBag(this.#sthis);
    let deviceIdResult = await kBag.getDeviceId();
    if (deviceIdResult.deviceId.IsNone()) {
      const key = await DeviceIdKey.create();
      deviceIdResult = await kBag.setDeviceId(await key.exportPrivateJWK());
    }
    const key = await DeviceIdKey.createFromJWK(deviceIdResult.deviceId.unwrap());

    if (deviceIdResult.cert.IsNone()) {
      const csr = new DeviceIdCSR(this.#sthis, key);
      const rCsrJWT = await csr.createCSR({ commonName: `fp-dev@${await key.fingerPrint()}` });
      if (rCsrJWT.isErr()) {
        return Result.Err<MsgSigner>(rCsrJWT.Err());
      }
      const rCertResult = await this.#transport.issueCertificate(rCsrJWT.Ok());
      if (rCertResult.isErr()) {
        return Result.Err<MsgSigner>(rCertResult.Err());
      }
      deviceIdResult = await kBag.setDeviceId(deviceIdResult.deviceId.Unwrap(), rCertResult.Ok());
    }
    return Result.Err<MsgSigner>("No certificate found");
```
Fix Result/Option API usage and complete the provisioning flow

The code mixes non-existent methods (`IsNone`, `Unwrap`, `isErr`, `Ok`, `Err`) with the actual Result/Option API used elsewhere (`is_err`, `is_ok`, `unwrap`, `unwrap_err`). It also always returns `Err("No certificate found")` even when a cert exists, so the happy path never succeeds.
Proposed fix for the whole method body to:
- Use consistent Result/Option APIs,
- Create/restore key,
- Issue and optionally persist cert,
- Return a working MsgSigner.
ensureDeviceId() {
return onceDeviceId.once(async () => {
const kBag = await getKeyBag(this.#sthis);
- let deviceIdResult = await kBag.getDeviceId();
- if (deviceIdResult.deviceId.IsNone()) {
- const key = await DeviceIdKey.create();
- deviceIdResult = await kBag.setDeviceId(await key.exportPrivateJWK());
- }
- const key = await DeviceIdKey.createFromJWK(deviceIdResult.deviceId.unwrap());
-
- if (deviceIdResult.cert.IsNone()) {
- const csr = new DeviceIdCSR(this.#sthis, key);
- const rCsrJWT = await csr.createCSR({ commonName: `fp-dev@${await key.fingerPrint()}` });
- if (rCsrJWT.isErr()) {
- return Result.Err<MsgSigner>(rCsrJWT.Err());
- }
- const rCertResult = await this.#transport.issueCertificate(rCsrJWT.Ok());
- if (rCertResult.isErr()) {
- return Result.Err<MsgSigner>(rCertResult.Err());
- }
- deviceIdResult = await kBag.setDeviceId(deviceIdResult.deviceId.Unwrap(), rCertResult.Ok());
- }
- return Result.Err<MsgSigner>("No certificate found");
+ let state = await kBag.getDeviceId();
+
+ // 1) Ensure key material exists
+ if (state.deviceId.is_none()) {
+ const newKey = await DeviceIdKey.create();
+ state = await kBag.setDeviceId(await newKey.exportPrivateJWK());
+ }
+ const key = await DeviceIdKey.createFromJWK(state.deviceId.unwrap());
+
+ // 2) Ensure certificate exists (issue if missing)
+ if (state.cert.is_none()) {
+ const csr = new DeviceIdCSR(this.#sthis, key);
+ const rCsr = await csr.createCSR({ commonName: `fp-dev@${await key.fingerPrint()}` });
+ if (rCsr.is_err()) return Result.Err<MsgSigner>(rCsr.unwrap_err());
+
+ const rCert = await this.#transport.issueCertificate(rCsr.unwrap());
+ if (rCert.is_err()) return Result.Err<MsgSigner>(rCert.unwrap_err());
+
+ const certResult = rCert.unwrap();
+ // Optional: verify the certificate JWT before trusting it.
+ // const verify = await this.#transport.verifyMsg(certResult.certificateJWT);
+ // if (verify.is_err()) return Result.Err<MsgSigner>(verify.unwrap_err());
+
+ state = await kBag.setDeviceId(state.deviceId.unwrap(), certResult);
+ return Result.Ok(new MsgSigner(new DeviceIdSignMsg(this.#sthis.txt.base64, key, certResult.certificatePayload)));
+ }
+
+ // 3) Already provisioned
+ const storedCert = state.cert.unwrap();
+ return Result.Ok(new MsgSigner(new DeviceIdSignMsg(this.#sthis.txt.base64, key, storedCert.certificatePayload)));
});
}Follow-up: if DeviceIdKeyBagItem["cert"] does not contain certificatePayload, adapt the constructor to use the correct field. See the next comment for a type check suggestion.
🤖 Prompt for AI Agents
In core/device-id/device-id-client.ts around lines 39–61, the method mixes wrong
Result/Option method names and always returns an Err; update it to use the
actual API (Option: is_none()/is_some()/unwrap(); Result:
is_ok()/is_err()/unwrap()/unwrap_err()), ensure the key is created/restored (if
deviceId is none -> create DeviceIdKey and call kBag.setDeviceId with exported
JWK), create the DeviceIdKey from the stored JWK, if cert is none create CSR and
check results with is_err()/unwrap_err(), call this.#transport.issueCertificate
and on success persist the cert via kBag.setDeviceId(...) (passing the deviceId
jwk and certificate), then construct and return Result.Ok(MsgSigner) built from
the key and the issued certificate payload; if any Result is_err propagate the
unwrap_err() as Result.Err. Also add a follow-up note to adapt the MsgSigner
constructor if DeviceIdKeyBagItem["cert"] uses a different field name for the
certificate payload.
```ts
// sign a message
// @param msg: string // JWT String
sendSigned<T extends NonNullable<unknown>>(_payload: T, _algorithm?: string): Promise<string> {
  throw new Error("Method not implemented.");
}
```
🛠️ Refactor suggestion
Implement sendSigned via provisioned signer

This should reuse `ensureDeviceId` and delegate to `MsgSigner.sign()`. Throwing keeps the API unusable.
- sendSigned<T extends NonNullable<unknown>>(_payload: T, _algorithm?: string): Promise<string> {
- throw new Error("Method not implemented.");
- }
+ async sendSigned<T extends NonNullable<unknown>>(payload: T, algorithm?: string): Promise<string> {
+ const rSigner = await this.ensureDeviceId();
+ if (rSigner.is_err()) throw rSigner.unwrap_err();
+ return rSigner.unwrap().sign(payload, algorithm);
+ }
🤖 Prompt for AI Agents
In core/device-id/device-id-client.ts around lines 72 to 76, the sendSigned
method currently throws and must be implemented to reuse ensureDeviceId and
delegate signing to the provisioned MsgSigner: call await this.ensureDeviceId()
to guarantee a provisioned signer exists, obtain the provisioned signer (e.g.,
this.deviceId.provisionedSigner or equivalent field populated by
ensureDeviceId), and invoke its sign method with the payload and optional
algorithm, returning the resulting Promise<string>; propagate or throw any
errors from ensureDeviceId or sign so callers receive failures.
```ts
getJwt(name: string, key?: CryptoKey | KeyObject | JWK | Uint8Array, opts?: JWTVerifyOptions): Promise<Result<JWTResult>>;
```
🛠️ Refactor suggestion
Broaden key parameter type via KeyLike and avoid node-specific import leaks
Using KeyLike captures both CryptoKey and KeyObject. Keep Uint8Array/JWK for symmetric/JWK usage.
- setJwt(name: string, jwtStr: string, key?: CryptoKey | KeyObject | JWK | Uint8Array, opts?: JWTVerifyOptions): Promise<Result<JWTResult>>;
- getJwt(name: string, key?: CryptoKey | KeyObject | JWK | Uint8Array, opts?: JWTVerifyOptions): Promise<Result<JWTResult>>;
+ setJwt(name: string, jwtStr: string, key?: KeyLike | JWK | Uint8Array, opts?: JWTVerifyOptions): Promise<Result<JWTResult>>;
+ getJwt(name: string, key?: KeyLike | JWK | Uint8Array, opts?: JWTVerifyOptions): Promise<Result<JWTResult>>;Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In core/types/base/key-bag-if.ts around lines 30-31, the key parameter union
should be widened to use KeyLike to cover both CryptoKey and KeyObject and avoid
leaking node-specific imports; change both method signatures to accept key?:
KeyLike | JWK | Uint8Array and add a type-only import for KeyLike from the
cross-platform 'jose' package (import type { KeyLike } from 'jose'), removing
direct CryptoKey/KeyObject references.
| // Main FPCloudClaim schema | ||
| export const FPCloudClaimSchema = JWTPayloadSchema.extend({ | ||
| userId: z.string(), | ||
| email: z.email(), |
Bug: use z.string().email(), not z.email()
z.email() is not a valid Zod API. This will throw/compile incorrectly with v4. Replace with z.string().email().
Apply:
- email: z.email(),
+ email: z.string().email(),
🤖 Prompt for AI Agents
In core/types/protocols/cloud/msg-types.zod.ts around line 28, the schema
currently uses z.email() which is not a valid Zod API; replace z.email() with
z.string().email() so the field is validated as a string with email format
(i.e., change the schema entry to use z.string().email()).
| email: z.email(), | ||
| nickname: z.string().optional(), | ||
| provider: z.enum(["github", "google"]).optional(), | ||
| created: z.date(), |
🛠️ Refactor suggestion
Numeric date parsing likely treats seconds as milliseconds
JWT-like timestamps are often seconds since epoch. new Date(num) assumes milliseconds, so created: 1724670000 becomes ~1970-01-20. Fix by normalizing seconds to ms and validating.
Apply:
- created: z.date(),
+ // Accept Date output; actual parsing handled in FPCloudClaimSchema below
+ created: z.date(),And replace the Parse transform with a safe normalizer:
- // Transform string to Date if needed (common in JWT parsing)
- created: z.union([z.date(), z.string().transform((str) => new Date(str)), z.number().transform((num) => new Date(num))]),
+ // Normalize string/number into a valid Date; treat < 1e12 as seconds
+ created: z.union([
+ z.date(),
+ z.string().transform((str) => {
+ const d = new Date(str);
+ if (Number.isNaN(d.getTime())) throw new Error(`Invalid date string: ${str}`);
+ return d;
+ }),
+ z.number().transform((num) => {
+ const ms = num < 1e12 ? num * 1000 : num;
+ const d = new Date(ms);
+ if (Number.isNaN(d.getTime())) throw new Error(`Invalid epoch: ${num}`);
+ return d;
+ }),
+ ]),
Also applies to: 52-52
🤖 Prompt for AI Agents
In core/types/protocols/cloud/msg-types.zod.ts around line 31 (and also at line
52), change the schema so it accepts numbers and dates and normalizes numeric
timestamps that are in seconds to milliseconds before creating a Date: accept
z.number() or z.date(), and in the transform/preprocess detect numeric values <
1e12 (seconds) and multiply by 1000, leave larger values as milliseconds, then
construct a Date and validate it is a valid date (throw/return a ZodError if
invalid). Ensure the transform is safe (no direct new Date(num) without
normalization) and keep the runtime type as Date.
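The seconds-versus-milliseconds heuristic can be demonstrated in isolation. The 1e12 cutoff comes from the suggested diff above: any epoch below 1e12 (roughly September 2001 when read as milliseconds) is assumed to be seconds and scaled up.

```typescript
// Standalone version of the normalizer suggested in the review:
// epoch values below 1e12 are treated as seconds and scaled to milliseconds.
function normalizeEpochToDate(num: number): Date {
  const ms = num < 1e12 ? num * 1000 : num;
  const d = new Date(ms);
  if (Number.isNaN(d.getTime())) throw new Error(`Invalid epoch: ${num}`);
  return d;
}
```

With this, `created: 1724670000` (seconds) and `created: 1724670000000` (milliseconds) normalize to the same Date instead of the seconds value collapsing to January 1970.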
| }, | ||
| "dependencies": { | ||
| "@adviser/cement": "^0.4.25", | ||
| "@adviser/cement": "^0.4.26", |
💡 Verification agent
🧩 Analysis chain
Confirm Zod v4 API usage; “z.email()” is invalid in v4
You’ve upgraded to zod ^4.0.14. In v4, email validation is via z.string().email(), not z.email(). I see z.email() used in msg-types.zod.ts (Line 28). Please fix usages across the repo.
Run to locate invalid usages:
Also applies to: 44-45
🏁 Script executed:
#!/bin/bash
# Find invalid zod email usages
rg -nP '\bz\.email\s*\(' -g '!**/dist/**' -g '!**/build/**' -g '!**/node_modules/**'
Length of output: 220
Update Zod email validations to use z.string().email()
Zod v4 no longer provides a top-level z.email() helper. Email validation must be called off of a string schema, e.g. z.string().email().
Please refactor the following usages:
- core/types/protocols/cloud/msg-types.zod.ts:28
- core/types/protocols/cloud/msg-types.zod.ts:48
Suggested diff (apply at both locations):
- email: z.email(),
+ email: z.string().email(),
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In core/types/protocols/cloud/msg-types.zod.ts around lines 28 and 48, the code
is using the removed top-level z.email(); replace each occurrence of z.email()
with z.string().email() so the email validation is called off a string schema
(i.e. change z.email() → z.string().email() at both locations).
| "gptdoc": "Fireproof/React/Usage: import { useFireproof } from 'use-fireproof'; function WordCounterApp() { const { useLiveQuery, useDocument } = useFireproof('my-word-app'); const { doc: wordInput, merge: updateWordInput, save: saveWordInput, reset: clearWordInput } = useDocument({ word: '', timestamp: Date.now() }); const recentWords = useLiveQuery('timestamp', { descending: true, limit: 10 }); const { doc: { totalSubmitted }, merge: updateTotalSubmitted, save: saveTotalSubmitted } = useDocument({ _id: 'word-counter', totalSubmitted: 0 }); const handleWordSubmission = (e) => { e.preventDefault(); updateTotalSubmitted({ totalSubmitted: totalSubmitted + 1 }); saveTotalSubmitted(); saveWordInput(); clearWordInput();}; return (<><p>{totalSubmitted} words submitted</p><form onSubmit={handleWordSubmission}><input type='text' value={wordInput.word} onChange={e => updateWordInput({ word: e.target.value })} placeholder='Enter a word' /></form><ul>{recentWords.docs.map(entry => (<li key={entry._id}>{entry.word}</li>))} </ul></>) } export default WordCounterApp;", | ||
| "dependencies": { | ||
| "@adviser/cement": "^0.4.25", | ||
| "@adviser/cement": "^0.4.26", |
🛠️ Refactor suggestion
Align peerDependencies with dependency to avoid consumer install warnings.
use-fireproof depends on @adviser/cement ^0.4.26, but peerDependencies still declares ^0.4.20. This can trigger peer resolution warnings or mismatches for consumers. Align the peer range to ^0.4.26 (or a semver range that includes it).
Apply this diff:
"peerDependencies": {
- "@adviser/cement": "^0.4.20",
+ "@adviser/cement": "^0.4.26",
"react": ">=18.0.0"
},
Also applies to: 39-41
🤖 Prompt for AI Agents
In use-fireproof/package.json around lines 25 and 39-41, the dependency declares
"@adviser/cement": "^0.4.26" but peerDependencies still lists "^0.4.20"; update
the peerDependencies entry for @adviser/cement to match (e.g. "^0.4.26" or a
semver range that includes 0.4.26) so consumers don't receive peer resolution
warnings, and run a quick npm/yarn install to verify no peer warnings remain.
Review continued from previous batch...
| const validator = new DeviceIdValidator(); | ||
| const validation = await validator.validateCSR(csrJWS); | ||
| if (!validation.valid) { | ||
| return Result.Err(validation.error); |
Type mismatch in Result.Err() usage.
On line 61, Result.Err(validation.error) is called where validation.error is a string. Result.Err() typically expects an Error object, not a string, which can lead to type inconsistencies or runtime issues.
if (!validation.valid) {
- return Result.Err(validation.error);
+ return Result.Err(new Error(validation.error));
}
🤖 Prompt for AI Agents
In core/device-id/device-id-CA.ts around line 61, the code passes a string to
Result.Err which expects an Error object; change the call to wrap the message in
an Error (e.g., Result.Err(new Error(validation.error || 'Validation failed')))
so the returned Err contains an Error instance and handle the case where
validation.error may be undefined.
| }, | ||
| }); | ||
| if (rCert.isErr()) { | ||
| return Result.Err(rCert); |
Incorrect Result.Err() usage with nested Result.
Line 75 passes a Result object to Result.Err() instead of the actual error. This creates a Result<Result<...>> which is likely unintended.
if (rCert.isErr()) {
- return Result.Err(rCert);
+ return Result.Err(rCert.Err());
}
🤖 Prompt for AI Agents
In core/device-id/device-id-CA.ts around line 75, the code calls
Result.Err(rCert) where rCert is already a Result, producing a nested Result;
change this to return the existing rCert directly (or if rCert is an error value
rather than a Result, extract the error and call Result.Err(errorValue)) so the
function returns a single-layer Result rather than Result<Result<...>>.
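The nesting pitfall can be shown with a minimal stand-in for the Result type. This toy class only demonstrates the unwrap-then-rewrap pattern; the real Result comes from @adviser/cement and has a richer API.

```typescript
// Toy Result, for illustration only.
class Res<T> {
  private constructor(private readonly err?: Error, private readonly val?: T) {}
  static Ok<T>(v: T): Res<T> { return new Res<T>(undefined, v); }
  static Err<T>(e: Error): Res<T> { return new Res<T>(e); }
  isErr(): boolean { return this.err !== undefined; }
  Err(): Error { if (!this.err) throw new Error("not an Err"); return this.err; }
  Ok(): T { if (this.err) throw this.err; return this.val as T; }
}

function inner(fail: boolean): Res<string> {
  return fail ? Res.Err(new Error("boom")) : Res.Ok("cert");
}

function outer(fail: boolean): Res<string> {
  const rCert = inner(fail);
  if (rCert.isErr()) {
    // Propagate the inner error; passing rCert itself would wrap a Result
    // inside a Result, which is the bug the review flags.
    return Res.Err(rCert.Err());
  }
  return Res.Ok(rCert.Ok().toUpperCase());
}
```

Callers of `outer` see a single-layer Result either way, so `isErr()`/`Err()` behave consistently on both paths.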
| throw new Error(`Certificate has expired (valid to: ${notAfter.toISOString()})`); | ||
| } | ||
|
|
Unsafe access to nested Result values in warning log.
In the warning log on line 252, the code accesses .value.Ok().Ok().kfp.fingerPrint without checking if the values are error results first. This could throw an error if any of the Results contain errors.
this.keybag.logger
.Warn()
- .Any({ fprs: this.lookUp.values().map((i) => i.value.Ok().Ok().kfp.fingerPrint), fpr: fingerPrint })
+ .Any({
+ fprs: this.lookUp.values()
+ .filter((i) => i.value?.isOk())
+ .map((i) => i.value.Ok().kfp.fingerPrint),
+ fpr: fingerPrint
+ })
.Msg("keysByFingerprint:get: not found");
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In core/device-id/device-id-verify-msg.ts around lines 250 to 252, the warning
log currently dereferences nested Result objects with
.value.Ok().Ok().kfp.fingerPrint which can throw if any Result is an Err; guard
those accesses by first checking each Result for success (e.g., test .isOk() or
pattern-match the Result values) before accessing .value.Ok(), use a safe
extraction sequence (or optional chaining/fallback values) and log a clear
fallback (e.g., 'unknown fingerprint') when any intermediate Result is an Err so
the logger never dereferences an error result.
| return false; | ||
| } | ||
| }); | ||
| } |
Unsafe Ok() access without error checking.
Line 313 calls rI.Ok().kfp without checking if rI is actually Ok. Since rI comes from .value.Ok() on line 310, which itself might be an error result, this could cause a runtime error.
const keyItems = this.lookUp.values().map((i) => i.value.Ok());
if (def) {
for (const rI of keyItems) {
- const i = rI.Ok().kfp
+ if (!rI.isOk()) continue;
+ const i = rI.Ok().kfp;
if (i.default && i.fingerPrint !== kfp.fingerPrint) {
🤖 Prompt for AI Agents
In core/device-id/device-id-verify-msg.ts around line 313, rI is accessed as
rI.Ok().kfp without verifying rI is an Ok Result (rI originates from .value.Ok()
on line 310 and may be an Err); change the code to first check rI.Ok() (or use
pattern-matching/unwrap_or/if (rI.IsErr()) return a handled Err) and handle the
Err path (return or throw an appropriate error) before accessing kfp, then
safely read kfp from the Ok branch.
| let bagFnameOrUrl = sthis.env.get("FP_KEYBAG_URL"); | ||
| let url: URI; | ||
| if (runtimeFn().isBrowser) { | ||
| url = URI.from(bagFnameOrUrl || "indexeddb://fp-keybag"); | ||
| } else { | ||
| if (!bagFnameOrUrl) { | ||
| const home = sthis.env.get("HOME"); | ||
| bagFnameOrUrl = `${home}/.fireproof/keybag`; | ||
| url = URI.from(`file://${bagFnameOrUrl}`); | ||
| } else { | ||
| url = URI.from(bagFnameOrUrl); | ||
| } | ||
| } | ||
| const logger = ensureLogger(sthis, "defaultKeyBagUrl"); | ||
| logger.Debug().Url(url).Msg("from env"); | ||
| return url; | ||
| } |
🛠️ Refactor suggestion
Duplicate URL resolution logic in defaultKeyBagOpts.
Lines 106-118 duplicate the exact same URL resolution logic from defaultKeyBagUrl() (lines 77-93). This violates the DRY principle.
export function defaultKeyBagOpts(sthis: SuperThis, kbo?: Partial<KeyBagOpts>): KeyBagRuntime {
kbo = kbo || {};
if (kbo.keyRuntime) {
return kbo.keyRuntime;
}
const logger = ensureLogger(sthis, "KeyBag");
let url: URI;
if (kbo.url) {
url = URI.from(kbo.url);
logger.Debug().Url(url).Msg("from opts");
} else {
- let bagFnameOrUrl = sthis.env.get("FP_KEYBAG_URL");
- if (runtimeFn().isBrowser) {
- url = URI.from(bagFnameOrUrl || "indexeddb://fp-keybag");
- } else {
- if (!bagFnameOrUrl) {
- const home = sthis.env.get("HOME");
- bagFnameOrUrl = `${home}/.fireproof/keybag`;
- url = URI.from(`file://${bagFnameOrUrl}`);
- } else {
- url = URI.from(bagFnameOrUrl);
- }
- }
- logger.Debug().Url(url).Msg("from env");
+ url = defaultKeyBagUrl(sthis);
}
🤖 Prompt for AI Agents
In core/keybag/key-bag-setup.ts around lines 77-93 (and the duplicate at
106-118), the URL resolution logic is duplicated; remove the repeated block and
reuse the existing implementation by either calling defaultKeyBagUrl() from the
later location or extracting a small shared helper (e.g.,
resolveDefaultKeyBagUrl(env, runtimeFn)) that returns the URI; update the later
code to call that helper/function and reuse the same logger creation so behavior
and logging remain identical.
| const asKeysItem = await rKbf.Ok().asV2KeysItem(); | ||
| const asKeysItem = (await rKbf.Ok().asV2StorageKeyItem()).keys; | ||
| expect(asKeysItem.name).toEqual("kaputt"); | ||
| expect(Array.from(Object.values(asKeysItem.keys))).toEqual([ |
Fix incorrect property access.
After the fix above, this line needs to be updated to use the correct variable and property path.
- expect(Array.from(Object.values(asKeysItem.keys))).toEqual([
+ expect(Array.from(Object.values(keys2StorageItem.keys))).toEqual([
🤖 Prompt for AI Agents
In core/tests/runtime/key-bag.test.ts around line 204, the test is incorrectly
accessing asKeysItem.keys; replace that access with the correct variable and
property path used earlier in the test — use asKeys.keys (i.e.,
Array.from(Object.values(asKeys.keys))) so the expectation reads the actual keys
collection created for the test.
| key: "test", | ||
| jwt, | ||
| } satisfies JWTResult); | ||
| }); |
🛠️ Refactor suggestion
Incomplete JWT test assertion.
The JWT test doesn't verify that the claims are properly extracted and included in the result.
expect(rGet.Ok()).toEqual({
key: "test",
jwt,
+ claims: expect.objectContaining({
+ hello: "world",
+ iss: "fpcloud",
+ aud: "fpcloud-app",
+ sub: "Test"
+ })
} satisfies JWTResult);
🤖 Prompt for AI Agents
In core/tests/runtime/key-bag.test.ts around lines 425 to 428, the JWT assertion
only checks key and raw jwt but doesn't validate extracted claims; update the
test to parse or construct the expected claims and assert that the result's
claims field matches them (e.g., create expectedClaims object with the same
sub/iat/exp/other claims used to build jwt and add an assertion that
rGet.Ok().claims equals expectedClaims), ensuring types align with JWTResult and
handle any numeric vs string claim normalization.
Actionable comments posted: 1
🧹 Nitpick comments (3)
core/jwks/tests/basic.test.ts (3)
1-2: Import from the public barrel to test the API surface (and avoid ESM extension churn). This keeps tests stable if internals move and matches how consumers will import.
-import { buildJWKSUrl, JWKSValidationError } from "../src/validator.js";
+import { buildJWKSUrl, JWKSValidationError } from "../src/index.js";
4-9: Add a case for a full Clerk domain (not just slug). Covers the config.includes(".") branch in buildJWKSUrl.
 it("should build Clerk URLs correctly", () => {
   const result = buildJWKSUrl("trusted-glowworm-5");
   expect(result).toBe("https://trusted-glowworm-5.clerk.accounts.dev/.well-known/jwks.json");
 });
+
+ it("should build Clerk domain inputs correctly", () => {
+   const result = buildJWKSUrl("warm-owl-1.clerk.accounts.dev");
+   expect(result).toBe("https://warm-owl-1.clerk.accounts.dev/.well-known/jwks.json");
+ });
16-18: Assert error code/message and add another invalid case. Strengthens the contract and protects against regressions for non-Clerk domains.
- it("should throw on invalid config", () => {
-   expect(() => buildJWKSUrl("")).toThrow(JWKSValidationError);
- });
+ it("should throw on invalid config and expose error details", () => {
+   try {
+     buildJWKSUrl("");
+     throw new Error("Expected JWKSValidationError");
+   } catch (e: unknown) {
+     expect(e).toBeInstanceOf(JWKSValidationError);
+     const err = e as JWKSValidationError;
+     expect(err.code).toBe("INVALID_CONFIG");
+     expect(err.message).toMatch(/Invalid JWKS configuration/);
+   }
+ });
+
+ it("should reject arbitrary domains without Clerk", () => {
+   expect(() => buildJWKSUrl("example.com")).toThrow(JWKSValidationError);
+ });
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
core/jwks/tests/basic.test.ts(1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
core/jwks/tests/basic.test.ts (1)
core/jwks/src/validator.ts (2)
buildJWKSUrl (72-85), JWKSValidationError (48-57)
| it("should handle direct URLs", () => { | ||
| const url = "https://example.com/.well-known/jwks.json"; | ||
| const result = buildJWKSUrl(url); | ||
| expect(result).toBe(url); | ||
| }); | ||
|
|
💡 Verification agent
🧩 Analysis chain
Enforce HTTPS for remote JWKS (allow http only for localhost).
Accepting arbitrary http URLs is risky; add tests to lock expected behavior and drive validator hardening.
it("should handle direct URLs", () => {
const url = "https://example.com/.well-known/jwks.json";
const result = buildJWKSUrl(url);
expect(result).toBe(url);
});
+
+ // Security guardrails (adjust validator accordingly if needed)
+ it("should reject non-HTTPS remote URLs", () => {
+ expect(() => buildJWKSUrl("http://example.com/.well-known/jwks.json")).toThrow(JWKSValidationError);
+ });
+
+ it("should allow localhost over http for dev", () => {
+ const url = "http://localhost:8787/.well-known/jwks.json";
+ expect(buildJWKSUrl(url)).toBe(url);
+ });
🏁 Script executed:
#!/bin/bash
# Locate the implementation of buildJWKSUrl in the TypeScript sources
rg -nP "buildJWKSUrl" -C3 -g "*.ts"
Length of output: 3942
Enforce HTTPS for remote JWKS URLs (allow HTTP only for localhost)
Tighten buildJWKSUrl so it throws JWKSValidationError when given a non-HTTPS URL unless the host is localhost. Add tests to core/jwks/tests/basic.test.ts to lock in this behavior:
it("should handle direct URLs", () => {
const url = "https://example.com/.well-known/jwks.json";
expect(buildJWKSUrl(url)).toBe(url);
});
+
+it("should reject non-HTTPS remote URLs", () => {
+ expect(() => buildJWKSUrl("http://example.com/.well-known/jwks.json"))
+ .toThrow(JWKSValidationError);
+});
+
+it("should allow localhost HTTP URLs", () => {
+ const url = "http://localhost:8787/.well-known/jwks.json";
+ expect(buildJWKSUrl(url)).toBe(url);
+});Then update core/jwks/src/validator.ts (lines 72–75) to:
if (config.startsWith("http://")) {
const host = new URL(config).hostname;
if (host !== "localhost") {
throw new JWKSValidationError("Non-HTTPS JWKS URLs are not allowed");
}
return config;
}🤖 Prompt for AI Agents
In core/jwks/tests/basic.test.ts around lines 10 to 15, update the unit tests to
enforce HTTPS-only JWKS URLs: keep the existing HTTPS test (expect result toBe
the URL), add a test that http://localhost/... returns the URL, and add a test
that http://example.com/... throws a JWKSValidationError; and in
core/jwks/src/validator.ts around lines 72–75, change the http-handling logic so
that if config startsWith("http://") you parse the host (new
URL(config).hostname) and if host !== "localhost" throw new
JWKSValidationError("Non-HTTPS JWKS URLs are not allowed"), otherwise return
config. Ensure tests import JWKSValidationError and assert the throw correctly.
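Putting the pieces together, a hardened buildJWKSUrl might look like the sketch below. It is written under the review's assumptions only: the INVALID_CONFIG error code, the Clerk slug-to-domain convention, and the localhost-only http exception are taken from the comments above, not from the actual core/jwks/src/validator.ts.

```typescript
// Hypothetical hardened validator; names and behavior are assumptions
// drawn from the review thread, not the shipped implementation.
class JWKSValidationError extends Error {
  constructor(message: string, readonly code: string = "INVALID_CONFIG") {
    super(message);
    this.name = "JWKSValidationError";
  }
}

function buildJWKSUrl(config: string): string {
  if (!config) throw new JWKSValidationError("Invalid JWKS configuration: empty");
  if (config.startsWith("https://")) return config;
  if (config.startsWith("http://")) {
    // Plain http is a dev-only escape hatch for localhost.
    if (new URL(config).hostname !== "localhost") {
      throw new JWKSValidationError("Non-HTTPS JWKS URLs are not allowed");
    }
    return config;
  }
  // Otherwise expect a Clerk slug ("trusted-glowworm-5") or Clerk domain.
  const host = config.includes(".") ? config : `${config}.clerk.accounts.dev`;
  if (!host.endsWith(".clerk.accounts.dev")) {
    throw new JWKSValidationError(`Invalid JWKS configuration: ${config}`);
  }
  return `https://${host}/.well-known/jwks.json`;
}
```

This shape satisfies all the tests proposed above: slugs expand to Clerk JWKS URLs, https URLs pass through, http is rejected except for localhost, and arbitrary non-Clerk domains throw.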
Summary by CodeRabbit