fix(security): harden OTP generation, sanitize(), and verifyApiKey project select#106
prathamtagad wants to merge 3 commits into geturbackend:main from …lect

Conversation
- Replace Math.random() with crypto.randomInt() for OTP generation in
userAuth.controller.js (lines 970, 995, 1306, 1344). Math.random() is
not cryptographically secure; crypto.randomInt uses Node's CSPRNG,
consistent with how dashboard-api generates OTPs in auth.controller.js.
- Make sanitize() in input.validation.js recursive so nested $-prefixed
keys (e.g. { profile: { "$gt": "" } }) are stripped at every depth,
closing a NoSQL injection vector through nested update payloads.
- Add siteUrl and authProviders to the Project.findOne().select() clause
in verifyApiKey.js. Without these fields the cached project object is
missing data required by getFrontendCallbackBaseUrl(), causing social
auth callbacks to always redirect to localhost:5173 in production.
All 90 public-api tests pass.
📝 Walkthrough

Replaced insecure Math.random()-based OTPs with crypto.randomInt, standardized several API responses under { success, data, message }, expanded project fields in API-key lookup, made input sanitization recursively handle arrays/objects, and adjusted shutdown/handler behaviors across public and dashboard APIs.
Pull request overview
Hardens parts of the public API by improving OTP generation security, strengthening request sanitization against NoSQL operator injection, and expanding the Project projection used during API key verification.
Changes:
- Replaced OTP generation sites using Math.random() with crypto.randomInt() in userAuth.controller.js.
- Updated sanitize() to recursively strip $-prefixed keys across nested objects/arrays.
- Added siteUrl and authProviders to the Project fields selected in the verifyApiKey middleware.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| packages/common/src/utils/input.validation.js | Makes sanitize() recursive to remove $ keys at all depths. |
| apps/public-api/src/middlewares/verifyApiKey.js | Extends Project .select() projection to include siteUrl and authProviders. |
| apps/public-api/src/controllers/userAuth.controller.js | Switches OTP generation to use Node crypto CSPRNG. |
```js
siteUrl
authProviders
```
Adding siteUrl/authProviders to this middleware’s Project projection may not resolve the social-auth redirect issue described in the PR: the social flow re-fetches the project via getSocialProviderConfig() (apps/public-api/src/controllers/userAuth.controller.js around the selectClause construction) and that select does not include siteUrl, which is what getFrontendCallbackBaseUrl() reads. Consider including siteUrl in the getSocialProviderConfig() projection (or reusing req.project there) so the callback URL is built from the actual project setting.
```js
for (const key in obj) {
  if (!key.startsWith("$")) {
```
sanitize() now recursively copies keys into a fresh object, but it still allows special keys like __proto__, constructor, and prototype. If an attacker submits these keys in JSON payloads (e.g. in insert/update endpoints that call sanitize()), assigning them onto clean can mutate the object's prototype and lead to prototype-pollution style issues. Please explicitly block these keys (in addition to $-prefixed keys) and consider iterating only own-properties (e.g., Object.keys() / hasOwnProperty) when building clean.
```diff
-for (const key in obj) {
-  if (!key.startsWith("$")) {
+const blockedKeys = new Set(["__proto__", "constructor", "prototype"]);
+for (const key of Object.keys(obj)) {
+  if (!key.startsWith("$") && !blockedKeys.has(key)) {
```
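Put together, a recursive `sanitize()` along these lines would strip both $-prefixed and prototype-polluting keys at every depth. This is a sketch combining the PR's recursion with the blocked-key suggestion above, not the actual input.validation.js implementation:

```javascript
// Hypothetical recursive sanitize(): walk arrays and plain objects, dropping
// $-prefixed keys (NoSQL operators) and prototype-polluting keys everywhere.
const BLOCKED_KEYS = new Set(['__proto__', 'constructor', 'prototype']);

function sanitize(value) {
  if (Array.isArray(value)) return value.map(sanitize);
  if (value !== null && typeof value === 'object') {
    const clean = Object.create(null); // no prototype to pollute
    for (const key of Object.keys(value)) { // own properties only
      if (!key.startsWith('$') && !BLOCKED_KEYS.has(key)) {
        clean[key] = sanitize(value[key]);
      }
    }
    return clean;
  }
  return value;
}
```

With this version, the nested payload from the PR description, `{ profile: { "$gt": "" } }`, comes back with an empty `profile` object instead of passing the operator through.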
```js
const otp = crypto.randomInt(100000, 1000000).toString();
await redis.set(`project:${project._id}:otp:verification:${email}`, otp, 'EX', 300);
await setPublicOtpCooldown(project._id, email, 'verification');
```
OTP/cooldown Redis keys here are built with the raw email from the request, but verifyEmail later looks up the OTP using a normalized/lowercased email in the Redis key. If the user signs up with mixed-case or surrounding whitespace, the OTP will be stored under a different key and verification (and cooldown enforcement) can fail / be bypassed. Use the same normalized email value consistently for OTP storage and cooldown keys (and ideally for downstream email sending payloads too).
```js
const otp = crypto.randomInt(100000, 1000000).toString();
```
In this signup path, normalizedEmail is computed earlier but the OTP is later stored in Redis under a key that uses the raw email value (see the redis.set(...otp:verification:${email}...) a few lines below). This can break verification/cooldown when the input email contains uppercase or whitespace. Use the normalized email consistently for OTP storage and subsequent lookups.
```js
const otp = crypto.randomInt(100000, 1000000).toString();
await redis.set(`project:${project._id}:otp:verification:${email}`, otp, 'EX', 300);
await setPublicOtpCooldown(project._id, email, 'verification');
```
Same issue as in signup: OTP/cooldown Redis keys use the raw email value, but the verification flow normalizes email before reading the OTP key. This can cause resend + verification to fail for mixed-case/whitespace emails and can allow cooldown bypass by changing email casing. Store OTP and set cooldown using the normalized email consistently.
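A small helper makes the reviewers' point concrete: derive every OTP/cooldown Redis key from one normalized email so storage and lookup always agree. The key shape follows the snippets above; `normalizeEmail` and `otpKey` are hypothetical names, and the real controllers may normalize differently:

```javascript
// Hypothetical key builder: normalize once, use everywhere, so the key
// written at signup/resend matches the key read at verification.
function normalizeEmail(email) {
  return String(email).trim().toLowerCase();
}

function otpKey(projectId, email, purpose) {
  return `project:${projectId}:otp:${purpose}:${normalizeEmail(email)}`;
}
```

Because both the write path (`redis.set(otpKey(...), otp, 'EX', 300)`) and the read path would call the same builder, mixed-case or whitespace-padded input can no longer split the OTP and its lookup across two keys.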
- enforce correct api response formats
- add graceful shutdown for redis/bullmq
- fix email normalization on verify/resend
- remove dead code and noisy logs
- secure storage delete endpoints
@prathamtagad
Done, I've resolved the conflicts ;)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/common/src/utils/encryption.js (1)

Lines 27-42: ⚠️ Potential issue | 🟠 Major — Make decrypt() fail fast on missing key instead of silently returning null.

Line 27 currently allows missing-key misconfiguration to be swallowed by the broad try/catch and reported as a generic decryption failure. That makes config errors look like tampering and obscures the root cause.

🔧 Suggested fix

```diff
 function decrypt(encryptedData) {
   const ENCRYPTION_KEY = process.env.ENCRYPTION_KEY;
+  if (!ENCRYPTION_KEY) throw new Error("ENCRYPTION_KEY missing in .env");
   try {
     const iv = Buffer.from(encryptedData.iv, "hex");
     const tag = Buffer.from(encryptedData.tag, "hex");
     const encryptedText = Buffer.from(encryptedData.encrypted, "hex");
     const decipher = crypto.createDecipheriv(algorithm, Buffer.from(ENCRYPTION_KEY, 'hex'), iv);
     decipher.setAuthTag(tag);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/common/src/utils/encryption.js` around lines 27 - 42, The decrypt function currently swallows a missing ENCRYPTION_KEY inside its try/catch; fix by validating ENCRYPTION_KEY before the try block in function decrypt (i.e., check the ENCRYPTION_KEY constant) and if it's missing or falsy throw a clear Error (e.g., "ENCRYPTION_KEY not set") so the misconfiguration fails fast instead of being caught and reported as a generic decryption failure; ensure the Buffer.from(ENCRYPTION_KEY, 'hex') call only happens after this check so you don't hit a misleading catch path.
🧹 Nitpick comments (3)

apps/public-api/src/utils/mailLimit.js (1)

Lines 18-21: Use project or remove it to avoid a misleading helper contract.

getMonthlyMailLimit(project) accepts project but ignores it, while the call site now passes req.project. This suggests tier/project-aware limits that are not actually enforced yet. Either implement project-based limit resolution here or keep the old no-arg signature until that logic is ready.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/public-api/src/utils/mailLimit.js` around lines 18 - 21, getMonthlyMailLimit currently accepts a project parameter but ignores it, which is misleading because call sites pass req.project; either implement project-aware logic here or remove the parameter. Update getMonthlyMailLimit to inspect the passed project (e.g., project.tier or project.plan) and return different limits instead of always MONTHLY_FREE_MAIL_LIMIT, or change its signature to getMonthlyMailLimit() and update all callers (where req.project is passed) to stop passing an argument so the helper contract matches behavior. Ensure references to getMonthlyMailLimit and MONTHLY_FREE_MAIL_LIMIT are updated consistently.

apps/public-api/src/__tests__/storage.controller.test.js (1)
Line 288: LGTM, but consider adding tests for new validation paths.

The test correctly includes the new required fields (keyRole: 'secret', body: { confirm: 'DELETE_ALL_FILES' }). However, there are no tests covering the new validation error paths added to deleteAllFiles:

- keyRole !== 'secret' should return 403
- Missing or incorrect confirm value should return 400

💡 Suggested additional test cases

```js
test('returns 403 when keyRole is not secret', async () => {
  const req = { project: makeProject(), keyRole: 'publishable', body: { confirm: 'DELETE_ALL_FILES' } };
  const res = makeRes();
  await storageController.deleteAllFiles(req, res);
  expect(res.status).toHaveBeenCalledWith(403);
  expect(res.json).toHaveBeenCalledWith({ error: 'Access denied. Action requires a Secret Key.' });
});

test('returns 400 when confirm token is missing', async () => {
  const req = { project: makeProject(), keyRole: 'secret', body: {} };
  const res = makeRes();
  await storageController.deleteAllFiles(req, res);
  expect(res.status).toHaveBeenCalledWith(400);
  expect(res.json).toHaveBeenCalledWith(expect.objectContaining({ error: expect.stringContaining('DELETE_ALL_FILES') }));
});
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/public-api/src/__tests__/storage.controller.test.js` at line 288, Add tests for the new validation branches in deleteAllFiles: one test that calls storageController.deleteAllFiles with req.keyRole set to a non-'secret' value (e.g., 'publishable') and asserts res.status was called with 403 and res.json with { error: 'Access denied. Action requires a Secret Key.' }, and another test that calls deleteAllFiles with keyRole:'secret' but missing or incorrect req.body.confirm and asserts res.status was called with 400 and res.json contains an error mentioning 'DELETE_ALL_FILES'; reuse helpers makeProject() and makeRes() to construct req/res and keep test names descriptive.

apps/public-api/src/controllers/data.controller.js (1)
Lines 157-166: LGTM! Response format properly standardized with explicit HTTP 200 status.

Note: Other endpoints in this file (getSingleDoc, updateSingleData, deleteSingleDoc) and error responses still use non-standardized formats. Consider standardizing them in a follow-up for consistency with the coding guideline that all API endpoints return { success: bool, data: {}, message: "" }.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/public-api/src/controllers/data.controller.js` around lines 157 - 166, Other endpoints in this controller (getSingleDoc, updateSingleData, deleteSingleDoc) and their error handlers are not using the standardized response shape; update each of those functions to return res.status(...).json({ success: boolean, data: {...} or null, message: "..." }) on success and res.status(...).json({ success: false, data: {}, message: "<error message>" }) on errors, preserving appropriate HTTP status codes (e.g., 200, 404, 400, 500) and keeping existing response payloads inside the data field so all endpoints follow the same { success, data, message } contract.
⛔ Files ignored due to path filters (1)
- package-lock.json is excluded by !**/package-lock.json

📒 Files selected for processing (18)
- apps/dashboard-api/src/app.js
- apps/dashboard-api/src/controllers/project.controller.js
- apps/dashboard-api/src/middlewares/authMiddleware.js
- apps/dashboard-api/src/middlewares/loadProjectForAdmin.js
- apps/public-api/src/__tests__/storage.controller.test.js
- apps/public-api/src/__tests__/userAuth.refresh.test.js
- apps/public-api/src/app.js
- apps/public-api/src/controllers/data.controller.js
- apps/public-api/src/controllers/storage.controller.js
- apps/public-api/src/controllers/userAuth.controller.js
- apps/public-api/src/middlewares/api_usage.js
- apps/public-api/src/middlewares/blockUsersCollectionDataAccess.js
- apps/public-api/src/middlewares/verifyApiKey.js
- apps/public-api/src/utils/mailLimit.js
- package.json
- packages/common/src/config/redis.js
- packages/common/src/utils/GC.js
- packages/common/src/utils/encryption.js
💤 Files with no reviewable changes (4)
- packages/common/src/config/redis.js
- apps/dashboard-api/src/middlewares/authMiddleware.js
- apps/dashboard-api/src/middlewares/loadProjectForAdmin.js
- package.json
✅ Files skipped from review due to trivial changes (1)
- apps/public-api/src/middlewares/api_usage.js
🚧 Files skipped from review as they are similar to previous changes (2)
- apps/public-api/src/controllers/userAuth.controller.js
- apps/public-api/src/middlewares/verifyApiKey.js
```js
const { redis } = require('@urbackend/common');
if (redis && typeof redis.quit === 'function') await redis.quit();
console.log('✅ Redis connection closed.');
```
Avoid unconditional Redis “closed” log.
At Line 164, success is logged even when redis is missing or quit is not called. This can mislead incident/debugging traces.
Suggested patch
```diff
 const { redis } = require('@urbackend/common');
-if (redis && typeof redis.quit === 'function') await redis.quit();
-console.log('✅ Redis connection closed.');
+if (redis && typeof redis.quit === 'function') {
+  await redis.quit();
+  console.log('✅ Redis connection closed.');
+} else {
+  console.log('ℹ️ Redis shutdown skipped (client unavailable).');
+}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/dashboard-api/src/app.js` around lines 162 - 165, The code always prints
"✅ Redis connection closed." even if the redis client doesn't exist or quit()
wasn't called; change this so the success log is emitted only when redis &&
typeof redis.quit === 'function' and after await redis.quit() completes (place
the console.log inside that if block), and wrap the await redis.quit() in a
try/catch to log failures (use the same redis variable and its quit method) so
missing clients or errors don't produce a misleading success message.
```js
const projectId = String(req.params.projectId || '').trim();
if (!projectId) return res.status(400).json({ error: "Project ID is required." });

const project = await Project.findOne({
  _id: { $eq: projectId },
```
🧩 Analysis chain

🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "Controller and route references:"
rg -n -C4 '\bdeleteExternal(Db|Storage)Config\b' --type js --type ts
echo
echo "Any dashboard-api projectId/ObjectId validation:"
rg -n -C3 'ObjectId\.isValid|isValidObjectId|param\(["'"'"']projectId|projectId.*schema|projectId.*parse' --type js --type ts apps/dashboard-api
```
Repository: geturbackend/urBackend — Length of output: 4704

🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "=== deleteExternalDbConfig full implementation ==="
sed -n '494,521p' apps/dashboard-api/src/controllers/project.controller.js
echo
echo "=== deleteExternalStorageConfig full implementation ==="
sed -n '523,550p' apps/dashboard-api/src/controllers/project.controller.js
echo
echo "=== Success response format in other project handlers ==="
rg -n -A2 'res\.status\(200\)\.json|res\.json\(\{.*success' apps/dashboard-api/src/controllers/project.controller.js | head -50
echo
echo "=== Cache invalidation in other mutation handlers ==="
rg -n -B2 -A2 'deleteProjectByApiKeyCache|setProjectById|deleteProjectById' apps/dashboard-api/src/controllers/project.controller.js
```
Repository: geturbackend/urBackend — Length of output: 5828

🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "=== Check if AppError is imported or defined ==="
head -30 apps/dashboard-api/src/controllers/project.controller.js | grep -i apperror || echo "(not found in imports)"
echo
echo "=== Check how other handlers validate ObjectId (broader search) ==="
rg -n 'ObjectId\.isValid|mongoose\.Types\.ObjectId\.isValid' apps/dashboard-api/src/controllers/ --type js
echo
echo "=== Check standard error response pattern in this codebase ==="
rg -n 'res\.status\(400\)\.json\|res\.status\(404\)\.json' apps/dashboard-api/src/controllers/project.controller.js | head -20
```
Repository: geturbackend/urBackend — Length of output: 393

🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "=== Error response patterns in webhook.controller.js ==="
grep -n "res\.status" apps/dashboard-api/src/controllers/webhook.controller.js | head -30
echo
echo "=== Sample success responses in webhook.controller.js ==="
rg -n 'res\.(json|status.*json)' apps/dashboard-api/src/controllers/webhook.controller.js -A1 | head -40
echo
echo "=== Check import statements at top of project.controller.js ==="
head -20 apps/dashboard-api/src/controllers/project.controller.js
echo
echo "=== Check if Mongoose import exists ==="
grep -n "require.*mongoose\|import.*mongoose" apps/dashboard-api/src/controllers/project.controller.js
```
Repository: geturbackend/urBackend — Length of output: 4093
Validate projectId as an ObjectId and fix response formats to match API guidelines.

This code has two issues:

1. ObjectId validation missing: The guard only rejects empty strings. A malformed non-empty projectId can still reach Mongoose, trigger a CastError, and expose raw error details (res.status(500).json({ error: err.message })). Mongoose precedent exists in webhook.controller.js.
2. Response format violations: Both success and error responses violate the guideline { success: bool, data: {}, message: "" }. The handler currently returns { message: "..." } on success and { error: err.message } on error, exposing MongoDB/library errors to clients.
🔧 Suggested fix

```diff
 module.exports.deleteExternalDbConfig = async (req, res) => {
   try {
     const projectId = String(req.params.projectId || "").trim();
+    if (!mongoose.Types.ObjectId.isValid(projectId)) {
+      return res.status(400).json({
+        success: false,
+        data: {},
+        message: "Invalid project ID.",
+      });
+    }
-    if (!projectId) return res.status(400).json({ error: "Project ID is required." });
     const project = await Project.findOne({
       _id: { $eq: projectId },
       owner: req.user._id,
     });
     if (!project)
       return res
         .status(404)
-        .json({ error: "Project not found or access denied." });
+        .json({ success: false, data: {}, message: "Project not found or access denied." });
     project.resources.db.isExternal = false;
     project.resources.db.config = null;
     await project.save();
     await deleteProjectById(projectId);
     await setProjectById(projectId, project.toObject());
     res
       .status(200)
-      .json({ message: "External configuration deleted successfully." });
+      .json({ success: true, data: {}, message: "External configuration deleted successfully." });
   } catch (err) {
-    res.status(500).json({ error: err.message });
+    res.status(500).json({ success: false, data: {}, message: "Internal server error." });
   }
 };
```

Apply the same fixes to deleteExternalStorageConfig (lines 525–529).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 496 -
500, Validate incoming projectId using mongoose.Types.ObjectId.isValid before
calling Project.findOne (replace the current empty-string guard) and return a
400 with { success: false, data: {}, message: "Invalid project ID." } for
invalid ids; on DB errors, do not return err.message to the client—instead log
the error internally and return { success: false, data: {}, message: "Failed to
fetch project." } with a 500 status; on success return { success: true, data: {
project }, message: "Project fetched successfully." }. Apply the same ObjectId
validation and response-format changes to deleteExternalStorageConfig, and
reference Project.findOne, projectId, and deleteExternalStorageConfig when
updating the code.
```js
await deleteProjectById(projectId);
await setProjectById(projectId, project.toObject());
```
Evict API-key caches here too.
This only refreshes project:id:${projectId}. Public API lookups keyed by publishableKey/secretKey can keep serving the old external DB config until TTL expiry, even though the delete call already succeeded.
🔧 Suggested invalidation

```diff
 await project.save();
 await deleteProjectById(projectId);
 await setProjectById(projectId, project.toObject());
+await deleteProjectByApiKeyCache(project.publishableKey);
+await deleteProjectByApiKeyCache(project.secretKey);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 512 -
513, The current flow calls deleteProjectById(projectId) then
setProjectById(projectId, project.toObject()) but only refreshes the
project:id:${projectId} cache; you must also evict any API-key based caches so
public lookups don't return stale config. Update the code around
deleteProjectById/setProjectById to call the cache invalidation utility for keys
like project:publishableKey:${project.publishableKey} and
project:secretKey:${project.secretKey} (or the equivalent cache helper you use)
before re-setting the project cache, ensuring both API-key and id-based entries
are cleared and then repopulated.
```js
const { redis, initWebhookWorker, initAuthEmailWorker } = require('@urbackend/common');
const webhookWorker = initWebhookWorker();
const authEmailWorker = initAuthEmailWorker();

if (webhookWorker) await webhookWorker.close();
if (authEmailWorker) await authEmailWorker.close();
console.log('✅ BullMQ Workers closed.');
```
initWebhookWorker() creates a new worker instead of returning the existing one.
Based on the context snippet from webhookQueue.js, initWebhookWorker() creates a new worker on every call (no caching). The worker created at startup (line 29) won't be closed—only this newly created one will be. This leaks the original worker connection.
In contrast, initAuthEmailWorker() has a guard clause (if (worker) return worker) that returns the cached instance.
Consider storing the worker reference at startup or adding caching to initWebhookWorker:
♻️ Suggested fix: store worker reference at startup

```diff
 // Initialize webhook worker
+let webhookWorkerRef = null;
+let authEmailWorkerRef = null;
 if (process.env.NODE_ENV !== 'test') {
-  initWebhookWorker();
-  initAuthEmailWorker();
+  webhookWorkerRef = initWebhookWorker();
+  authEmailWorkerRef = initAuthEmailWorker();
 }
```

Then in the shutdown handler:

```diff
-const { redis, initWebhookWorker, initAuthEmailWorker } = require('@urbackend/common');
-const webhookWorker = initWebhookWorker();
-const authEmailWorker = initAuthEmailWorker();
-
-if (webhookWorker) await webhookWorker.close();
-if (authEmailWorker) await authEmailWorker.close();
+const { redis } = require('@urbackend/common');
+
+if (webhookWorkerRef) await webhookWorkerRef.close();
+if (authEmailWorkerRef) await authEmailWorkerRef.close();
 console.log('✅ BullMQ Workers closed.');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/public-api/src/app.js` around lines 137 - 143, initWebhookWorker
currently creates a new Worker on each call causing the startup worker to never
be closed; modify initWebhookWorker (in webhookQueue.js) to cache the created
worker (e.g., a module-scoped variable `worker`) and return it if already set
(same pattern used by initAuthEmailWorker), or alternatively capture and store
the startup worker reference where it is created and use that stored instance in
the shutdown sequence; ensure the shutdown code calls close() on the
cached/stored worker instance rather than creating a new one.
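The caching described in this prompt is a module-scoped singleton. Below is a sketch under the assumption that worker construction can be injected as a factory; the real initWebhookWorker in webhookQueue.js constructs a BullMQ Worker directly with its own options:

```javascript
// Hypothetical module-scoped cache mirroring initAuthEmailWorker's guard
// clause: construct the worker once, return the same instance afterwards.
let worker = null;

function initWebhookWorker(createWorker) {
  if (worker) return worker; // later calls reuse the cached instance
  worker = createWorker();   // first call actually creates the worker
  return worker;
}
```

With this guard, the shutdown handler's call to initWebhookWorker() would receive the same instance started at boot, so close() tears down the real connection instead of a freshly created one.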
```js
const collectionName = String(req.params?.collectionName || '').trim();

if (collectionName === 'users') {
```
Reintroduce case-insensitive normalization before the users guard.
At Line 2 and Line 4, the exact comparison only blocks lowercase 'users'. Mixed-case inputs (for example, Users) can bypass this middleware’s protection intent. Normalize before comparison.
🔒 Proposed fix

```diff
 module.exports = (req, res, next) => {
-  const collectionName = String(req.params?.collectionName || '').trim();
+  const collectionName = String(req.params?.collectionName || '')
+    .trim()
+    .toLowerCase();
```

Based on learnings: Do not treat users like a normal collection. Public access to /api/data/users* is blocked.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/public-api/src/middlewares/blockUsersCollectionDataAccess.js` around
lines 2 - 4, The middleware currently compares collectionName === 'users' which
only catches lowercase; normalize the extracted collectionName (e.g., call
.toLowerCase() on the trimmed String(req.params?.collectionName || '') or
otherwise convert to lowercase) before performing the guard so any case variants
like "Users" or "USERS" are blocked; update the comparison that references
collectionName in this middleware to use the normalized/lowercased value.
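End to end, the normalized guard might look like this minimal sketch; the 403 response body is illustrative, not the middleware's actual message:

```javascript
// Hypothetical middleware sketch applying the case-insensitive guard from
// the review above; only the normalization is taken from the suggestion.
function blockUsersCollectionDataAccess(req, res, next) {
  const collectionName = String(req.params?.collectionName || '')
    .trim()
    .toLowerCase();
  if (collectionName === 'users') {
    return res
      .status(403)
      .json({ success: false, data: {}, message: 'Direct access to the users collection is blocked.' });
  }
  next();
}
```

Normalizing before the comparison means `Users`, `USERS`, and ` users ` all hit the guard, while other collections still pass through to next().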
@prathamtagad Bro, CodeRabbit has added some comments :(

@yash-pouranik Aah! Here we go again.

Waiting!!! @prathamtagad
Summary

Fixes three security and functional issues found in the public-api.

1. OTP generated with Math.random() — cryptographically weak

Files: apps/public-api/src/controllers/userAuth.controller.js
Lines: 970, 995, 1306, 1344

Math.random() is not a CSPRNG and its output is predictable. Replaced all four OTP generation sites with crypto.randomInt(100000, 1000000), which uses Node's secure random source. This matches the pattern already in use in apps/dashboard-api/src/controllers/auth.controller.js.

2. sanitize() only strips top-level $-prefixed keys — NoSQL injection risk

File: packages/common/src/utils/input.validation.js

The previous implementation only removed $-operator keys at the top level. A payload like { "profile": { "$gt": "" } } would pass through unchanged, enabling NoSQL injection via nested update/query paths. The function is now recursive: it walks plain objects and arrays, stripping $-prefixed keys at every depth.

3. Missing siteUrl in verifyApiKey select — social auth callback breaks

File: apps/public-api/src/middlewares/verifyApiKey.js

siteUrl and authProviders were not included in the Mongoose .select() projection. As a result, the project object cached and forwarded to downstream controllers was missing these fields. getFrontendCallbackBaseUrl() would always fall back to localhost:5173, breaking social auth redirects in production. Both fields are now included in the select clause.

Testing

All public-api tests pass (npm test -- --runInBand).