test: improve ssg fixture process handling to reduce flakiness #3967
BobbieGoede merged 1 commit into main from
Conversation
Walkthrough: Adds
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~40 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 2
🧹 Nitpick comments (1)
specs/utils/server.ts (1)
52-53: Await `kill()` for consistency with the widened `TestServerProcess` contract.

In this branch `ctx.serverProcess` is always the tinyexec result whose `kill()` is synchronous, so the current code is correct today. However, `TestServerProcess` now admits `kill: () => void | Promise<void>`, so leaving this un-awaited will silently drop the promise if this branch ever grows to cover the in-process variant. Trivial hardening:

🔧 Proposed diff

```diff
-      ctx.serverProcess.kill()
+      await ctx.serverProcess.kill()
       throw lastError || new Error('Timeout waiting for dev server!')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@specs/utils/server.ts` around lines 52 - 53, The branch that calls ctx.serverProcess.kill() should await the result to handle the widened TestServerProcess.kill signature (which may return a Promise); change the call to await ctx.serverProcess.kill() and only then throw lastError || new Error('Timeout waiting for dev server!') so any asynchronous cleanup completes before the error is propagated.
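The suggested hardening works because `await` on a non-promise value resolves immediately, so awaiting `kill()` is safe for both shapes the widened contract admits. A minimal sketch (the `TestServerProcess` type and `stop` helper here are illustrative stand-ins, not the PR's actual code):

```typescript
// A kill() that may be sync (tinyexec result) or async (in-process variant).
type TestServerProcess = { kill: () => void | Promise<void> }

const events: string[] = []

const syncProc: TestServerProcess = {
  kill: () => { events.push('sync-killed') },
}

const asyncProc: TestServerProcess = {
  kill: async () => {
    // Simulate asynchronous cleanup work.
    await new Promise(resolve => setTimeout(resolve, 10))
    events.push('async-killed')
  },
}

// Awaiting kill() guarantees cleanup completes before execution continues,
// regardless of whether the implementation returned void or a Promise.
async function stop(proc: TestServerProcess): Promise<void> {
  await proc.kill()
  events.push('after-kill')
}

await stop(syncProc)
await stop(asyncProc)
```

Without the `await`, the async variant's `'after-kill'` would be pushed before `'async-killed'`, which is exactly the silently-dropped-promise hazard the comment describes.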
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@specs/utils/server.ts`:
- Around line 54-82: The servers array is orphaned if one server.listen fails
because ctx.serverProcess is set only after awaiting Promise.all(listen); move
the assignment of ctx.serverProcess (the object that closes servers via
servers.map(...) calling server.closeAllConnections and server.close) to before
awaiting the listen promises so stopServer() can clean up partial binds, and
wrap the await Promise.all(servers.map(...listen...)) in a try/catch that calls
await stopServer() and rethrows on error (or combine the listen and waitForPort
awaits into a single try so any failure triggers stopServer()); reference the
servers variable, the ctx.serverProcess property, and the existing
stopServer()/server.closeAllConnections logic when making the change.
- Around line 55-56: The code uses optional chaining to set publicDir via
ctx.nuxt!.options.nitro!.output?.publicDir which can be undefined and causes
sirv(publicDir, ...) to silently serve process.cwd(); change the resolution to
assert or validate the value before passing to sirv: retrieve publicDir from
ctx.nuxt!.options.nitro.output (using the non-null assertion style used
elsewhere) or explicitly check and throw a clear error if undefined, then
construct handler = sirv(publicDir, { dev: false, etag: true, single: false })
with the guaranteed non-empty publicDir; update symbols: publicDir, handler,
sirv, and ctx.nuxt!.options.nitro!.output to reflect the validated/non-null
value.
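A sketch of the suggested guard for the `publicDir` finding. The `TestCtx` shape and `resolvePublicDir` helper are hypothetical stand-ins for the test context described above; the point is that the value must be validated before it reaches `sirv`, which otherwise silently falls back to serving `process.cwd()` when handed `undefined`:

```typescript
// Hypothetical minimal shape of the test context used in these specs.
interface TestCtx {
  nuxt?: { options: { nitro: { output?: { publicDir?: string } } } }
}

// Fail loudly instead of letting sirv serve the wrong directory.
function resolvePublicDir(ctx: TestCtx): string {
  const publicDir = ctx.nuxt?.options.nitro.output?.publicDir
  if (!publicDir) {
    throw new Error('nitro.output.publicDir is not set; cannot serve the SSG output')
  }
  return publicDir
}

// Usage (as in the review suggestion):
// const handler = sirv(resolvePublicDir(ctx), { dev: false, etag: true, single: false })
```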
---
Nitpick comments:
In `@specs/utils/server.ts`:
- Around line 52-53: The branch that calls ctx.serverProcess.kill() should await
the result to handle the widened TestServerProcess.kill signature (which may
return a Promise); change the call to await ctx.serverProcess.kill() and only
then throw lastError || new Error('Timeout waiting for dev server!') so any
asynchronous cleanup completes before the error is propagated.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: da6437a4-1c98-4413-92fa-5e61d7fe7304
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (4)
- package.json
- specs/utils/server.ts
- specs/utils/setup/index.ts
- specs/utils/types.ts
1393a69 to 09bf11b (Compare)
♻️ Duplicate comments (1)
specs/utils/server.ts (1)
61-73: ⚠️ Potential issue | 🟠 Major

Port leak if one `listen()` fails after others have already bound.

If one port is already in use (EADDRINUSE), `Promise.all` rejects but servers that successfully bound before the failure are orphaned — `ctx.staticServers` is never assigned, so `stopServer()` cannot clean them up. This is the exact class of leak this PR targets. Separate server creation from the listen operation so cleanup can occur on partial failure:
🔒 Proposed fix
```diff
-  ctx.staticServers = await Promise.all(
-    ports.map(
-      port =>
-        new Promise<import('node:http').Server>((resolveListen, rejectListen) => {
-          const server = createServer((req, res) => handler(req, res))
-          server.once('error', rejectListen)
-          server.listen(port, host, () => {
-            server.off('error', rejectListen)
-            resolveListen(server)
-          })
-        })
-    )
-  )
+  ctx.staticServers = ports.map(() => createServer((req, res) => handler(req, res)))
+
+  try {
+    await Promise.all(
+      ctx.staticServers.map(
+        (server, i) =>
+          new Promise<void>((resolveListen, rejectListen) => {
+            server.once('error', rejectListen)
+            server.listen(ports[i], host, () => {
+              server.off('error', rejectListen)
+              resolveListen()
+            })
+          })
+      )
+    )
+  } catch (e) {
+    stopServer()
+    throw e
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@specs/utils/server.ts` around lines 61 - 73, The current Promise.all wrapping ports.map can leave bound servers orphaned if one server.listen fails; modify the logic in the block that assigns ctx.staticServers to first create and collect the Server instances (via createServer) before calling listen, then perform the listen operations with error handling that, on any listen failure (e.g., EADDRINUSE), closes any already-bound servers and still assigns ctx.staticServers so stopServer() can clean them up; ensure server.once('error')/off('error') logic is preserved for each Server and that any rejection triggers cleanup of the already resolved Server objects created earlier.
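The create-first, listen-later pattern can be demonstrated in isolation. In this sketch, `listenAll` is a hypothetical helper (not the PR's actual code): every `Server` instance exists before any `listen()` starts, so a partial failure such as EADDRINUSE can still close the servers that did bind:

```typescript
import { createServer, type Server } from 'node:http'
import type { AddressInfo } from 'node:net'

// Create all servers up front, then bind; on any listen failure,
// close everything so no successfully-bound server is orphaned.
async function listenAll(ports: number[], host: string): Promise<Server[]> {
  const servers = ports.map(() => createServer((_req, res) => res.end('ok')))
  try {
    await Promise.all(
      servers.map(
        (server, i) =>
          new Promise<void>((resolve, reject) => {
            server.once('error', reject)
            server.listen(ports[i], host, () => {
              server.off('error', reject)
              resolve()
            })
          })
      )
    )
    return servers
  } catch (e) {
    for (const server of servers) server.close() // clean up partial binds
    throw e
  }
}

// Occupy a random free port with a probe server, then ask listenAll
// for that same port: the bind must fail and the cleanup path runs.
const probe = createServer((_req, res) => res.end())
await new Promise<void>(resolve => probe.listen(0, '127.0.0.1', resolve))
const takenPort = (probe.address() as AddressInfo).port

let failed = false
try {
  await listenAll([takenPort], '127.0.0.1')
} catch {
  failed = true
}
probe.close()
```

The proposed fix in the diff above additionally assigns `ctx.staticServers` before binding, so the shared `stopServer()` can do this cleanup instead of a local loop.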
🧹 Nitpick comments (1)
specs/utils/server.ts (1)
99-110: Consider awaiting server closure to prevent port conflicts on restart.

`server.close()` is asynchronous but not awaited. Since `stopServer()` is called at the start of `startServer()` (line 19), if the previous servers aren't fully closed, the new `listen()` could fail with EADDRINUSE intermittently.

If this is intentional fire-and-forget cleanup, the current approach is acceptable for test teardown. Otherwise, consider making `stopServer` async:

♻️ Optional: await server closure
```diff
-export function stopServer() {
+export async function stopServer() {
   const ctx = useTestContext()
   ctx.serverProcess?.kill()
   ctx.serverProcess = undefined
   if (ctx.staticServers) {
+    await Promise.all(ctx.staticServers.map(server => new Promise<void>(resolve => {
+      server.closeAllConnections?.()
+      server.close(() => resolve())
+    })))
-    for (const server of ctx.staticServers) {
-      server.closeAllConnections?.()
-      server.close()
-    }
     ctx.staticServers = undefined
   }
 }
```

Note: This would require updating callers to `await stopServer()`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@specs/utils/server.ts` around lines 99 - 110, The stopServer function currently calls server.close() and ctx.serverProcess?.kill() without waiting for them, which can cause EADDRINUSE on restart; make stopServer async and await proper shutdown: for each server in ctx.staticServers call server.closeAllConnections?.(), then await server.close() by promisifying its callback (or using server.close()[Symbol.toStringTag] if your environment supports promises), and for ctx.serverProcess listen for its 'exit'/'close' event and await that before clearing ctx.serverProcess; update callers to await stopServer() as well.
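The awaited-teardown idea reduces to promisifying `server.close()` via its completion callback. A self-contained sketch (`closeAsync` is a hypothetical helper, not part of the PR): bind a port, await full shutdown, then rebind the same port without racing the teardown:

```typescript
import { createServer, type Server } from 'node:http'
import type { AddressInfo } from 'node:net'

// Promisify close(): resolve only once the server has fully shut down.
function closeAsync(server: Server): Promise<void> {
  return new Promise((resolve, reject) => {
    server.closeAllConnections?.() // drop lingering keep-alive sockets (Node 18.2+)
    server.close(err => (err ? reject(err) : resolve()))
  })
}

const host = '127.0.0.1'

// Bind a random free port, await teardown, then rebind the same port.
const first = createServer((_req, res) => res.end('one'))
await new Promise<void>(resolve => first.listen(0, host, resolve))
const port = (first.address() as AddressInfo).port
await closeAsync(first)

const second = createServer((_req, res) => res.end('two'))
await new Promise<void>(resolve => second.listen(port, host, resolve))
const rebound = second.listening
await closeAsync(second)
```

Because the first close is awaited, the second `listen()` on the same port cannot hit the intermittent EADDRINUSE described in the comment.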
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@specs/utils/server.ts`:
- Around line 61-73: The current Promise.all wrapping ports.map can leave bound
servers orphaned if one server.listen fails; modify the logic in the block that
assigns ctx.staticServers to first create and collect the Server instances (via
createServer) before calling listen, then perform the listen operations with
error handling that, on any listen failure (e.g., EADDRINUSE), closes any
already-bound servers and still assigns ctx.staticServers so stopServer() can
clean them up; ensure server.once('error')/off('error') logic is preserved for
each Server and that any rejection triggers cleanup of the already resolved
Server objects created earlier.
---
Nitpick comments:
In `@specs/utils/server.ts`:
- Around line 99-110: The stopServer function currently calls server.close() and
ctx.serverProcess?.kill() without waiting for them, which can cause EADDRINUSE
on restart; make stopServer async and await proper shutdown: for each server in
ctx.staticServers call server.closeAllConnections?.(), then await server.close()
by promisifying its callback (or using server.close()[Symbol.toStringTag] if
your environment supports promises), and for ctx.serverProcess listen for its
'exit'/'close' event and await that before clearing ctx.serverProcess; update
callers to await stopServer() as well.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 85c3d5fd-43dd-4c8a-a4d7-6fa4ad078c6f
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (4)
- package.json
- specs/utils/server.ts
- specs/utils/setup/index.ts
- specs/utils/types.ts
✅ Files skipped from review due to trivial changes (1)
- package.json
🚧 Files skipped from review as they are similar to previous changes (2)
- specs/utils/setup/index.ts
- specs/utils/types.ts
🔗 Linked issue
📚 Description
This should reduce flakiness caused by improper cleanup and `pnpx serve` install/startup race conditions.