
test: improve ssg fixture process handling to reduce flakiness#3967

Merged
BobbieGoede merged 1 commit into main from
test/fix-ssg-test-port-flakiness
Apr 29, 2026
Conversation

@BobbieGoede
Member

@BobbieGoede BobbieGoede commented Apr 24, 2026

🔗 Linked issue

📚 Description

This should reduce flakiness caused by improper cleanup and pnpx serve install/startup race conditions.

Summary by CodeRabbit

  • Chores
    • Introduced an in-process static server option for prerendered content.
    • Improved test server lifecycle and global process-level cleanup (handles exit/signals).
    • Ensured test servers are fully closed (including active connections) during shutdown.
    • Fixed URL port substitution to avoid an extra slash after the port.
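The port-substitution fix in the last bullet can be illustrated with a minimal sketch. The variable names and the regex here are hypothetical stand-ins for the real url() helper in specs/utils/server.ts:

```typescript
// Sketch of the url() port-substitution fix: replacing the port with
// ':<port>/' appended an extra slash; substituting only the ':<port>'
// segment leaves the path untouched.
const base = 'http://127.0.0.1:3000/path'

// Buggy variant: inserts an extra slash after the port.
const buggy = base.replace(/:\d+/, ':4000/') // 'http://127.0.0.1:4000//path'

// Fixed variant: replaces only the port.
const fixed = base.replace(/:\d+/, ':4000') // 'http://127.0.0.1:4000/path'
```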

@BobbieGoede BobbieGoede self-assigned this Apr 24, 2026
@coderabbitai
Contributor

coderabbitai Bot commented Apr 24, 2026

Walkthrough

Adds sirv to devDependencies and changes test server behavior:

  • In prerender mode, startServer now serves Nitro's publicDir in-process with sirv, creating an http server per configured port and storing them on ctx.staticServers.
  • stopServer now closes any ctx.staticServers (including active connections when supported) and clears ctx.serverProcess and ctx.staticServers.
  • url() port replacement no longer inserts an extra /.
  • specs/utils/types.ts adds TestContext.staticServers?: Server[].
  • specs/utils/setup/index.ts tracks active test contexts and registers process exit/signal handlers to stop servers on shutdown; afterAll now stops and removes contexts that have servers.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~40 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 0.00% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title accurately summarizes the main change: improving SSG fixture process handling to reduce test flakiness, which is reflected across all modified files.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
specs/utils/server.ts (1)

52-53: Await kill() for consistency with the widened TestServerProcess contract.

In this branch ctx.serverProcess is always the tinyexec result whose kill() is synchronous, so the current code is correct today. However, TestServerProcess now admits kill: () => void | Promise<void>, so leaving this un-awaited will silently drop the promise if this branch ever grows to cover the in-process variant. Trivial hardening:

🔧 Proposed diff
-    ctx.serverProcess.kill()
+    await ctx.serverProcess.kill()
     throw lastError || new Error('Timeout waiting for dev server!')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@specs/utils/server.ts` around lines 52 - 53, The branch that calls
ctx.serverProcess.kill() should await the result to handle the widened
TestServerProcess.kill signature (which may return a Promise); change the call
to await ctx.serverProcess.kill() and only then throw lastError || new
Error('Timeout waiting for dev server!') so any asynchronous cleanup completes
before the error is propagated.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@specs/utils/server.ts`:
- Around line 54-82: The servers array is orphaned if one server.listen fails
because ctx.serverProcess is set only after awaiting Promise.all(listen); move
the assignment of ctx.serverProcess (the object that closes servers via
servers.map(...) calling server.closeAllConnections and server.close) to before
awaiting the listen promises so stopServer() can clean up partial binds, and
wrap the await Promise.all(servers.map(...listen...)) in a try/catch that calls
await stopServer() and rethrows on error (or combine the listen and waitForPort
awaits into a single try so any failure triggers stopServer()); reference the
servers variable, the ctx.serverProcess property, and the existing
stopServer()/server.closeAllConnections logic when making the change.
- Around line 55-56: The code uses optional chaining to set publicDir via
ctx.nuxt!.options.nitro!.output?.publicDir which can be undefined and causes
sirv(publicDir, ...) to silently serve process.cwd(); change the resolution to
assert or validate the value before passing to sirv: retrieve publicDir from
ctx.nuxt!.options.nitro.output (using the non-null assertion style used
elsewhere) or explicitly check and throw a clear error if undefined, then
construct handler = sirv(publicDir, { dev: false, etag: true, single: false })
with the guaranteed non-empty publicDir; update symbols: publicDir, handler,
sirv, and ctx.nuxt!.options.nitro!.output to reflect the validated/non-null
value.

---

Nitpick comments:
In `@specs/utils/server.ts`:
- Around line 52-53: The branch that calls ctx.serverProcess.kill() should await
the result to handle the widened TestServerProcess.kill signature (which may
return a Promise); change the call to await ctx.serverProcess.kill() and only
then throw lastError || new Error('Timeout waiting for dev server!') so any
asynchronous cleanup completes before the error is propagated.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: da6437a4-1c98-4413-92fa-5e61d7fe7304

📥 Commits

Reviewing files that changed from the base of the PR and between 845258e and 1393a69.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (4)
  • package.json
  • specs/utils/server.ts
  • specs/utils/setup/index.ts
  • specs/utils/types.ts

@BobbieGoede BobbieGoede force-pushed the test/fix-ssg-test-port-flakiness branch from 1393a69 to 09bf11b on April 29, 2026 14:13
Contributor

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
specs/utils/server.ts (1)

61-73: ⚠️ Potential issue | 🟠 Major

Port leak if one listen() fails after others have already bound.

If one port is already in use (EADDRINUSE), Promise.all rejects but servers that successfully bound before the failure are orphaned — ctx.staticServers is never assigned, so stopServer() cannot clean them up. This is the exact class of leak this PR targets.

Separate server creation from the listen operation so cleanup can occur on partial failure:

🔒 Proposed fix
-    ctx.staticServers = await Promise.all(
-      ports.map(
-        port =>
-          new Promise<import('node:http').Server>((resolveListen, rejectListen) => {
-            const server = createServer((req, res) => handler(req, res))
-            server.once('error', rejectListen)
-            server.listen(port, host, () => {
-              server.off('error', rejectListen)
-              resolveListen(server)
-            })
-          })
-      )
-    )
+    ctx.staticServers = ports.map(() => createServer((req, res) => handler(req, res)))
+
+    try {
+      await Promise.all(
+        ctx.staticServers.map(
+          (server, i) =>
+            new Promise<void>((resolveListen, rejectListen) => {
+              server.once('error', rejectListen)
+              server.listen(ports[i], host, () => {
+                server.off('error', rejectListen)
+                resolveListen()
+              })
+            })
+        )
+      )
+    } catch (e) {
+      stopServer()
+      throw e
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@specs/utils/server.ts` around lines 61 - 73, The current Promise.all wrapping
ports.map can leave bound servers orphaned if one server.listen fails; modify
the logic in the block that assigns ctx.staticServers to first create and
collect the Server instances (via createServer) before calling listen, then
perform the listen operations with error handling that, on any listen failure
(e.g., EADDRINUSE), closes any already-bound servers and still assigns
ctx.staticServers so stopServer() can clean them up; ensure
server.once('error')/off('error') logic is preserved for each Server and that
any rejection triggers cleanup of the already resolved Server objects created
earlier.
🧹 Nitpick comments (1)
specs/utils/server.ts (1)

99-110: Consider awaiting server closure to prevent port conflicts on restart.

server.close() is asynchronous but not awaited. Since stopServer() is called at the start of startServer() (line 19), if the previous servers aren't fully closed, the new listen() could fail with EADDRINUSE intermittently.

If this is intentional fire-and-forget cleanup, the current approach is acceptable for test teardown. Otherwise, consider making stopServer async:

♻️ Optional: await server closure
-export function stopServer() {
+export async function stopServer() {
   const ctx = useTestContext()
   ctx.serverProcess?.kill()
   ctx.serverProcess = undefined
   if (ctx.staticServers) {
+    await Promise.all(ctx.staticServers.map(server => new Promise<void>(resolve => {
+      server.closeAllConnections?.()
+      server.close(() => resolve())
+    })))
-    for (const server of ctx.staticServers) {
-      server.closeAllConnections?.()
-      server.close()
-    }
     ctx.staticServers = undefined
   }
 }

Note: This would require updating callers to await stopServer().

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@specs/utils/server.ts` around lines 99 - 110, The stopServer function
currently calls server.close() and ctx.serverProcess?.kill() without waiting for
them, which can cause EADDRINUSE on restart; make stopServer async and await
proper shutdown: for each server in ctx.staticServers call
server.closeAllConnections?.(), then await server.close() by promisifying its
callback (or using server.close()[Symbol.toStringTag] if your environment
supports promises), and for ctx.serverProcess listen for its 'exit'/'close'
event and await that before clearing ctx.serverProcess; update callers to await
stopServer() as well.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@specs/utils/server.ts`:
- Around line 61-73: The current Promise.all wrapping ports.map can leave bound
servers orphaned if one server.listen fails; modify the logic in the block that
assigns ctx.staticServers to first create and collect the Server instances (via
createServer) before calling listen, then perform the listen operations with
error handling that, on any listen failure (e.g., EADDRINUSE), closes any
already-bound servers and still assigns ctx.staticServers so stopServer() can
clean them up; ensure server.once('error')/off('error') logic is preserved for
each Server and that any rejection triggers cleanup of the already resolved
Server objects created earlier.

---

Nitpick comments:
In `@specs/utils/server.ts`:
- Around line 99-110: The stopServer function currently calls server.close() and
ctx.serverProcess?.kill() without waiting for them, which can cause EADDRINUSE
on restart; make stopServer async and await proper shutdown: for each server in
ctx.staticServers call server.closeAllConnections?.(), then await server.close()
by promisifying its callback (or using server.close()[Symbol.toStringTag] if
your environment supports promises), and for ctx.serverProcess listen for its
'exit'/'close' event and await that before clearing ctx.serverProcess; update
callers to await stopServer() as well.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 85c3d5fd-43dd-4c8a-a4d7-6fa4ad078c6f

📥 Commits

Reviewing files that changed from the base of the PR and between 1393a69 and 09bf11b.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (4)
  • package.json
  • specs/utils/server.ts
  • specs/utils/setup/index.ts
  • specs/utils/types.ts
✅ Files skipped from review due to trivial changes (1)
  • package.json
🚧 Files skipped from review as they are similar to previous changes (2)
  • specs/utils/setup/index.ts
  • specs/utils/types.ts

@BobbieGoede BobbieGoede merged commit a6dc098 into main Apr 29, 2026
11 checks passed
@BobbieGoede BobbieGoede deleted the test/fix-ssg-test-port-flakiness branch April 29, 2026 14:31