
fix: isolate cursor telemetry samples per recording session#457

Merged
siddharthvaddem merged 5 commits into siddharthvaddem:main from shaun0927:fix/cursor-telemetry-session-isolation
Apr 28, 2026

Conversation

@shaun0927
Contributor

@shaun0927 shaun0927 commented Apr 16, 2026

Description

The cursor telemetry pipeline in electron/ipc/handlers.ts kept two module-scope arrays — activeCursorSamples and pendingCursorSamples — and wiped both on every set-recording-state(true):

if (recording) {
  stopCursorCapture();
  activeCursorSamples = [];
  pendingCursorSamples = [];   // previous recording's telemetry, dropped
  ...

If the user hit Stop → Record before store-recorded-session finished writing the previous recording's .cursor.json, the previous samples were lost or replaced with the next session's data, depending on which call landed first.

This PR replaces the two arrays with a small FIFO buffer (createCursorTelemetryBuffer) that:

  • Keeps one batch per completed recording, never wipes them when a new session starts.
  • Yields batches in arrival order to storeRecordedSessionFiles.
  • Caps the pending queue (default 8) so a never-stored sequence cannot leak unbounded memory.

The buffer lives in its own file (src/lib/cursorTelemetryBuffer.ts) so it can be unit-tested without booting Electron.
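In sketch form, the buffer's core behaviour looks like this (simplified; the real module also caps active samples, validates its options, and supports discard/prepend):

```typescript
// Simplified sketch of the session-isolating FIFO buffer described above.
// The shape below only covers the start/push/end/take flow.
interface CursorTelemetryPoint {
  timeMs: number;
  cx: number;
  cy: number;
}

function createCursorTelemetryBuffer(maxPendingBatches = 8) {
  let active: CursorTelemetryPoint[] = [];
  const pending: CursorTelemetryPoint[][] = [];
  return {
    // Starting a recording clears only in-progress samples;
    // completed batches from earlier recordings survive.
    startSession() {
      active = [];
    },
    push(point: CursorTelemetryPoint) {
      active.push(point);
    },
    // Stopping turns the active samples into one pending batch,
    // capped so a never-stored sequence cannot grow without bound.
    endSession() {
      if (active.length > 0) {
        pending.push(active);
        while (pending.length > maxPendingBatches) pending.shift();
      }
      active = [];
    },
    // Batches are drained in arrival order, one per stored recording.
    takeNextBatch(): CursorTelemetryPoint[] {
      return pending.shift() ?? [];
    },
  };
}
```

The key difference from the old arrays is that startSession() no longer touches the pending queue, so a rapid Stop → Record cannot destroy the previous recording's batch.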

Motivation

Reproduces with a purely state-level script against the v1.3.0 logic:

let active = [], pending = [];
// Recording 1
active.push(...r1Samples);
/* stop */ pending = [...active]; active = [];
// Recording 2 starts before the first store-recorded-session fires
active = []; pending = [];            // <-- r1 telemetry lost here
active.push(...r2Samples);
/* stop */ pending = [...active]; active = [];
/* late store for r1 */ writeCursorJson(r1Path, pending); // writes r2 data

The window is narrow (main-process FS write time) but fully reachable through the HUD's restart-recording path or a keyboard shortcut, and silent data corruption of the resulting .cursor.json is the outcome.

Type of Change

  • New Feature
  • Bug Fix
  • Refactor / Code Cleanup
  • Documentation Update
  • Other (please specify)

Related Issue(s)

Fixes #456

Testing

  • New src/lib/cursorTelemetryBuffer.test.ts (8 cases) — covers:
    • Normal stop → store flow.
    • Max-active ring behaviour.
    • The rapid-restart scenario that motivated this PR (pending batch preserved across a new session start).
    • Empty batches do not enter the queue.
    • maxPendingBatches caps the queue.
    • startSession() clears in-progress samples but keeps pending batches.
    • reset() wipes everything.
  • npx vitest run src/lib/cursorTelemetryBuffer.test.ts → 8/8 pass.
  • npx vitest run → 48/50 pass. The 2 failures are pre-existing *.browser.test.ts cases that require npm run test:browser:install (Playwright chromium headless shell) and reproduce identically on main without this change.
  • npx tsc --noEmit is clean.

Manual repro:

  1. Record a few seconds, Stop.
  2. Immediately click Record again (or use a restart-recording shortcut).
  3. Stop the second recording, let both store calls complete.
  4. Inspect the first recording's .cursor.json — before this PR it is empty or contains samples from recording 2; after this PR it contains exactly recording 1's samples.

Checklist

  • I have performed a self-review of my code.
  • I have added any necessary screenshots or videos.
  • I have linked related issue(s) and updated the changelog if applicable.

Summary by CodeRabbit

  • New Features

    • App API accepts an optional recordingId when starting/stopping recordings; added a discard action for pending cursor telemetry.
    • Telemetry batches are requeued on store failure for improved reliability.
  • Refactor

    • Cursor telemetry moved to a session-based, memory-bounded FIFO buffer with capped active samples and pending batches.
  • Tests

    • Comprehensive tests for buffer behavior: sessions, trimming/eviction, queue semantics, discard/prepend, and option sanitization.

Previously, the main process kept two module-scope arrays —
activeCursorSamples and pendingCursorSamples — and set-recording-state
on a new recording wiped BOTH. When a user stopped recording and
immediately started a new one before store-recorded-session fired,
the previous recording's pending samples were discarded or later
overwritten with the new session's data, producing empty or mismatched
.cursor.json files.

Replace the two arrays with a small FIFO buffer
(createCursorTelemetryBuffer) that:
- Keeps pending batches per completed recording, never wiping them on
  a new session start.
- Yields batches in arrival order to storeRecordedSessionFiles.
- Caps pending batches (default 8) so a never-stored sequence cannot
  leak unbounded memory.

Unit-tested directly in src/lib/cursorTelemetryBuffer.test.ts, including
the rapid-restart race that motivated the change.
@coderabbitai
Contributor

coderabbitai Bot commented Apr 16, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Migrates cursor telemetry to a bounded in-memory buffer with session semantics. Main IPC, preload, and renderer flows use the buffer API (start/push/end/take/prepend/discard). Persistence consumes pending batches; failed writes requeue batches. Adds an IPC handler to discard pending telemetry for a recording.

Changes

  • Telemetry buffer implementation & tests (src/lib/cursorTelemetryBuffer.ts, src/lib/cursorTelemetryBuffer.test.ts): Adds createCursorTelemetryBuffer() and types (CursorTelemetryPoint, CursorTelemetryBatch, CursorTelemetryBuffer). Implements active-sample ring behavior, a FIFO pending-batch queue, caps (maxActiveSamples, maxPendingBatches), and APIs: startSession, push, endSession, takeNextBatch, prependBatch, discardBatch, reset. Tests exercise lifecycle, bounds, prepend/discard semantics, warnings, and reset.
  • Main IPC handlers (electron/ipc/handlers.ts): Replaces module-scope arrays with cursorTelemetryBuffer. Uses buffer methods for session transitions and persistence; on write failure re-inserts the batch with prependBatch. Adds a "discard-cursor-telemetry" IPC handler to drop a pending batch by recordingId.
  • Preload & types (electron/preload.ts, electron/electron-env.d.ts): Preload exposes electronAPI.setRecordingState(recording, recordingId?) and electronAPI.discardCursorTelemetry(recordingId). Types updated to reflect the optional recordingId and new discard API.
  • Renderer hook changes (src/hooks/useScreenRecorder.ts): startRecording passes recordingId to setRecordingState; finalizeRecording calls discardCursorTelemetry(recordingId) on the discard path to drop buffered telemetry for that recording.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • siddharthvaddem

Poem

a little ring-buffer guards each frame,
sessions queue up, no telemetry shame,
ipc says “discard” when things get messy,
no more mixes from rapid-pressy,
quiet fix, lowkey satisfying ✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 25.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)

  • Title check (✅ Passed): Title accurately summarizes the main change: replacing arrays with a FIFO buffer to isolate cursor telemetry per recording session.
  • Description check (✅ Passed): Description covers purpose, motivation (race condition and data loss), type of change, linked issue, and comprehensive testing details. All required sections are present and well documented.
  • Linked Issues check (✅ Passed): The PR fully addresses issue #456: it replaces module-scope arrays with a FIFO buffer that preserves pending batches across session starts, prevents data loss on rapid restart, and caps queue growth.
  • Out of Scope Changes check (✅ Passed): All changes directly support the core fix: new buffer module, IPC handler updates, preload/type updates for recordingId plumbing, tests, and hook updates. No unrelated changes detected.


Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
src/lib/cursorTelemetryBuffer.ts (1)

43-51: silent batch drop when pending queue overflows - might want observability here

when pending.length > maxPending, older batches get silently dropped (line 47). this is correct for bounding memory, but if a user hits rapid Stop→Record 9+ times before any storeRecordedSessionFiles completes, they'd lose telemetry with no indication.

not blocking since the PR explicitly wants this cap, but you might consider returning a boolean from endSession() or emitting a count of dropped batches for logging/metrics. totally optional though - the current behavior matches the issue requirements.
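one possible shape for option (a), sketched as a standalone queue rather than the module's actual code (names and internals are illustrative):

```typescript
// Illustrative pending queue whose endSession() reports how many
// batches the cap evicted, instead of dropping them silently.
function makeBoundedQueue<T>(maxPending: number) {
  const pending: T[][] = [];
  return {
    endSession(active: T[]): number {
      let dropped = 0;
      if (active.length > 0) {
        pending.push(active);
        while (pending.length > maxPending) {
          pending.shift();
          dropped++; // caller can console.warn or bump a metric with this
        }
      }
      return dropped;
    },
    size: () => pending.length,
  };
}
```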

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/cursorTelemetryBuffer.ts` around lines 43 - 51, The endSession()
function currently pushes active into pending then quietly shifts older batches
while pending.length > maxPending, causing silent drops; update endSession() to
report drops by computing dropped = Math.max(0, (pending.length - maxPending))
before trimming and then either (a) return a boolean/number (e.g., return
droppedCount or return droppedCount === 0) or (b) emit/log a metric or counter
(call a logger/metrics.increment with dropped) so callers can observe when
batches are dropped; reference endSession(), pending, maxPending and ensure the
function signature is updated to return the chosen value or that the metrics
call is present.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 5bd8a19b-0222-4ec5-8e38-6f0f2fcf0854

📥 Commits

Reviewing files that changed from the base of the PR and between a6ae0e6 and 84ec5a7.

📒 Files selected for processing (3)
  • electron/ipc/handlers.ts
  • src/lib/cursorTelemetryBuffer.test.ts
  • src/lib/cursorTelemetryBuffer.ts


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 84ec5a7e68

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread electron/ipc/handlers.ts Outdated
  stopCursorCapture();
- activeCursorSamples = [];
- pendingCursorSamples = [];
+ cursorTelemetryBuffer.startSession();


P1 Badge Clear stale telemetry when a recording is discarded

Calling set-recording-state(true) now uses cursorTelemetryBuffer.startSession(), which clears only the active samples and keeps queued pending batches. In the existing restart/cancel flows (useScreenRecorder.restartRecording / cancelRecording), a stopped segment can be intentionally discarded in the renderer and never reaches store-recorded-session, but setRecordingState(false) has already enqueued its cursor batch; the next successful save then dequeues that stale batch and writes it to the new recording’s .cursor.json, misaligning telemetry across recordings. The main process needs a discard path that drops the pending batch (or batches keyed by recording id) instead of always preserving them across session starts.
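A sketch of such a discard path, keyed by recordingId as the merged handler ended up doing (queue internals below are illustrative, not the module's actual code):

```typescript
// Pending batches tagged with the recording they belong to, so a
// cancelled/restarted recording can drop exactly its own batch instead
// of leaving a stale one for the next save to dequeue.
function makeDiscardableQueue<T>() {
  const pending: { recordingId: string; samples: T[] }[] = [];
  return {
    enqueue(recordingId: string, samples: T[]) {
      pending.push({ recordingId, samples });
    },
    // Called from a "discard-cursor-telemetry"-style IPC handler.
    discardBatch(recordingId: string): boolean {
      const i = pending.findIndex((b) => b.recordingId === recordingId);
      if (i === -1) return false;
      pending.splice(i, 1);
      return true;
    },
    takeBatch(recordingId: string): T[] {
      const i = pending.findIndex((b) => b.recordingId === recordingId);
      return i === -1 ? [] : pending.splice(i, 1)[0].samples;
    },
  };
}
```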

Useful? React with 👍 / 👎.

Comment thread electron/ipc/handlers.ts Outdated

  const telemetryPath = `${screenVideoPath}.cursor.json`;
- if (pendingCursorSamples.length > 0) {
+ const pendingSamples: CursorTelemetryPoint[] = cursorTelemetryBuffer.takeNextBatch();


P2 Badge Dequeue telemetry only after persistence succeeds

storeRecordedSessionFiles removes the next telemetry batch via takeNextBatch() before attempting filesystem writes. If writing the cursor file fails (or any subsequent throw aborts the save flow), that batch is already lost, so a retry cannot persist the original telemetry and later saves can consume the wrong batch. Dequeue should be acknowledged only after a successful write, or the batch should be re-queued on failure.

Useful? React with 👍 / 👎.

…uffer

Address two issues raised during review:

P1 – When a recording is cancelled or restarted, setRecordingState(false)
enqueues its cursor batch but store-recorded-session is never called,
leaving a stale batch that contaminates the next recording's telemetry.
Add discardLatestPending() to the buffer and a discard-cursor-telemetry
IPC handler; the renderer now calls it on the discard path.

P2 – takeNextBatch() dequeued the batch before fs.writeFile, so a write
failure would permanently lose the telemetry. Wrap the write in
try/catch and re-insert the batch via prependBatch() on failure.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
src/lib/cursorTelemetryBuffer.ts (1)

57-61: nit: prependBatch doesn't enforce maxPendingBatches

prependBatch() re-inserts a batch at the front without checking maxPending. in normal usage (retry after write failure) this is fine since you just took it out. but if something goes sideways and prepends keep happening without successful writes, you could exceed the cap.

probably fine in practice since the caller controls retry logic, but worth a mental note or maybe a comment.
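one hypothetical way to make the prepend defensive while keeping the retried batch (queue internals are illustrative, not the module's actual code):

```typescript
// Prepend that enforces the cap by trimming from the back, so the
// just-retried batch at the front is never the one evicted.
function makeCappedPrepend<T>(maxPending: number) {
  const pending: T[][] = [];
  return {
    pending,
    prependBatch(batch: T[]) {
      pending.unshift(batch);
      while (pending.length > maxPending) {
        pending.pop(); // could also console.warn here for observability
      }
    },
  };
}
```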

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/cursorTelemetryBuffer.ts` around lines 57 - 61, prependBatch
currently unshifts a batch into pending without enforcing the maxPendingBatches
cap; update prependBatch (and/or document it) to either drop the oldest batches
when pending.length would exceed maxPendingBatches or refuse/prevent the prepend
when the cap would be exceeded, using the existing symbols pending and
maxPendingBatches and keeping behavior consistent with appendBatch; if you
prefer not to change runtime behavior, add a clear comment in prependBatch
explaining callers are responsible for honoring maxPendingBatches and the
reasoning about retry semantics.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: f33df6ac-97f2-4f31-90ae-ea56c22313d7

📥 Commits

Reviewing files that changed from the base of the PR and between 84ec5a7 and fac0b40.

📒 Files selected for processing (6)
  • electron/electron-env.d.ts
  • electron/ipc/handlers.ts
  • electron/preload.ts
  • src/hooks/useScreenRecorder.ts
  • src/lib/cursorTelemetryBuffer.test.ts
  • src/lib/cursorTelemetryBuffer.ts
✅ Files skipped from review due to trivial changes (1)
  • electron/ipc/handlers.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/lib/cursorTelemetryBuffer.test.ts

@Qodo-Free-For-OSS

Hi, storeRecordedSessionFiles() always consumes the oldest pending batch (takeNextBatch) regardless of which recording is being stored, so if two recordings finalize/save out-of-order, telemetry can be written to the wrong *.cursor.json file. This breaks cursor playback alignment and can swap telemetry between sessions.

Severity: action required | Category: correctness

How to fix: Key batches by session id

Agent prompt to fix - you can give this to your LLM of choice:

Issue description

electron/ipc/handlers.ts writes cursor telemetry by dequeuing cursorTelemetryBuffer.takeNextBatch() (FIFO). If two recordings are finalized/saved out-of-order (e.g. recording 2 finishes fixWebmDuration sooner than recording 1), the FIFO batch can be written to the wrong screenVideoPath.cursor.json.

Issue Context

  • Cursor batches are enqueued on set-recording-state(false).
  • Video/session files are stored later via IPC store-recorded-session after async work in the renderer.
  • There is no stable identifier linking a telemetry batch to the payload being stored.

Fix Focus Areas

  • electron/ipc/handlers.ts[261-294]
  • electron/ipc/handlers.ts[534-544]
  • src/hooks/useScreenRecorder.ts[197-283]

Fix direction

Implement an ID-based association instead of FIFO:

  • Accept a recordingId (or use payload.createdAt / parsed fileName) and pass it through setRecordingState(true|false, recordingId).
  • Update CursorTelemetryBuffer to store pending batches as {id, samples} and provide takeBatch(id) (or takeBatchForRecordingId).
  • In storeRecordedSessionFiles(payload), fetch the batch for that payload’s ID rather than the next FIFO batch.
  • Update tests to cover out-of-order store calls and ensure correct mapping.

Found by Qodo code review

Add JSDoc to every public export in cursorTelemetryBuffer so the module
meets the 80% docstring-coverage threshold, and make two silent-drop
paths observable:

- endSession() now returns the number of pending batches evicted by the
  maxPendingBatches cap and emits console.warn when any are dropped.
- prependBatch() defensively trims and warns if an unusual retry pattern
  would push the queue past the cap (normal retry after takeNextBatch()
  stays a no-op).

Tests cover both drop paths.
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/lib/cursorTelemetryBuffer.ts`:
- Around line 16-20: Current implementation treats pending batches as a FIFO
queue and uses takeNextBatch()/discardLatestPending(), which causes
mis-association when saves complete out-of-order; modify
startSession()/endSession() to accept and propagate a recording/session id,
store pending batches keyed by that id (instead of pure FIFO), and add targeted
APIs takeBatch(recordingId) and discardBatch(recordingId) (and adjust
prependBatch to accept recordingId) so callers can read/discard/retry batches
for a specific recording without risking cross-association; update usages of
takeNextBatch(), discardLatestPending(), startSession(), and endSession()
accordingly (e.g., in storeRecordedSessionFiles()) to pass the recording id.
- Around line 103-108: The code uses options.maxActiveSamples and
options.maxPendingBatches without validating them, which can allow negative, NaN
or Infinity values (e.g. causing infinite loops); update
createCursorTelemetryBuffer to validate and sanitize these runtime options by
coercing options.maxActiveSamples and options.maxPendingBatches into safe finite
non-negative integers (falling back to DEFAULT_MAX_PENDING_BATCHES and a sane
default for maxActive if invalid), and apply the same guards wherever maxActive
and maxPending are derived/used (e.g., the other places that assign
maxActive/maxPending in this module) so loops and memory bounds cannot be
disabled by bad inputs.
- Around line 1-6: Update the top-of-file comment for the cursor telemetry
sample to say that cx and cy are normalized, clamped ratios in the [0,1] range
(not device-pixel positions); reference the sampleCursorPoint push behavior and
keep timeMs described as the offset from the recording's start so callers of
CursorTelemetrySample / sampleCursorPoint know coordinates are normalized
ratios.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 8a8d0a2f-26e2-44f8-ab07-45eba9e24b11

📥 Commits

Reviewing files that changed from the base of the PR and between fac0b40 and adc6105.

📒 Files selected for processing (2)
  • src/lib/cursorTelemetryBuffer.test.ts
  • src/lib/cursorTelemetryBuffer.ts
✅ Files skipped from review due to trivial changes (1)
  • src/lib/cursorTelemetryBuffer.test.ts

Comment thread src/lib/cursorTelemetryBuffer.ts Outdated
Comment on lines +16 to +20
* Flow: `startSession()` → `push(point)` N times → `endSession()` enqueues
* the collected samples as a completed batch. The main process later
* drains batches in FIFO order via `takeNextBatch()` to persist them to
* disk, and can `prependBatch()` on write failure to retry without losing
* order.
Contributor


⚠️ Potential issue | 🟠 Major

Key pending batches by recording id, not FIFO order.

This still gets kinda cursed if saves complete out of order: storeRecordedSessionFiles() writes the oldest takeNextBatch() result to whichever video is currently being stored. So A/B recordings can swap .cursor.json data if B is persisted before A. Same identity problem applies to discardLatestPending().

Prefer threading a recording/session id through startSession()/endSession() and exposing targeted APIs like takeBatch(recordingId) / discardBatch(recordingId).

shape of the safer API
 export interface CursorTelemetryBuffer {
-	startSession(): void;
+	startSession(recordingId: string): void;

-	takeNextBatch(): CursorTelemetryPoint[];
+	takeBatch(recordingId: string): CursorTelemetryPoint[];

-	prependBatch(batch: CursorTelemetryPoint[]): void;
+	prependBatch(recordingId: string, batch: CursorTelemetryPoint[]): void;

-	discardLatestPending(): void;
+	discardPending(recordingId: string): void;
 }
-	let active: CursorTelemetryPoint[] = [];
-	let pending: CursorTelemetryPoint[][] = [];
+	let activeRecordingId: string | null = null;
+	let active: CursorTelemetryPoint[] = [];
+	let pending: Array<{ recordingId: string; samples: CursorTelemetryPoint[] }> = [];

-	startSession() {
+	startSession(recordingId) {
+		activeRecordingId = recordingId;
 		active = [];
 	},

 	endSession() {
 		let dropped = 0;
-		if (active.length > 0) {
-			pending.push(active);
+		if (active.length > 0 && activeRecordingId !== null) {
+			pending.push({ recordingId: activeRecordingId, samples: active });
 			while (pending.length > maxPending) {
 				pending.shift();
 				dropped++;
 			}
 		}
+		activeRecordingId = null;
 		active = [];
 		// ...
 	},

-	takeNextBatch() {
-		return pending.shift() ?? [];
+	takeBatch(recordingId) {
+		const index = pending.findIndex((batch) => batch.recordingId === recordingId);
+		if (index === -1) return [];
+		return pending.splice(index, 1)[0].samples;
 	},

Also applies to: 55-79, 139-158


Two follow-up fixes for CodeRabbit feedback on the docs commit:

- CursorTelemetryPoint JSDoc previously described cx/cy as 'device-pixel
  positions'. The producer sampleCursorPoint() in electron/ipc/handlers.ts
  clamps them to the [0, 1] range after dividing by the source display's
  width/height, so they are normalised ratios, not pixel values. Correct
  the doc comment accordingly.
- createCursorTelemetryBuffer now sanitizes maxActiveSamples and
  maxPendingBatches: non-finite, zero, or negative values fall back to
  safe positive-integer defaults. Without this, a caller passing Infinity
  or NaN would hang the trim loops.

New test covers the sanitisation path for both options.
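The sanitisation described can be sketched as a small helper (name and exact fallback rules are illustrative; the commit only specifies that non-finite, zero, or negative values fall back to safe positive-integer defaults):

```typescript
// Coerce a caller-supplied limit to a safe positive integer so trim
// loops like `while (pending.length > maxPending)` always terminate.
function sanitizeLimit(value: unknown, fallback: number): number {
  if (typeof value !== "number" || !Number.isFinite(value)) return fallback;
  const n = Math.floor(value);
  return n >= 1 ? n : fallback;
}
```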
@shaun0927
Contributor Author

shaun0927 commented Apr 21, 2026

I've pushed 96765e4 addressing two of the three points:

  • JSDoc cx/cy units: Fixed — they're normalised [0, 1] ratios produced by sampleCursorPoint() in electron/ipc/handlers.ts, not device-pixel positions. My earlier commit had this wrong.
  • Input validation on buffer options: Added. maxActiveSamples and maxPendingBatches are now coerced to safe finite positive integers, so Infinity / NaN / negative values fall back to defaults instead of hanging the trim loops. New test covers this path.

On the third point (FIFO → recording-id keyed queue): I'm leaving this out of this PR. The PR's stated scope is preserving per-session batches through a FIFO buffer, and the call site in handlers.ts assumes FIFO ordering end-to-end (single recorder, serialised set-recording-state handler, store-recorded-session drains in arrival order). Switching to id-keyed batches would change the IPC surface and storeRecordedSessionFiles contract, which feels out of scope for a bugfix PR. Happy to file a follow-up issue if out-of-order saves turn out to be reachable in practice.

Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 2

🧹 Nitpick comments (1)
src/lib/cursorTelemetryBuffer.test.ts (1)

4-5: nit: keep fixture coordinates normalized

sample(101) gives cx: 10.1, which is outside the normalized-ratio shape these cursor points represent. Lowkey cleaner to decouple the timestamp tag from the coordinate fixture so tests don’t accidentally bless impossible telemetry.

🧹 Proposed fixture tweak
 function sample(tag: number): CursorTelemetryPoint {
-	return { timeMs: tag, cx: tag / 10, cy: tag / 10 };
+	const normalized = (tag % 10) / 10;
+	return { timeMs: tag, cx: normalized, cy: normalized };
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/cursorTelemetryBuffer.test.ts` around lines 4 - 5, The test fixture
function sample(tag: number): CursorTelemetryPoint currently derives cx/cy
directly from tag (e.g., tag 101 -> cx: 10.1), producing coordinates outside the
normalized [0,1] range; update sample to decouple timestamp from coordinates by
leaving timeMs as tag but computing normalized coordinates (e.g., cx and cy =
(tag % 100) / 100 or tag / 1000 to guarantee values within 0..1) so cursor
points remain valid normalized ratios.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/lib/cursorTelemetryBuffer.test.ts`:
- Around line 32-49: Add a regression test that simulates multiple recordings
finalizing out of order by switching the buffer API from FIFO-only
takeNextBatch() to a keyed takeBatch(recordingId) and asserting batches are
fetched by recording ID even if saved out-of-order: modify the existing test
that uses createCursorTelemetryBuffer and startSession/push/endSession to create
two recordings, simulate saving recording 2 before recording 1, then call
takeBatch(recordingId) for each recording ID in reverse save order and assert
the returned batches match the correct timeMs sequences for recording IDs 1 and
2 (replace uses of takeNextBatch() with takeBatch(recordingId) in the test and
add recordingId tracking to sessions).
- Line 1: The warning tests in cursorTelemetryBuffer.test.ts create spies on
console.warn but only call mockRestore() on the happy path, which can leak spies
if an assertion fails; add a global cleanup by inserting afterEach(() =>
vi.restoreAllMocks()) in the test file (placed alongside the existing import
line or at the top-level of the test file) so any console.warn spies (and other
mocks) created by the tests are always restored regardless of assertion
failures.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 0824cc8b-ccb8-412a-ad85-3fa0b6241e35

📥 Commits

Reviewing files that changed from the base of the PR and between adc6105 and 96765e4.

📒 Files selected for processing (2)
  • src/lib/cursorTelemetryBuffer.test.ts
  • src/lib/cursorTelemetryBuffer.ts
✅ Files skipped from review due to trivial changes (1)
  • src/lib/cursorTelemetryBuffer.ts

@@ -0,0 +1,230 @@
import { describe, expect, it, vi } from "vitest";
🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Inspect test config for automatic Vitest mock restoration.

fd -i 'vitest.config.*|vite.config.*|package.json' -t f \
  --exec sh -c '
    echo "### $1"
    rg -n -C2 "\b(restoreMocks|mockReset|clearMocks)\b" "$1" || true
  ' sh {}

Repository: siddharthvaddem/openscreen

Length of output: 132


🏁 Script executed:

cat vitest.config.ts

Repository: siddharthvaddem/openscreen

Length of output: 375


🏁 Script executed:

head -50 src/lib/cursorTelemetryBuffer.test.ts

Repository: siddharthvaddem/openscreen

Length of output: 1661


🏁 Script executed:

sed -n '143,190p' src/lib/cursorTelemetryBuffer.test.ts

Repository: siddharthvaddem/openscreen

Length of output: 1919


Spy restoration can leak if an assertion bails early

Both warning tests restore console.warn only on the happy path. Since vitest.config.ts doesn't set restoreMocks: true, a failed assertion before mockRestore() leaves the spy active for later tests. Add afterEach(() => vi.restoreAllMocks()) so restoration happens regardless of how the test exits.

🧹 Proposed cleanup
-import { describe, expect, it, vi } from "vitest";
+import { afterEach, describe, expect, it, vi } from "vitest";
 import { type CursorTelemetryPoint, createCursorTelemetryBuffer } from "./cursorTelemetryBuffer";
@@
 describe("createCursorTelemetryBuffer", () => {
+	afterEach(() => {
+		vi.restoreAllMocks();
+	});
+
@@
-		warn.mockRestore();
 	});
@@
-		warn.mockRestore();
 	});

Also applies to: 8-8, 143-190

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/cursorTelemetryBuffer.test.ts` at line 1, The warning tests in
cursorTelemetryBuffer.test.ts create spies on console.warn but only call
mockRestore() on the happy path, which can leak spies if an assertion fails; add
a global cleanup by inserting afterEach(() => vi.restoreAllMocks()) in the test
file (placed alongside the existing import line or at the top-level of the test
file) so any console.warn spies (and other mocks) created by the tests are
always restored regardless of assertion failures.

Comment thread src/lib/cursorTelemetryBuffer.test.ts Outdated
Comment on lines +32 to +49
it("preserves earlier pending batches when a new session starts before store", () => {
const buf = createCursorTelemetryBuffer({ maxActiveSamples: 10 });

// Recording 1
buf.startSession();
buf.push(sample(101));
buf.push(sample(102));
buf.endSession();

// Recording 2 starts before recording 1's batch has been consumed
buf.startSession();
buf.push(sample(201));
buf.endSession();

const batch1 = buf.takeNextBatch();
const batch2 = buf.takeNextBatch();
expect(batch1.map((s) => s.timeMs)).toEqual([101, 102]);
expect(batch2.map((s) => s.timeMs)).toEqual([201]);
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Inspect cursor telemetry buffer APIs/usages and tests for FIFO-only persistence.

fd -i 'cursorTelemetryBuffer.ts|cursorTelemetryBuffer.test.ts|handlers.ts|useScreenRecorder.ts' -t f \
  --exec sh -c '
    echo "### $1"
    rg -n -C3 "\b(takeNextBatch|takeBatch|takeBatchForRecordingId|recordingId|storeRecordedSessionFiles|createdAt|prependBatch|discardLatestPending)\b" "$1"
  ' sh {}

Repository: siddharthvaddem/openscreen

Length of output: 10732


Add out-of-order persistence regression once buffer API is keyed

This test covers rapid restart where batches drain in recording order. The actual failure mode—multiple recordings finalizing out of order—still isn't tested. takeNextBatch() is pure FIFO with no recording ID binding in handlers.ts (line 282), so if recording 2 saves before recording 1, it grabs recording 1's batch. Once you implement a keyed batch API (e.g. takeBatch(recordingId)), add a regression that fetches by ID in reverse order.
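A keyed retrieval API of the sort suggested here could look like the following sketch. All names (createKeyedBuffer, takeBatch, the point shape) are illustrative, not the PR's actual exports:

```typescript
// Hypothetical keyed batch API; not the PR's real cursorTelemetryBuffer.
type CursorPoint = { timeMs: number; cx: number; cy: number };
type Batch = { recordingId: number; samples: CursorPoint[] };

function createKeyedBuffer() {
  let active: Batch | null = null;
  const pending: Batch[] = [];
  return {
    startSession(recordingId: number) {
      active = { recordingId, samples: [] };
    },
    push(p: CursorPoint) {
      active?.samples.push(p);
    },
    endSession() {
      if (active) pending.push(active);
      active = null;
    },
    // Keyed retrieval: returns the batch for this recording even when
    // another recording's batch is older in the queue.
    takeBatch(recordingId: number): Batch | undefined {
      const i = pending.findIndex((b) => b.recordingId === recordingId);
      return i === -1 ? undefined : pending.splice(i, 1)[0];
    },
  };
}

// Out-of-order save: recording 2 is stored before recording 1.
const buf = createKeyedBuffer();
buf.startSession(1);
buf.push({ timeMs: 101, cx: 0.01, cy: 0.01 });
buf.endSession();
buf.startSession(2);
buf.push({ timeMs: 201, cx: 0.01, cy: 0.01 });
buf.endSession();

const second = buf.takeBatch(2); // saved first, despite being newer
const first = buf.takeBatch(1);  // still retrievable by its own id
```

A regression test built on this shape would fetch the IDs in reverse save order, exactly as the prompt above describes.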

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/cursorTelemetryBuffer.test.ts` around lines 32 - 49, Add a regression
test that simulates multiple recordings finalizing out of order by switching the
buffer API from FIFO-only takeNextBatch() to a keyed takeBatch(recordingId) and
asserting batches are fetched by recording ID even if saved out-of-order: modify
the existing test that uses createCursorTelemetryBuffer and
startSession/push/endSession to create two recordings, simulate saving recording
2 before recording 1, then call takeBatch(recordingId) for each recording ID in
reverse save order and assert the returned batches match the correct timeMs
sequences for recording IDs 1 and 2 (replace uses of takeNextBatch() with
takeBatch(recordingId) in the test and add recordingId tracking to sessions).

@FabLrc
Collaborator

FabLrc commented Apr 27, 2026

Thanks for pushing 96765e4; the JSDoc and input validation fixes look good.

On the FIFO/recordingId issue: you're right that out-of-order saves aren't reachable with a single serialised recorder, so I'll withdraw that concern.

However, discardLatestPending() still has a real bug that isn't addressed. Even with serial execution, when finalizeRecording() does async fixWebmDuration work, a quick Stop→Record→Discard sequence can cause it to pop the wrong batch:

  1. Recording A ends → batch A queued
  2. finalizeRecording starts async fixWebmDuration
  3. User Record → Recording B ends → batch B queued (now most recent)
  4. finalizeRecording checks discard → calls discardLatestPending()
  5. pop() removes batch B, not batch A

This doesn't require out-of-order saves or IPC surface changes: just key each batch by recordingId and provide discardBatch(recordingId) instead of discardLatestPending(). Filing a follow-up issue is also fine if you want to keep this PR scoped as a bugfix.
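The five-step sequence above can be reproduced at the state level with a plain FIFO queue whose discard pops the back. This is an illustrative repro, not the PR's actual buffer code:

```typescript
// State-level repro of the discardLatestPending() hazard.
type Batch = { recordingId: number; samples: number[] };

const pending: Batch[] = [];

pending.push({ recordingId: 1, samples: [101, 102] }); // 1. recording A ends, batch A queued
// 2. finalizeRecording(A) is still awaiting fixWebmDuration...
pending.push({ recordingId: 2, samples: [201] });      // 3. recording B ends, batch B queued

// 4-5. A's finalize resolves with discard requested; pop() takes the back:
const discarded = pending.pop(); // removes batch B, not batch A
```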

discardLatestPending() popped whichever batch happened to be at the
back of the queue. With a Stop → Record → Discard sequence, the
pending queue can have recording B's batch sitting in front of A's by
the time A's finalize callback resolves (because finalizeRecording
awaits fixWebmDuration), so the discard targets the wrong recording.

Tag each completed batch with the recording id supplied at
startSession() time and replace discardLatestPending() with
discardBatch(recordingId). takeNextBatch() now returns the full
{recordingId, samples} shape so prependBatch() can re-queue it on
write-failure without losing the id. The renderer already owns a
stable recordingId (Date.now() in useScreenRecorder) and the IPC
surface threads it through set-recording-state and
discard-cursor-telemetry.

Adds a regression test that mirrors FabLrc's scenario in PR siddharthvaddem#457:
two recordings finalize, A is discarded after B has already been
queued, and the buffer must drop A while keeping B intact.
@shaun0927
Contributor Author

@FabLrc You're right — discardLatestPending() is unsafe even with serialised recording state, because finalizeRecording awaits fixWebmDuration, so by the time the discard branch decides to run, recording B's batch can already be sitting at the back of the queue.

Pushed 3b9b419 to fix this without churning the IPC surface much:

  • Each pending batch now carries the recordingId it was produced under (the renderer already had one — recordingId.current = Date.now() in useScreenRecorder.ts). startSession(recordingId) tags the active session and endSession() enqueues {recordingId, samples}.
  • discardLatestPending() is replaced with discardBatch(recordingId), which finds and removes the batch by id (no-op if it's already been drained, returns false).
  • takeNextBatch() returns the full {recordingId, samples} shape so prependBatch() can re-queue it on write-failure without losing the id. Store path itself stays FIFO, since within a single serialised recorder there's no out-of-order risk on the take side.

Added a regression test mirroring the Stop → Record → Discard sequence: two recordings finalize, A is discarded after B is already queued, and the buffer drops A while leaving B intact.
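The keyed discard described above can be sketched independently of Electron. This is an assumed minimal shape (the real buffer also tracks an active session and richer sample types):

```typescript
// Minimal sketch of id-keyed discard; not the PR's actual implementation.
type Batch = { recordingId: number; samples: number[] };

function createBuffer() {
  const pending: Batch[] = [];
  return {
    enqueue(batch: Batch) {
      pending.push(batch);
    },
    // Removes the batch for this recording; returns false (no-op) if it
    // was never queued or has already been drained.
    discardBatch(recordingId: number): boolean {
      const i = pending.findIndex((b) => b.recordingId === recordingId);
      if (i === -1) return false;
      pending.splice(i, 1);
      return true;
    },
    pendingIds(): number[] {
      return pending.map((b) => b.recordingId);
    },
  };
}

// Stop -> Record -> Discard: A is discarded after B is already queued.
const buf = createBuffer();
buf.enqueue({ recordingId: 1, samples: [101, 102] }); // recording A
buf.enqueue({ recordingId: 2, samples: [201] });      // recording B
const droppedA = buf.discardBatch(1);     // targets A by id, B stays intact
const droppedAgain = buf.discardBatch(1); // already gone: no-op
```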

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
electron/ipc/handlers.ts (1)

282-297: ⚠️ Potential issue | 🟠 Major

FIFO-only drain can still write telemetry to the wrong recording.

Line 282 always takes the oldest batch, not the batch for the session being stored. If saves finalize out of order, cursor data can be swapped between recordings. You’ll want ID-keyed retrieval (e.g., takeBatch(recordingId)) and to pass that ID into storeRecordedSessionFiles.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@electron/ipc/handlers.ts` around lines 282 - 297, The current logic uses
cursorTelemetryBuffer.takeNextBatch() (FIFO) which can return the wrong
session's telemetry when saves finalize out of order; change to an ID-keyed
retrieval such as cursorTelemetryBuffer.takeBatch(recordingId) and ensure the
returned batch includes its recordingId (e.g., pendingBatch.recordingId) so you
pass that recordingId into storeRecordedSessionFiles; also update error handling
to call cursorTelemetryBuffer.prependBatch(pendingBatch) unchanged but ensure
prependBatch works with ID-keyed batches so the batch is re-inserted for the
correct recording if writeFile throws.
🧹 Nitpick comments (1)
electron/ipc/handlers.ts (1)

561-563: nit: return discard result to the renderer for better observability.

discardBatch(recordingId) already returns boolean; returning it from this IPC handler would make discard failures visible instead of silent no-ops.
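Returning the boolean is a one-line change. Sketched here against a stubbed ipcMain so the pattern runs outside Electron; the stub and buffer stand-in are illustrative, not Electron's or the PR's real objects:

```typescript
// Stubbed ipcMain so the handler pattern is runnable outside Electron.
type Handler = (event: unknown, ...args: unknown[]) => unknown;
const handlers = new Map<string, Handler>();
const ipcMain = {
  handle(channel: string, fn: Handler) {
    handlers.set(channel, fn);
  },
};

// Minimal stand-in for the telemetry buffer's discardBatch.
const pendingIds = new Set<number>([1]);
const cursorTelemetryBuffer = {
  discardBatch: (recordingId: number) => pendingIds.delete(recordingId),
};

// Return the boolean instead of swallowing it, so the renderer can
// distinguish a real discard from a silent no-op.
ipcMain.handle("discard-cursor-telemetry", (_event, recordingId) =>
  cursorTelemetryBuffer.discardBatch(recordingId as number),
);

const discard = handlers.get("discard-cursor-telemetry")!;
const firstTry = discard(null, 1) as boolean;  // batch existed
const secondTry = discard(null, 1) as boolean; // already drained
```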

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@electron/ipc/handlers.ts` around lines 561 - 563, The IPC handler
"discard-cursor-telemetry" currently calls
cursorTelemetryBuffer.discardBatch(recordingId) but ignores its boolean return;
change the ipcMain.handle callback to return the boolean result from
discardBatch so the renderer receives success/failure (i.e., replace the void
call with a return of cursorTelemetryBuffer.discardBatch(recordingId) inside the
"discard-cursor-telemetry" handler).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@electron/ipc/handlers.ts`:
- Around line 545-546: The code currently treats any JS number as a valid
recordingId before calling cursorTelemetryBuffer.startSession; change the
validation so that recordingId is only accepted when it's a finite positive
integer (use Number.isFinite(recordingId) && Number.isInteger(recordingId) &&
recordingId > 0); otherwise compute id = Date.now() and then call
cursorTelemetryBuffer.startSession(id). Update the id assignment and use the
existing cursorTelemetryBuffer.startSession call (and any downstream logic that
relies on id) to ensure unstable values like NaN/Infinity are never passed.

---

Duplicate comments:
In `@electron/ipc/handlers.ts`:
- Around line 282-297: The current logic uses
cursorTelemetryBuffer.takeNextBatch() (FIFO) which can return the wrong
session's telemetry when saves finalize out of order; change to an ID-keyed
retrieval such as cursorTelemetryBuffer.takeBatch(recordingId) and ensure the
returned batch includes its recordingId (e.g., pendingBatch.recordingId) so you
pass that recordingId into storeRecordedSessionFiles; also update error handling
to call cursorTelemetryBuffer.prependBatch(pendingBatch) unchanged but ensure
prependBatch works with ID-keyed batches so the batch is re-inserted for the
correct recording if writeFile throws.

---

Nitpick comments:
In `@electron/ipc/handlers.ts`:
- Around line 561-563: The IPC handler "discard-cursor-telemetry" currently
calls cursorTelemetryBuffer.discardBatch(recordingId) but ignores its boolean
return; change the ipcMain.handle callback to return the boolean result from
discardBatch so the renderer receives success/failure (i.e., replace the void
call with a return of cursorTelemetryBuffer.discardBatch(recordingId) inside the
"discard-cursor-telemetry" handler).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 4fc0fa02-dfb4-44be-9060-05600d212821

📥 Commits

Reviewing files that changed from the base of the PR and between 96765e4 and 3b9b419.

📒 Files selected for processing (6)
  • electron/electron-env.d.ts
  • electron/ipc/handlers.ts
  • electron/preload.ts
  • src/hooks/useScreenRecorder.ts
  • src/lib/cursorTelemetryBuffer.test.ts
  • src/lib/cursorTelemetryBuffer.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/hooks/useScreenRecorder.ts
  • src/lib/cursorTelemetryBuffer.ts

Comment thread electron/ipc/handlers.ts
Comment on lines +545 to +546
const id = typeof recordingId === "number" ? recordingId : Date.now();
cursorTelemetryBuffer.startSession(id);
⚠️ Potential issue | 🟡 Minor

Harden recordingId validation before starting a session.

Line 545 accepts any JS number, including NaN and Infinity, which can produce non-matchable or unstable batch keys. Guard for a finite positive integer before calling startSession, and fall back to Date.now() otherwise.
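The suggested guard as a standalone sketch (normalizeRecordingId is an assumed helper name, not code from the PR):

```typescript
// Assumed helper name; the guard itself matches the review suggestion.
function normalizeRecordingId(recordingId: unknown): number {
  const valid =
    typeof recordingId === "number" &&
    Number.isFinite(recordingId) &&  // rejects NaN and +/-Infinity
    Number.isInteger(recordingId) &&
    recordingId > 0;
  return valid ? (recordingId as number) : Date.now();
}
```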

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@electron/ipc/handlers.ts` around lines 545 - 546, The code currently treats
any JS number as a valid recordingId before calling
cursorTelemetryBuffer.startSession; change the validation so that recordingId is
only accepted when it's a finite positive integer (use
Number.isFinite(recordingId) && Number.isInteger(recordingId) && recordingId >
0); otherwise compute id = Date.now() and then call
cursorTelemetryBuffer.startSession(id). Update the id assignment and use the
existing cursorTelemetryBuffer.startSession call (and any downstream logic that
relies on id) to ensure unstable values like NaN/Infinity are never passed.

@siddharthvaddem siddharthvaddem merged commit 608e0ab into siddharthvaddem:main Apr 28, 2026
11 checks passed
FabLrc added a commit to FabLrc/openscreen that referenced this pull request Apr 29, 2026
Sync ui-rework with main, which includes:
- fix/vp8-vp9-codec-normalization (PR siddharthvaddem#501)
- fix/cursor-telemetry-session-isolation (PR siddharthvaddem#457)

No source conflicts — changes are in disjoint areas.
Successfully merging this pull request may close these issues.

[Bug]: cursor telemetry lost or mixed between recordings on rapid restart
