fix: isolate cursor telemetry samples per recording session (#457)
Conversation
Previously, the main process kept two module-scope arrays — activeCursorSamples and pendingCursorSamples — and set-recording-state on a new recording wiped BOTH. When a user stopped recording and immediately started a new one before store-recorded-session fired, the previous recording's pending samples were discarded or later overwritten with the new session's data, producing empty or mismatched .cursor.json files.

Replace the two arrays with a small FIFO buffer (createCursorTelemetryBuffer) that:

- Keeps pending batches per completed recording, never wiping them on a new session start.
- Yields batches in arrival order to storeRecordedSessionFiles.
- Caps pending batches (default 8) so a never-stored sequence cannot leak unbounded memory.

Unit-tested directly in src/lib/cursorTelemetryBuffer.test.ts, including the rapid-restart race that motivated the change.
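The buffer shape the commit message describes can be sketched roughly like this. This is an illustration of the stated API (startSession/push/endSession, FIFO drain, default cap of 8), not the repo's actual module:

```typescript
// Illustrative sketch only; internals are assumptions, names follow the PR text.
interface CursorTelemetryPoint {
  timeMs: number; // offset from the recording's start
  cx: number;     // normalized [0, 1] ratio
  cy: number;
}

function createCursorTelemetryBuffer(maxPending = 8) {
  let active: CursorTelemetryPoint[] = [];
  const pending: CursorTelemetryPoint[][] = [];
  return {
    // Begin a new recording; only in-progress samples are cleared,
    // pending batches from earlier recordings survive.
    startSession(): void {
      active = [];
    },
    push(point: CursorTelemetryPoint): void {
      active.push(point);
    },
    // Move the active samples into the pending queue, evicting the
    // oldest batches once the cap is exceeded.
    endSession(): void {
      if (active.length > 0) {
        pending.push(active);
        while (pending.length > maxPending) pending.shift();
      }
      active = [];
    },
    // Drain completed batches in arrival (FIFO) order.
    takeNextBatch(): CursorTelemetryPoint[] {
      return pending.shift() ?? [];
    },
  };
}
```

A rapid Stop → Record sequence leaves the first recording's batch intact, which is exactly the race the fix targets.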
📝 Walkthrough

Migrates cursor telemetry to a bounded in-memory buffer with session semantics. Main IPC, preload, and renderer flows use the buffer API (start/push/end/take/prepend/discard). Persistence consumes pending batches; failed writes requeue batches. Adds an IPC to discard pending telemetry for a recording.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
🧹 Nitpick comments (1)
src/lib/cursorTelemetryBuffer.ts (1)
43-51: silent batch drop when pending queue overflows - might want observability here

When pending.length > maxPending, older batches get silently dropped (line 47). This is correct for bounding memory, but if a user hits rapid Stop → Record 9+ times before any storeRecordedSessionFiles completes, they'd lose telemetry with no indication. Not blocking since the PR explicitly wants this cap, but you might consider returning a boolean from endSession() or emitting a count of dropped batches for logging/metrics. Totally optional though - the current behavior matches the issue requirements.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/cursorTelemetryBuffer.ts` around lines 43 - 51, The endSession() function currently pushes active into pending then quietly shifts older batches while pending.length > maxPending, causing silent drops; update endSession() to report drops by computing dropped = Math.max(0, (pending.length - maxPending)) before trimming and then either (a) return a boolean/number (e.g., return droppedCount or return droppedCount === 0) or (b) emit/log a metric or counter (call a logger/metrics.increment with dropped) so callers can observe when batches are dropped; reference endSession(), pending, maxPending and ensure the function signature is updated to return the chosen value or that the metrics call is present.
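The drop-count idea from this comment could look something like the following. It is a sketch against a simplified batch shape, and endSessionWithDropCount is a hypothetical name, not the PR's API:

```typescript
// Illustrative: a cap-enforcing endSession that reports evictions instead of
// dropping silently. `active` is the in-progress batch, `pending` the queue.
function endSessionWithDropCount<T>(
  active: T[],
  pending: T[][],
  maxPending: number,
): number {
  let dropped = 0;
  if (active.length > 0) {
    pending.push(active);
    while (pending.length > maxPending) {
      pending.shift(); // evict oldest, as the cap requires
      dropped++;
    }
  }
  if (dropped > 0) {
    console.warn(`cursor telemetry: dropped ${dropped} pending batch(es)`);
  }
  return dropped;
}
```

Callers can then log or count the return value, which is all the observability the reviewer asks for.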
📒 Files selected for processing (3)
- electron/ipc/handlers.ts
- src/lib/cursorTelemetryBuffer.test.ts
- src/lib/cursorTelemetryBuffer.ts
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 84ec5a7e68
```diff
   stopCursorCapture();
-  activeCursorSamples = [];
-  pendingCursorSamples = [];
+  cursorTelemetryBuffer.startSession();
```
Clear stale telemetry when a recording is discarded
Calling set-recording-state(true) now uses cursorTelemetryBuffer.startSession(), which clears only the active samples and keeps queued pending batches. In the existing restart/cancel flows (useScreenRecorder.restartRecording / cancelRecording), a stopped segment can be intentionally discarded in the renderer and never reaches store-recorded-session, but setRecordingState(false) has already enqueued its cursor batch; the next successful save then dequeues that stale batch and writes it to the new recording’s .cursor.json, misaligning telemetry across recordings. The main process needs a discard path that drops the pending batch (or batches keyed by recording id) instead of always preserving them across session starts.
```diff
   const telemetryPath = `${screenVideoPath}.cursor.json`;
-  if (pendingCursorSamples.length > 0) {
+  const pendingSamples: CursorTelemetryPoint[] = cursorTelemetryBuffer.takeNextBatch();
```
Dequeue telemetry only after persistence succeeds
storeRecordedSessionFiles removes the next telemetry batch via takeNextBatch() before attempting filesystem writes. If writing the cursor file fails (or any subsequent throw aborts the save flow), that batch is already lost, so a retry cannot persist the original telemetry and later saves can consume the wrong batch. Dequeue should be acknowledged only after a successful write, or the batch should be re-queued on failure.
…uffer

Address two issues raised during review:

P1 – When a recording is cancelled or restarted, setRecordingState(false) enqueues its cursor batch but store-recorded-session is never called, leaving a stale batch that contaminates the next recording's telemetry. Add discardLatestPending() to the buffer and a discard-cursor-telemetry IPC handler; the renderer now calls it on the discard path.

P2 – takeNextBatch() dequeued the batch before fs.writeFile, so a write failure would permanently lose the telemetry. Wrap the write in try/catch and re-insert the batch via prependBatch() on failure.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🧹 Nitpick comments (1)
src/lib/cursorTelemetryBuffer.ts (1)
57-61: nit: prependBatch doesn't enforce maxPendingBatches

prependBatch() re-inserts a batch at the front without checking maxPending. In normal usage (retry after write failure) this is fine since you just took it out. But if something goes sideways and prepends keep happening without successful writes, you could exceed the cap. Probably fine in practice since the caller controls retry logic, but worth a mental note or maybe a comment.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/cursorTelemetryBuffer.ts` around lines 57 - 61, prependBatch currently unshifts a batch into pending without enforcing the maxPendingBatches cap; update prependBatch (and/or document it) to either drop the oldest batches when pending.length would exceed maxPendingBatches or refuse/prevent the prepend when the cap would be exceeded, using the existing symbols pending and maxPendingBatches and keeping behavior consistent with appendBatch; if you prefer not to change runtime behavior, add a clear comment in prependBatch explaining callers are responsible for honoring maxPendingBatches and the reasoning about retry semantics.
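One possible cap policy for the prepend path, sketched under the assumption that the retried (front) batch should survive and newer batches are shed instead. The commit later in this thread chose its own trimming behavior; prependBatchCapped here is purely illustrative:

```typescript
// Illustrative: enforce the cap on re-queue by evicting from the back
// (newest), so the retried batch keeps its place at the front.
function prependBatchCapped<T>(
  pending: T[][],
  batch: T[],
  maxPending: number,
): void {
  pending.unshift(batch);
  while (pending.length > maxPending) {
    pending.pop(); // shed the newest batch, keep the retried one
    console.warn("cursor telemetry: pending cap exceeded during re-queue");
  }
}
```

Evicting from the front instead would drop the very batch being retried, which defeats the purpose of the retry; that trade-off is why the policy choice deserves a comment either way.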
📒 Files selected for processing (6)
- electron/electron-env.d.ts
- electron/ipc/handlers.ts
- electron/preload.ts
- src/hooks/useScreenRecorder.ts
- src/lib/cursorTelemetryBuffer.test.ts
- src/lib/cursorTelemetryBuffer.ts
✅ Files skipped from review due to trivial changes (1)
- electron/ipc/handlers.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/lib/cursorTelemetryBuffer.test.ts
Hi, storeRecordedSessionFiles() always consumes the oldest pending batch (takeNextBatch) regardless of which recording is being stored, so if two recordings finalize/save out-of-order, telemetry can be written to the wrong *.cursor.json file. This breaks cursor playback alignment and can swap telemetry between sessions.

Severity: action required | Category: correctness

How to fix: Key batches by session id

Agent prompt to fix - you can give this to your LLM of choice:

Found by Qodo code review
Add JSDoc to every public export in cursorTelemetryBuffer so the module meets the 80% docstring-coverage threshold, and make two silent-drop paths observable:

- endSession() now returns the number of pending batches evicted by the maxPendingBatches cap and emits console.warn when any are dropped.
- prependBatch() defensively trims and warns if an unusual retry pattern would push the queue past the cap (normal retry after takeNextBatch() stays a no-op).

Tests cover both drop paths.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/cursorTelemetryBuffer.ts`:
- Around line 16-20: Current implementation treats pending batches as a FIFO
queue and uses takeNextBatch()/discardLatestPending(), which causes
mis-association when saves complete out-of-order; modify
startSession()/endSession() to accept and propagate a recording/session id,
store pending batches keyed by that id (instead of pure FIFO), and add targeted
APIs takeBatch(recordingId) and discardBatch(recordingId) (and adjust
prependBatch to accept recordingId) so callers can read/discard/retry batches
for a specific recording without risking cross-association; update usages of
takeNextBatch(), discardLatestPending(), startSession(), and endSession()
accordingly (e.g., in storeRecordedSessionFiles()) to pass the recording id.
- Around line 103-108: The code uses options.maxActiveSamples and
options.maxPendingBatches without validating them, which can allow negative, NaN
or Infinity values (e.g. causing infinite loops); update
createCursorTelemetryBuffer to validate and sanitize these runtime options by
coercing options.maxActiveSamples and options.maxPendingBatches into safe finite
non-negative integers (falling back to DEFAULT_MAX_PENDING_BATCHES and a sane
default for maxActive if invalid), and apply the same guards wherever maxActive
and maxPending are derived/used (e.g., the other places that assign
maxActive/maxPending in this module) so loops and memory bounds cannot be
disabled by bad inputs.
- Around line 1-6: Update the top-of-file comment for the cursor telemetry
sample to say that cx and cy are normalized, clamped ratios in the [0,1] range
(not device-pixel positions); reference the sampleCursorPoint push behavior and
keep timeMs described as the offset from the recording's start so callers of
CursorTelemetrySample / sampleCursorPoint know coordinates are normalized
ratios.
📒 Files selected for processing (2)
- src/lib/cursorTelemetryBuffer.test.ts
- src/lib/cursorTelemetryBuffer.ts
✅ Files skipped from review due to trivial changes (1)
- src/lib/cursorTelemetryBuffer.test.ts
```ts
 * Flow: `startSession()` → `push(point)` N times → `endSession()` enqueues
 * the collected samples as a completed batch. The main process later
 * drains batches in FIFO order via `takeNextBatch()` to persist them to
 * disk, and can `prependBatch()` on write failure to retry without losing
 * order.
```
Key pending batches by recording id, not FIFO order.
This still gets kinda cursed if saves complete out of order: storeRecordedSessionFiles() writes the oldest takeNextBatch() result to whichever video is currently being stored. So A/B recordings can swap .cursor.json data if B is persisted before A. Same identity problem applies to discardLatestPending().
Prefer threading a recording/session id through startSession()/endSession() and exposing targeted APIs like takeBatch(recordingId) / discardBatch(recordingId).
shape of the safer API

```diff
 export interface CursorTelemetryBuffer {
-  startSession(): void;
+  startSession(recordingId: string): void;
-  takeNextBatch(): CursorTelemetryPoint[];
+  takeBatch(recordingId: string): CursorTelemetryPoint[];
-  prependBatch(batch: CursorTelemetryPoint[]): void;
+  prependBatch(recordingId: string, batch: CursorTelemetryPoint[]): void;
-  discardLatestPending(): void;
+  discardPending(recordingId: string): void;
 }

-let active: CursorTelemetryPoint[] = [];
-let pending: CursorTelemetryPoint[][] = [];
+let activeRecordingId: string | null = null;
+let active: CursorTelemetryPoint[] = [];
+let pending: Array<{ recordingId: string; samples: CursorTelemetryPoint[] }> = [];

-startSession() {
+startSession(recordingId) {
+  activeRecordingId = recordingId;
   active = [];
 },
 endSession() {
   let dropped = 0;
-  if (active.length > 0) {
-    pending.push(active);
+  if (active.length > 0 && activeRecordingId !== null) {
+    pending.push({ recordingId: activeRecordingId, samples: active });
     while (pending.length > maxPending) {
       pending.shift();
       dropped++;
     }
   }
+  activeRecordingId = null;
   active = [];
   // ...
 },
-takeNextBatch() {
-  return pending.shift() ?? [];
+takeBatch(recordingId) {
+  const index = pending.findIndex((batch) => batch.recordingId === recordingId);
+  if (index === -1) return [];
+  return pending.splice(index, 1)[0].samples;
 },
```

Also applies to: 55-79, 139-158
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/cursorTelemetryBuffer.ts` around lines 16 - 20, Current
implementation treats pending batches as a FIFO queue and uses
takeNextBatch()/discardLatestPending(), which causes mis-association when saves
complete out-of-order; modify startSession()/endSession() to accept and
propagate a recording/session id, store pending batches keyed by that id
(instead of pure FIFO), and add targeted APIs takeBatch(recordingId) and
discardBatch(recordingId) (and adjust prependBatch to accept recordingId) so
callers can read/discard/retry batches for a specific recording without risking
cross-association; update usages of takeNextBatch(), discardLatestPending(),
startSession(), and endSession() accordingly (e.g., in
storeRecordedSessionFiles()) to pass the recording id.
Two follow-up fixes for CodeRabbit feedback on the docs commit:

- CursorTelemetryPoint JSDoc previously described cx/cy as 'device-pixel positions'. The producer sampleCursorPoint() in electron/ipc/handlers.ts clamps them to the [0, 1] range after dividing by the source display's width/height, so they are normalised ratios, not pixel values. Correct the doc comment accordingly.
- createCursorTelemetryBuffer now sanitizes maxActiveSamples and maxPendingBatches: non-finite, zero, or negative values fall back to safe positive-integer defaults. Without this, a caller passing Infinity or NaN would hang the trim loops.

New test covers the sanitisation path for both options.
I've pushed
On the third point (FIFO → recording-id keyed queue): I'm leaving this out of this PR. The PR's stated scope is preserving per-session batches through a FIFO buffer, and the call site in
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/lib/cursorTelemetryBuffer.test.ts (1)
4-5: nit: keep fixture coordinates normalized
sample(101) gives cx: 10.1, which is outside the normalized-ratio shape these cursor points represent. Lowkey cleaner to decouple the timestamp tag from the coordinate fixture so tests don’t accidentally bless impossible telemetry.

🧹 Proposed fixture tweak

```diff
 function sample(tag: number): CursorTelemetryPoint {
-  return { timeMs: tag, cx: tag / 10, cy: tag / 10 };
+  const normalized = (tag % 10) / 10;
+  return { timeMs: tag, cx: normalized, cy: normalized };
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/cursorTelemetryBuffer.test.ts` around lines 4 - 5, The test fixture function sample(tag: number): CursorTelemetryPoint currently derives cx/cy directly from tag (e.g., tag 101 -> cx: 10.1), producing coordinates outside the normalized [0,1] range; update sample to decouple timestamp from coordinates by leaving timeMs as tag but computing normalized coordinates (e.g., cx and cy = (tag % 100) / 100 or tag / 1000 to guarantee values within 0..1) so cursor points remain valid normalized ratios.
📒 Files selected for processing (2)
- src/lib/cursorTelemetryBuffer.test.ts
- src/lib/cursorTelemetryBuffer.ts
✅ Files skipped from review due to trivial changes (1)
- src/lib/cursorTelemetryBuffer.ts
```diff
@@ -0,0 +1,230 @@
+import { describe, expect, it, vi } from "vitest";
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Description: Inspect test config for automatic Vitest mock restoration.
fd -i 'vitest.config.*|vite.config.*|package.json' -t f \
  --exec sh -c '
    echo "### $1"
    rg -n -C2 "\b(restoreMocks|mockReset|clearMocks)\b" "$1" || true
  ' sh {}
```

Repository: siddharthvaddem/openscreen
Length of output: 132
🏁 Script executed:
cat vitest.config.ts

Repository: siddharthvaddem/openscreen
Length of output: 375
🏁 Script executed:
head -50 src/lib/cursorTelemetryBuffer.test.ts

Repository: siddharthvaddem/openscreen
Length of output: 1661
🏁 Script executed:
sed -n '143,190p' src/lib/cursorTelemetryBuffer.test.ts

Repository: siddharthvaddem/openscreen
Length of output: 1919
Spy restoration can leak if an assertion bails early
Both warning tests restore console.warn only at the happy path. Since vitest.config.ts doesn't set restoreMocks: true, a failed assertion before mockRestore() leaves the spy active for later tests—lowkey risky. Add afterEach(() => vi.restoreAllMocks()) to handle it no matter what.
🧹 Proposed cleanup
```diff
-import { describe, expect, it, vi } from "vitest";
+import { afterEach, describe, expect, it, vi } from "vitest";
 import { type CursorTelemetryPoint, createCursorTelemetryBuffer } from "./cursorTelemetryBuffer";
@@
 describe("createCursorTelemetryBuffer", () => {
+  afterEach(() => {
+    vi.restoreAllMocks();
+  });
+
@@
-  warn.mockRestore();
 });
@@
-  warn.mockRestore();
 });
```

Also applies to: 8-8, 143-190
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/cursorTelemetryBuffer.test.ts` at line 1, The warning tests in
cursorTelemetryBuffer.test.ts create spies on console.warn but only call
mockRestore() on the happy path, which can leak spies if an assertion fails; add
a global cleanup by inserting afterEach(() => vi.restoreAllMocks()) in the test
file (placed alongside the existing import line or at the top-level of the test
file) so any console.warn spies (and other mocks) created by the tests are
always restored regardless of assertion failures.
```ts
it("preserves earlier pending batches when a new session starts before store", () => {
  const buf = createCursorTelemetryBuffer({ maxActiveSamples: 10 });

  // Recording 1
  buf.startSession();
  buf.push(sample(101));
  buf.push(sample(102));
  buf.endSession();

  // Recording 2 starts before recording 1's batch has been consumed
  buf.startSession();
  buf.push(sample(201));
  buf.endSession();

  const batch1 = buf.takeNextBatch();
  const batch2 = buf.takeNextBatch();
  expect(batch1.map((s) => s.timeMs)).toEqual([101, 102]);
  expect(batch2.map((s) => s.timeMs)).toEqual([201]);
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Description: Inspect cursor telemetry buffer APIs/usages and tests for FIFO-only persistence.
fd -i 'cursorTelemetryBuffer.ts|cursorTelemetryBuffer.test.ts|handlers.ts|useScreenRecorder.ts' -t f \
  --exec sh -c '
    echo "### $1"
    rg -n -C3 "\b(takeNextBatch|takeBatch|takeBatchForRecordingId|recordingId|storeRecordedSessionFiles|createdAt|prependBatch|discardLatestPending)\b" "$1"
  ' sh {}
```

Repository: siddharthvaddem/openscreen
Length of output: 10732
Add out-of-order persistence regression once buffer API is keyed
This test covers rapid restart where batches drain in recording order. The actual failure mode—multiple recordings finalizing out of order—still isn't tested. takeNextBatch() is pure FIFO with no recording ID binding in handlers.ts (line 282), so if recording 2 saves before recording 1, it grabs recording 1's batch. Once you implement a keyed batch API (e.g. takeBatch(recordingId)), add a regression that fetches by ID in reverse order.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/cursorTelemetryBuffer.test.ts` around lines 32 - 49, Add a regression
test that simulates multiple recordings finalizing out of order by switching the
buffer API from FIFO-only takeNextBatch() to a keyed takeBatch(recordingId) and
asserting batches are fetched by recording ID even if saved out-of-order: modify
the existing test that uses createCursorTelemetryBuffer and
startSession/push/endSession to create two recordings, simulate saving recording
2 before recording 1, then call takeBatch(recordingId) for each recording ID in
reverse save order and assert the returned batches match the correct timeMs
sequences for recording IDs 1 and 2 (replace uses of takeNextBatch() with
takeBatch(recordingId) in the test and add recordingId tracking to sessions).
Thanks for pushing 96765e4, the JSDoc and input validation fixes look good. On the FIFO/recordingId issue: you're right that out-of-order saves aren't reachable with a single serialised recorder, so I'll withdraw that concern. However,
This doesn't require out-of-order saves or IPC surface changes, just key each batch by
discardLatestPending() popped whichever batch happened to be at the
back of the queue. With a Stop → Record → Discard sequence, the
pending queue can have recording B's batch sitting in front of A's by
the time A's finalize callback resolves (because finalizeRecording
awaits fixWebmDuration), so the discard targets the wrong recording.
Tag each completed batch with the recording id supplied at
startSession() time and replace discardLatestPending() with
discardBatch(recordingId). takeNextBatch() now returns the full
{recordingId, samples} shape so prependBatch() can re-queue it on
write-failure without losing the id. The renderer already owns a
stable recordingId (Date.now() in useScreenRecorder) and the IPC
surface threads it through set-recording-state and
discard-cursor-telemetry.
Adds a regression test that mirrors FabLrc's scenario in PR siddharthvaddem#457:
two recordings finalize, A is discarded after B has already been
queued, and the buffer must drop A while keeping B intact.
@FabLrc You're right — Pushed 3b9b419 to fix this without churning the IPC surface much:
Added a regression test mirroring the Stop → Record → Discard sequence: two recordings finalize, A is discarded after B is already queued, and the buffer drops A while leaving B intact.
Actionable comments posted: 1
♻️ Duplicate comments (1)
electron/ipc/handlers.ts (1)
282-297: ⚠️ Potential issue | 🟠 Major

FIFO-only drain can still write telemetry to the wrong recording.

Line 282 always takes the oldest batch, not the batch for the session being stored. If saves finalize out of order, cursor data can be swapped between recordings. You’ll want ID-keyed retrieval (e.g., takeBatch(recordingId)) and to pass that ID into storeRecordedSessionFiles.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@electron/ipc/handlers.ts` around lines 282 - 297, The current logic uses cursorTelemetryBuffer.takeNextBatch() (FIFO) which can return the wrong session's telemetry when saves finalize out of order; change to an ID-keyed retrieval such as cursorTelemetryBuffer.takeBatch(recordingId) and ensure the returned batch includes its recordingId (e.g., pendingBatch.recordingId) so you pass that recordingId into storeRecordedSessionFiles; also update error handling to call cursorTelemetryBuffer.prependBatch(pendingBatch) unchanged but ensure prependBatch works with ID-keyed batches so the batch is re-inserted for the correct recording if writeFile throws.
🧹 Nitpick comments (1)
electron/ipc/handlers.ts (1)
561-563: nit: return discard result to the renderer for better observability.
discardBatch(recordingId) already returns boolean; returning it from this IPC handler would make discard failures visible instead of silent no-ops.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@electron/ipc/handlers.ts` around lines 561 - 563, The IPC handler "discard-cursor-telemetry" currently calls cursorTelemetryBuffer.discardBatch(recordingId) but ignores its boolean return; change the ipcMain.handle callback to return the boolean result from discardBatch so the renderer receives success/failure (i.e., replace the void call with a return of cursorTelemetryBuffer.discardBatch(recordingId) inside the "discard-cursor-telemetry" handler).
📒 Files selected for processing (6)
- electron/electron-env.d.ts
- electron/ipc/handlers.ts
- electron/preload.ts
- src/hooks/useScreenRecorder.ts
- src/lib/cursorTelemetryBuffer.test.ts
- src/lib/cursorTelemetryBuffer.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/hooks/useScreenRecorder.ts
- src/lib/cursorTelemetryBuffer.ts
```ts
const id = typeof recordingId === "number" ? recordingId : Date.now();
cursorTelemetryBuffer.startSession(id);
```
Harden recordingId validation before starting a session.
Line 545 accepts any JS number, including NaN/Infinity. That can produce non-matchable or unstable batch keys. Use a finite positive integer guard before startSession, else fallback to Date.now().
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@electron/ipc/handlers.ts` around lines 545 - 546, The code currently treats
any JS number as a valid recordingId before calling
cursorTelemetryBuffer.startSession; change the validation so that recordingId is
only accepted when it's a finite positive integer (use
Number.isFinite(recordingId) && Number.isInteger(recordingId) && recordingId >
0); otherwise compute id = Date.now() and then call
cursorTelemetryBuffer.startSession(id). Update the id assignment and use the
existing cursorTelemetryBuffer.startSession call (and any downstream logic that
relies on id) to ensure unstable values like NaN/Infinity are never passed.
Sync ui-rework with main, which includes:

- fix/vp8-vp9-codec-normalization (PR siddharthvaddem#501)
- fix/cursor-telemetry-session-isolation (PR siddharthvaddem#457)

No source conflicts — changes are in disjoint areas.
Description
The cursor telemetry pipeline in electron/ipc/handlers.ts kept two module-scope arrays — activeCursorSamples and pendingCursorSamples — and wiped both on every set-recording-state(true). If the user hit Stop → Record before store-recorded-session finished writing the previous recording's .cursor.json, the previous samples were lost or replaced with the next session's data, depending on which call landed first.

This PR replaces the two arrays with a small FIFO buffer (createCursorTelemetryBuffer) that keeps pending batches per completed recording, yields them in arrival order to storeRecordedSessionFiles, and caps the queue so a never-stored sequence cannot leak memory. The buffer lives in its own file (
src/lib/cursorTelemetryBuffer.ts) so it can be unit-tested without booting Electron.Motivation
Reproduces with a purely state-level script against the v1.3.0 logic:
The window is narrow (main-process FS write time) but fully reachable through the HUD's restart-recording path or a keyboard shortcut, and silent data corruption of the resulting
.cursor.jsonis the outcome.Type of Change
Related Issue(s)
Fixes #456
Testing
src/lib/cursorTelemetryBuffer.test.ts(8 cases) — covers:maxPendingBatchescaps the queue.startSession()clears in-progress samples but keeps pending batches.reset()wipes everything.npx vitest run src/lib/cursorTelemetryBuffer.test.ts→ 8/8 pass.npx vitest run→ 48/50 pass. The 2 failures are pre-existing*.browser.test.tscases that requirenpm run test:browser:install(Playwright chromium headless shell) and reproduce identically onmainwithout this change.npx tsc --noEmitis clean.Manual repro:
.cursor.json— before this PR it is empty or contains samples from recording 2; after this PR it contains exactly recording 1's samples.Checklist
Summary by CodeRabbit
New Features
Refactor
Tests