feat: add file upload button with progress bar and improve terminal session limits (#753)
…ession limits

- Add upload button to file tree header with file picker dialog
- Replace fetch-based upload with XMLHttpRequest for real-time progress tracking
- Add progress bar UI in file tree during uploads
- Increase file upload size limit from 50MB to 200MB
- Extend PTY session timeout from 30min to 4 hours
- Increase terminal buffer limit from 5000 to 100000 entries

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough

This PR increases server request/upload size limits from 50MB to 200MB, extends PTY session management timeouts and buffering, and implements client-side file upload with real-time progress reporting via XMLHttpRequest. Files are uploaded through a new hook abstraction that coordinates progress, loading states, and integration with tree and header components.

Changes

- Server Upload Capacity
- Terminal Session Management
- Client-side File Upload with Progress
🚥 Pre-merge checks: 4 passed ✅, 1 failed ❌ (1 warning)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
server/modules/websocket/services/shell-websocket.service.ts (1)
292-297: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

`Array.shift()` on a 100,000-element array is O(n): a major hot-path performance regression.

`onData` fires for every chunk written by the PTY (potentially thousands of times per second during high-throughput output). Once the buffer is full, every chunk causes a full O(n) re-index of a 100,000-element array. At the previous cap of 5,000 this was tolerable; at 100,000 it will cause measurable CPU spikes and event-loop stalls during sustained terminal output (e.g., `cat` on a large file, `npm install`, build output). Consider one of:
- Batch-trim: trim in bulk (O(n), but runs rarely).

⚡ Proposed fix: batch-trim approach

```diff
-if (session.buffer.length < 100000) {
-  session.buffer.push(chunk);
-} else {
-  session.buffer.shift();
-  session.buffer.push(chunk);
-}
+session.buffer.push(chunk);
+if (session.buffer.length > 100000) {
+  // Trim in bulk to amortize the O(n) slice cost
+  session.buffer = session.buffer.slice(session.buffer.length - 90000);
+}
```
- Ring-buffer: O(1) writes; requires adding a `bufferHead: number` field to `PtySessionEntry` and reading it out in order on reconnect.
- Reduce the cap: 100,000 chunks is unlikely to be useful on reconnect (see memory note below), and 10,000–20,000 is typically sufficient for terminal scroll-back.
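The ring-buffer option can be sketched roughly as follows. The class and field names here are illustrative, not taken from the PR:

```typescript
// Hypothetical ring buffer for PTY output chunks. O(1) writes:
// once full, the oldest entry is overwritten instead of shifted.
class RingBuffer {
  private items: (string | undefined)[];
  private head = 0;   // index of the oldest stored entry
  private count = 0;  // number of stored entries

  constructor(private capacity: number) {
    this.items = new Array(capacity);
  }

  // O(1) write: append, dropping the oldest entry when at capacity.
  push(chunk: string): void {
    const tail = (this.head + this.count) % this.capacity;
    this.items[tail] = chunk;
    if (this.count < this.capacity) {
      this.count++;
    } else {
      this.head = (this.head + 1) % this.capacity; // oldest entry dropped
    }
  }

  // Read out in insertion order, e.g. to replay on reconnect.
  toArray(): string[] {
    const out: string[] = [];
    for (let i = 0; i < this.count; i++) {
      out.push(this.items[(this.head + i) % this.capacity] as string);
    }
    return out;
  }
}
```

Trading `Array.prototype.shift()` for index arithmetic keeps writes constant-time regardless of the cap, at the cost of slightly more involved read-out logic on reconnect.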
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/modules/websocket/services/shell-websocket.service.ts` around lines 292 - 297, The current onData handler uses session.buffer.shift() which becomes O(n) at the 100,000 cap and will stall the event loop; change to an O(1) ring-buffer by adding a bufferHead: number to the PtySessionEntry, allocate session.buffer as a fixed-capacity array, and on each write store the chunk at index (bufferHead + session.length) % capacity (increment length up to capacity), and when full advance bufferHead = (bufferHead + 1) % capacity instead of calling shift(); update any logic that reads the buffer on reconnect to iterate starting at bufferHead for session.length entries in order.

server/index.js (1)
127-138: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Keep global JSON/urlencoded limits low; 200MB here widens DoS risk.
Line 127 and Line 138 now allow very large in-memory request bodies for all JSON/form routes. Multipart upload already bypasses this parser and is handled by multer, so this broad increase adds risk without helping the upload path.
Suggested adjustment
```diff
 app.use(express.json({
-  limit: '200mb',
+  limit: '10mb',
   type: (req) => {
     const contentType = req.headers['content-type'] || '';
     if (contentType.includes('multipart/form-data')) {
       return false;
     }
     return contentType.includes('json');
   }
 }));
-app.use(express.urlencoded({ limit: '200mb', extended: true }));
+app.use(express.urlencoded({ limit: '10mb', extended: true }));
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/index.js` around lines 127 - 138, The global body-parser usage (app.use(express.json(...)) and app.use(express.urlencoded(...))) sets an excessive 200mb limit; reduce the default global limits to a small safe value (e.g. 1mb or another org-approved small limit) and remove the broad 200mb setting, then apply higher limits only on specific routes or handlers that truly need large JSON/urlencoded bodies (or continue using multer for multipart uploads); update the express.json and express.urlencoded configurations in server/index.js accordingly and ensure the multipart skip logic for express.json (contentType.includes('multipart/form-data')) remains intact.
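As a cheap complement to lowering the parser limits (not the fix the prompt describes), oversized bodies can also be rejected up front from the declared Content-Length, before any parsing happens. A minimal framework-agnostic sketch; the names and the 1 MB cap are illustrative assumptions:

```typescript
// Hypothetical pre-parse guard: reject a request whose declared
// Content-Length exceeds the route's limit. Names are illustrative.
const DEFAULT_LIMIT_BYTES = 1 * 1024 * 1024; // assumed 1 MB default

function exceedsBodyLimit(
  headers: Record<string, string | undefined>,
  limitBytes: number = DEFAULT_LIMIT_BYTES,
): boolean {
  const declared = Number(headers['content-length'] ?? 0);
  // A missing or malformed header is left for the body parser to handle.
  return Number.isFinite(declared) && declared > limitBytes;
}
```

Wired into middleware, this lets a route respond 413 immediately instead of buffering a large body first; a lying client is still caught later by the parser's own limit.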
🧹 Nitpick comments (1)
server/modules/websocket/services/shell-websocket.service.ts (1)
33-33: Combined effect of 240-minute timeout and 100,000-chunk buffer creates a large per-session memory footprint.

Each idle session holds up to ~100,000 string chunks in memory for up to 4 hours. At a conservative average of 100 bytes per chunk, that is ~10 MB per session. With a handful of concurrent users, abandoned sessions can accumulate tens to hundreds of MB before the timeout fires.

Additionally, on reconnect (lines 221–229) all buffered chunks are replayed synchronously in a `forEach`. Sending 100,000 WebSocket frames in a tight loop without back-pressure can flood the client and block the Node.js event loop for a noticeable period.

Consider whether 100,000 chunks (vs. the previous 5,000) is practically necessary for the reconnect UX, or whether a more modest cap (e.g., 10,000–20,000) achieves the same scrollback goal with far lower memory cost per session.
Also applies to: 292-292
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/modules/websocket/services/shell-websocket.service.ts` at line 33, The current per-session memory and replay strategy is too heavy: reduce the PTY session idle timeout (symbol PTY_SESSION_TIMEOUT) and cap the buffered chunk count to a much smaller value (e.g., 10,000–20,000 instead of 100,000) wherever the buffer constant is defined/managed, and drop oldest chunks when the cap is exceeded; also change the reconnect replay logic (the code referenced at lines 221–229 that does a synchronous buffer.forEach replay) to stream the buffered chunks in controlled batches with back-pressure (e.g., batch size + setImmediate/nextTick or await a short pause between batches and check ws.readyState) so sending doesn’t block the event loop or flood the client. Ensure the names PTY_SESSION_TIMEOUT and the reconnect replay handler/buffer variable are updated accordingly.
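The batched replay with back-pressure that this prompt describes can be sketched as follows. The batch size, function name, and the send callback are assumptions, not code from the PR:

```typescript
// Illustrative batched replay: send buffered chunks in fixed-size
// batches and yield to the event loop between batches, so one large
// replay cannot starve other connections.
const BATCH_SIZE = 500; // assumed batch size

async function replayBuffer(
  buffer: string[],
  sendChunk: (chunk: string) => void,
): Promise<void> {
  for (let i = 0; i < buffer.length; i += BATCH_SIZE) {
    for (const chunk of buffer.slice(i, i + BATCH_SIZE)) {
      sendChunk(chunk);
    }
    // Yield between batches; a real implementation would also check
    // ws.readyState and bufferedAmount here before continuing.
    await new Promise<void>((resolve) => setImmediate(resolve));
  }
}
```

Chunks still arrive in order; the only behavioral change is that the event loop can interleave other work between batches.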
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: ff126261-0757-45ce-96e2-35b47c541912
📒 Files selected for processing (5)

- server/index.js
- server/modules/websocket/services/shell-websocket.service.ts
- src/components/file-tree/hooks/useFileTreeUpload.ts
- src/components/file-tree/view/FileTree.tsx
- src/components/file-tree/view/FileTreeHeader.tsx
```js
  fileSize: 200 * 1024 * 1024, // 200MB limit
  files: 20 // Max 20 files at once
}
```
200MB × 20 files allows multi-GB requests and temp-disk exhaustion.
Lines 884–885 now permit up to ~4GB per request in `os.tmpdir()`. That is a high outage risk under concurrent uploads.
Suggested adjustment
```diff
- limits: {
-   fileSize: 200 * 1024 * 1024, // 200MB limit
-   files: 20 // Max 20 files at once
- }
+ limits: {
+   fileSize: 200 * 1024 * 1024, // 200MB per file
+   files: 5 // Keep worst-case request size bounded
+ }
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@server/index.js` around lines 884 - 886, The current upload config sets
fileSize: 200 * 1024 * 1024 and files: 20 which allows ~4GB of temporary data in
os.tmpdir() per request and risks disk exhaustion; reduce per-file and
per-request limits and add a total-request-size guard: lower fileSize to a safer
value (e.g. 10–50MB) and files to a reasonable concurrent count (e.g. 5), and
implement a check that sums incoming file sizes (or use a library option like
maxTotalFileSize if available) and immediately reject requests when the
aggregate exceeds the threshold; also consider streaming uploads off tmp (use
file write stream handlers or push directly to object storage) and ensure
cleanup on errors. Reference the fileSize and files options and os.tmpdir() in
your changes.
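The total-request-size guard described in this prompt reduces to a small pure check. Multer's `fileSize` limit applies per file, so an aggregate check typically has to live in application code; the cap and names below are illustrative assumptions:

```typescript
// Illustrative aggregate-size guard (the 500 MB cap is an assumption).
const MAX_TOTAL_UPLOAD_BYTES = 500 * 1024 * 1024;

interface UploadedFile {
  size: number; // bytes, as reported by the upload middleware
}

function withinAggregateLimit(
  files: UploadedFile[],
  cap: number = MAX_TOTAL_UPLOAD_BYTES,
): boolean {
  const total = files.reduce((sum, f) => sum + f.size, 0);
  return total <= cap;
}
```

A route handler would run this over `req.files` after multer finishes and respond 413 (and clean up temp files) when it fails; a streaming check during upload would catch the overage even earlier.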
In `src/components/file-tree/hooks/useFileTreeUpload.ts`:

```ts
const performUpload = useCallback(async (files: File[], targetPath: string) => {
  if (files.length === 0) {
    return;
  }

  if (!selectedProject) {
    showToast('No project selected', 'error');
    return;
  }

  setOperationLoading(true);
  setUploadProgress(0);

  try {
    const formData = new FormData();
    formData.append('targetPath', targetPath);

    const relativePaths: string[] = [];
    files.forEach((file) => {
      const cleanFile = new File([file], file.name.split('/').pop()!, {
        type: file.type,
        lastModified: file.lastModified,
      });
      formData.append('files', cleanFile);
      relativePaths.push(file.name);
    });

    formData.append('relativePaths', JSON.stringify(relativePaths));

    const token = localStorage.getItem('auth-token');
    const url = `/api/projects/${encodeURIComponent(selectedProject.projectId)}/files/upload`;

    const response = await uploadFilesWithProgress(
      url,
      formData,
      token,
      (percent) => setUploadProgress(percent),
    );

    if (!response.ok) {
      const data = (await response.json()) as { error?: string };
      throw new Error(data.error || 'Upload failed');
    }

    showToast(`Uploaded ${files.length} file(s)`, 'success');
    onRefresh();
  } catch (err) {
    console.error('Upload error:', err);
    showToast(err instanceof Error ? err.message : 'Upload failed', 'error');
  } finally {
    setOperationLoading(false);
    setUploadProgress(0);
    setDropTarget(null);
  }
}, [selectedProject, onRefresh, showToast]);
```
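The `uploadFilesWithProgress` helper called above is not part of this diff. Based on its call site, it is presumably a thin XMLHttpRequest wrapper (fetch exposes no upload-progress events). The sketch below is an assumption about its shape, not the PR's actual implementation:

```typescript
// Assumed shape of the helper, inferred from the call site above.
const toPercent = (loaded: number, total: number): number =>
  Math.round((loaded / total) * 100);

function uploadFilesWithProgress(
  url: string,
  formData: FormData,
  token: string | null,
  onProgress: (percent: number) => void,
): Promise<{ ok: boolean; json: () => Promise<unknown> }> {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open('POST', url);
    if (token) {
      xhr.setRequestHeader('Authorization', `Bearer ${token}`);
    }
    // XHR (unlike fetch) reports upload progress as the body streams out.
    xhr.upload.onprogress = (e) => {
      if (e.lengthComputable) {
        onProgress(toPercent(e.loaded, e.total));
      }
    };
    xhr.onload = () =>
      resolve({
        ok: xhr.status >= 200 && xhr.status < 300,
        json: () => Promise.resolve(JSON.parse(xhr.responseText)),
      });
    xhr.onerror = () => reject(new Error('Network error during upload'));
    xhr.send(formData);
  });
}
```

Returning a `{ ok, json }` shape keeps the call site fetch-like, so the surrounding error handling works unchanged.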
Guard performUpload against concurrent execution.
performUpload can be entered again (e.g., drag/drop during an active upload), which races shared loading/progress state and can trigger overlapping uploads.
Suggested adjustment
```diff
 const [operationLoading, setOperationLoading] = useState(false);
 const [uploadProgress, setUploadProgress] = useState(0);
+const uploadInFlightRef = useRef(false);
 const treeRef = useRef<HTMLDivElement>(null);

 const performUpload = useCallback(async (files: File[], targetPath: string) => {
+  if (uploadInFlightRef.current) {
+    showToast('Upload already in progress', 'error');
+    return;
+  }
   if (files.length === 0) {
     return;
   }
   if (!selectedProject) {
     showToast('No project selected', 'error');
     return;
   }
+  uploadInFlightRef.current = true;
   setOperationLoading(true);
   setUploadProgress(0);
   try {
     // ...
   } finally {
     setOperationLoading(false);
     setUploadProgress(0);
     setDropTarget(null);
+    uploadInFlightRef.current = false;
   }
 }, [selectedProject, onRefresh, showToast]);
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/components/file-tree/hooks/useFileTreeUpload.ts` around lines 109 - 163,
performUpload can run concurrently and clobber shared state; add a simple guard
(preferably an isUploadingRef via useRef<boolean>(false) or check an existing
operationLoading state) at the top of performUpload to early-return if an upload
is in progress, set the guard true immediately before starting the upload, and
clear it in the finally block alongside the existing
setOperationLoading(false)/setUploadProgress(0)/setDropTarget(null); ensure the
guard is referenced in performUpload (and included in its dependency list if you
use a ref or memoized callback) so overlapping drag/drop or repeated calls
cannot start a second upload while one is active.
🤖 Generated with Claude Code