
feat: add file upload button with progress bar and improve terminal session limits #753

Open
yuezanhao wants to merge 1 commit into siteboon:main from yuezanhao:feat/upload-progress-terminal-enhancements

Conversation

yuezanhao commented May 8, 2026

Summary

  • Add upload button to file tree header with file picker dialog
  • Replace fetch-based upload with XMLHttpRequest for real-time progress tracking
  • Add progress bar UI in file tree during uploads
  • Increase file upload size limit from 50MB to 200MB
  • Extend PTY session timeout from 30min to 4 hours
  • Increase terminal buffer limit from 5000 to 100000 entries

Test plan

  • Drag and drop files into file tree works as before
  • Upload button opens file picker and uploads selected files
  • Progress bar displays during upload and disappears on completion
  • Files larger than 50MB (up to 200MB) upload successfully
  • Terminal sessions remain active for up to 4 hours
  • Long terminal output does not get truncated prematurely

🤖 Generated with Claude Code

Summary by CodeRabbit

Release Notes

  • New Features

    • Added file upload functionality with real-time progress tracking in the file tree interface.
    • Added upload button to file tree header with visual progress indicator.
  • Improvements

    • Increased file upload size limit from 50MB to 200MB for larger file transfers.
    • Increased request body size limits to 200MB for better payload support.
    • Extended terminal session timeout to 240 minutes for improved session stability.
    • Enhanced terminal output buffering to retain more historical data for reconnection support.

feat: add file upload button with progress bar and improve terminal session limits

- Add upload button to file tree header with file picker dialog
- Replace fetch-based upload with XMLHttpRequest for real-time progress tracking
- Add progress bar UI in file tree during uploads
- Increase file upload size limit from 50MB to 200MB
- Extend PTY session timeout from 30min to 4 hours
- Increase terminal buffer limit from 5000 to 100000 entries

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

coderabbitai Bot commented May 8, 2026


📝 Walkthrough

This PR increases server request/upload size limits from 50MB to 200MB, extends PTY session management timeouts and buffering, and implements client-side file upload with real-time progress reporting via XMLHttpRequest. Files are uploaded through a new hook abstraction that coordinates progress, loading states, and integration with tree and header components.

Changes

Server Upload Capacity

| Layer / File(s) | Summary |
| --- | --- |
| Request Body & Upload Limits (server/index.js) | Express JSON/urlencoded request body limits and multer multipart file upload size limits are all increased to 200MB; the error message for LIMIT_FILE_SIZE is updated to reflect the new maximum. |

Terminal Session Management

| Layer / File(s) | Summary |
| --- | --- |
| PTY Session & Buffer Configuration (server/modules/websocket/services/shell-websocket.service.ts) | PTY session idle timeout increases from 30 to 240 minutes; the per-session output buffer threshold rises from 5,000 to 100,000 chunks to retain more reconnection history. |

Client-side File Upload with Progress

| Layer / File(s) | Summary |
| --- | --- |
| Upload Progress Helper Utility (src/components/file-tree/hooks/useFileTreeUpload.ts) | New uploadFilesWithProgress function wraps XMLHttpRequest uploads with progress callbacks and a Bearer token authorization header, and returns the response status and parsed JSON (sketched below). |
| Upload Hook State & Core Logic (src/components/file-tree/hooks/useFileTreeUpload.ts) | Hook introduces uploadProgress state and a performUpload function to validate the project, build FormData, delegate to uploadFilesWithProgress, and handle success/error with cleanup. |
| Drag-Drop & File Selection Handlers (src/components/file-tree/hooks/useFileTreeUpload.ts) | Drag-drop handler is refactored to call the centralized performUpload; a new handleFileSelect callback enables file-input uploads; the hook exports both handlers and uploadProgress. |
| File Input & Upload Button UI (src/components/file-tree/view/FileTreeHeader.tsx) | Component imports the Upload icon, adds a hidden multi-file input, and renders a conditional upload button that triggers file selection and calls the onUpload callback with the selected files. |
| Tree Component Wiring & Progress Display (src/components/file-tree/view/FileTree.tsx) | Passes upload.handleFileSelect to the header as the onUpload prop; combines upload and operation loading states; renders a conditional progress-bar overlay showing the percentage while an upload is in progress. |
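
As referenced in the first row above, here is a minimal sketch of what an XHR-based uploadFilesWithProgress can look like, matching the signature implied by the performUpload call site quoted later in this review; the actual implementation in the PR may differ:

```typescript
// Hedged sketch; shape inferred from the call site, not from the PR diff.
interface UploadResponse {
  ok: boolean;
  status: number;
  json: () => Promise<unknown>;
}

function uploadFilesWithProgress(
  url: string,
  formData: FormData,
  token: string | null,
  onProgress: (percent: number) => void,
): Promise<UploadResponse> {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open('POST', url);
    if (token) {
      xhr.setRequestHeader('Authorization', `Bearer ${token}`);
    }
    // fetch exposes no upload progress; XHR fires progress events as
    // bytes leave the client, which is what drives the progress bar.
    xhr.upload.addEventListener('progress', (e) => {
      if (e.lengthComputable) {
        onProgress(Math.round((e.loaded / e.total) * 100));
      }
    });
    xhr.addEventListener('load', () =>
      resolve({
        ok: xhr.status >= 200 && xhr.status < 300,
        status: xhr.status,
        json: async () => JSON.parse(xhr.responseText) as unknown,
      }),
    );
    xhr.addEventListener('error', () => reject(new Error('Network error during upload')));
    xhr.send(formData);
  });
}
```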


Poem

🐰 Hop along, dear files so grand,
Now 200 megs across the land!
Progress bars hop with each byte sent,
Terminal sessions, long time spent.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 33.33%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (4 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately summarizes the main changes: adding a file upload button with progress bar and improving terminal session limits, which aligns with the substantive modifications across multiple files. |
| Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |


coderabbitai Bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
server/modules/websocket/services/shell-websocket.service.ts (1)

292-297: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Array.shift() on a 100,000-element array is O(n) — major hot-path performance regression.

onData fires for every chunk written by the PTY (potentially thousands of times per second during high-throughput output). Once the buffer is full, every chunk causes a full O(n) re-index of a 100,000-element array. At the previous cap of 5,000 this was tolerable; at 100,000 it will cause measurable CPU spikes and event-loop stalls during sustained terminal output (e.g., cat on a large file, npm install, build output).

Consider one of:

  1. Batch-trim — trim in bulk (O(n) but runs rarely):
⚡ Proposed fix: batch-trim approach
-          if (session.buffer.length < 100000) {
-            session.buffer.push(chunk);
-          } else {
-            session.buffer.shift();
-            session.buffer.push(chunk);
-          }
+          session.buffer.push(chunk);
+          if (session.buffer.length > 100000) {
+            // Trim in bulk to amortize the O(n) slice cost
+            session.buffer = session.buffer.slice(session.buffer.length - 90000);
+          }
  2. Ring-buffer — O(1) writes; requires adding a bufferHead: number field to PtySessionEntry and reading it out in order on reconnect (see the sketch after this list).

  3. Reduce the cap — 100,000 chunks is unlikely to be useful on reconnect (see memory note below), and 10,000–20,000 is typically sufficient for terminal scroll-back.
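
For option 2, a minimal ring-buffer sketch in TypeScript; the class shape, field names, and capacity handling are illustrative rather than taken from the PR:

```typescript
// A minimal ring buffer for PTY output chunks; names are illustrative.
class ChunkRingBuffer {
  private items: (string | undefined)[];
  private head = 0;  // index of the oldest chunk
  private count = 0; // number of chunks currently stored

  constructor(private readonly capacity: number) {
    this.items = new Array(capacity);
  }

  // O(1) append; overwrites the oldest chunk once the buffer is full
  push(chunk: string): void {
    if (this.count < this.capacity) {
      this.items[(this.head + this.count) % this.capacity] = chunk;
      this.count++;
    } else {
      this.items[this.head] = chunk;
      this.head = (this.head + 1) % this.capacity;
    }
  }

  // Oldest-first snapshot, e.g. for replay on reconnect
  toArray(): string[] {
    const out: string[] = [];
    for (let i = 0; i < this.count; i++) {
      out.push(this.items[(this.head + i) % this.capacity] as string);
    }
    return out;
  }
}
```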

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@server/modules/websocket/services/shell-websocket.service.ts` around lines
292 - 297, The current onData handler uses session.buffer.shift() which becomes
O(n) at the 100,000 cap and will stall the event loop; change to an O(1)
ring-buffer by adding a bufferHead: number to the PtySessionEntry, allocate
session.buffer as a fixed-capacity array, and on each write store the chunk at
index (bufferHead + session.length) % capacity (increment length up to
capacity), and when full advance bufferHead = (bufferHead + 1) % capacity
instead of calling shift(); update any logic that reads the buffer on reconnect
to iterate starting at bufferHead for session.length entries in order.
server/index.js (1)

127-138: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Keep global JSON/urlencoded limits low; 200MB here widens DoS risk.

Line 127 and Line 138 now allow very large in-memory request bodies for all JSON/form routes. Multipart upload already bypasses this parser and is handled by multer, so this broad increase adds risk without helping the upload path.

Suggested adjustment
 app.use(express.json({
-    limit: '200mb',
+    limit: '10mb',
     type: (req) => {
       const contentType = req.headers['content-type'] || '';
       if (contentType.includes('multipart/form-data')) {
         return false;
       }
       return contentType.includes('json');
     }
 }));
-app.use(express.urlencoded({ limit: '200mb', extended: true }));
+app.use(express.urlencoded({ limit: '10mb', extended: true }));
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@server/index.js` around lines 127 - 138, The global body-parser usage
(app.use(express.json(...)) and app.use(express.urlencoded(...))) sets an
excessive 200mb limit; reduce the default global limits to a small safe value
(e.g. 1mb or another org-approved small limit) and remove the broad 200mb
setting, then apply higher limits only on specific routes or handlers that truly
need large JSON/urlencoded bodies (or continue using multer for multipart
uploads); update the express.json and express.urlencoded configurations in
server/index.js accordingly and ensure the multipart skip logic for express.json
(contentType.includes('multipart/form-data')) remains intact.
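
A route-scoped override along the lines the prompt suggests, keeping the global parser small; the '/api/bulk-import' path is hypothetical, since this review does not identify any route that actually needs a large JSON body:

```typescript
import express from 'express';

const app = express();

// Hypothetical route that genuinely needs a large JSON payload;
// '/api/bulk-import' is illustrative, not a route from this codebase.
const LARGE_BODY_ROUTES = new Set(['/api/bulk-import']);

const smallJson = express.json({ limit: '1mb' });

// Small global default. Routes with their own parser are skipped here,
// otherwise the global 1mb limit would reject their bodies first.
app.use((req, res, next) => {
  if (LARGE_BODY_ROUTES.has(req.path)) return next();
  smallJson(req, res, next);
});

app.post('/api/bulk-import', express.json({ limit: '200mb' }), (req, res) => {
  res.json({ received: true });
});
```
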
🧹 Nitpick comments (1)
server/modules/websocket/services/shell-websocket.service.ts (1)

33-33: Combined effect of 240-minute timeout and 100,000-chunk buffer creates a large per-session memory footprint.

Each idle session holds up to ~100,000 string chunks in memory for up to 4 hours. At a conservative average of 100 bytes per chunk, that is ~10 MB per session. With a handful of concurrent users, abandoned sessions can accumulate tens to hundreds of MB before the timeout fires.

Additionally, on reconnect (lines 221–229) all buffered chunks are replayed synchronously in a forEach. Sending 100,000 WebSocket frames in a tight loop without back-pressure can flood the client and block the Node.js event loop for a noticeable period.

Consider whether 100,000 chunks (vs. the previous 5,000) is practically necessary for the reconnect UX, or whether a more modest cap (e.g., 10,000–20,000) achieves the same scrollback goal with far lower memory cost per session.

Also applies to: 292-292
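
On the replay side, a minimal sketch of the batch-plus-yield approach, assuming a ws-style socket and a plain string[] buffer (function name and batch size are illustrative):

```typescript
import WebSocket from 'ws';

// Illustrative batch size; tune against real output volumes.
const REPLAY_BATCH_SIZE = 500;

// Streams buffered chunks oldest-first, yielding to the event loop
// between batches so a sustained replay cannot starve other I/O.
function replayBuffer(ws: WebSocket, buffer: string[], start = 0): void {
  if (ws.readyState !== WebSocket.OPEN) return; // client went away mid-replay
  const end = Math.min(start + REPLAY_BATCH_SIZE, buffer.length);
  for (let i = start; i < end; i++) {
    ws.send(buffer[i]);
  }
  if (end < buffer.length) {
    setImmediate(() => replayBuffer(ws, buffer, end));
  }
}
```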

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@server/modules/websocket/services/shell-websocket.service.ts` at line 33, The
current per-session memory and replay strategy is too heavy: reduce the PTY
session idle timeout (symbol PTY_SESSION_TIMEOUT) and cap the buffered chunk
count to a much smaller value (e.g., 10,000–20,000 instead of 100,000) wherever
the buffer constant is defined/managed, and drop oldest chunks when the cap is
exceeded; also change the reconnect replay logic (the code referenced at lines
221–229 that does a synchronous buffer.forEach replay) to stream the buffered
chunks in controlled batches with back-pressure (e.g., batch size +
setImmediate/nextTick or await a short pause between batches and check
ws.readyState) so sending doesn’t block the event loop or flood the client.
Ensure the names PTY_SESSION_TIMEOUT and the reconnect replay handler/buffer
variable are updated accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@server/index.js`:
- Around line 884-886: The current upload config sets fileSize: 200 * 1024 *
1024 and files: 20 which allows ~4GB of temporary data in os.tmpdir() per
request and risks disk exhaustion; reduce per-file and per-request limits and
add a total-request-size guard: lower fileSize to a safer value (e.g. 10–50MB)
and files to a reasonable concurrent count (e.g. 5), and implement a check that
sums incoming file sizes (or use a library option like maxTotalFileSize if
available) and immediately reject requests when the aggregate exceeds the
threshold; also consider streaming uploads off tmp (use file write stream
handlers or push directly to object storage) and ensure cleanup on errors.
Reference the fileSize and files options and os.tmpdir() in your changes.

In `@src/components/file-tree/hooks/useFileTreeUpload.ts`:
- Around line 109-163: performUpload can run concurrently and clobber shared
state; add a simple guard (preferably an isUploadingRef via
useRef<boolean>(false) or check an existing operationLoading state) at the top
of performUpload to early-return if an upload is in progress, set the guard true
immediately before starting the upload, and clear it in the finally block
alongside the existing
setOperationLoading(false)/setUploadProgress(0)/setDropTarget(null); ensure the
guard is referenced in performUpload (and included in its dependency list if you
use a ref or memoized callback) so overlapping drag/drop or repeated calls
cannot start a second upload while one is active.

---

Outside diff comments:
In `@server/index.js`:
- Around line 127-138: The global body-parser usage (app.use(express.json(...))
and app.use(express.urlencoded(...))) sets an excessive 200mb limit; reduce the
default global limits to a small safe value (e.g. 1mb or another org-approved
small limit) and remove the broad 200mb setting, then apply higher limits only
on specific routes or handlers that truly need large JSON/urlencoded bodies (or
continue using multer for multipart uploads); update the express.json and
express.urlencoded configurations in server/index.js accordingly and ensure the
multipart skip logic for express.json
(contentType.includes('multipart/form-data')) remains intact.

In `@server/modules/websocket/services/shell-websocket.service.ts`:
- Around line 292-297: The current onData handler uses session.buffer.shift()
which becomes O(n) at the 100,000 cap and will stall the event loop; change to
an O(1) ring-buffer by adding a bufferHead: number to the PtySessionEntry,
allocate session.buffer as a fixed-capacity array, and on each write store the
chunk at index (bufferHead + session.length) % capacity (increment length up to
capacity), and when full advance bufferHead = (bufferHead + 1) % capacity
instead of calling shift(); update any logic that reads the buffer on reconnect
to iterate starting at bufferHead for session.length entries in order.

---

Nitpick comments:
In `@server/modules/websocket/services/shell-websocket.service.ts`:
- Line 33: The current per-session memory and replay strategy is too heavy:
reduce the PTY session idle timeout (symbol PTY_SESSION_TIMEOUT) and cap the
buffered chunk count to a much smaller value (e.g., 10,000–20,000 instead of
100,000) wherever the buffer constant is defined/managed, and drop oldest chunks
when the cap is exceeded; also change the reconnect replay logic (the code
referenced at lines 221–229 that does a synchronous buffer.forEach replay) to
stream the buffered chunks in controlled batches with back-pressure (e.g., batch
size + setImmediate/nextTick or await a short pause between batches and check
ws.readyState) so sending doesn’t block the event loop or flood the client.
Ensure the names PTY_SESSION_TIMEOUT and the reconnect replay handler/buffer
variable are updated accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: ff126261-0757-45ce-96e2-35b47c541912

📥 Commits

Reviewing files that changed from the base of the PR and between beb0a50 and e20145f.

📒 Files selected for processing (5)
  • server/index.js
  • server/modules/websocket/services/shell-websocket.service.ts
  • src/components/file-tree/hooks/useFileTreeUpload.ts
  • src/components/file-tree/view/FileTree.tsx
  • src/components/file-tree/view/FileTreeHeader.tsx

Comment thread: server/index.js
Comment on lines +884 to 886
fileSize: 200 * 1024 * 1024, // 200MB limit
files: 20 // Max 20 files at once
}


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

200MB × 20 files allows multi-GB requests and temp-disk exhaustion.

Line 884 together with Line 885 now permits up to ~4GB per request in os.tmpdir(). That is a high outage risk under concurrent uploads.

Suggested adjustment
-        limits: {
-            fileSize: 200 * 1024 * 1024, // 200MB limit
-            files: 20 // Max 20 files at once
-        }
+        limits: {
+            fileSize: 200 * 1024 * 1024, // 200MB per file
+            files: 5 // Keep worst-case request size bounded
+        }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@server/index.js` around lines 884 - 886, The current upload config sets
fileSize: 200 * 1024 * 1024 and files: 20 which allows ~4GB of temporary data in
os.tmpdir() per request and risks disk exhaustion; reduce per-file and
per-request limits and add a total-request-size guard: lower fileSize to a safer
value (e.g. 10–50MB) and files to a reasonable concurrent count (e.g. 5), and
implement a check that sums incoming file sizes (or use a library option like
maxTotalFileSize if available) and immediately reject requests when the
aggregate exceeds the threshold; also consider streaming uploads off tmp (use
file write stream handlers or push directly to object storage) and ensure
cleanup on errors. Reference the fileSize and files options and os.tmpdir() in
your changes.
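
One shape the aggregate guard could take: since multer's documented limits are per-file and per-count only, the sketch below sums file sizes in a follow-up Express middleware; the name, threshold, and mounting point are illustrative:

```typescript
import type { Request, Response, NextFunction } from 'express';
import fs from 'fs/promises';

// Illustrative threshold; pick an org-approved value.
const MAX_TOTAL_BYTES = 500 * 1024 * 1024;

// Mount after multer's upload middleware. Note the files already sit in
// os.tmpdir() by the time this runs, so it bounds what proceeds past the
// route, not peak disk usage during the upload itself.
async function limitTotalUploadSize(req: Request, res: Response, next: NextFunction) {
  const files = (req.files ?? []) as Array<{ size: number; path: string }>;
  const total = files.reduce((sum, f) => sum + f.size, 0);
  if (total > MAX_TOTAL_BYTES) {
    // Clean up the temp files before rejecting the request
    await Promise.all(files.map((f) => fs.unlink(f.path).catch(() => {})));
    res.status(413).json({ error: 'Combined upload size exceeds limit' });
    return;
  }
  next();
}
```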

Comment on lines +109 to +163
const performUpload = useCallback(async (files: File[], targetPath: string) => {
  if (files.length === 0) {
    return;
  }

  if (!selectedProject) {
    showToast('No project selected', 'error');
    return;
  }

  setOperationLoading(true);
  setUploadProgress(0);

  try {
    const formData = new FormData();
    formData.append('targetPath', targetPath);

    const relativePaths: string[] = [];
    files.forEach((file) => {
      const cleanFile = new File([file], file.name.split('/').pop()!, {
        type: file.type,
        lastModified: file.lastModified,
      });
      formData.append('files', cleanFile);
      relativePaths.push(file.name);
    });

    formData.append('relativePaths', JSON.stringify(relativePaths));

    const token = localStorage.getItem('auth-token');
    const url = `/api/projects/${encodeURIComponent(selectedProject.projectId)}/files/upload`;

    const response = await uploadFilesWithProgress(
      url,
      formData,
      token,
      (percent) => setUploadProgress(percent),
    );

    if (!response.ok) {
      const data = (await response.json()) as { error?: string };
      throw new Error(data.error || 'Upload failed');
    }

    showToast(`Uploaded ${files.length} file(s)`, 'success');
    onRefresh();
  } catch (err) {
    console.error('Upload error:', err);
    showToast(err instanceof Error ? err.message : 'Upload failed', 'error');
  } finally {
    setOperationLoading(false);
    setUploadProgress(0);
    setDropTarget(null);
  }
}, [selectedProject, onRefresh, showToast]);


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Guard performUpload against concurrent execution.

performUpload can be entered again (e.g., drag/drop during an active upload), which races shared loading/progress state and can trigger overlapping uploads.

Suggested adjustment
   const [operationLoading, setOperationLoading] = useState(false);
   const [uploadProgress, setUploadProgress] = useState(0);
+  const uploadInFlightRef = useRef(false);
   const treeRef = useRef<HTMLDivElement>(null);

   const performUpload = useCallback(async (files: File[], targetPath: string) => {
+    if (uploadInFlightRef.current) {
+      showToast('Upload already in progress', 'error');
+      return;
+    }
     if (files.length === 0) {
       return;
     }

     if (!selectedProject) {
       showToast('No project selected', 'error');
       return;
     }

+    uploadInFlightRef.current = true;
     setOperationLoading(true);
     setUploadProgress(0);

     try {
       // ...
     } finally {
       setOperationLoading(false);
       setUploadProgress(0);
       setDropTarget(null);
+      uploadInFlightRef.current = false;
     }
   }, [selectedProject, onRefresh, showToast]);
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/components/file-tree/hooks/useFileTreeUpload.ts` around lines 109 - 163,
performUpload can run concurrently and clobber shared state; add a simple guard
(preferably an isUploadingRef via useRef<boolean>(false) or check an existing
operationLoading state) at the top of performUpload to early-return if an upload
is in progress, set the guard true immediately before starting the upload, and
clear it in the finally block alongside the existing
setOperationLoading(false)/setUploadProgress(0)/setDropTarget(null); ensure the
guard is referenced in performUpload (and included in its dependency list if you
use a ref or memoized callback) so overlapping drag/drop or repeated calls
cannot start a second upload while one is active.
