fix(s3s/http): prevent unbounded memory allocation in POST object (#370) #390
Conversation
Pull Request Overview
This PR implements a critical security fix to prevent unbounded memory allocation DoS attacks in POST object multipart/form-data handling. The fix follows MinIO's defense-in-depth approach by implementing multiple independent size limits at different layers, transforming the file upload mechanism from buffering to true streaming.
Key Changes:
- Added three safety constants: MAX_FORM_FIELD_SIZE (1 MB), MAX_FORM_FIELDS_SIZE (20 MB), and MAX_FORM_PARTS (1000)
- Implemented streaming file handling instead of buffering entire files in memory
- Extended request infrastructure to support both known-length (vec_stream) and unknown-length (file_stream) body streams
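The layered limits listed above can be sketched roughly as follows. This is an illustrative sketch only: the constant names follow the PR description, but the `FormLimits` type and its checking logic are assumptions, not the crate's actual code.

```rust
// Safety constants described in the PR (values from the PR description).
const MAX_FORM_FIELD_SIZE: usize = 1024 * 1024; // 1 MB per form field
const MAX_FORM_FIELDS_SIZE: usize = 20 * 1024 * 1024; // 20 MB across all fields
const MAX_FORM_PARTS: usize = 1000; // maximum number of multipart parts

/// Tracks cumulative form state while multipart parts are parsed.
struct FormLimits {
    total_fields_size: usize,
    parts_seen: usize,
}

impl FormLimits {
    fn new() -> Self {
        Self { total_fields_size: 0, parts_seen: 0 }
    }

    /// Returns false (reject the request) if any limit would be exceeded.
    fn accept_field(&mut self, field_len: usize) -> bool {
        self.parts_seen += 1;
        if self.parts_seen > MAX_FORM_PARTS {
            return false;
        }
        if field_len > MAX_FORM_FIELD_SIZE {
            return false;
        }
        self.total_fields_size += field_len;
        self.total_fields_size <= MAX_FORM_FIELDS_SIZE
    }
}

fn main() {
    let mut limits = FormLimits::new();
    assert!(limits.accept_field(512)); // small field: accepted
    assert!(!limits.accept_field(2 * 1024 * 1024)); // over the 1 MB per-field limit
}
```

Because each limit is checked independently, an attacker cannot bypass the protection by, say, sending many small fields (caught by the parts and total-size limits) or one huge field (caught by the per-field limit).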
Reviewed Changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| crates/s3s/src/http/multipart.rs | Implements defense-in-depth protections with buffer size checks, per-field limits, total size tracking, and parts count validation; adds comprehensive unit tests for all limit enforcements |
| crates/s3s/src/ops/mod.rs | Removes unsafe aggregate_unlimited() call that buffered entire files in memory; now passes file streams directly to backends for true streaming |
| crates/s3s/src/http/request.rs | Adds file_stream field to S3Extensions to support unknown-length streaming alongside existing vec_stream for known-length data |
| crates/s3s/src/ops/generated.rs | Updates PutObject multipart deserialization to gracefully handle both vec_stream (with content-length) and file_stream (without content-length) |
| crates/s3s/src/ops/generated_minio.rs | Mirrors the generated.rs changes for MinIO compatibility variant |
| codegen/src/v1/ops.rs | Updates code generation template to produce the dual-path stream handling logic for POST object operations |
| crates/s3s/src/service.rs | Adjusts future size limits by 100 bytes (~3%) to accommodate the new file_stream field; minimal overhead for preventing unbounded memory consumption |
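The dual-path body handling described for generated.rs can be pictured with a small sketch. The type and method names here are assumptions for illustration, not the crate's actual API; the point is that only the known-length path can report a Content-Length.

```rust
// Illustrative model of the two body paths: a known-length buffered stream
// (vec_stream) vs. an unknown-length file stream (file_stream).
enum PostObjectBody {
    /// Known length: buffered bytes, usable where Content-Length is required.
    VecStream(Vec<u8>),
    /// Unknown length: a lazily produced sequence of byte chunks.
    FileStream(Box<dyn Iterator<Item = Vec<u8>>>),
}

impl PostObjectBody {
    /// Content length is only available on the known-length path.
    fn content_length(&self) -> Option<usize> {
        match self {
            PostObjectBody::VecStream(v) => Some(v.len()),
            PostObjectBody::FileStream(_) => None,
        }
    }
}

fn main() {
    let known = PostObjectBody::VecStream(vec![0u8; 16]);
    let unknown = PostObjectBody::FileStream(Box::new(std::iter::once(vec![1, 2, 3])));
    assert_eq!(known.content_length(), Some(16));
    assert_eq!(unknown.content_length(), None);
}
```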
Force-pushed from dd7a107 to bba8334
Pull Request Overview
Copilot reviewed 8 out of 8 changed files in this pull request and generated no new comments.
Force-pushed from 1f2f7f8 to 186e522
Force-pushed from 186e522 to 870071f
Pull Request Overview
Copilot reviewed 5 out of 5 changed files in this pull request and generated 6 comments.
Force-pushed from 870071f to 9568125
Pull Request Overview
Copilot reviewed 5 out of 5 changed files in this pull request and generated 1 comment.
…s-project#370)

Add defense-in-depth limits to multipart/form-data parsing for POST object to prevent DoS attacks via oversized uploads:

- MAX_FORM_FIELD_SIZE: 1 MB per field (matching MinIO)
- MAX_FORM_FIELDS_SIZE: 20 MB total for all form fields
- MAX_FORM_PARTS: 1000 maximum parts
- MAX_POST_OBJECT_FILE_SIZE: 5 GB for file content (matching the S3 single-PUT limit)

Why buffering instead of streaming:

A pure streaming solution using FileStream was initially attempted, but it breaks compatibility with s3s-proxy and other consumers that use the AWS SDK. The AWS SDK requires a known Content-Length to compute SHA-256 checksums for SigV4 request signing. Since FileStream reports an unknown remaining length, the SDK throws UnsizedRequestBody errors.

MinIO can accept streaming POST uploads directly because its HTTP server parses multipart boundaries at the protocol level without needing the size upfront. However, when requests are proxied through s3s-proxy using the AWS SDK, the SDK's cryptographic signing algorithm requires the body size before transmission.

The buffering approach with size limits provides:

1. Protection against unbounded memory allocation (DoS mitigation)
2. Compatibility with AWS SDK-based consumers (s3s-proxy, etc.)
3. A known content length for downstream request signing

Fixes s3s-project#370

Signed-off-by: Kefu Chai <[email protected]>
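The buffering trade-off described in the commit message can be sketched as follows. The function name and chunked-iterator interface are hypothetical; the sketch only shows how capping the buffer at MAX_POST_OBJECT_FILE_SIZE yields a bounded allocation whose final length can serve as the Content-Length that SigV4 signing needs.

```rust
// 5 GB cap for the file part, matching the S3 single-PUT limit from the PR.
const MAX_POST_OBJECT_FILE_SIZE: u64 = 5 * 1024 * 1024 * 1024;

/// Buffer the file part chunk by chunk, rejecting oversized uploads before
/// the buffer grows past the cap (hypothetical sketch, not the crate's code).
fn buffer_file_part(chunks: impl Iterator<Item = Vec<u8>>) -> Result<Vec<u8>, &'static str> {
    let mut buf = Vec::new();
    for chunk in chunks {
        if (buf.len() + chunk.len()) as u64 > MAX_POST_OBJECT_FILE_SIZE {
            return Err("EntityTooLarge"); // reject before allocating further
        }
        buf.extend_from_slice(&chunk);
    }
    Ok(buf) // buf.len() is now a known Content-Length for downstream signing
}

fn main() {
    let body = buffer_file_part([vec![1u8; 4], vec![2u8; 4]].into_iter()).unwrap();
    assert_eq!(body.len(), 8);
}
```

This is why the fix prevents unbounded allocation without breaking SDK-based consumers: memory use is bounded by the cap, yet the resulting body still has a concrete length.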
Force-pushed from 9568125 to c36c22a
We should support streaming ideally, even if it breaks stream consumers like
Nugine left a comment
LGTM