From 67698a9291d0e3e49f724aeba12430bc2b7b3861 Mon Sep 17 00:00:00 2001 From: onionjake <113088+onionjake@users.noreply.github.com> Date: Fri, 29 Aug 2025 11:35:23 -0600 Subject: [PATCH 1/8] Start moving towards Diataxis Framework This is going to be multiple commits, doing the large sweeping updates almost entirely with AI first, then refining by hand later. --- CLAUDE.md | 264 +++++++ app/(docs)/dcs/getting-started/page.md | 90 ++- app/(docs)/dcs/how-to/_meta.json | 17 + app/(docs)/dcs/how-to/configure-cors.md | 183 +++++ app/(docs)/dcs/how-to/create-buckets.md | 125 ++++ app/(docs)/dcs/how-to/use-rclone.md | 134 ++++ app/(docs)/dcs/reference/_meta.json | 6 + app/(docs)/dcs/reference/cli-commands.md | 276 ++++++++ app/(docs)/dcs/reference/error-codes.md | 146 ++++ app/(docs)/dcs/reference/limits.md | 165 +++++ app/(docs)/dcs/reference/s3-api.md | 208 ++++++ app/(docs)/dcs/third-party-tools/page.md | 69 +- app/(docs)/node/how-to/_meta.json | 17 + .../node/how-to/change-payout-address.md | 230 +++++++ app/(docs)/node/how-to/migrate-node.md | 344 ++++++++++ .../node/how-to/troubleshoot-offline-node.md | 364 ++++++++++ app/(docs)/node/reference/_meta.json | 5 + app/(docs)/node/reference/configuration.md | 310 +++++++++ .../node/reference/dashboard-metrics.md | 275 ++++++++ .../node/reference/system-requirements.md | 288 ++++++++ app/(docs)/node/tutorials/_meta.json | 9 + app/(docs)/node/tutorials/setup-first-node.md | 649 ++++++++++++++++++ .../concepts/object-mount-vs-filesystems.md | 175 +++++ .../object-mount/linux/user-guides/page.md | 130 +++- app/(docs)/object-mount/reference/_meta.json | 5 + .../object-mount/reference/cli-reference.md | 266 +++++++ .../object-mount/reference/compatibility.md | 318 +++++++++ .../object-mount/reference/configuration.md | 289 ++++++++ app/(docs)/object-mount/tutorials/_meta.json | 9 + .../tutorials/your-first-mount.md | 299 ++++++++ 30 files changed, 5622 insertions(+), 43 deletions(-) create mode 100644 CLAUDE.md create mode 100644 app/(docs)/dcs/how-to/_meta.json create mode 100644 app/(docs)/dcs/how-to/configure-cors.md create mode 100644 app/(docs)/dcs/how-to/create-buckets.md create mode 100644 app/(docs)/dcs/how-to/use-rclone.md create mode 100644 app/(docs)/dcs/reference/_meta.json create mode 100644 app/(docs)/dcs/reference/cli-commands.md create mode 100644 app/(docs)/dcs/reference/error-codes.md create mode 100644 app/(docs)/dcs/reference/limits.md create mode 100644 app/(docs)/dcs/reference/s3-api.md create mode 100644 app/(docs)/node/how-to/_meta.json create mode 100644 app/(docs)/node/how-to/change-payout-address.md create mode 100644 app/(docs)/node/how-to/migrate-node.md create mode 100644 app/(docs)/node/how-to/troubleshoot-offline-node.md create mode 100644 app/(docs)/node/reference/_meta.json create mode 100644 app/(docs)/node/reference/configuration.md create mode 100644 app/(docs)/node/reference/dashboard-metrics.md create mode 100644 app/(docs)/node/reference/system-requirements.md create mode 100644 app/(docs)/node/tutorials/_meta.json create mode 100644 app/(docs)/node/tutorials/setup-first-node.md create mode 100644 app/(docs)/object-mount/concepts/object-mount-vs-filesystems.md create mode 100644 app/(docs)/object-mount/reference/_meta.json create mode 100644 app/(docs)/object-mount/reference/cli-reference.md create mode 100644 app/(docs)/object-mount/reference/compatibility.md create mode 100644 app/(docs)/object-mount/reference/configuration.md create mode 100644 app/(docs)/object-mount/tutorials/_meta.json create mode 100644 
app/(docs)/object-mount/tutorials/your-first-mount.md diff --git a/CLAUDE.md b/CLAUDE.md new file mode 100644 index 000000000..7c753c00f --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1,264 @@ +# CLAUDE.md + +This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. + +## Development Commands + +### Essential Commands +- `npm install` - Install dependencies +- `npm run dev` - Start development server on localhost:3000 +- `npm run build` - Build the site for production (outputs to `/dist`) +- `npm start` - Start production server +- `npm run lint` - Run ESLint to check for code issues +- `npm run prettier` - Format code files (JS/TS/CSS/HTML) +- `npm run mdprettier` - Format Markdown files + +### Build Process +The build process includes a pre-build step that fetches image sizes (`node scripts/fetch-image-sizes.mjs`) before running the Next.js build. + +## Architecture Overview + +This is a **Next.js documentation site** built with: + +- **Next.js 15** with App Router (app directory structure) +- **Markdoc** for parsing and rendering markdown content +- **Tailwind CSS** for styling +- **FlexSearch** for client-side search functionality +- **React components** for interactive elements + +### Key Architecture Concepts + +#### Directory Structure +The site follows Next.js App Router conventions where URLs map to directory structure: +- `app/(docs)/` - Main documentation content +- `app/(blog)/` - Blog posts +- All documentation pages must be named `page.md` +- URL structure mirrors folder structure (e.g., `/app/(docs)/dcs/api/page.md` → `/dcs/api`) + +#### Content Management +- **Markdown with Markdoc**: All content is written in Markdown and processed by Markdoc +- **Custom Components**: Available through Markdoc tags (see `src/markdoc/tags.js`) +- **Frontmatter**: Each page requires frontmatter with `title`, `docId`, and `metadata` +- **Internal Linking**: Use `[](docId:your-doc-id-here)` syntax for cross-references +- **Navigation**: Auto-generated from `_meta.json` files in directories + +#### Styling System +- **Tailwind CSS** with custom Storj brand colors +- **Dark Mode**: Implemented with `next-themes` and class-based dark mode +- **Custom Grid**: Uses CSS Grid with custom templates for sidebar/content/TOC layouts +- **Typography**: Inter font with custom font sizing scale + +#### Search Implementation +- **Client-side search** powered by FlexSearch +- **Auto-indexing** of all documentation content +- **Keyboard shortcut** (⌘K) for quick access +- Configuration in `src/markdoc/search.mjs` + +### Important File Locations + +#### Configuration Files +- `next.config.mjs` - Next.js configuration with Markdoc integration +- `markdoc.config.json` - Markdoc schema configuration +- `src/markdoc/config.mjs` - Markdoc tags and nodes +- `tailwind.config.js` - Tailwind configuration with Storj branding + +#### Core Components +- `src/components/Navigation.jsx` - Main site navigation +- `src/components/Hero.jsx` - Homepage hero section +- `src/components/MarkdownLayout.jsx` - Layout for documentation pages +- `src/components/Search.jsx` - Search functionality +- `src/markdoc/tags.js` - Custom Markdoc components + +#### Content Structure +- `app/(docs)/` - Documentation content organized by product/feature +- `src/markdoc/partials/` - Reusable content snippets +- `public/` - Static assets and installation scripts + +## Content Guidelines + +### Creating New Pages +1. Create `page.md` file in appropriate directory +2. 
Include required frontmatter: + ```markdown + --- + title: "Page Title" + docId: "unique-16-char-id" + metadata: + title: "Browser Title" + description: "Page description for SEO" + --- + ``` +3. Generate unique docId with `pwgen -1 16` or similar + +### Internal Linking +- Use docId syntax: `[](docId:abc123)` +- Override link text: `[Custom Text](docId:abc123)` +- Add fragments: `[](docId:abc123#section)` + +### Images +- Store in Storj's internal "Website Assets" project +- Use prefix: `https://link.us1.storjshare.io/raw/jua7rls6hkx5556qfcmhrqed2tfa/docs/images` + +## Documentation Style Guide (Diataxis Framework) + +This site follows the [Diataxis framework](https://diataxis.fr/) for systematic technical documentation. Use this guide to determine the appropriate documentation type and writing approach. + +### The Four Documentation Types + +Documentation should serve one of four distinct purposes. Use the **Diataxis Compass** to decide: + +| **Purpose** | **Skill Acquisition (Learning)** | **Skill Application (Working)** | +|-------------|-----------------------------------|----------------------------------| +| **Action-oriented (Doing)** | **Tutorials** | **How-to Guides** | +| **Knowledge-oriented (Understanding)** | **Explanation** | **Reference** | + +### 1. Tutorials (Learning by Doing) + +**Purpose**: Help newcomers learn by completing meaningful tasks +**Example**: "Getting Started with Storj DCS", "Your First Upload" + +#### Writing Guidelines: +- Use first-person plural ("We will create...") +- Provide step-by-step instructions +- Show expected results at each step +- Minimize explanation - focus on actions +- Ensure instructions work reliably +- Build confidence through early wins + +#### Structure: +```markdown +# Tutorial Title +Brief introduction of what they'll achieve + +## What you'll build +Show the end goal + +## Step 1: Setup +Clear, specific instructions + +## Step 2: First action +Expected output/result + +## What's next +Link to related how-to guides +``` + +### 2. How-to Guides (Goal-oriented Solutions) + +**Purpose**: Guide competent users through specific tasks or problems +**Example**: "Configure CORS for Buckets", "Set up Presigned URLs" + +#### Writing Guidelines: +- Use conditional imperatives ("If you want X, do Y") +- Assume basic competence +- Focus on the specific goal +- Address real-world complexity +- Avoid unnecessary background + +#### Structure: +```markdown +# How to [achieve specific goal] +Brief problem/goal statement + +## Prerequisites +What they need to know/have + +## Steps +1. Clear action +2. Next action +3. Final step + +## Verification +How to confirm success +``` + +### 3. Reference (Information Lookup) + +**Purpose**: Provide authoritative technical information +**Example**: "API Endpoints", "CLI Command Reference", "Error Codes" + +#### Writing Guidelines: +- Be strictly descriptive and neutral +- Use austere, factual language +- Organize by system structure +- Provide complete, accurate information +- Avoid opinions or explanations + +#### Structure: +```markdown +# API Reference + +## Endpoints + +### GET /api/buckets +**Description**: Retrieves list of buckets +**Parameters**: +- `limit` (optional): Number of results +**Response**: JSON array of bucket objects +``` + +### 4. 
Explanation (Conceptual Understanding) + +**Purpose**: Provide context, background, and deeper understanding +**Example**: "Why Decentralized Storage", "Understanding Access Controls" + +#### Writing Guidelines: +- Discuss the bigger picture +- Explain design decisions and trade-offs +- Provide historical context +- Make connections between concepts +- Admit opinions and perspectives + +#### Structure: +```markdown +# Understanding [Concept] +High-level overview + +## Background +Why this matters + +## How it works +Conceptual explanation + +## Design decisions +Why things are this way + +## Related concepts +Links to other explanations +``` + +### Content Organization Guidelines + +#### Choose the Right Type +Ask yourself: +1. **Action or Knowledge?** (What vs. Why/How) +2. **Learning or Working?** (Study vs. Apply) + +#### Common Storj Examples: +- **Tutorial**: "Build Your First App with Storj" +- **How-to**: "Migrate from AWS S3 to Storj DCS" +- **Reference**: "S3 API Compatibility Matrix" +- **Explanation**: "Understanding Storj's Decentralized Architecture" + +#### Writing Best Practices: +- Keep content types pure - don't mix tutorial steps with reference information +- Use consistent language patterns within each type +- Cross-link between related content of different types +- Update `_meta.json` files to reflect content organization + +## Development Notes + +### Static Export +The site is configured for static export (`output: 'export'`) and builds to `/dist` directory for deployment. + +### Image Optimization +Images are unoptimized (`images: { unoptimized: true }`) due to static export requirements. + +### Environment Variables +- `SITE_URL` and `NEXT_PUBLIC_SITE_URL` set to `https://storj.dev` +- Analytics via Plausible in production + +### Custom Webpack Configuration +The Next.js config includes custom webpack rules for processing Markdown files and extracting metadata for canonical URLs. + +# IMPORTANT: Do not use emojis. diff --git a/app/(docs)/dcs/getting-started/page.md b/app/(docs)/dcs/getting-started/page.md index c3f49aefb..7559be57c 100644 --- a/app/(docs)/dcs/getting-started/page.md +++ b/app/(docs)/dcs/getting-started/page.md @@ -15,13 +15,27 @@ redirects: weight: 1 --- -Storj is the leading provider of enterprise-grade, globally distributed cloud object storage. +Storj is the leading provider of enterprise-grade, globally distributed cloud object storage that delivers default multi-region CDN-like performance with zero-trust security at a [cost that's 80%](docId:59T_2l7c1rvZVhI8p91VX) lower than AWS S3. -It is a drop-in replacement for any S3-compatible object storage that is just as durable but with 99.95% availability and better global performance from a single upload. +## What you'll build -Storj delivers default multi-region CDN-like performance with zero-trust security at a [cost that’s 80%](docId:59T_2l7c1rvZVhI8p91VX) lower than AWS S3. +In this tutorial, you'll work through your first complete storage workflow with Storj.
By the end, you'll have: -## Before you begin +- Generated S3-compatible credentials for secure access +- Installed and configured command-line tools (rclone or AWS CLI) +- Created your first bucket for file storage +- Uploaded, downloaded, listed, and deleted files +- Managed bucket operations including cleanup + +**Expected time to complete**: 15-20 minutes + +## Prerequisites + +Before you begin, you'll need: + +- A computer with internet access and terminal/command line access +- Administrative privileges to install software on your system +- Basic familiarity with command-line operations To get started, create an account with Storj. You'll automatically begin a free trial that gives you access to try our storage with your [third-party tool](docId:REPde_t8MJMDaE2BU8RfQ) or project. @@ -38,11 +52,13 @@ If you already have a Storj account, log in to get started {% /quick-link %} {% /quick-links %} -## Generate S3 compatible credentials +## Step 1: Generate S3 compatible credentials {% partial file="s3-credentials.md" /%} -## Install command-line tools +**Expected outcome**: You should now have an Access Key ID and Secret Access Key that will allow your applications to authenticate with Storj. + +## Step 2: Install command-line tools Storj works with a variety command-line tools. Rclone is recommended for its compatibility with various cloud providers and ease of use. @@ -121,7 +137,9 @@ However, some may already be familiar with AWS CLI which is also a suitable opti {% /tab %} {% /tabs %} -## Create a bucket +**Expected outcome**: Your command-line tool should now be configured to authenticate with Storj using your credentials. + +## Step 3: Create a bucket Now that the command-line tool is configured, let's make a bucket to store our files. @@ -140,7 +158,9 @@ aws s3 --endpoint-url=https://gateway.storjshare.io mb s3://my-bucket {% /code-group %} -## List buckets +**Expected outcome**: You've successfully created a bucket named "my-bucket" that's ready to store files. + +## Step 4: List buckets The bucket will show up in our bucket list (not to be mistaken with a life's to-do list) @@ -164,7 +184,9 @@ aws s3 --endpoint-url=https://gateway.storjshare.io ls s3:// {% /code-group %} -## Upload file +**Expected outcome**: You should see "my-bucket/" listed, confirming your bucket was created successfully. + +## Step 5: Upload file Next we'll upload a file. Here is an example image of a tree growing hard drives (while Storj doesn't grow hard drives on trees, it does emphasize [sustainability](https://www.storj.io/benefits/sustainability)). Right-click on it and save as `storj-tree.png` to your Downloads. @@ -197,7 +219,9 @@ upload: Downloads/storj-tree.png to s3://my-bucket/storj-tree.png {% /tabs %} -## Download file +**Expected outcome**: The file has been successfully uploaded to your bucket. You should see output confirming the upload completed. + +## Step 6: Download file To retrieve the file, use the same command as upload but reverse the order of the arguments @@ -223,7 +247,9 @@ aws s3 --endpoint-url=https://gateway.storjshare.io cp s3://my-bucket/ ~/Downloa {% /tabs %} -## List files +**Expected outcome**: The file should be downloaded to your local machine. Check your Downloads folder to confirm the file was retrieved successfully. + +## Step 7: List files Let's see what files we have in the bucket. @@ -247,9 +273,9 @@ aws s3 --endpoint-url=https://gateway.storjshare.io ls s3://my-bucket {% /code-group %} -Yep there's the Storj tree! 
+**Expected outcome**: You should see your uploaded file listed with its size, confirming it's stored in your bucket. -## Delete file +## Step 8: Delete file Okay time to remove the file. @@ -272,7 +298,9 @@ delete: s3://my-bucket/storj-tree.png {% /code-group %} -## Delete buckets +**Expected outcome**: The file should be removed from your bucket. You can verify this by listing files again - the bucket should now be empty. + +## Step 9: Delete buckets Last but not least, we'll delete the bucket. @@ -317,6 +345,36 @@ remove_bucket: my-bucket {% /code-group %} -## Next Steps +**Expected outcome**: Your bucket should be completely removed from your account. Running the list buckets command again should show no buckets. + +## What you've accomplished + +Congratulations! You've successfully completed your first Storj workflow. You now know how to: + +- Generate secure S3-compatible credentials +- Set up command-line tools for Storj access +- Create and manage buckets +- Upload, download, and organize files +- Clean up resources when finished + +## What's next + +Now that you understand the basics, you can explore more advanced Storj capabilities: + +### Integrate with Applications +- [Use Storj with third-party tools](docId:REPde_t8MJMDaE2BU8RfQ) - Connect existing tools like Duplicati, Nextcloud, and more +- Build applications using SDKs for your preferred programming language + +### Advanced Storage Operations +- Set up bucket versioning for file history +- Configure CORS for web applications +- Use presigned URLs for secure direct uploads +- Optimize upload performance for large files + +### Production Deployment +- Learn about Storj's multi-region architecture +- Understand pricing and billing +- Set up monitoring and logging +- Plan your migration from other storage providers -Congratulations on getting started with Storj! +Ready to dive deeper? Check out our [third-party integrations](docId:REPde_t8MJMDaE2BU8RfQ) to see Storj working with popular tools and applications. diff --git a/app/(docs)/dcs/how-to/_meta.json b/app/(docs)/dcs/how-to/_meta.json new file mode 100644 index 000000000..1a7157edd --- /dev/null +++ b/app/(docs)/dcs/how-to/_meta.json @@ -0,0 +1,17 @@ +{ + "title": "How-to Guides", + "nav": [ + { + "title": "Create buckets", + "id": "create-buckets" + }, + { + "title": "Use Rclone", + "id": "use-rclone" + }, + { + "title": "Configure CORS", + "id": "configure-cors" + } + ] +} \ No newline at end of file diff --git a/app/(docs)/dcs/how-to/configure-cors.md b/app/(docs)/dcs/how-to/configure-cors.md new file mode 100644 index 000000000..788d6efe8 --- /dev/null +++ b/app/(docs)/dcs/how-to/configure-cors.md @@ -0,0 +1,183 @@ +--- +title: How to configure CORS for web applications +docId: configure-cors-how-to +metadata: + title: How to Configure CORS for Storj Web Applications + description: Step-by-step guide to understand and work with Storj's CORS policy for secure web application development. +--- + +This guide explains how to work with Cross-Origin Resource Sharing (CORS) when building web applications that access Storj storage. 
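You can observe the gateway's CORS behavior before writing any application code. A quick sanity check with curl (the bucket and object names are placeholders; without credentials the request will be rejected, but the CORS headers should still appear in the response):

```shell
# Send a browser-style cross-origin request and show only the CORS headers
curl -sI -H "Origin: https://example.com" \
  "https://gateway.storjshare.io/my-bucket/my-object" | grep -i access-control
```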
+ +## Prerequisites + +Before configuring CORS for your application, ensure you have: + +- A web application that needs to access Storj storage from a browser +- Basic understanding of CORS and web security concepts +- Storj S3-compatible credentials configured + +## Understanding Storj's CORS policy + +Storj's S3-compatible API includes a permissive CORS policy by default: + +- **Access-Control-Allow-Origin**: `*` (allows access from any domain) +- **Automatic configuration**: No manual CORS setup required +- **Immediate access**: Your web applications can access Storj resources directly + +This eliminates the need for proxy servers or backend-only access patterns common with other storage providers. + +## Secure your application access + +While Storj's permissive CORS policy simplifies development, follow these security best practices: + +### 1. Use restricted access keys + +Create access keys with minimal required permissions: + +```shell +# Create a read-only access grant restricted to the web app's public prefix +# (run `uplink access restrict --help` to confirm the flags for your version) +uplink access restrict \ + --access my-main-access \ + --readonly \ + --prefix sj://my-web-app-bucket/public/ +``` + +### 2. Implement client-side validation + +Add validation in your web application: + +```javascript +// Example: Validate file types before upload +function validateFile(file) { + const allowedTypes = ['image/jpeg', 'image/png', 'image/gif']; + if (!allowedTypes.includes(file.type)) { + throw new Error('File type not allowed'); + } +} +``` + +### 3. Use presigned URLs for sensitive operations + +Generate time-limited URLs for uploads: + +```javascript +// Request presigned URL from your backend +const response = await fetch('/api/presigned-url', { + method: 'POST', + body: JSON.stringify({ filename: 'user-upload.jpg' }) +}); +const { uploadUrl } = await response.json(); + +// Use presigned URL for direct upload +await fetch(uploadUrl, { + method: 'PUT', + body: file +}); +``` + +## Test CORS access + +Verify your web application can access Storj storage: + +### 1. Create a test HTML page + +```html +<!DOCTYPE html> +<html> + <head> + <title>Storj CORS Test</title> + </head> + <body> + <script> + // Replace the URL with a presigned or public link for an object you own + const url = 'https://link.storjshare.io/raw/YOUR-ACCESS/my-bucket/test.txt'; + fetch(url) + .then((res) => console.log('CORS request succeeded:', res.status)) + .catch((err) => console.error('CORS request failed:', err)); + </script> + </body> +</html> +``` + +### 2. Check browser developer tools + +1. Open the page in your browser +2. Open Developer Tools (F12) +3. Check the Console tab for any CORS errors +4. Verify the Network tab shows successful requests + +## Handle CORS in different frameworks + +### React/Next.js +```javascript +// pages/api/upload.js +export default async function handler(req, res) { + // Set CORS headers if needed for your API routes + res.setHeader('Access-Control-Allow-Origin', 'https://yourdomain.com'); + + // Your Storj integration code +} +``` + +### Vue.js +```javascript +// In your component. Assumes your backend exposes the presigned-URL +// endpoint from the example above; adjust the route to match your app. +async uploadFile(file) { + try { + const response = await fetch('/api/presigned-url', { + method: 'POST', + body: JSON.stringify({ filename: file.name }) + }); + const { uploadUrl } = await response.json(); + + await fetch(uploadUrl, { method: 'PUT', body: file }); + console.log('Upload successful'); + } catch (error) { + console.error('Upload failed:', error); + } +} +``` + +## Troubleshooting CORS issues + +**"Access blocked by CORS policy"**: This typically indicates an issue with your authorization headers or request format, not Storj's CORS policy. + +**Preflight request failures**: Ensure your access tokens are valid and have appropriate permissions. + +**Mixed content warnings**: Use HTTPS for your web application when accessing Storj's HTTPS endpoints.
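If a browser reports one of these failures, replaying the preflight outside the browser can help isolate the cause. A rough curl equivalent (the origin, method, and URL are placeholders for your app's actual request):

```shell
# Simulate a browser preflight (OPTIONS) and inspect the response headers
curl -si -X OPTIONS \
  -H "Origin: https://yourdomain.com" \
  -H "Access-Control-Request-Method: PUT" \
  "https://gateway.storjshare.io/my-bucket/file.jpg" | head -n 20
```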
+ +**Network errors in development**: Consider using a local development server (like `http-server` or your framework's dev server) instead of opening HTML files directly. + +## Security considerations + +**Risk assessment**: The permissive CORS policy means any website can attempt to access your Storj resources if they have credentials. + +**Mitigation strategies**: +- Use read-only access keys for public content +- Implement server-side validation for sensitive operations +- Monitor access logs for unusual activity +- Rotate access keys regularly + +**Best practices**: +- Store sensitive credentials on your backend, not in client-side code +- Use environment variables for configuration +- Implement proper authentication and authorization in your application + +## Next steps + +Once CORS is working correctly: + +- [Implement presigned URLs for secure uploads](#) +- [Set up client-side file validation](#) +- [Configure bucket policies for web hosting](#) +- [Optimize web application performance](#) \ No newline at end of file diff --git a/app/(docs)/dcs/how-to/create-buckets.md b/app/(docs)/dcs/how-to/create-buckets.md new file mode 100644 index 000000000..0b6aa12df --- /dev/null +++ b/app/(docs)/dcs/how-to/create-buckets.md @@ -0,0 +1,125 @@ +--- +title: How to create buckets +docId: create-buckets-how-to +metadata: + title: How to Create Storj Buckets + description: Step-by-step guide to create Storj buckets using command-line tools or the Storj Console. +--- + +This guide shows you how to create a new bucket in Storj DCS for storing your files and data. + +## Prerequisites + +Before creating a bucket, ensure you have: + +- A Storj account with valid credentials +- One of the following tools configured: + - [Rclone with Storj configuration](docId:AsyYcUJFbO1JI8-Tu8tW3) + - [AWS CLI with Storj endpoint](docId:AsyYcUJFbO1JI8-Tu8tW3) + - [Uplink CLI installed and configured](docId:hFL-goCWqrQMJPcTN82NB) + - Access to the Storj Console web interface + +## Create a bucket + +Choose your preferred method to create a bucket: + +{% tabs %} + +{% tab label="rclone" %} + +```shell {% title="rclone" %} +# link[1:6] https://rclone.org/install/ +# link[8:12] https://rclone.org/commands/rclone_mkdir/ +# terminal +rclone mkdir storj:my-bucket +``` + +Replace `my-bucket` with your desired bucket name. + +{% /tab %} + +{% tab label="aws cli" %} + +```shell {% title="aws cli" %} +# link[1:3] https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html +# terminal +aws s3 --endpoint-url=https://gateway.storjshare.io mb s3://my-bucket +``` + +Replace `my-bucket` with your desired bucket name. + +{% /tab %} + +{% tab label="uplink" %} + +```shell {% title="uplink" %} +# link[1:6] docId:hFL-goCWqrQMJPcTN82NB +# link[8:9] docId:F77kaGpjXx7w-JYv2rkhf +# terminal +uplink mb sj://my-bucket +``` + +Replace `my-bucket` with your desired bucket name. + +{% /tab %} + +{% tab label="Storj Console" %} + +{% partial file="create-bucket.md" /%} + +{% /tab %} + +{% /tabs %} + +## Verify bucket creation + +Confirm your bucket was created successfully by listing your buckets: + +{% tabs %} + +{% tab label="rclone" %} + +```shell +rclone lsf storj: +``` + +{% /tab %} + +{% tab label="aws cli" %} + +```shell +aws s3 --endpoint-url=https://gateway.storjshare.io ls +``` + +{% /tab %} + +{% tab label="uplink" %} + +```shell +uplink ls +``` + +{% /tab %} + +{% /tabs %} + +You should see your new bucket listed in the output. 
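If you create buckets from scripts, you can combine creation and verification into one step. A small sketch using the same uplink commands (the bucket name is a placeholder):

```shell
# Create the bucket, then confirm it appears in the bucket listing
uplink mb sj://my-bucket && uplink ls | grep my-bucket
```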
## Bucket naming requirements + +When creating buckets, follow these naming conventions: + +- Use only lowercase letters, numbers, and hyphens +- Must be 3-63 characters long +- Cannot start or end with a hyphen +- Must be unique within your project +- Cannot contain periods or underscores + +## Next steps + +Now that you have a bucket, you can: + +- [Upload files to your bucket](#) +- [Configure bucket settings like CORS](#) +- [Set up object versioning](#) +- [Configure bucket lifecycle policies](#) \ No newline at end of file diff --git a/app/(docs)/dcs/how-to/use-rclone.md b/app/(docs)/dcs/how-to/use-rclone.md new file mode 100644 index 000000000..c649548c1 --- /dev/null +++ b/app/(docs)/dcs/how-to/use-rclone.md @@ -0,0 +1,134 @@ +--- +title: How to use Rclone with Storj +docId: use-rclone-how-to +metadata: + title: How to Use Rclone with Storj DCS + description: Step-by-step guide to configure and use Rclone with Storj, including choosing between S3-compatible and native integration. +--- + +This guide shows you how to set up and use Rclone with Storj DCS, including how to choose the right integration method for your needs. + +## Prerequisites + +Before using Rclone with Storj, ensure you have: + +- A Storj account with S3-compatible credentials +- Basic familiarity with command-line operations +- Rclone installed on your system + +If you need to set up credentials or install Rclone, follow the [Getting Started guide](docId:AsyYcUJFbO1JI8-Tu8tW3) first. + +## Choose your integration method + +Storj offers two ways to use Rclone, each with different advantages: + +### S3-Compatible Integration +Best for: Upload-heavy workloads, server applications, bandwidth-limited connections + +**Advantages:** +- Faster upload performance +- Reduced network bandwidth usage (1GB file = 1GB uploaded) +- Server-side encryption handled automatically +- Lower system resource usage + +**Trade-offs:** +- Data passes through Storj gateway servers +- Relies on Storj's server-side encryption + +### Native Integration +Best for: Download-heavy workloads, maximum security requirements, distributed applications + +**Advantages:** +- End-to-end client-side encryption +- Faster download performance +- Direct connection to storage nodes +- Maximum privacy and security + +**Trade-offs:** +- Higher upload bandwidth usage (1GB file = ~2.7GB uploaded due to erasure coding) +- More CPU usage for local erasure coding + +## Configure S3-compatible integration + +If you chose S3-compatible integration, configure Rclone with these settings: + +1. Locate your Rclone configuration file: + + ```shell + rclone config file + ``` + +2. Add or update your Storj configuration: + + ```ini + [storj] + type = s3 + provider = Storj + access_key_id = your_access_key + secret_access_key = your_secret_key + endpoint = gateway.storjshare.io + chunk_size = 64Mi + disable_checksum = true + ``` + +3. Test your configuration: + + ```shell + rclone lsf storj: + ``` + +For complete setup instructions and common commands, see the [S3-compatible Rclone guide](docId:AsyYcUJFbO1JI8-Tu8tW3). + +## Configure native integration + +If you chose native integration, follow these steps: + +1. Set up native Rclone integration with Storj's uplink protocol +2. Configure client-side encryption settings +3. Test connectivity to the distributed network + +For detailed setup instructions and commands, see the [Native Rclone guide](docId:Mk51zylAE6xmqP7jUYAuX). + +## Verify your setup + +After configuration, verify Rclone works correctly: + +1.
**List buckets** to confirm connectivity: + ```shell + rclone lsf storj: + ``` + +2. **Test upload** with a small file: + ```shell + echo "test content" > test.txt + rclone copy test.txt storj:my-test-bucket/ + ``` + +3. **Test download** to verify the round trip: + ```shell + rclone copy storj:my-test-bucket/test.txt ./downloaded-test.txt + ``` + +4. **Clean up** the test file: + ```shell + rclone delete storj:my-test-bucket/test.txt + ``` + +## Troubleshooting + +**Configuration not found**: Run `rclone config file` to locate your configuration file path. + +**Access denied errors**: Verify your credentials are correct and have the necessary permissions. + +**Slow performance**: For S3-compatible mode, ensure `chunk_size = 64Mi` is set. For native mode, slower uploads are expected due to erasure coding. + +**Connection timeouts**: Check your internet connection and firewall settings. Native mode requires access to distributed storage nodes. + +## Next steps + +Once Rclone is working with Storj: + +- [Optimize upload performance for large files](#) +- [Set up automated sync workflows](#) +- [Configure Rclone for backup applications](#) +- [Explore advanced Rclone features](#) \ No newline at end of file diff --git a/app/(docs)/dcs/reference/_meta.json b/app/(docs)/dcs/reference/_meta.json new file mode 100644 index 000000000..4a63982f8 --- /dev/null +++ b/app/(docs)/dcs/reference/_meta.json @@ -0,0 +1,6 @@ +{ + "cli-commands": "CLI Commands", + "s3-api": "S3 API", + "error-codes": "Error Codes", + "limits": "Limits" +} \ No newline at end of file diff --git a/app/(docs)/dcs/reference/cli-commands.md b/app/(docs)/dcs/reference/cli-commands.md new file mode 100644 index 000000000..686ffb289 --- /dev/null +++ b/app/(docs)/dcs/reference/cli-commands.md @@ -0,0 +1,276 @@ +--- +title: "CLI Commands Reference" +docId: "cli-reference-001" +metadata: + title: "Uplink CLI Commands Reference" + description: "Complete reference for all Uplink CLI commands, flags, and usage patterns for managing Storj DCS storage." +--- + +Complete reference for the Uplink CLI tool commands and options. + +{% callout type="info" %} +For installation instructions, see [Uplink CLI Installation](docId:TbMdOGCAXNWyPpQmH6EOq). +{% /callout %} + +## Global Flags + +| Flag | Description | +| :-------------------- | :---------------------------------------------- | +| `--advanced` | if used with `-h`, print advanced flags help | +| `--config-dir string` | main directory for uplink configuration | + +## Core Commands + +### uplink access + +Manage access grants for secure access to buckets and objects. + +#### Subcommands + +| Command | Description | +|---------|-------------| +| `access create` | Create a new access grant | +| `access export` | Export an access grant to a string | +| `access import` | Import an access grant from a string | +| `access inspect` | Inspect an access grant | +| `access list` | List stored access grants | +| `access register` | Register an access grant with a satellite | +| `access remove` | Remove an access grant | +| `access restrict` | Create a restricted access grant | +| `access revoke` | Revoke an access grant | +| `access use` | Set default access grant | +
+**Usage Examples:** +```bash +uplink access create --name my-access +uplink access export my-access +uplink access restrict my-access --readonly +``` + +### uplink cp + +Copy files between local filesystem and Storj buckets.
+ +**Syntax:** +``` +uplink cp [source] [destination] [flags] +``` + +**Common Flags:** +- `--recursive, -r` - Copy directories recursively +- `--parallelism int` - Number of parallel transfers (default 1) +- `--parallelism-chunk-size memory` - Size of chunks for parallel transfers + +**Usage Examples:** +```bash +# Upload file +uplink cp local-file.txt sj://mybucket/remote-file.txt + +# Download file +uplink cp sj://mybucket/remote-file.txt local-file.txt + +# Upload directory recursively +uplink cp local-dir/ sj://mybucket/remote-dir/ --recursive +``` + +### uplink ls + +List objects and prefixes in buckets. + +**Syntax:** +``` +uplink ls [path] [flags] +``` + +**Common Flags:** +- `--recursive, -r` - List recursively +- `--encrypted` - Show encrypted object names +- `--pending` - Show pending multipart uploads + +**Usage Examples:** +```bash +# List all buckets +uplink ls + +# List objects in bucket +uplink ls sj://mybucket/ + +# List recursively +uplink ls sj://mybucket/ --recursive +``` + +### uplink mb + +Create a new bucket. + +**Syntax:** +``` +uplink mb sj://bucket-name [flags] +``` + +**Usage Examples:** +```bash +uplink mb sj://my-new-bucket +``` + +### uplink rb + +Remove an empty bucket. + +**Syntax:** +``` +uplink rb sj://bucket-name [flags] +``` + +**Common Flags:** +- `--force` - Remove bucket and all objects + +**Usage Examples:** +```bash +uplink rb sj://my-bucket +uplink rb sj://my-bucket --force +``` + +### uplink rm + +Remove objects from buckets. + +**Syntax:** +``` +uplink rm sj://bucket/path [flags] +``` + +**Common Flags:** +- `--recursive, -r` - Remove recursively +- `--pending` - Remove pending multipart uploads + +**Usage Examples:** +```bash +# Remove single object +uplink rm sj://mybucket/file.txt + +# Remove directory recursively +uplink rm sj://mybucket/folder/ --recursive +``` + +### uplink mv + +Move or rename objects within Storj. + +**Syntax:** +``` +uplink mv sj://source sj://destination +``` + +**Usage Examples:** +```bash +uplink mv sj://mybucket/oldname.txt sj://mybucket/newname.txt +uplink mv sj://mybucket/file.txt sj://anotherbucket/file.txt +``` + +### uplink share + +Create shareable URLs for objects with restricted access. + +**Syntax:** +``` +uplink share [flags] sj://bucket/path +``` + +**Common Flags:** +- `--readonly` - Create read-only access +- `--writeonly` - Create write-only access +- `--not-after time` - Access expires after this time +- `--not-before time` - Access not valid before this time + +**Usage Examples:** +```bash +uplink share sj://mybucket/file.txt --readonly --not-after +24h +uplink share sj://mybucket/ --writeonly +``` + +## Metadata Commands + +### uplink meta + +Manage object metadata. + +#### Subcommands + +| Command | Description | +|---------|-------------| +| `meta get` | Get object metadata | + +**Usage Examples:** +```bash +uplink meta get sj://mybucket/file.txt +``` + +## Configuration Commands + +### uplink setup + +Create initial uplink configuration. + +**Syntax:** +``` +uplink setup [flags] +``` + +This command walks you through the initial configuration process. + +### uplink import + +Import serialized access grant into configuration. + +**Syntax:** +``` +uplink import [name] [serialized access] [flags] +``` + +**Usage Examples:** +```bash +uplink import my-access 13GRuHAiA... 
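# After importing, you can confirm the grant was stored and set it as the
# default (assumes the "my-access" name from the line above):
uplink access list
uplink access use my-access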
+``` + +## Advanced Usage + +### Environment Variables + +- `UPLINK_CONFIG_DIR` - Override configuration directory +- `UPLINK_ACCESS` - Set default access grant +- `UPLINK_DEBUG` - Enable debug output + +### Configuration File + +The uplink configuration is stored at: +- Linux/macOS: `$HOME/.config/storj/uplink/config.yaml` +- Windows: `%AppData%\storj\uplink\config.yaml` + +### Exit Codes + +- `0` - Success +- `1` - General error +- `2` - Access denied +- `3` - Network error + +## Performance Tuning + +### Parallelism Settings + +For large files or directories, adjust parallelism: + +```bash +uplink cp large-file.bin sj://bucket/ --parallelism 10 +uplink cp dir/ sj://bucket/ --recursive --parallelism 8 +``` + +### Chunk Size Optimization + +For very large files, adjust chunk size: + +```bash +uplink cp huge-file.bin sj://bucket/ --parallelism-chunk-size 64MiB +``` + +This reference covers all major Uplink CLI commands and common usage patterns. For specific flag details, use `uplink [command] --help`. \ No newline at end of file diff --git a/app/(docs)/dcs/reference/error-codes.md b/app/(docs)/dcs/reference/error-codes.md new file mode 100644 index 000000000..a8f3b65bc --- /dev/null +++ b/app/(docs)/dcs/reference/error-codes.md @@ -0,0 +1,146 @@ +--- +title: "Error Codes Reference" +docId: "error-codes-ref-001" +metadata: + title: "Error Codes and Troubleshooting Reference" + description: "Reference for common error codes, HTTP status codes, and troubleshooting information for Storj DCS." +--- + +Reference for error codes and common issues when working with Storj DCS. + +## CLI Exit Codes + +| Code | Description | Resolution | +|------|-------------|------------| +| `0` | Success | Operation completed successfully | +| `1` | General error | Check command syntax and parameters | +| `2` | Access denied | Verify access grant permissions | +| `3` | Network error | Check internet connectivity and satellite endpoints | + +## HTTP Status Codes + +### 2xx Success +| Code | Status | Description | +|------|--------|-------------| +| `200` | OK | Request successful | +| `201` | Created | Resource created successfully | +| `204` | No Content | Request successful, no content returned | + +### 4xx Client Errors +| Code | Status | Description | Common Causes | +|------|--------|-------------|---------------| +| `400` | Bad Request | Invalid request format | Malformed JSON, invalid parameters | +| `401` | Unauthorized | Authentication failed | Invalid access key, expired token | +| `403` | Forbidden | Access denied | Insufficient permissions, restricted access | +| `404` | Not Found | Resource not found | Bucket/object doesn't exist, wrong path | +| `409` | Conflict | Resource conflict | Bucket already exists, object locked | + +### 5xx Server Errors +| Code | Status | Description | Resolution | +|------|--------|-------------|------------| +| `500` | Internal Server Error | Server error | Retry request, contact support if persistent | +| `502` | Bad Gateway | Gateway error | Check satellite status, retry request | +| `503` | Service Unavailable | Service temporarily unavailable | Wait and retry with backoff | + +## Common Error Messages + +### Access Grant Errors + +**"Access grant invalid"** +- **Cause**: Malformed or expired access grant +- **Resolution**: Generate new access grant, verify serialization + +**"Insufficient permissions"** +- **Cause**: Access grant lacks required permissions +- **Resolution**: Create access grant with appropriate permissions + +### Network Errors + +**"Dial timeout"** 
+- **Cause**: Network connectivity issues +- **Resolution**: Check internet connection, firewall settings + +**"Connection refused"** +- **Cause**: Satellite unreachable +- **Resolution**: Verify satellite address, check network access + +### Storage Errors + +**"Bucket already exists"** +- **Cause**: Bucket name already taken +- **Resolution**: Choose different bucket name + +**"Object not found"** +- **Cause**: Object path incorrect or object deleted +- **Resolution**: Verify object path, check bucket listing + +**"Upload failed"** +- **Cause**: Network interruption or insufficient space +- **Resolution**: Retry upload, check available storage + +### S3 API Errors + +**"SignatureDoesNotMatch"** +- **Cause**: Incorrect access credentials or clock skew +- **Resolution**: Verify access keys, sync system clock + +**"NoSuchBucket"** +- **Cause**: Bucket name incorrect or doesn't exist +- **Resolution**: Create bucket or verify bucket name + +**"InvalidAccessKeyId"** +- **Cause**: Access key not recognized +- **Resolution**: Verify access key, regenerate if necessary + +## Troubleshooting Steps + +### General Troubleshooting + +1. **Verify credentials**: Ensure access grant or S3 keys are correct +2. **Check permissions**: Confirm access grant has required permissions +3. **Test connectivity**: Verify network access to satellites +4. **Review syntax**: Double-check command syntax and parameters +5. **Check limits**: Ensure request doesn't exceed service limits + +### Debug Mode + +Enable debug output for detailed error information: + +```bash +# CLI debug mode +export UPLINK_DEBUG=true +uplink ls sj://mybucket/ + +# Or use debug flag +uplink --debug ls sj://mybucket/ +``` + +### Log Analysis + +**CLI logs**: Look for specific error messages and stack traces +**S3 client logs**: Enable verbose logging in S3 client configuration +**Network logs**: Use tools like `curl` or `wget` to test endpoints + +### Performance Issues + +**Slow uploads/downloads**: +- Adjust parallelism settings +- Check network bandwidth +- Consider chunked upload for large files + +**Timeouts**: +- Increase client timeout settings +- Use smaller chunk sizes +- Check for network stability + +## Getting Help + +When reporting issues, include: + +1. **Error message**: Complete error text and codes +2. **Command used**: Full command with parameters (sanitize credentials) +3. **Environment**: OS, CLI version, client library version +4. **Network**: Connection type and any proxies/firewalls +5. **Timing**: When error occurs and frequency + +For persistent issues, contact support through the [support portal](https://supportdcs.storj.io/). \ No newline at end of file diff --git a/app/(docs)/dcs/reference/limits.md b/app/(docs)/dcs/reference/limits.md new file mode 100644 index 000000000..27467bd61 --- /dev/null +++ b/app/(docs)/dcs/reference/limits.md @@ -0,0 +1,165 @@ +--- +title: "Service Limits Reference" +docId: "service-limits-ref-001" +metadata: + title: "Storj DCS Service Limits and Specifications" + description: "Complete reference for service limits, quotas, and technical specifications for Storj DCS object storage." +--- + +Complete reference for service limits and technical specifications. 
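Several of the limits below interact: for multipart uploads, the 10,000-part cap sets a floor on the part size needed for a given object. A quick shell sketch of that calculation (sizes in MiB; see the large object section below):

```shell
# Minimum part size in MiB for a 6 TiB object under the 10,000-part limit
OBJECT_MIB=$((6 * 1024 * 1024))
echo $(( (OBJECT_MIB + 9999) / 10000 ))   # 630, i.e. use ~630MiB parts
```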
+ +## Storage Limits + +| Resource | Limit | Notes | +|----------|-------|-------| +| **Buckets per account** | 100 | Contact support for increases | +| **Objects per bucket** | No limit | | +| **Object size** | No limit | Unlike AWS S3's 5 TiB limit | +| **Minimum object size** | 0 B | Empty objects supported | +| **Maximum PUT operation size** | No limit | Use multipart for large objects | +| **Object name length (encrypted)** | 1,280 characters | Path encryption adds overhead | +| **Object metadata size** | 2 KiB | Custom metadata storage | + +## Bucket Limits + +| Resource | Limit | Notes | +|----------|-------|-------| +| **Bucket name minimum length** | 3 characters | | +| **Bucket name maximum length** | 63 characters | | +| **Bucket name format** | DNS-compliant | Lowercase letters, numbers, hyphens | + +## Multipart Upload Limits + +| Resource | Limit | Notes | +|----------|-------|-------| +| **Maximum parts per upload** | 10,000 | Standard S3 limit | +| **Minimum part size** | 5 MiB | Last part can be smaller | +| **Maximum part size** | 5 GiB | Standard S3 limit | +| **Parts returned per list request** | 10,000 | Pagination available | + +## API Request Limits + +| Operation | Limit | Notes | +|-----------|-------|-------| +| **Objects per ListObjects request** | 1,000 | Use pagination for more | +| **Multipart uploads per list request** | 1,000 | Pagination available | +| **Parts per ListParts request** | 10,000 | | + +## Network and Performance + +| Resource | Specification | Notes | +|----------|---------------|-------| +| **Upload bandwidth** | No artificial limits | Limited by network and node capacity | +| **Download bandwidth** | No artificial limits | Limited by network and node capacity | +| **Concurrent connections** | No specific limit | Best practice: 10-100 concurrent | +| **Request rate** | No specific limit | Use exponential backoff for retries | + +## Access and Security + +| Resource | Limit | Notes | +|----------|-------|-------| +| **Access grants per account** | No limit | Store securely | +| **Access grant size** | ~1-2 KB typical | Varies based on restrictions | +| **Encryption key size** | 32 bytes | AES-256 encryption | +| **Macaroon restrictions** | 64 KB serialized | Access grant restrictions | + +## Geographic Distribution + +| Resource | Specification | Notes | +|----------|---------------|-------| +| **Default redundancy** | 80 pieces (29 required) | Erasure coding parameters | +| **Storage nodes** | Thousands globally | Decentralized network | +| **Satellite regions** | Multiple | US1, EU1, AP1 available | + +## Large Object Considerations + +### Objects Larger Than 5 TiB + +Unlike AWS S3, Storj supports objects larger than 5 TiB. 
Configure S3 clients appropriately: + +**Required multipart configuration for 6 TiB file:** +```bash +aws configure set s3.multipart_chunksize 630MiB +``` + +**Formula for chunk size:** +``` +chunk_size = object_size / 10000 (rounded up to nearest MiB) +``` + +## Rate Limiting and Backoff + +### Best Practices + +**Recommended retry strategy:** +- Initial delay: 100ms +- Maximum delay: 30 seconds +- Exponential backoff with jitter +- Maximum retry attempts: 5 + +**Connection pooling:** +- Reuse HTTP connections +- Limit concurrent connections per endpoint +- Use appropriate timeout values + +## Monitoring and Quotas + +### Account Usage Monitoring + +Monitor usage through: +- Satellite web console +- CLI commands: `uplink ls --recursive` +- S3 API: ListBuckets, ListObjects + +### Cost Optimization + +**Storage efficiency:** +- Delete unnecessary objects regularly +- Use object expiration for temporary data +- Monitor duplicate objects + +**Bandwidth optimization:** +- Use appropriate parallelism settings +- Implement client-side caching where appropriate +- Consider CDN for frequently accessed public data + +## Regional Specifications + +### Placement Options + +| Region | Description | Compliance | +|--------|-------------|------------| +| **Global** | Worldwide distributed | Standard | +| **US-Select-1** | Continental US only | SOC 2 Type 2 | + +### Performance Characteristics + +**Global placement:** +- Lowest cost +- Best global performance +- Highest durability + +**US-Select-1 placement:** +- Compliance focused +- US-based infrastructure +- Premium pricing + +## Support and Escalation + +### Limit Increase Requests + +For limit increases, contact support with: +- Current usage patterns +- Projected growth requirements +- Business justification +- Timeline requirements + +### Enterprise Features + +Additional limits and features available for enterprise customers: +- Custom redundancy parameters +- Private satellite deployment +- Dedicated support channels +- SLA guarantees + +This reference covers all standard service limits. For enterprise requirements or limit increases, contact [Storj support](https://supportdcs.storj.io/). \ No newline at end of file diff --git a/app/(docs)/dcs/reference/s3-api.md b/app/(docs)/dcs/reference/s3-api.md new file mode 100644 index 000000000..ff9c5c473 --- /dev/null +++ b/app/(docs)/dcs/reference/s3-api.md @@ -0,0 +1,208 @@ +--- +title: "S3 API Reference" +docId: "s3-api-reference-001" +metadata: + title: "S3 API Compatibility Reference" + description: "Complete reference for S3 API compatibility with Storj DCS, including supported operations, limits, and Storj-specific extensions." +--- + +Complete reference for S3 API compatibility with Storj DCS. + +## API Compatibility Overview + +The Storj S3-compatible Gateway supports a RESTful API that is compatible with the basic data access model of the Amazon S3 API. 
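In practice, this means most S3 clients only need an endpoint override to work with Storj. For example, with the AWS CLI (the profile name is arbitrary; the credentials are your Storj S3 access key and secret):

```shell
# Point a standard S3 client at the Storj gateway
aws configure --profile storj
aws s3 --profile storj --endpoint-url=https://gateway.storjshare.io ls
```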
+ +### Support Definitions + +- **Full** - Complete support for all features except those requiring unsupported dependencies +- **Partial** - Limited support (see specific caveats) +- **No** - Not supported + +## Supported Operations + +### Bucket Operations + +| Operation | Support | Notes | +|-----------|---------|-------| +| CreateBucket | Full | | +| DeleteBucket | Full | | +| HeadBucket | Full | | +| ListBuckets | Full | | +| GetBucketLocation | Full | Gateway-MT only | +| GetBucketTagging | Full | | +| PutBucketTagging | Full | | +| DeleteBucketTagging | Full | | +| GetBucketVersioning | Yes | See [Object Versioning](docId:oogh5vaiGei6atohm5thi) | +| PutBucketVersioning | Yes | See [Object Versioning](docId:oogh5vaiGei6atohm5thi) | + +### Object Operations + +| Operation | Support | Notes | +|-----------|---------|-------| +| PutObject | Full | | +| GetObject | Partial | Need to add partNumber parameter support | +| DeleteObject | Full | | +| DeleteObjects | Full | | +| HeadObject | Full | | +| CopyObject | Full | Supports objects up to ~671 GB (vs AWS 5 GB limit) | +| GetObjectAttributes | Partial | Etag, StorageClass, and ObjectSize only | +| GetObjectTagging | Full | Tags can be modified outside tagging endpoints | +| PutObjectTagging | Full | Tags can be modified outside tagging endpoints | +| DeleteObjectTagging | Full | Tags can be modified outside tagging endpoints | + +### Object Lock Operations + +| Operation | Support | Notes | +|-----------|---------|-------| +| GetObjectLockConfiguration | Yes | See [Object Lock](docId:gjrGzPNnhpYrAGTTAUaj) | +| PutObjectLockConfiguration | Yes | See [Object Lock](docId:gjrGzPNnhpYrAGTTAUaj) | +| GetObjectLegalHold | Yes | See [Object Lock](docId:gjrGzPNnhpYrAGTTAUaj) | +| PutObjectLegalHold | Yes | See [Object Lock](docId:gjrGzPNnhpYrAGTTAUaj) | +| GetObjectRetention | Yes | See [Object Lock](docId:gjrGzPNnhpYrAGTTAUaj) | +| PutObjectRetention | Yes | See [Object Lock](docId:gjrGzPNnhpYrAGTTAUaj) | + +### Multipart Upload Operations + +| Operation | Support | Notes | +|-----------|---------|-------| +| CreateMultipartUpload | Full | | +| UploadPart | Full | | +| UploadPartCopy | Partial | Available on request | +| CompleteMultipartUpload | Full | | +| AbortMultipartUpload | Full | | +| ListMultipartUploads | Partial | See ListMultipartUploads section | +| ListParts | Full | | + +### Listing Operations + +| Operation | Support | Notes | +|-----------|---------|-------| +| ListObjects | Partial | See ListObjects section | +| ListObjectsV2 | Partial | See ListObjects section | +| ListObjectVersions | Yes | See [Object Versioning](docId:oogh5vaiGei6atohm5thi) | + +## Service Limits + +| Limit | Value | +|-------|--------| +| Maximum buckets per account | 100 | +| Maximum objects per bucket | No limit | +| Maximum object size | No limit | +| Minimum object size | 0 B | +| Maximum object size per PUT | No limit | +| Maximum parts per multipart upload | 10,000 | +| Minimum part size | 5 MiB (last part can be 0 B) | +| Maximum parts returned per list request | 10,000 | +| Maximum objects per list request | 1,000 | +| Maximum multipart uploads per list request | 1,000 | +| Maximum bucket name length | 63 characters | +| Minimum bucket name length | 3 characters | +| Maximum encrypted object name length | 1,280 characters | +| Maximum metadata size | 2 KiB | + +## API Behavior Notes + +### ListObjects Behavior + +#### Encrypted Object Keys +Object paths are end-to-end encrypted. 
Since we don't use ordering-preserving encryption, lexicographical ordering may not match expectations: + +- **Forward-slash terminated prefix/delimiter**: Fast listing in encrypted path order +- **Non-forward-slash terminated prefix/delimiter**: Exhaustive listing in correct lexicographical order + +#### Unencrypted Object Keys +Always lists in lexicographical order per S3 specification. + +### ListMultipartUploads Behavior + +- Same ordering characteristics as ListObjects +- Only supports forward-slash terminated prefixes and delimiters +- `UploadIdMarker` and `NextUploadIdMarker` not supported + +### GetBucketLocation Response + +Returns placement regions for bucket data: + +| Value | Description | +|-------|-------------| +| `global` | Stored on global public network | +| `us-select-1` | SOC 2 Type 2 certified US facilities | + +## Storj-Specific Extensions + +### Object TTL (Time To Live) + +Set object expiration using the `X-Amz-Meta-Object-Expires` header: + +**Supported Formats:** +- Duration: `+300ms`, `+1.5h`, `+2h45m` +- RFC3339 timestamp: `2024-05-19T00:10:55Z` +- `none` for no expiration + +**Example:** +```bash +aws s3 --endpoint-url https://gateway.storjshare.io cp file s3://bucket/object \ + --metadata Object-Expires=+2h +``` + +### ListBucketsWithAttribution (Gateway-MT only) + +Returns bucket listing with attribution information. + +**Request:** +```http +GET /?attribution HTTP/1.1 +Host: gateway.storjshare.io +``` + +**Response includes additional Attribution element:** +```xml + + string + timestamp + string + +``` + +## Large Object Handling + +### Objects Larger Than 5 TiB + +For objects exceeding AWS S3's 5 TiB limit, configure multipart chunk size: + +```bash +# For 6 TiB files, set chunk size to ~630 MiB +aws --profile storj configure set s3.multipart_chunksize 630MiB +aws --profile storj --endpoint-url https://gateway.storjshare.io s3 cp 6TiB_file s3://bucket/ +``` + +## Client Compatibility + +### Python boto3 / AWS CLI + +**Supported versions:** boto3 up to 1.35.99 + +**Issue:** Newer versions enable default integrity protections not yet supported by Storj. + +**Recommendation:** Downgrade rather than using `WHEN_REQUIRED` workaround. + +## Unsupported Features + +### Security Features +- ACL operations (GetObjectAcl, PutObjectAcl, etc.) +- Bucket policies (except Gateway-ST with --website) +- Public access blocks + +### Advanced Features +- Lifecycle management +- Cross-region replication +- Analytics configurations +- Metrics configurations +- Inventory configurations +- Notification configurations +- Intelligent tiering +- Acceleration +- Website hosting +- Logging (available on request) + +This reference provides complete S3 API compatibility information for integration planning and troubleshooting. \ No newline at end of file diff --git a/app/(docs)/dcs/third-party-tools/page.md b/app/(docs)/dcs/third-party-tools/page.md index 308c3f3ca..baa60067f 100644 --- a/app/(docs)/dcs/third-party-tools/page.md +++ b/app/(docs)/dcs/third-party-tools/page.md @@ -7,14 +7,36 @@ redirects: - /dcs/file-transfer - /dcs/multimedia-storage-and-streaming metadata: - title: Guides to Using Third-Party Tools + title: How to Use Third-Party Tools with Storj description: - Step-by-step guides on leveraging third-party tools, including backups, + Practical how-to guides for integrating Storj with popular third-party tools for backups, large file handling, file management, content delivery, scientific applications, and cloud operations. 
--- -Practical step-by-step guides to help you achieve a specific goal. Most useful when you're trying to get something done. +This section contains practical how-to guides for integrating Storj DCS with popular third-party tools and applications. These guides help you achieve specific goals with step-by-step instructions. + +## Prerequisites + +Before using most third-party tools with Storj, ensure you have: + +- A Storj account with valid S3-compatible credentials +- The third-party tool installed and accessible +- Basic familiarity with the tool's interface or command-line usage +- Network connectivity and appropriate firewall configurations + +For credential setup, see the [Getting Started guide](docId:AsyYcUJFbO1JI8-Tu8tW3). + +## How to choose the right tool + +Each tool category serves different use cases: + +- **Backups**: Automated, scheduled data protection with versioning and retention +- **Large Files**: Optimized handling of multi-gigabyte files and datasets +- **File Management**: User-friendly interfaces for everyday file operations +- **Content Delivery**: Web hosting, media streaming, and public file sharing +- **Scientific**: Research data management, analysis pipelines, and collaboration +- **Cloud Ops**: Infrastructure automation, monitoring, and DevOps workflows ## Backups @@ -45,3 +67,44 @@ Practical step-by-step guides to help you achieve a specific goal. Most useful w {% tag-links tag="cloud-ops" directory="./app/(docs)/dcs/third-party-tools" %} {% /tag-links %} + +## Verification steps + +After configuring any third-party tool with Storj: + +1. **Test connectivity**: Verify the tool can list your buckets or existing files +2. **Test upload**: Upload a small test file to confirm write access +3. **Test download**: Download the test file to verify read access +4. **Check permissions**: Ensure the tool has appropriate access for your use case +5. **Validate settings**: Confirm endpoint URLs, regions, and other configuration + +## Common troubleshooting + +**"Access Denied" errors**: +- Verify your S3 credentials are correct and active +- Check that your access key has the required permissions +- Ensure you're using the correct endpoint: `gateway.storjshare.io` + +**Connection timeouts**: +- Check your internet connection and firewall settings +- Verify the tool supports custom S3 endpoints +- Try reducing concurrent connection limits in tool settings + +**Upload/download failures**: +- For large files, ensure the tool supports multipart uploads +- Check available disk space and network stability +- Verify file paths and naming conventions are correct + +**Performance issues**: +- Use the recommended chunk/part size of 64MB for uploads +- Enable multipart uploads for files larger than 64MB +- Consider network latency and bandwidth limitations + +## Getting help + +If you encounter issues not covered in individual tool guides: + +1. Check the tool's official documentation for S3 compatibility +2. Review Storj's [S3 API compatibility reference](docId:eZ4caegh9queuQuaazoo) +3. Search the [Storj community forum](https://forum.storj.io) for similar issues +4. 
Contact Storj support with specific error messages and configuration details
diff --git a/app/(docs)/node/how-to/_meta.json b/app/(docs)/node/how-to/_meta.json
new file mode 100644
index 000000000..f868d38e6
--- /dev/null
+++ b/app/(docs)/node/how-to/_meta.json
@@ -0,0 +1,17 @@
+{
+  "title": "How-to Guides",
+  "nav": [
+    {
+      "title": "Change payout address",
+      "id": "change-payout-address"
+    },
+    {
+      "title": "Migrate node",
+      "id": "migrate-node"
+    },
+    {
+      "title": "Troubleshoot offline node",
+      "id": "troubleshoot-offline-node"
+    }
+  ]
+}
\ No newline at end of file
diff --git a/app/(docs)/node/how-to/change-payout-address.md b/app/(docs)/node/how-to/change-payout-address.md
new file mode 100644
index 000000000..80868e32e
--- /dev/null
+++ b/app/(docs)/node/how-to/change-payout-address.md
@@ -0,0 +1,230 @@
+---
+title: How to change your payout address
+docId: change-payout-address-how-to
+metadata:
+  title: How to Change Your Storage Node Payout Address
+  description: Step-by-step guide to update the wallet address where you receive payments for your storage node operations.
+---
+
+This guide shows you how to change the wallet address where you receive payments for operating your storage node.
+
+## Prerequisites
+
+Before changing your payout address, ensure you have:
+
+- A running Storj storage node (CLI or Windows GUI installation)
+- Administrative access to the system running your node
+- A valid wallet address that supports the payment tokens you'll receive
+- Backup of your current configuration (recommended)
+
+## Important considerations
+
+**Timing**: You can change your payout address at any time, but changes only affect future payments. Any pending payments will still be sent to your previous address.
+
+**Wallet compatibility**: Ensure your new wallet address supports the token type used for payouts (currently STORJ, an ERC-20 token).
+
+**Verification**: Double-check that your new wallet address is correct - incorrect addresses may result in permanently lost payments.
+
+## Change payout address
+
+Choose the method that matches your storage node installation:
+
+{% tabs %}
+
+{% tab label="CLI Install (Docker)" %}
+
+### Step 1: Stop the storage node
+
+Stop your running storage node container safely:
+
+```bash
+docker stop -t 300 storagenode
+docker rm storagenode
+```
+
+The `-t 300` flag allows the node 5 minutes to gracefully shut down and complete any ongoing operations.
+
+### Step 2: Update configuration
+
+Edit your configuration file to add or update the wallet address.
The location depends on how you set up your node:
+
+**If using config.yaml file:**
+
+```bash
+# Edit the config file (adjust path as needed)
+nano /path/to/your/storagenode/config.yaml
+```
+
+Find the `operator.wallet` section and update it:
+
+```yaml
+operator:
+  wallet: "0xYourNewWalletAddressHere"
+```
+
+**If using environment variables or command-line parameters:**
+
+Update your docker run command to include the new wallet address:
+
+```bash
+# Example docker run with new wallet address
+docker run -d --restart unless-stopped \
+  --name storagenode \
+  -p 28967:28967/tcp \
+  -p 28967:28967/udp \
+  -p 14002:14002 \
+  -e WALLET="0xYourNewWalletAddressHere" \
+  -e EMAIL="your@email.com" \
+  -e ADDRESS="your.external.address:28967" \
+  -e STORAGE="1TB" \
+  -v /path/to/identity:/app/identity \
+  -v /path/to/storage:/app/config \
+  storjlabs/storagenode:latest
+```
+
+### Step 3: Restart the storage node
+
+Start your storage node with the updated configuration:
+
+```bash
+# If using config.yaml, use your standard docker run command
+# If using the command above with updated parameters, run it now
+```
+
+{% /tab %}
+
+{% tab label="Windows GUI Install" %}
+
+### Step 1: Stop the storage node service
+
+Open an elevated PowerShell window (Run as Administrator) and stop the service:
+
+```powershell
+Stop-Service storagenode
+```
+
+Alternatively, you can use the Windows Services applet:
+1. Press `Win + R`, type `services.msc`, and press Enter
+2. Find "Storj V3 Storage Node" in the list
+3. Right-click and select "Stop"
+
+### Step 2: Edit configuration file
+
+Open the configuration file with a text editor. **Important**: Use Notepad++ or another advanced text editor - the regular Windows Notepad may not handle the file's line endings properly.
+
+```powershell
+# Open the config file location
+notepad++ "C:\Program Files\Storj\Storage Node\config.yaml"
+```
+
+Find the `operator.wallet` line and update it with your new wallet address:
+
+```yaml
+operator:
+  wallet: "0xYourNewWalletAddressHere"
+```
+
+Save the file.
+
+### Step 3: Restart the storage node service
+
+Restart the service to apply the changes:
+
+```powershell
+Start-Service storagenode
+```
+
+Or using the Windows Services applet:
+1. Right-click "Storj V3 Storage Node"
+2. Select "Start"
+
+{% /tab %}
+
+{% /tabs %}
+
+## Verify the change
+
+After restarting your storage node, verify the new payout address is configured correctly:
+
+### Check the dashboard
+
+1. Access your node's web dashboard (usually at `http://localhost:14002`)
+2. Look for the wallet address displayed in the node information section
+3. Confirm it matches your new address
+
+### Check the logs
+
+Review your storage node logs to confirm successful startup with the new configuration:
+
+{% tabs %}
+
+{% tab label="CLI Install" %}
+
+```bash
+# View recent logs
+docker logs storagenode --tail 50
+
+# Look for lines confirming the wallet address
+# Should not show any errors about invalid wallet format
+```
+
+{% /tab %}
+
+{% tab label="Windows GUI Install" %}
+
+Check the logs in the installation directory:
+
+```powershell
+# View recent log entries
+Get-Content "C:\Program Files\Storj\Storage Node\logs\*" -Tail 50
+```
+
+{% /tab %}
+
+{% /tabs %}
+
+Look for log entries that confirm your node started successfully without wallet-related errors.
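+
+To spot-check the loaded wallet without scanning the full log output, you can filter for wallet-related lines. This is a minimal sketch for Docker installs; adjust the container name and config path to match your setup:
+
+```bash
+# Show wallet-related lines from the node's recent logs
+docker logs storagenode 2>&1 | grep -i "wallet"
+
+# Cross-check against the value saved in your configuration file
+grep -A1 "^operator:" /path/to/your/storagenode/config.yaml
+```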
+ +## Troubleshooting + +**Service won't start after change**: +- Verify the wallet address format is correct (typically starts with "0x" for Ethereum addresses) +- Check that you saved the configuration file properly +- Review logs for specific error messages + +**Dashboard shows old address**: +- Clear your browser cache and reload the dashboard +- Wait a few minutes for the dashboard to update +- Verify you restarted the service completely + +**Invalid wallet address format errors**: +- Confirm your wallet address is valid for the payment system +- Check for extra spaces or characters in the configuration +- Ensure you're using the correct address format (e.g., Ethereum format for STORJ tokens) + +**Configuration file changes not taking effect**: +- Verify you have write permissions to the configuration file +- Confirm you're editing the correct configuration file path +- Make sure the service completely stopped before making changes + +## Important notes + +**Payment timing**: The address change takes effect immediately for new payments, but any payments already processed will still go to your previous address. + +**Multiple nodes**: If you operate multiple storage nodes, you'll need to update each one individually following these steps. + +**Backup configuration**: Always keep a backup of your working configuration before making changes. + +**Address validation**: Some storage node software versions may validate wallet addresses. If you receive validation errors, double-check your address format. + +## Next steps + +After successfully changing your payout address: + +- Monitor your node's operation to ensure it continues running normally +- [Set up monitoring for your node performance](#) +- [Learn about payment schedules and amounts](#) +- [Configure additional node settings](#) + +For other storage node configuration changes, see the [complete configuration guide](#). \ No newline at end of file diff --git a/app/(docs)/node/how-to/migrate-node.md b/app/(docs)/node/how-to/migrate-node.md new file mode 100644 index 000000000..153a7e9ce --- /dev/null +++ b/app/(docs)/node/how-to/migrate-node.md @@ -0,0 +1,344 @@ +--- +title: How to migrate your node to a new device +docId: migrate-node-how-to +metadata: + title: How to Migrate Your Storage Node to a New Device + description: Complete step-by-step guide to safely migrate your Storj storage node to new hardware or storage location while preserving data and reputation. +--- + +This guide shows you how to migrate your storage node to a new device, drive, or location while preserving your node's reputation and stored data. + +## Prerequisites + +Before migrating your storage node, ensure you have: + +- A running Storj storage node that you want to migrate +- Access to both the source and destination systems +- Sufficient storage space on the destination (at least equal to your current data) +- Network access between source and destination (if different machines) +- Administrative privileges on both systems +- Time to complete the migration (can take several hours for large datasets) + +## Important considerations + +**Downtime**: Plan for some downtime during the final migration steps. Minimize this by pre-copying data while your node is running. + +**Reputation preservation**: Your node's identity must be preserved exactly to maintain your reputation and avoid disqualification. + +**Platform compatibility**: If migrating across different architectures (x86 to ARM, etc.), additional steps are required. 
+
+**Network storage warning**: Network-attached storage is not supported and may cause performance issues or disqualification.
+
+## Locate your current data
+
+First, identify where your storage node data is currently located:
+
+{% tabs %}
+
+{% tab label="Windows GUI Install" %}
+
+**Identity folder**: `%APPDATA%\Storj\Identity\storagenode`
+**Orders folder**: `%ProgramFiles%\Storj\Storage Node\orders`
+**Storage data**: The path specified in your configuration
+
+**Find exact paths**:
+```powershell
+# Check configuration for data paths (Select-String takes a regex, so use | for alternation)
+Get-Content "C:\Program Files\Storj\Storage Node\config.yaml" | Select-String "storage-dir|identity-dir"
+```
+
+{% /tab %}
+
+{% tab label="Linux/macOS CLI Install" %}
+
+**Linux identity**: `~/.local/share/storj/identity/storagenode`
+**macOS identity**: `~/Library/Application Support/Storj/identity/storagenode`
+**Data location**: Specified in your Docker run command or configuration
+
+**Find exact paths**:
+```bash
+# Check your Docker run command or configuration
+docker inspect storagenode | grep -E "Source|Destination"
+```
+
+{% /tab %}
+
+{% /tabs %}
+
+## Migration methods
+
+Choose the migration method that matches your setup:
+
+## Method 1: Same-platform migration (recommended)
+
+This method works for migrating between systems with the same architecture (e.g., x86-64 to x86-64).
+
+### Step 1: Prepare the destination
+
+Set up the destination paths on your new system:
+
+```bash
+# Create destination directories (adjust paths as needed)
+mkdir -p /mnt/storj-new/identity
+mkdir -p /mnt/storj-new/storage
+mkdir -p /mnt/storj-new/orders
+```
+
+### Step 2: Copy identity files (critical first step)
+
+**Important**: Copy identity files first while your node is running:
+
+```bash
+# Copy identity (must be exact - any corruption causes disqualification)
+rsync -aP /source/identity/storagenode/ /mnt/storj-new/identity/
+```
+
+**Verify identity copy**:
+```bash
+# Compare file counts between source and destination
+find /source/identity/storagenode -type f | wc -l
+find /mnt/storj-new/identity -type f | wc -l
+
+# Counts should match exactly
+```
+
+### Step 3: Pre-copy orders and data (while node running)
+
+Start copying data while your node is still operational to minimize downtime:
+
+```bash
+# Copy orders folder
+rsync -aP /source/orders/ /mnt/storj-new/orders/
+
+# Copy storage data (this may take hours for large datasets)
+rsync -aP /source/storage/ /mnt/storj-new/storage/
+```
+
+### Step 4: Repeat sync to minimize differences
+
+Run the copy commands multiple times to reduce the amount of data to transfer during downtime:
+
+```bash
+# Repeat these commands until differences are minimal
+rsync -aP /source/orders/ /mnt/storj-new/orders/
+rsync -aP /source/storage/ /mnt/storj-new/storage/
+```
+
+### Step 5: Final migration (downtime required)
+
+When you're ready for the final migration:
+
+**Stop your storage node**:
+
+{% tabs %}
+
+{% tab label="CLI Install" %}
+
+```bash
+# Stop the container gracefully (allows up to 5 minutes for cleanup)
+docker stop -t 300 storagenode
+docker rm storagenode
+```
+
+{% /tab %}
+
+{% tab label="Windows GUI Install" %}
+
+```powershell
+# Stop the Windows service
+Stop-Service storagenode
+```
+
+{% /tab %}
+
+{% /tabs %}
+
+**Complete the final sync**:
+
+```bash
+# Final sync with --delete to ensure exact copy
+rsync -aP --delete /source/orders/ /mnt/storj-new/orders/
+rsync -aP --delete /source/storage/ /mnt/storj-new/storage/
+```
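+
+After the final sync completes, it's worth confirming that source and destination actually match before switching over. A minimal sketch (GNU coreutils assumed; totals should agree):
+
+```bash
+# Compare total bytes used by source and destination copies
+du -sB1 /source/storage /mnt/storj-new/storage
+
+# Compare file counts
+find /source/storage -type f | wc -l
+find /mnt/storj-new/storage -type f | wc -l
+```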
+**Copy configuration files**:
+
+```bash
+# Copy configuration and other important files
+cp /source/config.yaml /mnt/storj-new/
+cp /source/revocations.db /mnt/storj-new/
+
+# Preserve any other important files in your config directory
+```
+
+### Step 6: Update your configuration
+
+Update your node configuration to use the new paths:
+
+{% tabs %}
+
+{% tab label="CLI Install" %}
+
+Update your Docker run command to use the new mount points:
+
+```bash
+# Example updated docker run command
+docker run -d --restart unless-stopped \
+  --name storagenode \
+  -p 28967:28967/tcp \
+  -p 28967:28967/udp \
+  -p 14002:14002 \
+  -e WALLET="0xYourWalletAddress" \
+  -e EMAIL="your@email.com" \
+  -e ADDRESS="your.external.address:28967" \
+  -e STORAGE="2TB" \
+  --mount type=bind,source=/mnt/storj-new/identity,destination=/app/identity \
+  --mount type=bind,source=/mnt/storj-new,destination=/app/config \
+  storjlabs/storagenode:latest
+```
+
+**Important mount point notes**:
+- Use `/mnt/storj-new` as the config mount source (not `/mnt/storj-new/storage`)
+- The container automatically creates a `storage` subdirectory
+- Ensure your data is in `/mnt/storj-new/storage/` on the host
+
+{% /tab %}
+
+{% tab label="Windows GUI Install" %}
+
+Update the configuration file:
+
+```powershell
+# Edit the config file with new paths
+notepad++ "C:\Program Files\Storj\Storage Node\config.yaml"
+```
+
+Update paths in the configuration file (leave Windows paths unquoted, or escape backslashes as `\\` inside double quotes - YAML treats `\` as an escape character in double-quoted strings):
+```yaml
+storage-dir: C:\NewStorjLocation\storage
+identity-dir: C:\NewStorjLocation\identity
+```
+
+Start the service:
+```powershell
+Start-Service storagenode
+```
+
+{% /tab %}
+
+{% /tabs %}
+
+## Method 2: Cross-platform migration
+
+If migrating between different architectures (x86-64 to ARM, Windows to Linux, etc.):
+
+### Additional step: Remove platform-specific binaries
+
+Before starting your migrated node, remove old binaries:
+
+```bash
+# Remove binaries from the storage location
+rm -rf /mnt/storj-new/storage/bin/
+
+# The container will download appropriate binaries for the new platform
+```
+
+### Follow same process as Method 1
+
+Complete all other steps from Method 1, but include the binary removal step before starting your node on the new platform.
+
+## Verification
+
+After migration, verify your node is working correctly:
+
+### Check node startup
+
+Monitor logs during startup:
+
+{% tabs %}
+
+{% tab label="CLI Install" %}
+
+```bash
+# Follow logs in real-time
+docker logs storagenode -f
+
+# Look for successful startup messages and no errors about missing data
+```
+
+{% /tab %}
+
+{% tab label="Windows GUI Install" %}
+
+```powershell
+# Check recent logs
+Get-Content "C:\Program Files\Storj\Storage Node\logs\*" -Tail 50
+```
+
+{% /tab %}
+
+{% /tabs %}
+
+### Verify data integrity
+
+Confirm your data migrated correctly:
+
+1. **Check dashboard**: Access your node dashboard (usually `http://localhost:14002`)
+2. **Verify storage usage**: Should match your previous usage amounts
+3. **Monitor for errors**: Watch for any data corruption or missing file errors
+4.
**Check reputation**: Your reputation scores should remain unchanged + +### Monitor for issues + +Watch your node for the first 24-48 hours after migration: + +- No disqualification warnings +- Normal audit success rates +- Proper connectivity to all satellites +- Expected payout calculations + +## Troubleshooting + +**Node starts but shows empty storage**: +- Verify the mount paths in your Docker run command +- Ensure data is located in the correct subdirectories +- Check file permissions on the new location + +**Identity-related errors**: +- Verify identity files copied completely and without corruption +- Check that identity directory permissions allow reading +- Ensure no extra or missing files in identity directory + +**Performance issues after migration**: +- Verify the new storage location has adequate I/O performance +- Check network connectivity between node and satellites +- Monitor system resource usage (CPU, memory, disk I/O) + +**Database errors**: +- Ensure all database files copied completely +- Verify database files are not corrupted (compare file sizes) +- Check that storage location has adequate free space + +## Important warnings + +**Critical identity preservation**: Any corruption or modification of identity files will result in immediate disqualification. Always verify identity files copied perfectly. + +**Avoid network storage**: Network-attached storage can cause performance issues and potential disqualification due to latency and reliability concerns. + +**Don't rush the process**: Take time to verify each step. A failed migration can result in permanent disqualification and loss of earnings. + +**Test with a backup**: If possible, test the migration process with a copy of your data before migrating your production node. + +## Next steps + +After successful migration: + +- [Set up monitoring for your node](#) to track performance +- [Optimize node configuration](#) for your new environment +- [Plan for future backups](#) of your node data +- [Consider disaster recovery](#) planning for your infrastructure + +For additional migration scenarios, see: +- [Migrate between Windows installations](#) +- [Migrate from CLI to GUI installation](#) +- [Set up redundant storage configurations](#) \ No newline at end of file diff --git a/app/(docs)/node/how-to/troubleshoot-offline-node.md b/app/(docs)/node/how-to/troubleshoot-offline-node.md new file mode 100644 index 000000000..6614384e9 --- /dev/null +++ b/app/(docs)/node/how-to/troubleshoot-offline-node.md @@ -0,0 +1,364 @@ +--- +title: How to troubleshoot an offline node +docId: troubleshoot-offline-node-how-to +metadata: + title: How to Troubleshoot Storage Node Offline Issues + description: Step-by-step guide to diagnose and fix storage node connectivity issues when your node appears offline or unreachable. +--- + +This guide helps you diagnose and resolve issues when your storage node appears offline or unreachable to the Storj network. 
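+
+Before working through the full checklist, two quick checks often narrow down the problem class immediately. A sketch - the API field names can vary between node versions, and `jq` must be installed:
+
+```bash
+# 1. Does the node itself think it's healthy? (local dashboard API)
+curl -s http://localhost:14002/api/sno | jq '.lastPinged, .upToDate'
+
+# 2. Is the node reachable from outside? (run from a machine on another network)
+nc -vz your.external.address 28967
+```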
+ +## Prerequisites + +Before troubleshooting, ensure you have: + +- Access to your storage node system and configuration +- Administrative privileges on your router/firewall +- Basic understanding of port forwarding concepts +- Your node's external address and port information + +## Identify the problem + +**Signs your node is offline**: +- Email notifications about node being offline +- Dashboard warnings about connectivity issues +- Low audit success rates or failed audits +- Reduced earnings or payout warnings + +**Common causes**: +- Port forwarding issues +- Firewall blocking connections +- Dynamic IP address changes +- Node configuration errors +- Internet connectivity problems + +## Step-by-step troubleshooting + +Follow these steps in order to diagnose and fix offline issues: + +### Step 1: Verify node identity + +Ensure your node identity is intact and valid: + +```bash +# For CLI installations - check identity files exist +ls -la /path/to/identity/storagenode/ + +# Should show files like: ca.cert, identity.cert, ca.key, identity.key +# If any files are missing or corrupted, your node will be offline +``` + +**For Windows GUI installations**: +```powershell +# Check identity folder contents +Get-ChildItem "$env:APPDATA\Storj\Identity\storagenode" +``` + +**If identity files are missing**: You cannot recover - this results in permanent disqualification. You'll need to create a new node with a new identity. + +### Step 2: Check port forwarding configuration + +Verify your router forwards the correct port to your node: + +**Required port forwarding**: +- **Port**: 28967 +- **Protocol**: Both TCP and UDP +- **Destination**: Internal IP of your node system +- **External IP**: Should match your public IP + +**Test port forwarding**: + +1. **Find your public IP**: + ```bash + curl ifconfig.me + ``` + +2. **Test port accessibility**: + - Visit [https://www.yougetsignal.com/tools/open-ports/](https://www.yougetsignal.com/tools/open-ports/) + - Enter your public IP and port 28967 + - Click "Check" - should show "Open" if working correctly + +### Step 3: Verify external address configuration + +Check that your node is configured with the correct external address: + +{% tabs %} + +{% tab label="CLI Install (Docker)" %} + +**Check your Docker run command**: +```bash +# View your container configuration +docker inspect storagenode | grep -A5 -B5 ADDRESS + +# Should show something like: +# "ADDRESS=your.external.address:28967" +``` + +**If ADDRESS is incorrect, update it**: +```bash +# Stop and remove container +docker stop -t 300 storagenode +docker rm storagenode + +# Restart with correct ADDRESS +docker run -d --restart unless-stopped \ + --name storagenode \ + -p 28967:28967/tcp \ + -p 28967:28967/udp \ + -p 14002:14002 \ + -e ADDRESS="your.correct.external.address:28967" \ + # ... other parameters +``` + +{% /tab %} + +{% tab label="Windows GUI Install" %} + +**Check configuration file**: +```powershell +# View current external address setting +Get-Content "C:\Program Files\Storj\Storage Node\config.yaml" | Select-String "external-address" +``` + +**Update if incorrect**: +1. Stop the service: `Stop-Service storagenode` +2. Edit config with Notepad++: `notepad++ "C:\Program Files\Storj\Storage Node\config.yaml"` +3. Update the line: + ```yaml + contact.external-address: your.correct.external.address:28967 + ``` +4. 
Save and restart: `Start-Service storagenode`
+
+{% /tab %}
+
+{% /tabs %}
+
+### Step 4: Handle dynamic IP addresses
+
+If your internet connection has a dynamic IP that changes:
+
+**Set up Dynamic DNS (DDNS)**:
+
+1. **Register with a DDNS provider** (e.g., [NoIP](https://www.noip.com/), DynDNS)
+2. **Create a domain** (e.g., `mynode.ddns.net`)
+3. **Configure automatic updates**:
+
+   **Option A: Router configuration**:
+   - Access router admin panel
+   - Find DDNS section
+   - Enter your DDNS provider credentials
+   - Enable automatic IP updates
+
+   **Option B: Client software**:
+   - Download provider's update client (e.g., NoIP DUC)
+   - Configure with your credentials
+   - Install and run on your node system
+
+4. **Update node configuration** to use your DDNS domain instead of IP address
+
+**Important**: Only use ONE update method (router OR client software), not both.
+
+### Step 5: Configure firewall rules
+
+Ensure your firewall allows storage node traffic:
+
+{% tabs %}
+
+{% tab label="Windows Firewall" %}
+
+**Add inbound rule**:
+```powershell
+# Allow inbound traffic on port 28967
+New-NetFirewallRule -DisplayName "Storj Node Inbound" -Direction Inbound -Protocol TCP -LocalPort 28967 -Action Allow
+New-NetFirewallRule -DisplayName "Storj Node Inbound UDP" -Direction Inbound -Protocol UDP -LocalPort 28967 -Action Allow
+```
+
+**Add outbound rule** (if you have restrictive outbound rules):
+```powershell
+# Allow outbound traffic from storage node
+New-NetFirewallRule -DisplayName "Storj Node Outbound" -Direction Outbound -Action Allow
+```
+
+{% /tab %}
+
+{% tab label="Linux Firewall (UFW)" %}
+
+**Allow required ports**:
+```bash
+# Allow inbound traffic on port 28967
+sudo ufw allow 28967/tcp
+sudo ufw allow 28967/udp
+
+# Reload firewall
+sudo ufw reload
+```
+
+{% /tab %}
+
+{% tab label="Linux Firewall (iptables)" %}
+
+**Add rules**:
+```bash
+# Allow inbound traffic
+sudo iptables -A INPUT -p tcp --dport 28967 -j ACCEPT
+sudo iptables -A INPUT -p udp --dport 28967 -j ACCEPT
+
+# Allow outbound traffic (if you have restrictive rules)
+sudo iptables -A OUTPUT -j ACCEPT
+
+# Save rules (method varies by distribution; run the redirect as root,
+# since "sudo iptables-save > file" would redirect as the unprivileged user)
+sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
+```
+
+{% /tab %}
+
+{% /tabs %}
+
+### Step 6: Test connectivity
+
+After making changes, test your node's connectivity:
+
+**Check dashboard**:
+1. Access your node dashboard (usually `http://localhost:14002`)
+2. Look for connectivity status indicators
+3. Check for error messages or warnings
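+
+Since the node listens on both TCP and UDP (QUIC) on port 28967, it helps to probe each protocol separately. A sketch using netcat - run it from a machine outside your network, and note that UDP probes are often inconclusive because UDP is connectionless:
+
+```bash
+# TCP probe - should report success if forwarding works
+nc -vz your.external.address 28967
+
+# UDP probe - "open|filtered" style results are common even when working
+nc -vzu your.external.address 28967
+```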
+
+**Monitor logs**:
+
+{% tabs %}
+
+{% tab label="CLI Install" %}
+
+```bash
+# Follow logs in real-time
+docker logs storagenode -f
+
+# Look for connection success/failure messages
+# Should see successful communication with satellites
+```
+
+{% /tab %}
+
+{% tab label="Windows GUI Install" %}
+
+```powershell
+# Check recent logs (Select-String takes a regex, so use | for alternation)
+Get-Content "C:\Program Files\Storj\Storage Node\logs\*" -Tail 100 | Select-String "error|offline|connection"
+```
+
+{% /tab %}
+
+{% /tabs %}
+
+**Use external tools**:
+```bash
+# Test from external system (if available)
+telnet your.external.address 28967
+
+# Should connect successfully
+```
+
+## Verification checklist
+
+After troubleshooting, verify these items are correct:
+
+- [ ] **Identity files**: Present and intact
+- [ ] **Port forwarding**: 28967 TCP+UDP forwarded to correct internal IP
+- [ ] **External address**: Correct IP/domain and port in node configuration
+- [ ] **DDNS**: Configured and updating if using dynamic IP
+- [ ] **Firewall**: Allows inbound traffic on port 28967
+- [ ] **Router firewall**: Not blocking the storage node traffic
+- [ ] **Network connectivity**: Node can reach the internet
+- [ ] **Dashboard**: Shows node as online and connected
+
+## Common issues and solutions
+
+**Port still shows closed after forwarding**:
+- Verify internal IP hasn't changed (DHCP reassignment)
+- Check router has correct port forwarding syntax
+- Some routers require reboot after port forwarding changes
+- Verify no double-NAT situation (router behind another router)
+
+**Node works intermittently**:
+- Usually indicates dynamic IP issues
+- Set up DDNS as described above
+- Consider static IP from ISP if available
+
+**Firewall software blocks despite rules**:
+- Some antivirus software includes firewalls that override system settings
+- Check antivirus software firewall settings
+- Consider temporarily disabling to test (remember to re-enable)
+
+**ISP blocks or throttles traffic**:
+- Some ISPs block or limit certain ports
+- Contact ISP to verify no restrictions on port 28967
+- Consider using a VPN as a workaround (though this may impact performance)
+
+**Double-NAT situation**:
+- Occurs when your router is behind another router/modem
+- Both devices need port forwarding configuration
+- Consider setting upstream device to bridge mode if possible
+
+## Advanced troubleshooting
+
+If basic steps don't resolve the issue:
+
+**Check for IP conflicts**:
+```bash
+# Verify no other device uses same internal IP (-sn = ping scan)
+nmap -sn your.network.range.0/24
+```
+
+**Test from different networks**:
+- Use mobile hotspot to test external connectivity
+- Helps identify ISP-specific issues
+
+**Check satellite connectivity**:
+```bash
+# Test connectivity to known Storj satellites (example address)
+ping satellite.address.storj.io
+```
+
+**Review detailed logs**:
+```bash
+# Enable debug logging (if supported in your version)
+# Look for specific error patterns
+```
+
+## When to seek help
+
+Contact support if:
+
+- You've followed all steps but node remains offline
+- Your ISP confirms no restrictions but connectivity fails
+- Hardware appears to be failing
+- You suspect account or identity issues
+
+**Provide this information when seeking help**:
+- Your node ID
+- External IP address and port
+- Router make/model
+- Operating system details
+- Relevant log entries showing errors
+- Results of port forwarding tests
+
+## Prevention tips
+
+To avoid future offline issues:
+
+- Set up monitoring alerts for your node
status +- Use DDNS from the start if you have dynamic IP +- Document your port forwarding configuration +- Regularly backup your identity and configuration files +- Monitor router firmware updates that might reset configurations +- Consider uninterruptible power supply (UPS) for stability + +## Next steps + +Once your node is back online: + +- [Monitor node performance](#) to ensure stable operation +- [Set up automated monitoring](#) to detect future issues quickly +- [Optimize node configuration](#) for better reliability +- [Plan backup strategies](#) to prevent data loss \ No newline at end of file diff --git a/app/(docs)/node/reference/_meta.json b/app/(docs)/node/reference/_meta.json new file mode 100644 index 000000000..873d257d0 --- /dev/null +++ b/app/(docs)/node/reference/_meta.json @@ -0,0 +1,5 @@ +{ + "configuration": "Configuration", + "dashboard-metrics": "Dashboard Metrics", + "system-requirements": "System Requirements" +} \ No newline at end of file diff --git a/app/(docs)/node/reference/configuration.md b/app/(docs)/node/reference/configuration.md new file mode 100644 index 000000000..e04206f2d --- /dev/null +++ b/app/(docs)/node/reference/configuration.md @@ -0,0 +1,310 @@ +--- +title: "Storage Node Configuration Reference" +docId: "node-config-ref-001" +metadata: + title: "Storage Node Configuration Reference" + description: "Complete reference for Storage Node configuration parameters, config.yaml options, and environment variables." +--- + +Complete reference for Storage Node configuration options and parameters. + +## Configuration File Location + +### Docker Installation +- **Path**: `$HOME/storj/storagenode/config.yaml` +- **Mount**: Bound to `/app/config/config.yaml` in container + +### Native Installation + +#### Linux +- **Path**: `~/.local/share/storj/storagenode/config.yaml` +- **System**: `/etc/storj/storagenode/config.yaml` + +#### Windows +- **Path**: `C:\Program Files\Storj\Storage Node\config.yaml` +- **User**: `%APPDATA%\Storj\Storage Node\config.yaml` + +#### macOS +- **Path**: `~/Library/Application Support/storj/storagenode/config.yaml` + +## Core Configuration Parameters + +### Identity and Network + +| Parameter | Type | Description | Example | +|-----------|------|-------------|---------| +| `identity.cert-path` | string | Identity certificate path | `/app/identity/identity.cert` | +| `identity.key-path` | string | Identity private key path | `/app/identity/identity.key` | +| `server.address` | string | External address for node | `your-ddns-hostname:28967` | +| `server.private-address` | string | Internal listening address | `0.0.0.0:28967` | + +### Storage Configuration + +| Parameter | Type | Description | Default | +|-----------|------|-------------|---------| +| `storage.allocated-disk-space` | string | Total allocated space | `1TB` | +| `storage2.allocated-disk-space` | string | Storage v2 allocated space | `1TB` | +| `storage.path` | string | Data storage directory path | `/app/config` | +| `storage2.path` | string | Storage v2 data path | `/app/config/storage2` | + +### Bandwidth Allocation + +| Parameter | Type | Description | Default | +|-----------|------|-------------|---------| +| `storage.allocated-bandwidth` | string | Monthly bandwidth limit | `2TB` | +| `server.revocation-dburl` | string | Revocation database path | `bolt://path/to/revocations.db` | + +### Satellite Configuration + +| Parameter | Type | Description | +|-----------|------|-------------| +| `contact.external-address` | string | Node's external contact address | +| 
`storage2.trust.sources` | array | Trusted satellite URLs | + +### Database Settings + +| Parameter | Type | Description | Default | +|-----------|------|-------------|---------| +| `pieces.database-url` | string | Pieces database connection | `bolt://path/to/piecestore.db` | +| `filestore.write-buffer-size` | string | Write buffer size | `128KB` | +| `storage2.database-url` | string | Storage v2 database URL | `bolt://path/to/storage2.db` | + +### Network and Performance + +| Parameter | Type | Description | Default | +|-----------|------|-------------|---------| +| `server.use-peer-ca-whitelist` | boolean | Use peer CA whitelist | `true` | +| `console.address` | string | Web dashboard address | `127.0.0.1:14002` | +| `console.static-dir` | string | Web assets directory | `/app/static` | + +## Docker Configuration Examples + +### Basic Docker Command +```bash +docker run -d --restart unless-stopped \ + --stop-timeout 300 \ + -p 28967:28967/tcp \ + -p 28967:28967/udp \ + -p 14002:14002 \ + --name storagenode \ + --user $(id -u):$(id -g) \ + --mount type=bind,source=$HOME/storj/identity/storagenode,destination=/app/identity \ + --mount type=bind,source=$HOME/storj/storagenode,destination=/app/config \ + -e WALLET="your-wallet-address" \ + -e EMAIL="your-email@example.com" \ + -e ADDRESS="your-ddns-hostname:28967" \ + -e STORAGE="2TB" \ + storjlabs/storagenode:latest +``` + +### Docker Compose Configuration +```yaml +version: '3.8' +services: + storagenode: + image: storjlabs/storagenode:latest + container_name: storagenode + restart: unless-stopped + stop_grace_period: 300s + ports: + - "28967:28967/tcp" + - "28967:28967/udp" + - "14002:14002" + volumes: + - /home/user/storj/identity/storagenode:/app/identity + - /home/user/storj/storagenode:/app/config + environment: + - WALLET=your-wallet-address + - EMAIL=your-email@example.com + - ADDRESS=your-ddns-hostname:28967 + - STORAGE=2TB + - BANDWIDTH=2TB + user: "${UID}:${GID}" +``` + +## Environment Variables + +### Required Variables + +| Variable | Description | Example | +|----------|-------------|---------| +| `WALLET` | Ethereum wallet address for payments | `0x1234567890abcdef...` | +| `EMAIL` | Contact email address | `operator@example.com` | +| `ADDRESS` | External node address | `node.example.com:28967` | +| `STORAGE` | Allocated disk space | `2TB`, `500GB` | + +### Optional Variables + +| Variable | Description | Default | +|----------|-------------|---------| +| `BANDWIDTH` | Monthly bandwidth allocation | `2TB` | +| `LOG_LEVEL` | Logging verbosity | `info` | +| `STORAGE2_TRUST_SOURCES` | Comma-separated satellite URLs | Default satellites | + +## Advanced Configuration + +### Custom Satellite Configuration +```yaml +storage2: + trust: + sources: + - "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFHpkmw2GT1RtLUod@satellite.example.com:7777" + exclusions: + sources: [] + cache-url: "trust://path/to/trust-cache.json" +``` + +### Database Tuning +```yaml +pieces: + database-url: "postgres://user:pass@localhost/storagenode?sslmode=disable" + +storage2: + database-url: "postgres://user:pass@localhost/storage2?sslmode=disable" + +# OR for SQLite with custom settings +pieces: + database-url: "sqlite3://path/to/piecestore.db?cache=shared&mode=rwc&_journal_mode=WAL" +``` + +### Performance Tuning +```yaml +filestore: + write-buffer-size: 256KB + force-sync: true + +storage2: + monitor: + minimum-disk-space: 500MB + minimum-bandwidth: 1MB +``` + +## Network Configuration + +### Port Configuration + +| Port | Protocol | Purpose | Required | 
+|------|----------|---------|----------| +| `28967` | TCP/UDP | Storage node communication | Yes | +| `14002` | TCP | Web dashboard (local only) | Optional | + +### Firewall Rules + +#### Linux (iptables) +```bash +# Allow incoming storage node traffic +iptables -A INPUT -p tcp --dport 28967 -j ACCEPT +iptables -A INPUT -p udp --dport 28967 -j ACCEPT + +# Allow outgoing traffic +iptables -A OUTPUT -p tcp --dport 28967 -j ACCEPT +iptables -A OUTPUT -p udp --dport 28967 -j ACCEPT +``` + +#### Router/Firewall Configuration +- **External Port**: 28967 (TCP/UDP) +- **Internal Port**: 28967 (TCP/UDP) +- **Protocol**: Both TCP and UDP required +- **Direction**: Bidirectional + +## Logging Configuration + +### Log Levels + +| Level | Description | Use Case | +|-------|-------------|----------| +| `debug` | Very verbose output | Development/troubleshooting | +| `info` | General information | Normal operation | +| `warn` | Warning messages | Monitoring issues | +| `error` | Error messages only | Production minimal | + +### Log Configuration +```yaml +log: + level: info + output: stdout + caller: false + stack: false + encoding: console +``` + +### Docker Logging +```bash +# View logs +docker logs storagenode + +# Follow logs +docker logs -f storagenode + +# View specific number of lines +docker logs --tail 100 storagenode +``` + +## Health Check Configuration + +### Built-in Health Checks +```yaml +console: + address: 127.0.0.1:14002 + +healthcheck: + enabled: true + interval: 30s + timeout: 10s +``` + +### External Monitoring +```bash +# Health check endpoint +curl http://localhost:14002/api/sno + +# Satellite status +curl http://localhost:14002/api/sno/satellites +``` + +## Configuration Validation + +### Syntax Check +```bash +# Docker validation +docker run --rm -v $HOME/storj/storagenode:/app/config \ + storjlabs/storagenode:latest --config-dir /app/config --help + +# Native installation +storagenode --config-dir ~/.local/share/storj/storagenode --help +``` + +### Common Configuration Errors + +| Error | Cause | Solution | +|-------|-------|----------| +| Identity verification failed | Wrong identity path | Check identity.cert-path and identity.key-path | +| Address not reachable | Firewall/NAT issues | Configure port forwarding | +| Disk space unavailable | Insufficient storage | Increase allocated-disk-space or free up space | +| Database corruption | Improper shutdown | Restore from backup or rebuild | + +## Migration and Backup + +### Configuration Backup +```bash +# Backup entire config directory +tar -czf storagenode-config-backup-$(date +%Y%m%d).tar.gz \ + -C $HOME/storj storagenode/ + +# Backup just configuration file +cp $HOME/storj/storagenode/config.yaml \ + $HOME/storj/storagenode/config.yaml.backup +``` + +### Configuration Migration +```bash +# Copy to new location +rsync -av $HOME/storj/storagenode/ /new/path/storagenode/ + +# Update docker mount points +docker run ... \ + --mount type=bind,source=/new/path/storagenode,destination=/app/config \ + ... +``` + +This reference covers all major Storage Node configuration options. For specific deployment scenarios, refer to the installation guides for your platform. 
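+
+As a quick sanity check after editing any of these parameters, you can compare the space you've allocated against what the filesystem actually has available. A minimal sketch (Docker-style default paths assumed; adjust to your install):
+
+```bash
+# What the node is configured to use
+grep "allocated-disk-space" $HOME/storj/storagenode/config.yaml
+
+# What the filesystem can actually provide
+df -h $HOME/storj/storagenode
+```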
\ No newline at end of file diff --git a/app/(docs)/node/reference/dashboard-metrics.md b/app/(docs)/node/reference/dashboard-metrics.md new file mode 100644 index 000000000..3b9899df7 --- /dev/null +++ b/app/(docs)/node/reference/dashboard-metrics.md @@ -0,0 +1,275 @@ +--- +title: "Dashboard Metrics Reference" +docId: "node-dashboard-ref-001" +metadata: + title: "Storage Node Dashboard Metrics Reference" + description: "Complete reference for all Storage Node dashboard metrics, monitoring data, and performance indicators." +--- + +Complete reference for Storage Node dashboard metrics and monitoring information. + +## Accessing the Dashboard + +### Local Dashboard +- **URL**: `http://localhost:14002` (default) +- **Access**: Local machine only (for security) +- **Port**: Configurable in `config.yaml` (`console.address`) + +### External Dashboard Access + +For remote monitoring, use SSH tunneling: +```bash +# SSH tunnel to access remote node dashboard +ssh -L 14002:localhost:14002 user@your-node-server +# Then access http://localhost:14002 locally +``` + +## Overview Metrics + +### Node Status Indicators + +| Metric | Description | Values | +|--------|-------------|--------| +| **Node Status** | Overall node health | Online, Offline, Disqualified | +| **Uptime** | Time since node started | Hours, days | +| **Last Ping** | Last successful satellite ping | Timestamp | +| **Node Version** | Storage node software version | e.g., `v1.95.1` | + +### Suspension and Disqualification + +| Status | Description | Impact | +|--------|-------------|--------| +| **Good Standing** | Node operating normally | Full participation | +| **Suspended** | Temporary suspension from satellite | No new data, existing data served | +| **Disqualified** | Permanent removal from satellite | Data deleted, no participation | + +## Storage Metrics + +### Disk Usage + +| Metric | Description | Calculation | +|--------|-------------|-------------| +| **Used Space** | Currently stored data | Sum of all piece sizes | +| **Available Space** | Remaining allocated space | Allocated - Used | +| **Allocated Space** | Total space allocated to node | From configuration | +| **Trash** | Data marked for deletion | Pending garbage collection | + +### Storage Breakdown by Satellite + +| Field | Description | +|-------|-------------| +| **Satellite ID** | Unique satellite identifier | +| **Data Stored** | Amount of data from this satellite | +| **Percentage** | Portion of total storage from satellite | + +## Bandwidth Metrics + +### Current Period (Monthly) + +| Metric | Description | Reset Period | +|--------|-------------|--------------| +| **Ingress** | Data uploaded to node | Monthly (satellite billing cycle) | +| **Egress** | Data downloaded from node | Monthly | +| **Total Bandwidth** | Ingress + Egress | Monthly | +| **Remaining Bandwidth** | Allocated - Used | Monthly | + +### Bandwidth by Satellite + +| Field | Description | +|-------|-------------| +| **Satellite** | Satellite name/ID | +| **Ingress** | Upload traffic from satellite | +| **Egress** | Download traffic to satellite | +| **Total** | Combined satellite bandwidth | + +## Earnings Metrics + +### Current Month + +| Metric | Description | Currency | +|--------|-------------|----------| +| **Estimated Earnings** | Projected month earnings | STORJ tokens | +| **Disk Space Compensation** | Payment for storage | STORJ tokens | +| **Bandwidth Compensation** | Payment for traffic | STORJ tokens | +| **Payout Address** | Wallet receiving payments | Ethereum address | + +### 
Earnings History + +| Field | Description | +|-------|-------------| +| **Month** | Billing period | +| **Disk Average** | Average monthly disk usage | +| **Bandwidth** | Total monthly bandwidth | +| **Payout** | Amount paid | +| **Receipt** | Payment transaction ID | + +## Reputation Metrics + +### Audit Scores + +| Metric | Range | Description | +|--------|-------|-------------| +| **Audit Score** | 0-100% | Success rate for audit requests | +| **Suspension Score** | 0-100% | Threshold: <60% triggers suspension | +| **Disqualification Score** | 0-100% | Threshold: <60% triggers disqualification | + +### Online Score + +| Metric | Range | Description | +|--------|-------|-------------| +| **Online Score** | 0-100% | Node availability percentage | +| **Downtime Events** | Count | Number of offline periods | +| **Last Offline** | Timestamp | Most recent offline event | + +### Uptime Tracking + +| Field | Description | +|-------|-------------| +| **Current Uptime** | Continuous online time | +| **Today** | Uptime percentage for current day | +| **This Month** | Uptime percentage for current month | +| **All Time** | Historical uptime average | + +## Satellite-Specific Metrics + +### Per-Satellite Data + +| Satellite | Description | +|-----------|-------------| +| **12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFHpkmw2GT1RtLUod** | US Central | +| **12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs** | Europe North | +| **1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE** | Asia Pacific | + +### Satellite Metrics + +| Metric | Description | +|--------|-------------| +| **Node Age** | Time since first contact with satellite | +| **Vetted Status** | Whether node is vetted (trusted) | +| **Joined Date** | When node first connected | +| **Data Stored** | Current data volume | +| **Audit Success Rate** | Historical audit performance | + +## System Performance + +### Resource Utilization + +| Metric | Description | Units | +|--------|-------------|-------| +| **CPU Usage** | Processor utilization | Percentage | +| **Memory Usage** | RAM consumption | MB/GB | +| **Disk I/O** | Read/write operations | IOPS | +| **Network I/O** | Network throughput | Mbps | + +### Database Metrics + +| Metric | Description | +|--------|-------------| +| **Pieces Database Size** | Piece metadata database size | +| **Info Database Size** | Node information database size | +| **Database Queries** | Query performance metrics | + +## Notifications and Alerts + +### Dashboard Notifications + +| Type | Description | Action Required | +|------|-------------|----------------| +| **Version Update** | New software version available | Update recommended | +| **Low Disk Space** | Storage nearly full | Free up space or increase allocation | +| **Suspension Warning** | Reputation score declining | Investigate connectivity/performance | +| **Payment Info** | Payout information | Check wallet address | + +### Health Indicators + +| Indicator | Status | Meaning | +|-----------|--------|---------| +| 🟢 Green | Healthy | All systems normal | +| 🟡 Yellow | Warning | Attention needed | +| 🔴 Red | Critical | Immediate action required | + +## API Endpoints for Monitoring + +### Dashboard API + +| Endpoint | Description | Response | +|----------|-------------|----------| +| `/api/sno` | Node overview data | JSON summary | +| `/api/sno/satellites` | Satellite-specific data | JSON per satellite | +| `/api/sno/estimated-payouts` | Earnings estimates | JSON payout data | + +### Monitoring Script Example + +```bash +#!/bin/bash +# Basic node health check 
+response=$(curl -s http://localhost:14002/api/sno) +status=$(echo $response | jq -r '.status') + +if [ "$status" = "online" ]; then + echo "Node is healthy" +else + echo "Node issue detected: $status" +fi +``` + +## Historical Data Tracking + +### Data Retention + +| Metric | Retention Period | Purpose | +|--------|------------------|---------| +| **Bandwidth** | 12 months | Payout calculation | +| **Storage** | 12 months | Trend analysis | +| **Audit Results** | Permanent | Reputation tracking | +| **Uptime** | 12 months | Performance monitoring | + +### Export Options + +Dashboard data can be extracted via: +- **API endpoints** - Real-time data +- **Log files** - Historical events +- **Database queries** - Direct data access + +## Performance Optimization + +### Key Metrics to Monitor + +1. **Audit Success Rate** - Should stay >95% +2. **Online Score** - Should stay >98% +3. **Bandwidth Utilization** - Higher is better +4. **Storage Growth** - Indicates network demand + +### Warning Thresholds + +| Metric | Warning | Critical | +|--------|---------|----------| +| **Audit Score** | <85% | <60% | +| **Online Score** | <95% | <90% | +| **Disk Free** | <10% | <5% | +| **Version Behind** | >1 version | >3 versions | + +## Troubleshooting Dashboard Issues + +### Dashboard Not Accessible + +1. **Check port binding**: + ```bash + netstat -tulnp | grep 14002 + ``` + +2. **Verify configuration**: + ```yaml + console: + address: 127.0.0.1:14002 + ``` + +3. **Check firewall rules** (if accessing remotely) + +### Missing Data + +1. **Restart node** if metrics not updating +2. **Check database integrity** +3. **Verify satellite connectivity** + +This reference covers all dashboard metrics for effective Storage Node monitoring and management. Use these metrics to ensure optimal node performance and maximize earnings. \ No newline at end of file diff --git a/app/(docs)/node/reference/system-requirements.md b/app/(docs)/node/reference/system-requirements.md new file mode 100644 index 000000000..401efb2bd --- /dev/null +++ b/app/(docs)/node/reference/system-requirements.md @@ -0,0 +1,288 @@ +--- +title: "System Requirements Reference" +docId: "node-system-req-ref-001" +metadata: + title: "Storage Node System Requirements Reference" + description: "Complete reference for Storage Node hardware, software, and network requirements for optimal performance." +--- + +Complete reference for Storage Node system requirements and specifications. 
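+
+To compare a candidate machine against the tables below, you can gather the basic numbers in one pass. A Linux sketch (`speedtest-cli` is optional and must be installed separately):
+
+```bash
+# CPU cores, memory, and free disk space at a glance
+nproc
+free -h
+df -h /path/to/your/storage
+
+# Rough bandwidth check
+speedtest-cli --simple
+```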
+ +## Hardware Requirements + +### Minimum Requirements + +| Component | Requirement | Notes | +|-----------|-------------|-------| +| **CPU** | 1 core, 1 GHz | ARM or x86_64 | +| **RAM** | 1 GB | Minimum for basic operation | +| **Storage** | 500 GB available | Dedicated to Storj (not OS) | +| **Network** | 1 Mbps up/down | Sustained bandwidth | + +### Recommended Requirements + +| Component | Requirement | Benefit | +|-----------|-------------|---------| +| **CPU** | 2+ cores, 2+ GHz | Better concurrent processing | +| **RAM** | 4+ GB | Improved caching and performance | +| **Storage** | 2+ TB available | Higher earning potential | +| **Network** | 10+ Mbps up/down | Faster data transfers | + +### Optimal Performance Configuration + +| Component | Specification | Purpose | +|-----------|---------------|---------| +| **CPU** | 4+ cores, modern architecture | Handle multiple satellite operations | +| **RAM** | 8+ GB | Large cache for frequently accessed data | +| **Storage** | 8+ TB, SSD or NVMe | Maximum storage capacity and speed | +| **Network** | 50+ Mbps symmetric | High-throughput data transfers | + +## Storage Requirements + +### Storage Types + +| Type | Performance | Reliability | Cost | Recommendation | +|------|-------------|-------------|------|----------------| +| **HDD (7200 RPM)** | Good | Good | Low | ✅ Recommended for most setups | +| **SSD (SATA)** | Excellent | Excellent | Medium | ⭐ Optimal for performance | +| **NVMe SSD** | Outstanding | Excellent | High | 🚀 Best performance | +| **USB/External** | Poor | Variable | Low | ❌ Not recommended | + +### Storage Considerations + +| Factor | Requirement | Rationale | +|--------|-------------|-----------| +| **Dedicated Drive** | Highly recommended | Prevents OS disk space conflicts | +| **File System** | NTFS, ext4, XFS | Reliable journaling file systems | +| **Available Space** | 90% of drive or less | Leave space for metadata and growth | +| **SMART Monitoring** | Essential | Early failure detection | + +### Storage Allocation Guidelines + +``` +Example for 2TB drive: +├── OS/System: 100 GB (separate drive preferred) +├── Storj Data: 1,800 GB (allocated to node) +└── Free Space: 100 GB (buffer for operations) +``` + +## Network Requirements + +### Internet Connection + +| Requirement | Specification | Purpose | +|-------------|---------------|---------| +| **Connection Type** | Residential/Business | Stable, always-on connection | +| **Bandwidth** | 1+ Mbps sustained | Handle storage/retrieval requests | +| **Data Cap** | Unlimited preferred | Monthly bandwidth usage varies | +| **Latency** | <100ms to satellites | Responsive to network requests | + +### Network Specifications + +| Metric | Minimum | Recommended | Optimal | +|--------|---------|-------------|---------| +| **Upload Speed** | 1 Mbps | 5 Mbps | 25+ Mbps | +| **Download Speed** | 1 Mbps | 5 Mbps | 25+ Mbps | +| **Monthly Data** | 2+ TB | 5+ TB | Unlimited | +| **Uptime** | 95% | 98% | 99.5%+ | + +### Port and Protocol Requirements + +| Protocol | Port | Direction | Purpose | +|----------|------|-----------|---------| +| **TCP** | 28967 | Inbound/Outbound | Primary communication | +| **UDP** | 28967 | Inbound/Outbound | QUIC protocol | +| **HTTP** | 14002 | Localhost only | Dashboard access | + +## Operating System Support + +### Linux Distributions + +| Distribution | Version | Support Level | Installation Method | +|--------------|---------|---------------|-------------------| +| **Ubuntu** | 18.04+ | Full | Docker, Native | +| **Debian** | 10+ | Full | 
Docker, Native | +| **CentOS/RHEL** | 7+ | Full | Docker, Native | +| **Fedora** | 30+ | Full | Docker | +| **openSUSE** | 15+ | Full | Docker | +| **Arch Linux** | Latest | Community | Docker | + +### Other Operating Systems + +| OS | Support | Method | Notes | +|----|---------|--------|-------| +| **Windows** | Full | Native installer, Docker | Windows 10/11, Server 2016+ | +| **macOS** | Limited | Docker | Intel and Apple Silicon | +| **FreeBSD** | Community | Docker/Ports | Limited testing | +| **Synology DSM** | Community | Docker | Package available | +| **QNAP** | Community | Docker | Container station | + +## Virtualization and Containers + +### Docker Requirements + +| Component | Requirement | Notes | +|-----------|-------------|-------| +| **Docker Version** | 19.03+ | Supports required features | +| **Docker Compose** | 1.25+ | For compose deployments | +| **Host OS** | Linux, Windows, macOS | Docker Desktop or native | +| **Container Runtime** | Docker or compatible | Podman, containerd support | + +### Virtual Machine Specifications + +| Resource | Minimum | Recommended | +|----------|---------|-------------| +| **vCPU** | 1 core | 2+ cores | +| **vRAM** | 1 GB | 4+ GB | +| **vDisk** | 500 GB | 2+ TB | +| **Network** | Bridged mode | Direct external access | + +### Hardware Pass-through + +| Component | Benefit | Requirement | +|-----------|---------|-------------| +| **Disk Pass-through** | Better performance | Direct disk access | +| **Network Pass-through** | Lower latency | Dedicated network interface | +| **CPU Pinning** | Consistent performance | Multi-core host system | + +## Network Architecture + +### Home Network Setup + +``` +Internet -> Router -> Storage Node + ↓ + Port Forward + 28967 TCP/UDP +``` + +### Advanced Network Configuration + +``` +Internet -> Firewall -> DMZ -> Storage Node + ↓ + Dedicated VLAN + QoS Priority +``` + +### Dynamic DNS Requirements + +| Scenario | Solution | Purpose | +|----------|----------|---------| +| **Dynamic IP** | DDNS service | Maintain consistent address | +| **Multiple Nodes** | Subdomain per node | Unique addressing | +| **IPv6** | DDNS with AAAA records | Future-proofing | + +## Platform-Specific Requirements + +### Raspberry Pi + +| Model | Minimum | Recommended | Notes | +|-------|---------|-------------|-------| +| **Pi 3B+** | Marginal | Not recommended | Limited performance | +| **Pi 4 (4GB)** | Workable | Entry level | USB 3.0 for storage | +| **Pi 4 (8GB)** | Good | Recommended | Better caching | + +**Pi-Specific Considerations:** +- Use USB 3.0 SSD for storage +- Ensure adequate power supply (3A+) +- Monitor CPU temperature +- Use quality SD card for OS + +### Synology NAS + +| Series | Support | Method | +|--------|---------|--------| +| **Plus Series** | Full | Docker package | +| **Value Series** | Limited | Manual Docker setup | +| **J Series** | Not recommended | Insufficient resources | + +**NAS-Specific Requirements:** +- DSM 6.0+ with Docker support +- Dedicated volume for Storj data +- SSH access for advanced configuration + +### QNAP NAS + +| Architecture | Support | Notes | +|--------------|---------|-------| +| **x86_64** | Full | Container Station | +| **ARM** | Limited | Performance considerations | + +## Power and Environmental + +### Power Requirements + +| Component | Consumption | Annual Cost* | +|-----------|-------------|-------------| +| **Raspberry Pi 4** | 15W | $13 | +| **Mini PC** | 30W | $26 | +| **Desktop PC** | 100W | $88 | +| **Server** | 300W | $263 | + +*Based on $0.10/kWh electricity 
rate + +### Environmental Considerations + +| Factor | Requirement | Rationale | +|--------|-------------|-----------| +| **Temperature** | 10-35°C (50-95°F) | Component longevity | +| **Humidity** | 20-80% RH | Prevent corrosion | +| **Ventilation** | Adequate airflow | Heat dissipation | +| **Power Stability** | UPS recommended | Prevent data corruption | + +## Security Requirements + +### Network Security + +| Component | Requirement | Purpose | +|-----------|-------------|---------| +| **Firewall** | Port 28967 only | Limit attack surface | +| **Router Security** | WPA3, strong passwords | Secure network access | +| **VPN** | For remote management | Secure administrative access | + +### System Security + +| Component | Requirement | Purpose | +|-----------|-------------|---------| +| **OS Updates** | Regular patching | Security vulnerabilities | +| **User Accounts** | Non-root operation | Principle of least privilege | +| **File Permissions** | Proper ownership | Data protection | + +## Performance Monitoring + +### Key Metrics to Monitor + +| Metric | Tool | Threshold | +|--------|------|-----------| +| **CPU Usage** | htop, Task Manager | <80% sustained | +| **RAM Usage** | free, Task Manager | <90% | +| **Disk I/O** | iostat, Performance Monitor | <80% utilization | +| **Network Usage** | iftop, Resource Monitor | Within bandwidth limits | + +### Monitoring Tools + +| Platform | Tools | Purpose | +|----------|-------|---------| +| **Linux** | htop, iotop, iftop | Real-time monitoring | +| **Windows** | Task Manager, perfmon | System monitoring | +| **Cross-platform** | Grafana, Prometheus | Advanced monitoring | + +## Upgrade Considerations + +### Hardware Upgrade Path + +1. **RAM** - Easy upgrade, immediate benefit +2. **Storage** - Add drives or upgrade to SSD +3. **Network** - Faster internet connection +4. **CPU** - Usually requires new system + +### Capacity Planning + +| Growth Rate | Hardware Planning | +|-------------|------------------| +| **Monthly** | Monitor storage usage trends | +| **Quarterly** | Evaluate performance metrics | +| **Annually** | Plan major upgrades | + +This reference ensures your Storage Node meets all requirements for reliable operation and optimal earnings potential. Regularly review system performance and upgrade components as needed. \ No newline at end of file diff --git a/app/(docs)/node/tutorials/_meta.json b/app/(docs)/node/tutorials/_meta.json new file mode 100644 index 000000000..689149d1f --- /dev/null +++ b/app/(docs)/node/tutorials/_meta.json @@ -0,0 +1,9 @@ +{ + "title": "Tutorials", + "nav": [ + { + "title": "Setup your first node", + "id": "setup-first-node" + } + ] +} \ No newline at end of file diff --git a/app/(docs)/node/tutorials/setup-first-node.md b/app/(docs)/node/tutorials/setup-first-node.md new file mode 100644 index 000000000..6b77c6865 --- /dev/null +++ b/app/(docs)/node/tutorials/setup-first-node.md @@ -0,0 +1,649 @@ +--- +title: Setup your first node +docId: setup-first-storage-node +metadata: + title: Setup Your First Storage Node Tutorial + description: Complete 60-minute tutorial to set up your first Storj storage node from start to finish with step-by-step instructions. +--- + +This comprehensive tutorial walks you through setting up your first Storj storage node from start to finish. By the end, you'll have a running node that earns STORJ tokens for providing storage and bandwidth to the network. 
+
+## What you'll build
+
+In this hands-on tutorial, you'll:
+
+- Set up the necessary hardware and network infrastructure
+- Create a unique node identity for network participation
+- Configure port forwarding and firewall settings
+- Install and configure the storage node software
+- Connect to the Storj network and begin earning rewards
+- Set up monitoring and maintenance procedures
+
+**Expected time to complete**: 60-90 minutes
+
+## Prerequisites
+
+Before setting up your storage node, ensure you have:
+
+### Hardware requirements (minimum)
+- **CPU**: 1 processor core dedicated to your node
+- **Storage**: 500 GB available disk space (non-SMR hard drive recommended)
+- **RAM**: 2 GB available (4 GB recommended)
+- **Network**: Stable internet connection with minimum 1 Mbps upload, 3 Mbps download per TB capacity
+
+### Hardware requirements (recommended)
+- **CPU**: 1 processor core per TB of storage
+- **Storage**: 2 TB+ available space on a dedicated drive
+- **RAM**: 8 GB+ for optimal performance
+- **Network**: 3 Mbps upload, 5 Mbps download per TB capacity
+- **Uptime**: 99.5%+ monthly (maximum 3.6 hours downtime/month)
+
+### System requirements
+- **Operating System**: Linux (Ubuntu 18.04+, Debian 9+), Windows 10+, or macOS 10.15+
+- **Administrative privileges**: Ability to install software and configure network settings
+- **Router access**: Administrative access to configure port forwarding
+- **Static IP or DDNS**: Consistent external address (dynamic DNS acceptable)
+
+### Important considerations
+
+**Network setup**: Your node must sit behind a router/firewall; never connect it directly to the internet.
+
+**Power stability**: Consider a UPS (Uninterruptible Power Supply) if you experience frequent power outages.
+
+**Drive selection**: Avoid SMR drives, RAID 0, and network-attached storage for best performance.
+
+## Step 1: Assess your setup
+
+Before proceeding, verify your environment meets the requirements:
+
+### Check your internet connection
+
+Test your connection speed and stability:
+
+```bash
+# Test connection speed (install speedtest-cli via your package manager if needed)
+speedtest-cli
+
+# Test connection stability (run for several minutes)
+ping -c 100 8.8.8.8
+
+# Check your public IP
+curl ifconfig.me
+```
+
+**Expected outcome**: Your connection should meet the minimum bandwidth requirements with stable ping times.
+
+### Verify hardware compatibility
+
+**Check available disk space**:
+
+{% tabs %}
+
+{% tab label="Linux" %}
+
+```bash
+# Check disk space and filesystem
+df -h
+lsblk -f
+
+# Verify filesystem type (ext4 recommended for Linux)
+mount | grep "your-storage-drive"
+```
+
+{% /tab %}
+
+{% tab label="Windows" %}
+
+```powershell
+# Check disk space
+Get-WmiObject -Class Win32_LogicalDisk | Select-Object DeviceID,Size,FreeSpace
+
+# Check filesystem (NTFS recommended for Windows)
+Get-Volume
+```
+
+{% /tab %}
+
+{% /tabs %}
+
+**Expected outcome**: You should have adequate free space on a native filesystem (ext4 for Linux, NTFS for Windows).
+
+## Step 2: Configure network access
+
+Set up network infrastructure to make your node accessible from the internet:
+
+### Set up port forwarding
+
+Configure your router to forward port 28967 to your node system:
+
+1. **Find your internal IP address**:
+
+   {% tabs %}
+
+   {% tab label="Linux" %}
+   ```bash
+   ip addr show
+   # Look for your primary network interface IP
+   ```
+   {% /tab %}
+
+   {% tab label="Windows" %}
+   ```powershell
+   ipconfig /all
+   # Look for your primary network adapter IP
+   ```
+   {% /tab %}
+
+   {% /tabs %}
+
+2.
**Access router admin panel**:
+   - Open a web browser to your router's IP (usually 192.168.1.1 or 192.168.0.1)
+   - Log in with admin credentials
+
+3. **Configure port forwarding**:
+   - Navigate to the Port Forwarding or Virtual Servers section
+   - Add a new rule:
+     - **Service Name**: Storj Node
+     - **Port Range**: 28967-28967
+     - **Local IP**: Your computer's internal IP
+     - **Local Port**: 28967
+     - **Protocol**: Both TCP and UDP
+   - Save and apply settings
+
+### Configure dynamic DNS (if needed)
+
+If your ISP assigns dynamic IP addresses:
+
+1. **Sign up for a DDNS service** (e.g., NoIP.com)
+2. **Create a domain** (e.g., mynode.ddns.net)
+3. **Configure auto-update**:
+   - **Option A**: Configure in your router's DDNS settings
+   - **Option B**: Install DDNS client software on your system
+
+### Test port accessibility
+
+Verify your port forwarding works:
+
+1. **Get your public IP**: `curl ifconfig.me`
+2. **Test the port**: Visit an open-port checker such as [yougetsignal.com](https://www.yougetsignal.com/tools/open-ports/)
+3. **Check it**: Enter your public IP and port 28967
+4. **Verify the result**: It should show "Open"
+
+**Expected outcome**: Port 28967 should be accessible from the internet on both TCP and UDP.
+
+## Step 3: Configure firewall settings
+
+Ensure your system firewall allows storage node traffic:
+
+{% tabs %}
+
+{% tab label="Linux (UFW)" %}
+
+```bash
+# Allow storage node port
+sudo ufw allow 28967/tcp
+sudo ufw allow 28967/udp
+
+# Allow dashboard port (optional, for local access only)
+sudo ufw allow from 192.168.0.0/16 to any port 14002
+
+# Reload firewall
+sudo ufw reload
+
+# Check status
+sudo ufw status
+```
+
+{% /tab %}
+
+{% tab label="Windows Defender" %}
+
+```powershell
+# Allow inbound traffic on storage node port
+New-NetFirewallRule -DisplayName "Storj Node TCP" -Direction Inbound -Protocol TCP -LocalPort 28967 -Action Allow
+New-NetFirewallRule -DisplayName "Storj Node UDP" -Direction Inbound -Protocol UDP -LocalPort 28967 -Action Allow
+
+# Allow dashboard port for local access
+New-NetFirewallRule -DisplayName "Storj Dashboard" -Direction Inbound -Protocol TCP -LocalPort 14002 -Action Allow
+```
+
+{% /tab %}
+
+{% /tabs %}
+
+**Expected outcome**: Firewall should allow traffic on port 28967 and optionally 14002 for the dashboard.
+
+## Step 4: Create your node identity
+
+Generate a unique cryptographic identity for your storage node:
+
+{% tabs %}
+
+{% tab label="Linux" %}
+
+### Install identity creation tools
+
+```bash
+# Download identity creation binary
+curl -L https://github.com/storj/storj/releases/latest/download/identity_linux_amd64.zip -o identity_linux_amd64.zip
+unzip identity_linux_amd64.zip
+sudo mv identity /usr/local/bin/
+chmod +x /usr/local/bin/identity
+```
+
+### Create identity
+
+```bash
+# Create identity (this may take several hours)
+identity create storagenode
+
+# Check progress (in another terminal)
+identity status storagenode
+```
+
+**Identity creation time varies**:
+- Fast CPU: 2-8 hours
+- Slower CPU: 8-24+ hours
+- Raspberry Pi: 1-3+ days
+
+{% /tab %}
+
+{% tab label="Windows" %}
+
+### Download and install identity tools
+
+1. **Download**: Go to the [Storj releases page](https://github.com/storj/storj/releases)
+2. **Select**: `identity_windows_amd64.zip`
+3. **Extract**: To a folder like `C:\storj-identity\`
+4.
**Add to PATH**: Add the folder to your PATH, or use the full path in commands
+
+### Create identity
+
+```powershell
+# Open PowerShell as Administrator
+# Navigate to identity tool location
+cd C:\storj-identity\
+
+# Create identity (this may take several hours)
+.\identity.exe create storagenode
+
+# Check progress (in another PowerShell window)
+.\identity.exe status storagenode
+```
+
+{% /tab %}
+
+{% /tabs %}
+
+**Expected outcome**: After completion, you should have identity files in your identity directory. The process generates cryptographic keys that uniquely identify your node.
+
+**Important**: Never share or modify your identity files. Losing them means losing your node reputation permanently.
+
+## Step 5: Install storage node software
+
+Choose the installation method that works best for your system:
+
+{% tabs %}
+
+{% tab label="Linux CLI (Docker)" %}
+
+### Install Docker
+
+```bash
+# Update package index
+sudo apt update
+
+# Install Docker
+curl -fsSL https://get.docker.com -o get-docker.sh
+sudo sh get-docker.sh
+
+# Add user to docker group
+sudo usermod -aG docker $USER
+
+# Log out and back in, then verify
+docker --version
+```
+
+### Create storage directories
+
+```bash
+# Create directories for node data
+mkdir -p $HOME/storj/storagenode
+mkdir -p $HOME/storj/identity/storagenode
+
+# Copy identity files
+cp -r ~/.local/share/storj/identity/storagenode/* $HOME/storj/identity/storagenode/
+```
+
+### Run storage node
+
+```bash
+# Replace values with your actual information
+docker run -d --restart unless-stopped \
+  --name storagenode \
+  -p 28967:28967/tcp \
+  -p 28967:28967/udp \
+  -p 14002:14002 \
+  -e WALLET="0xYOUR_WALLET_ADDRESS_HERE" \
+  -e EMAIL="your-email@example.com" \
+  -e ADDRESS="your.ddns.domain:28967" \
+  -e STORAGE="2TB" \
+  --mount type=bind,source=$HOME/storj/identity/storagenode,destination=/app/identity \
+  --mount type=bind,source=$HOME/storj/storagenode,destination=/app/config \
+  storjlabs/storagenode:latest
+```
+
+{% /tab %}
+
+{% tab label="Windows GUI" %}
+
+### Download Windows installer
+
+1. **Download**: [Storage Node Windows Installer](https://github.com/storj/storj/releases)
+2. **Run installer**: `storagenode_windows_amd64.msi`
+3. **Follow wizard**: Accept defaults or customize installation path
+
+### Configure the node
+
+1. **Copy identity files**:
+   ```powershell
+   # Copy identity to the service's identity directory; on Windows the
+   # identity tool writes to %APPDATA%\Storj\Identity (adjust paths as needed)
+   Copy-Item -Recurse "$env:APPDATA\Storj\Identity\storagenode\*" "C:\Program Files\Storj\Storage Node\identity\"
+   ```
+
+2. **Edit configuration**:
+   ```powershell
+   # Edit config file (Notepad or your preferred editor)
+   notepad "C:\Program Files\Storj\Storage Node\config.yaml"
+   ```
+
+   **Update these values**:
+   ```yaml
+   operator:
+     email: "your-email@example.com"
+     wallet: "0xYOUR_WALLET_ADDRESS_HERE"
+   contact:
+     external-address: "your.ddns.domain:28967"
+   storage:
+     allocated-bandwidth: "2TB"
+     allocated-disk-space: "2TB"
+   ```
+
+3. **Start the service**:
+   ```powershell
+   # Start service
+   Start-Service storagenode
+
+   # Verify it's running
+   Get-Service storagenode
+   ```
+
+{% /tab %}
+
+{% /tabs %}
+
+**Replace these placeholders**:
+- `0xYOUR_WALLET_ADDRESS_HERE`: Your Ethereum wallet address for payments
+- `your-email@example.com`: Your contact email
+- `your.ddns.domain:28967`: Your external address (IP or domain + port)
+- `2TB`: Your desired storage allocation
+
+**Expected outcome**: Your storage node should start successfully and begin connecting to the Storj network.
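+
+Before moving on to the dashboard, a quick command-line sanity check can confirm the node is actually up. This is a minimal sketch for the Docker installation above (the container name `storagenode` matches the `docker run` command):
+
+```bash
+# Confirm the container is running and see its uptime and port mappings
+docker ps --filter name=storagenode
+
+# Show the most recent log lines; startup errors surface here first
+docker logs --tail 20 storagenode
+```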
+ +## Step 6: Verify node operation + +Confirm your storage node is working correctly: + +### Check the dashboard + +1. **Open browser**: Navigate to `http://localhost:14002` +2. **Review status**: Should show "Node" connected and stats +3. **Check connectivity**: All satellites should show green status + +### Monitor logs + +{% tabs %} + +{% tab label="Linux CLI" %} + +```bash +# View real-time logs +docker logs storagenode -f + +# Check for errors +docker logs storagenode 2>&1 | grep -i error + +# Look for successful startup messages +docker logs storagenode 2>&1 | grep -i "started" +``` + +{% /tab %} + +{% tab label="Windows GUI" %} + +```powershell +# View recent logs +Get-Content "C:\Program Files\Storj\Storage Node\logs\*" -Tail 50 + +# Follow logs in real-time (PowerShell 7+) +Get-Content "C:\Program Files\Storj\Storage Node\logs\*" -Wait +``` + +{% /tab %} + +{% /tabs %} + +**Good log messages to look for**: +- "Server started" or similar startup confirmation +- Successful connections to satellites +- No persistent error messages +- Initial storage and bandwidth allocations + +### Test external connectivity + +Verify your node is reachable from outside your network: + +```bash +# From another computer/network, test connectivity +telnet your.external.address 28967 + +# Should connect successfully +``` + +**Expected outcome**: Your node should be accessible externally, showing successful connections in logs and dashboard. + +## Step 7: Monitor initial operation + +During your node's first days of operation: + +### Understand the vetting process + +**New node behavior**: +- Initial uploads will be limited (vetting process) +- Storage usage grows slowly over first few months +- Earnings start small and increase over time +- Node reputation builds gradually through successful audits + +**Typical timeline**: +- **Days 1-7**: Very limited activity, system testing +- **Weeks 2-8**: Gradual increase in storage uploads +- **Months 2-12**: Continued growth, reputation building +- **After 12 months**: Full earning potential unlocked + +### Monitor key metrics + +**Daily checks** (first week): +- Node uptime and connectivity +- Log files for errors or warnings +- Dashboard showing satellite connections +- Gradual increase in storage usage + +**Weekly checks** (ongoing): +- Storage utilization trends +- Bandwidth usage patterns +- Audit success rates (should stay >95%) +- Payout predictions and actual earnings + +### Common first-week issues + +**Node appears offline**: +- Verify port forwarding configuration +- Check firewall settings +- Confirm external address is correct +- Test connectivity from external network + +**Low activity/earnings**: +- Normal for new nodes during vetting period +- Ensure node has consistent uptime +- Verify sufficient available storage space +- Be patient - growth takes time + +**Database or storage errors**: +- Check disk space and filesystem health +- Verify permissions on storage directories +- Monitor system resources (CPU, RAM, disk I/O) + +**Expected outcome**: Your node should show stable operation with gradually increasing activity over the first weeks. + +## Step 8: Set up ongoing maintenance + +Establish procedures to keep your node healthy long-term: + +### Automated monitoring + +Set up basic monitoring: + +{% tabs %} + +{% tab label="Linux" %} + +```bash +# Create monitoring script +cat > $HOME/check-storj-node.sh << 'EOF' +#!/bin/bash +# Check if container is running +if ! 
docker ps | grep -q storagenode; then
+    echo "ERROR: Storage node container not running"
+    # Add notification/restart logic here
+fi
+
+# Check disk space (quote paths and values to survive spaces/empty output)
+USAGE=$(df "$HOME/storj/storagenode" | tail -1 | awk '{print $5}' | sed 's/%//')
+if [ "$USAGE" -gt 90 ]; then
+    echo "WARNING: Storage directory $USAGE% full"
+fi
+EOF
+
+# Make executable
+chmod +x $HOME/check-storj-node.sh
+
+# Add to crontab (run every 5 minutes)
+(crontab -l 2>/dev/null; echo "*/5 * * * * $HOME/check-storj-node.sh") | crontab -
+```
+
+{% /tab %}
+
+{% tab label="Windows" %}
+
+```powershell
+# Create monitoring script
+@'
+# Check if service is running (also handles the service being absent)
+$service = Get-Service storagenode -ErrorAction SilentlyContinue
+if (-not $service -or $service.Status -ne "Running") {
+    Write-Host "ERROR: Storage node service not running"
+    # Add notification/restart logic here
+}
+
+# Check disk space
+$disk = Get-WmiObject -Class Win32_LogicalDisk | Where-Object { $_.DeviceID -eq "C:" }
+$usage = [math]::Round(($disk.Size - $disk.FreeSpace) / $disk.Size * 100, 2)
+if ($usage -gt 90) {
+    Write-Host "WARNING: Disk ${usage}% full"
+}
+'@ | Out-File -FilePath "$env:USERPROFILE\check-storj-node.ps1"
+
+# Set up scheduled task (run every 5 minutes)
+schtasks /create /tn "Storj Node Monitor" /tr "powershell.exe -File $env:USERPROFILE\check-storj-node.ps1" /sc minute /mo 5
+```
+
+{% /tab %}
+
+{% /tabs %}
+
+### Update procedures
+
+**Software updates**:
+- Storage node software updates automatically
+- Monitor for update announcements in the Storj community
+- Plan maintenance windows for major updates
+
+**System maintenance**:
+- Regular system updates and security patches
+- Periodic filesystem checks and optimization
+- Monitor and clean up log files
+- Back up identity files securely
+
+### Performance optimization
+
+As your node matures:
+
+**Storage optimization**:
+- Monitor disk I/O performance
+- Consider SSD caching for better performance
+- Ensure adequate free space (10%+ recommended)
+
+**Network optimization**:
+- Monitor bandwidth utilization
+- Optimize router QoS settings if needed
+- Consider a dedicated internet connection for large nodes
+
+**Expected outcome**: You should have monitoring and maintenance procedures in place to ensure long-term reliable operation.
+
+## What you've accomplished
+
+Congratulations! You've successfully set up your first Storj storage node. You now have:
+
+- A fully configured storage node connected to the Storj network
+- Proper network infrastructure with port forwarding and firewall rules
+- Monitoring systems to track node health and performance
+- Understanding of the vetting process and earnings timeline
+- Maintenance procedures for long-term operation
+
+## Understanding your node's journey
+
+**The vetting period**: New nodes are vetted separately by each satellite. Until your node passes enough successful audits (typically about a month per satellite), it receives only a small share of uploads; traffic then ramps up as your node proves itself.
+
+**Reputation building**: Your node builds reputation through successful audits, uptime, and reliable service. Better reputation leads to higher earnings.
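+
+For a command-line spot check of the same information, the storage node exposes a local dashboard API. This is a hedged sketch: it assumes the dashboard is listening on the default port 14002 and that `jq` is installed; the `/api/sno` path reflects current storagenode builds and may change between releases.
+
+```bash
+# Pretty-print the node's status JSON; check that "disqualified" and
+# "suspended" are null for every satellite (null is good)
+curl -s http://localhost:14002/api/sno | jq .
+```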
+
+**Earnings timeline**:
+- Months 1-3: 75% of earnings held back; months 4-6: 50%; months 7-9: 25%
+- Months 10-15: earnings paid in full, with no amount held
+- Month 15: half of the total held amount is returned (the remainder is returned after a graceful exit)
+- Full earning potential: Typically achieved after 12+ months of operation
+
+## What's next
+
+Now that your storage node is running:
+
+### Optimize your setup
+- [Monitor and optimize node performance](#)
+- [Set up advanced monitoring and alerting](#)
+- [Plan for scaling to multiple nodes](#)
+- [Implement backup and disaster recovery](#)
+
+### Join the community
+- [Join the Storj community forum](https://forum.storj.io) for support and updates
+- [Follow best practices](#) from experienced node operators
+- [Stay updated](#) on network changes and opportunities
+
+### Advanced topics
+- [Understanding storage node economics](#) - detailed earnings analysis
+- [Multi-node deployment strategies](#) - scaling your operation
+- [Hardware optimization](#) - improving performance and efficiency
+
+### Troubleshooting resources
+- [Troubleshoot offline node issues](../how-to/troubleshoot-offline-node)
+- [Migrate node to new hardware](../how-to/migrate-node)
+- [Change payout address](../how-to/change-payout-address)
+
+Your storage node is now contributing to the decentralized cloud and you're earning STORJ tokens for providing valuable storage and bandwidth resources to the network. Welcome to the Storj community!
\ No newline at end of file
diff --git a/app/(docs)/object-mount/concepts/object-mount-vs-filesystems.md b/app/(docs)/object-mount/concepts/object-mount-vs-filesystems.md
new file mode 100644
index 000000000..ea69d6bbe
--- /dev/null
+++ b/app/(docs)/object-mount/concepts/object-mount-vs-filesystems.md
@@ -0,0 +1,175 @@
+---
+title: Object Mount vs traditional filesystems
+docId: object-mount-vs-filesystems
+metadata:
+  title: Understanding Object Mount vs Traditional Filesystems
+  description: Conceptual explanation of how Object Mount bridges object storage and POSIX filesystems, with architecture and performance considerations.
+---
+
+Object Mount represents a fundamental shift in how applications access cloud storage by bridging the gap between POSIX filesystem expectations and object storage characteristics.
+
+## The traditional filesystem model
+
+Traditional applications expect filesystems to provide:
+
+- **Hierarchical structure**: Files organized in directories and subdirectories
+- **POSIX compliance**: Standard operations like open, read, write, close, and seek
+- **Metadata support**: Permissions, timestamps, ownership, and symbolic links
+- **Consistency guarantees**: Immediate visibility of changes across all processes
+- **Random access**: Ability to read or write any part of a file efficiently
+
+These expectations work well with local storage (HDDs, SSDs) and network filesystems (NFS, CIFS) but conflict with object storage design principles.
+
+## Object storage characteristics
+
+Object storage systems like Storj, Amazon S3, and Azure Blob Storage are designed for:
+
+- **Write-once, read-many patterns**: Optimized for immutable data
+- **High throughput**: Excellent for large file transfers and streaming
+- **Eventual consistency**: Changes may not be immediately visible everywhere
+- **Flat namespace**: Objects stored with keys, not hierarchical paths
+- **HTTP-based access**: REST APIs rather than POSIX system calls
+
+This fundamental mismatch means traditional applications cannot directly use object storage as if it were a local filesystem.
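+
+To make the flat-namespace point concrete, here is an illustrative sketch using the AWS CLI against any S3-compatible endpoint (the bucket and key names are invented for the example): what looks like a directory tree is really a set of flat keys sharing a prefix.
+
+```bash
+# "photos/2024/cat.jpg" is one flat key, not a file inside nested folders
+aws s3api list-objects-v2 --bucket example-bucket --query 'Contents[].Key'
+
+# Tools simulate a hierarchy by grouping keys on a delimiter
+aws s3api list-objects-v2 --bucket example-bucket --delimiter '/' \
+  --query 'CommonPrefixes[].Prefix'
+```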
+ +## How Object Mount solves the problem + +Object Mount acts as a translation layer that: + +### Intercepts system calls +- Uses `LD_PRELOAD` to intercept filesystem operations from applications +- Translates POSIX operations into object storage API calls +- Works with both dynamically and statically linked applications +- Requires no application modifications + +### Maps filesystem concepts to objects +- **Files** → Individual objects in the bucket +- **Directories** → Object key prefixes (simulated hierarchy) +- **Metadata** → Object metadata and special tracking objects +- **Permissions** → Cached and synchronized metadata + +### Provides performance optimization +- **Intelligent caching**: Predicts access patterns and caches data locally +- **Write-behind caching**: Buffers writes for optimal object storage interaction +- **Partial reads**: Downloads only needed portions of large files +- **Concurrent operations**: Parallelizes uploads and downloads + +## Architecture comparison + +### Traditional application + local storage +``` +Application → POSIX calls → Kernel VFS → Filesystem → Storage device +``` + +### Traditional application + Object Mount + object storage +``` +Application → POSIX calls → Object Mount interception → Object storage API → Cloud storage +``` + +### Object Mount modes + +**Direct Interception (CLI mode)**: +- Highest performance +- Full POSIX compatibility +- Works in containers and restricted environments +- Requires compatible applications + +**FUSE mode**: +- Broader application compatibility +- Standard mount interface +- Slightly higher overhead +- Requires FUSE kernel module + +**FlexMount (hybrid)**: +- Automatic fallback between modes +- Best of both approaches +- Optimal compatibility and performance + +## Performance characteristics + +### Object storage optimizations + +**Read performance**: +- First access: Download time from object storage +- Subsequent access: Cache speed (near-local performance) +- Large files: Streaming and partial download optimization + +**Write performance**: +- Small writes: Buffered and batched for efficiency +- Large writes: Direct streaming to object storage +- Metadata updates: Cached and synchronized + +**Memory usage**: +- Configurable cache size +- Intelligent eviction policies +- Minimal overhead for inactive files + +### When Object Mount excels + +- **Read-heavy workloads**: Excellent caching makes repeated reads very fast +- **Large file processing**: Streaming and partial access optimization +- **Development workflows**: Seamless access to cloud data +- **Container environments**: No privileged access requirements + +### When to consider alternatives + +- **Write-intensive workloads**: Consider [Object Mount Fusion](./object-mount-fusion) for hybrid storage +- **Real-time applications**: Network latency may impact performance +- **Small random I/O**: Traditional block storage may be more efficient + +## Consistency model + +Object Mount provides **NFS-equivalent consistency** guarantees: + +- **Single client**: All operations are immediately consistent +- **Multiple clients**: Eventually consistent with configurable sync intervals +- **Metadata operations**: Cached with refresh policies +- **File locking**: Supported through object metadata + +## Provider compatibility + +Object Mount works with any S3-compatible storage, but performance varies: + +**Fully tested providers**: +- Amazon S3, Azure Blob Storage, Google Cloud Storage +- Storj, Wasabi, MinIO, Oracle OCI +- NetApp StorageGRID, Dell ECS + 
+**Community-validated providers**: +- IBM Cloud Object Storage, Backblaze B2 +- DigitalOcean Spaces, Cloudflare R2 + +**Performance considerations by provider**: +- **Latency**: Geographic proximity affects response times +- **API compatibility**: Some providers have S3 API variations +- **Throughput limits**: Provider-specific bandwidth constraints +- **Cost structure**: Different pricing for operations and bandwidth + +## Security model + +Object Mount maintains security through: + +- **Credential isolation**: Applications never see object storage credentials +- **Access control**: Standard POSIX permissions enforced by Object Mount +- **Encryption**: Supports provider-side and client-side encryption +- **Audit trails**: Comprehensive logging of all operations + +## Use case suitability + +**Excellent fit**: +- Media processing and editing workflows +- Data analysis and machine learning pipelines +- Development and testing environments +- Backup and archival applications + +**Good fit with considerations**: +- Database workloads (consider Fusion for write-heavy scenarios) +- Web serving (cache configuration important) +- Collaborative editing (understand consistency implications) + +**May not be suitable**: +- Hard real-time applications requiring guaranteed latency +- Applications requiring strict POSIX lock semantics +- Workloads with extremely high small-write frequencies + +Understanding these fundamental concepts helps you make informed decisions about when and how to deploy Object Mount in your infrastructure. \ No newline at end of file diff --git a/app/(docs)/object-mount/linux/user-guides/page.md b/app/(docs)/object-mount/linux/user-guides/page.md index 07c8ade35..a80c38f2c 100644 --- a/app/(docs)/object-mount/linux/user-guides/page.md +++ b/app/(docs)/object-mount/linux/user-guides/page.md @@ -1,51 +1,133 @@ --- -title: User Guides +title: How-to Guides for Linux docId: ohs0ailohSh0Vie3 - metadata: - title: User Guides + title: Object Mount Linux How-to Guides description: - User Guides Overview + Practical guides for installing, configuring, and using Object Mount on Linux systems with step-by-step instructions. weight: 4 --- -Object Mount is a scalable, high-performance POSIX compatibility layer that lets you interact with files stored on object storage such as Amazon S3, Azure Blob Storage, Google Cloud Storage, or any S3-compatible object store hosted in the cloud or locally. +This section provides step-by-step guides for common Object Mount tasks on Linux. These practical guides help you achieve specific goals with clear instructions and troubleshooting tips. + +## Prerequisites + +Before following these guides, ensure you have: + +- A Linux system (Ubuntu 18.04+, Debian 9+, RHEL 7+, CentOS 7+, or compatible) +- Administrative privileges for installation tasks +- Basic command-line familiarity +- Object storage credentials (S3-compatible) for the provider you plan to use + +## Getting started + +If this is your first time using Object Mount: + +1. **Start here**: [Your first mount tutorial](../../../tutorials/your-first-mount) - Complete hands-on introduction +2. **Understand the concepts**: [Object Mount vs traditional filesystems](../../../concepts/object-mount-vs-filesystems) +3. 
**Choose your approach**: Review the guides below for your specific needs + +## Installation guides + +Choose the installation method that matches your environment: + +- [Install on Ubuntu/Debian](../installation/debian) - Package installation for APT-based systems +- [Install on RHEL/CentOS](../installation/redhat) - Package installation for RPM-based systems +- [Install generic Linux binary](../installation/glibc) - Universal installation method +- [Install in Alpine Linux](../installation/alpine) - Lightweight container-focused installation +- [Install on macOS](../../macos/installation) - Package installation for macOS systems +- [Install on Windows](../../windows/installation) - Package installation for Windows systems + +## Configuration guides + +Set up Object Mount for your specific object storage provider: -## The package +- [Configure credentials](./credentials) - Set up authentication for your object storage +- [Configure performance settings](./configuration) - Optimize for your workload +- [Set up advanced options](./extraopts) - Additional configuration parameters +- [Configure logging](./appendix) - Set up monitoring and debugging -Object Mount is Linux software: there's a Object Mount Command Line Interface (Object Mount CLI), `cuno`, providing the highest performance and most straightforward way to interact with object storage. This works through a user-mode library, `cuno.so`, which intercepts (both dynamic and static) applications using [LD_PRELOAD] functionality and fast dynamic binary instrumentation. +## Usage guides -Object Mount can also be used with our modified [FUSE] mount solution, [Object Mount on FUSE](./user-guides/basic#object-mount-on-fuse), providing wider support for applications where the [Object Mount CLI cannot be used](./user-guides/limitations#direct-interception). +Learn how to use Object Mount effectively: -To match the best performance with the widest support, consider the hybrid solution: [Object Mount FlexMount](./user-guides/basic#object-mount-flex-mount). +- [Basic operations](./basic) - Mount, unmount, and basic file operations +- [Advanced usage](./advanced) - Complex scenarios and optimization +- [Access patterns](./access) - Understanding performance characteristics +- [Troubleshooting common issues](./limitations) - Solve problems and understand limitations -Access credentials can also be optionally managed by Object Mount. +## Deployment guides -## Wide support for object storage providers +Deploy Object Mount in different environments: -Object Mount has native support for: +- [Container deployment](./k8s) - Use Object Mount in Docker and Kubernetes +- [Multi-user setup](./tips) - Configure for shared environments +- [Uninstall Object Mount](./uninstall) - Clean removal procedures -- [Amazon Web Services S3](https://aws.amazon.com/s3/) -- [Microsoft Azure Blob Storage](https://azure.microsoft.com/en-gb/services/storage/blobs/) -- [Google Cloud Storage](https://cloud.google.com/storage/) (however for best performance we currently recommend using S3 with their [S3 accesspoint](https://cloud.google.com/storage/docs/interoperability)) +## Provider compatibility -In theory, Object Mount supports any S3-compatible object storage provider. In practice, the "S3 API" implementation can have differences in behaviour between providers, so some additional configuration is sometimes necessary. Object Mount has been tested on: +Object Mount works with any S3-compatible storage provider. 
Performance and compatibility vary: +### Fully supported providers +These providers are regularly tested and validated: + +- [Amazon Web Services S3](https://aws.amazon.com/s3/) - Full compatibility and optimal performance +- [Microsoft Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/) - Complete S3 API support +- [Google Cloud Storage](https://cloud.google.com/storage/) - Use with S3 compatibility layer +- [Storj DCS](https://storj.io/) - Decentralized object storage with full compatibility - [Oracle Cloud Infrastructure Object Storage](https://www.oracle.com/cloud/storage/object-storage.html) -- [Storj](https://storj.io/) -- [Wasabi](https://wasabi.com/) -- [MinIO](https://min.io/) +- [Wasabi Hot Cloud Storage](https://wasabi.com/) - High-performance S3-compatible storage +- [MinIO](https://min.io/) - Self-hosted S3-compatible storage - [NetApp StorageGRID](https://www.netapp.com/data-storage/storagegrid/) -- [Dell ECS Object Storage](https://www.delltechnologies.com/en-gb/storage/ecs/index.htm) +- [Dell ECS Object Storage](https://www.delltechnologies.com/storage/ecs/) -The following providers have not yet been validated; however, users have reported success with: +### Community-validated providers +These providers work with Object Mount based on user reports: - [IBM Cloud Object Storage](https://www.ibm.com/cloud/object-storage) - [Backblaze B2 Cloud Storage](https://www.backblaze.com/cloud-storage) - [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces/) -- [Cloudflare R2](https://www.cloudflare.com/en-gb/developer-platform/r2/) +- [Cloudflare R2](https://www.cloudflare.com/developer-platform/r2/) - [Scality](https://www.scality.com/) - [DataDirect Networks (DDN) Storage](https://www.ddn.com) -[fuse]: https://www.kernel.org/doc/html/next/filesystems/fuse.html -[ld_preload]: https://man7.org/linux/man-pages/man8/ld.so.8.html +### Configuration considerations by provider + +**For best performance**: +- Configure appropriate endpoint URLs for your provider +- Set optimal chunk sizes and concurrent connection limits +- Use provider-specific regions or availability zones +- Consider provider bandwidth and operation limits + +**Provider-specific tips**: +- **Google Cloud Storage**: Use S3 interoperability mode for best performance +- **Azure Blob Storage**: Configure hot/cool tier access appropriately +- **Storj**: Benefits from higher concurrency settings due to distributed architecture +- **MinIO**: Optimal for on-premises and edge deployment scenarios + +## Verification and troubleshooting + +After setup, verify your configuration: + +1. **Test connectivity**: Ensure Object Mount can access your storage +2. **Performance validation**: Run benchmarks for your workload +3. **Monitor resources**: Check memory and CPU usage patterns +4. **Review logs**: Examine Object Mount operation logs + +Common issues and solutions: + +- **Mount failures**: Check credentials, endpoints, and network connectivity +- **Performance issues**: Review cache settings and provider-specific optimizations +- **Application compatibility**: Understand [limitations](./limitations) and workarounds +- **Resource usage**: Optimize [configuration](./configuration) for your environment + +## Getting help + +If you need assistance: + +1. Check the specific guide for your use case above +2. Review [troubleshooting guides](./limitations) for common issues +3. Search the [community forum](https://forum.storj.io) for similar problems +4. 
Contact support with detailed configuration and error information + +For conceptual understanding, see [Object Mount vs traditional filesystems](../../../concepts/object-mount-vs-filesystems) to understand how Object Mount bridges object storage and POSIX filesystems. diff --git a/app/(docs)/object-mount/reference/_meta.json b/app/(docs)/object-mount/reference/_meta.json new file mode 100644 index 000000000..32eb94390 --- /dev/null +++ b/app/(docs)/object-mount/reference/_meta.json @@ -0,0 +1,5 @@ +{ + "cli-reference": "CLI Reference", + "configuration": "Configuration", + "compatibility": "Compatibility" +} \ No newline at end of file diff --git a/app/(docs)/object-mount/reference/cli-reference.md b/app/(docs)/object-mount/reference/cli-reference.md new file mode 100644 index 000000000..12370a923 --- /dev/null +++ b/app/(docs)/object-mount/reference/cli-reference.md @@ -0,0 +1,266 @@ +--- +title: "Object Mount CLI Reference" +docId: "object-mount-cli-ref-001" +metadata: + title: "Object Mount CLI Commands Reference" + description: "Complete reference for Object Mount CLI commands, options, and usage patterns." +--- + +Complete reference for Object Mount CLI commands and options. + +## Core Commands + +### cuno + +Main command for Object Mount CLI operations. + +**Basic Usage:** +```bash +# Launch interactive Object Mount shell +cuno + +# Run single command with Object Mount +cuno run + +# Mount Object Mount FUSE filesystem +cuno mount [options] +``` + +### Command Modes + +#### Interactive Shell Mode +```bash +cuno +``` +Launches an interactive shell with Object Mount interception enabled. The shell prompt will show `(cuno)` to indicate Object Mount is active. + +**Supported shells:** `bash` and `zsh` (full tab completion and wildcard support) + +#### Single Command Execution +```bash +cuno run bash -c "ls s3://mybucket/" +cuno run python script.py +``` +Runs a single command with Object Mount interception enabled. + +#### FUSE Mount Mode +```bash +cuno mount ~/my-mount-point +cuno mount ~/my-mount-point --root s3://mybucket/ +``` +Creates a FUSE filesystem mount at the specified location. 
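+
+A typical session pairs the mount with an explicit unmount when finished. This is a minimal sketch (mount point and bucket are illustrative); `fusermount -u` is the standard FUSE unmount helper and also appears under Troubleshooting Commands below:
+
+```bash
+# Mount, verify the mount is registered, browse, then release it
+cuno mount ~/my-mount-point --root s3://mybucket/
+mount | grep cuno
+ls ~/my-mount-point
+fusermount -u ~/my-mount-point
+```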
+
+## Command Options
+
+### Global Options
+
+| Option | Description | Example |
+|--------|-------------|---------|
+| `-o <options>` | Specify configuration options | `cuno -o "uid=1000 gid=1000"` |
+| `--help` | Show help information | `cuno --help` |
+
+### Mount-Specific Options
+
+| Option | Description | Example |
+|--------|-------------|---------|
+| `--root <path>` | Set root directory for mount | `--root s3://mybucket/folder/` |
+| `--foreground` | Run mount in foreground | `cuno mount ~/mnt --foreground` |
+
+## Configuration Options (`CUNO_OPTIONS`)
+
+Set via environment variable or `-o` flag:
+
+```bash
+export CUNO_OPTIONS="uid=1000 gid=1000 filemode=0644"
+# OR
+cuno -o "uid=1000 gid=1000 filemode=0644"
+```
+
+### Core Options
+
+| Option | Type | Description | Default |
+|--------|------|-------------|---------|
+| `uid=<uid>` | integer | Set file owner UID | Current user |
+| `gid=<gid>` | integer | Set file owner GID | Current group |
+| `filemode=<mode>` | octal | Default file permissions | `0666` |
+| `dirmode=<mode>` | octal | Default directory permissions | `0777` |
+
+### Advanced Options
+
+| Option | Type | Description |
+|--------|------|-------------|
+| `+static` | flag | Enable static binary interception (default) |
+| `-static` | flag | Disable static binary interception |
+| `+uricompat[=apps]` | string | Enable URI compatibility for apps |
+| `cloudroot=<path>` | string | Custom cloud root path (default: `/cuno`) |
+| `cloudrootover` | flag | Override cloud root for FlexMount |
+
+### URI Compatibility (`+uricompat`)
+
+Enable URI handling override for specific applications:
+
+```bash
+# Default supported apps (automatic)
++uricompat  # rsync, ffmpeg, tar, samtools, igv, fastQC
+
+# Custom application list
++uricompat=myapp:otherapp
+
+# Conditional matching (app/arg_index/match_value)
++uricompat=java/2/app.jar:python/*/script.py
+```
+
+### Cloud Root Customization
+
+```bash
+# Custom cloud root
+export CUNO_OPTIONS="cloudroot=/my-storage"
+ls /my-storage/s3/mybucket/  # Instead of /cuno/s3/mybucket/
+```
+
+## Path Formats
+
+Object Mount supports multiple path formats for accessing cloud storage:
+
+### URI Format
+```bash
+s3://bucket/path/file.txt
+az://container/path/file.txt
+gs://bucket/path/file.txt
+```
+
+### Filesystem Format
+```bash
+/cuno/s3/bucket/path/file.txt
+/cuno/az/container/path/file.txt
+/cuno/gs/bucket/path/file.txt
+```
+
+### Custom Cloud Root
+```bash
+# With cloudroot=/storage
+/storage/s3/bucket/path/file.txt
+```
+
+## Credential Management
+
+### cuno creds
+
+Manage cloud storage credentials.
+ +```bash +# Pair bucket with credentials +cuno creds pair s3://mybucket + +# List paired credentials +cuno creds list + +# Remove credential pairing +cuno creds unpair s3://mybucket +``` + +## Access Modes + +### Direct Interception +- **Default mode** when using `cuno` or `cuno run` +- **Highest performance** access method +- **Works with:** Dynamically linked binaries +- **Limitations:** No support for SUID, Snap, AppImage, Flatpak applications + +### FUSE Mount +- **Compatibility mode** using `cuno mount` +- **Works with:** All application types including static binaries +- **Trade-off:** Lower performance than direct interception + +### FlexMount +- **Hybrid approach** combining both modes +- **Fallback strategy:** Direct interception with FUSE fallback +- **Setup:** Mount FUSE filesystem, then use with Object Mount CLI + +## Environment Variables + +| Variable | Description | Example | +|----------|-------------|---------| +| `CUNO_OPTIONS` | Configuration options | `"uid=1000 filemode=0644"` | +| `LD_PRELOAD` | Manual library preload | Set automatically by `cuno` | + +## Usage Examples + +### Basic File Operations +```bash +cuno +(cuno) $ ls s3://mybucket/ +(cuno) $ cp local-file.txt s3://mybucket/remote-file.txt +(cuno) $ cat s3://mybucket/data.csv | head -10 +``` + +### Application Integration +```bash +# Python script with cloud paths +cuno run python -c "import pandas as pd; df = pd.read_csv('s3://bucket/data.csv')" + +# rsync with cloud storage +cuno run rsync -av s3://source-bucket/ s3://dest-bucket/ + +# Media processing +cuno run ffmpeg -i s3://bucket/input.mp4 s3://bucket/output.mp4 +``` + +### FUSE Mount Usage +```bash +# Mount entire cloud storage +cuno mount ~/cloud-storage + +# Mount specific bucket +cuno mount ~/my-bucket --root s3://mybucket/ + +# Use mounted storage +ls ~/cloud-storage/s3/mybucket/ +cp ~/cloud-storage/s3/mybucket/file.txt . +``` + +## Troubleshooting Commands + +### Check Object Mount Status +```bash +# Verify Object Mount is active (should show (cuno) in prompt) +cuno +(cuno) $ echo $LD_PRELOAD # Should show cuno.so path +``` + +### Debug Mode +```bash +# Enable verbose logging +export CUNO_DEBUG=1 +cuno run ls s3://mybucket/ +``` + +### Mount Debugging +```bash +# Check mount status +mount | grep cuno + +# Unmount if needed +fusermount -u ~/mount-point + +# Foreground mode for debugging +cuno mount ~/test-mount --foreground +``` + +## Performance Tuning + +### Parallelism +Object Mount automatically optimizes for parallel operations. For custom tuning: + +```bash +# Set via application-specific options +cuno run rsync -av --progress s3://source/ s3://dest/ +``` + +### Memory Usage +```bash +# Adjust for large file operations +export CUNO_OPTIONS="cache_size=1GB" +``` + +This reference covers all major Object Mount CLI commands and configuration options. Use `cuno --help` for additional details on specific commands. \ No newline at end of file diff --git a/app/(docs)/object-mount/reference/compatibility.md b/app/(docs)/object-mount/reference/compatibility.md new file mode 100644 index 000000000..436b01c7f --- /dev/null +++ b/app/(docs)/object-mount/reference/compatibility.md @@ -0,0 +1,318 @@ +--- +title: "Compatibility Reference" +docId: "object-mount-compat-ref-001" +metadata: + title: "Object Mount Compatibility Reference" + description: "Complete reference for Object Mount compatibility with operating systems, applications, and cloud storage providers." +--- + +Complete reference for Object Mount compatibility across platforms and applications. 
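+
+Before diving into the tables, here is a quick way to check a Linux host against the kernel-level requirements listed below (standard Linux utilities; exact version strings vary by distribution):
+
+```bash
+# Compare these against the Kernel Requirements table below
+uname -r                    # Linux kernel version
+fusermount -V               # FUSE userspace version (needed for FUSE mount mode)
+ldd --version | head -n 1   # glibc version
+```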
+ +## Operating System Compatibility + +### Linux Distributions + +| Distribution | Versions | Support Level | Installation Method | +|--------------|----------|---------------|-------------------| +| **Ubuntu** | 18.04+ | Full | DEB package, binary | +| **Debian** | 10+ | Full | DEB package, binary | +| **CentOS/RHEL** | 7+ | Full | RPM package, binary | +| **Fedora** | 30+ | Full | RPM package, binary | +| **openSUSE** | 15+ | Full | RPM package, binary | +| **Alpine Linux** | 3.12+ | Full | Binary (musl) | +| **Arch Linux** | Latest | Full | Binary, AUR | + +### Architecture Support + +| Architecture | Support Level | Notes | +|--------------|---------------|-------| +| **x86_64** | Full | Primary support | +| **ARM64** | Full | Native ARM64 builds | +| **ARMv7** | Limited | Select distributions only | + +### Kernel Requirements + +| Component | Minimum Version | Recommended | +|-----------|----------------|-------------| +| **Linux Kernel** | 3.10+ | 4.0+ | +| **FUSE** | 2.6+ | 3.0+ | +| **glibc** | 2.17+ | 2.27+ | +| **musl** | 1.1.24+ | 1.2.0+ | + +## Application Compatibility + +### Fully Compatible Applications + +Applications that work seamlessly with Object Mount: + +#### Development Tools +- **Python** - All versions, pip, conda +- **Node.js** - npm, yarn, all frameworks +- **Java** - JVM applications, Maven, Gradle +- **Go** - go build, go mod +- **Rust** - cargo, rustc +- **Docker** - Container builds and runtime + +#### Data Processing +- **pandas** - DataFrame operations +- **NumPy** - Array operations +- **Apache Spark** - Distributed processing +- **Dask** - Parallel computing +- **Jupyter** - Notebook operations + +#### Media Processing +- **FFmpeg** - Video/audio transcoding +- **ImageMagick** - Image manipulation +- **Handbrake** - Video encoding +- **Blender** - 3D rendering (with setup) + +#### File Management +- **rsync** - File synchronization +- **tar** - Archive operations +- **zip/unzip** - Compression +- **find/grep** - File search + +### Limited Compatibility Applications + +Applications with known limitations: + +| Application Type | Limitation | Workaround | +|------------------|------------|------------| +| **SUID Binaries** | Security restrictions prevent interception | Use FUSE mount mode | +| **Snap Applications** | Sandboxing prevents LD_PRELOAD | Use FUSE mount, configure permissions | +| **AppImage** | Self-contained prevents interception | Use FUSE mount mode | +| **Flatpak** | Sandboxing restrictions | Use FUSE mount with portal access | +| **Static Binaries** | Limited interception capability | Enable with `+static` flag | + +### Database Compatibility + +| Database | Direct Support | Recommended Approach | +|----------|----------------|---------------------| +| **SQLite** | Yes | Direct file access | +| **PostgreSQL** | No | Use backup/restore workflows | +| **MySQL** | No | Use mysqldump to cloud storage | +| **MongoDB** | Partial | Use mongodump to cloud storage | + +## Cloud Storage Provider Support + +### Supported Providers + +| Provider | Protocol | Authentication | Features | +|----------|----------|----------------|----------| +| **Amazon S3** | `s3://` | Access keys, IAM roles | Full S3 API compatibility | +| **Microsoft Azure** | `az://` | Account keys, SAS tokens | Blob storage support | +| **Google Cloud** | `gs://` | Service accounts, OAuth | Cloud Storage API | +| **Storj DCS** | `s3://` | Access grants, S3 gateway | Native decentralized support | +| **MinIO** | `s3://` | Access keys | Self-hosted S3-compatible | +| **Wasabi** | 
`s3://` | Access keys | S3-compatible hot storage | + +### Authentication Methods + +#### S3-Compatible Providers +```bash +# AWS credentials file +~/.aws/credentials + +# Environment variables +export AWS_ACCESS_KEY_ID="your-key" +export AWS_SECRET_ACCESS_KEY="your-secret" + +# Storj access grants +cuno creds pair s3://bucket +``` + +#### Azure Blob Storage +```bash +# Connection string +export AZURE_STORAGE_CONNECTION_STRING="your-connection-string" + +# Account key +export AZURE_STORAGE_ACCOUNT="account-name" +export AZURE_STORAGE_KEY="account-key" +``` + +#### Google Cloud Storage +```bash +# Service account key +export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json" + +# gcloud authentication +gcloud auth application-default login +``` + +## Shell and Terminal Compatibility + +### Fully Supported Shells + +| Shell | Tab Completion | Wildcard Expansion | Prompt Indication | +|-------|----------------|-------------------|-------------------| +| **bash** | ✅ Full | ✅ Full | ✅ `(cuno)` prefix | +| **zsh** | ✅ Full | ✅ Full | ✅ `(cuno)` prefix | + +### Partially Supported Shells + +| Shell | Basic Usage | Limitations | +|-------|-------------|-------------| +| **fish** | ✅ Yes | No tab completion for cloud paths | +| **tcsh** | ✅ Yes | Limited wildcard support | +| **dash** | ✅ Yes | No advanced features | + +## Container and Virtualization + +### Docker Compatibility + +**Supported Scenarios:** +```bash +# Host-mounted Object Mount +docker run -v ~/cloud-storage:/data ubuntu ls /data/s3/bucket/ + +# Object Mount inside container +docker run -it --privileged ubuntu +# Install Object Mount inside container +``` + +**Known Limitations:** +- Requires `--privileged` for full functionality +- FUSE support needed in container + +### Kubernetes Compatibility + +**CSI Driver Support:** +- Object Mount can be integrated as CSI driver +- Supports persistent volumes backed by cloud storage +- Requires cluster-level FUSE support + +**Pod-level Usage:** +```yaml +apiVersion: v1 +kind: Pod +spec: + containers: + - name: app + securityContext: + privileged: true # Required for FUSE + volumeMounts: + - name: cloud-storage + mountPath: /mnt/cloud +``` + +## Programming Language Integration + +### Python +```python +import pandas as pd + +# Direct file path usage +df = pd.read_csv('s3://bucket/data.csv') +df.to_parquet('s3://bucket/output.parquet') +``` + +**Libraries with confirmed compatibility:** +- pandas, NumPy, SciPy +- scikit-learn, TensorFlow, PyTorch +- Pillow, OpenCV, matplotlib +- boto3 (when using FUSE paths) + +### R +```r +# Direct file operations +data <- read.csv('s3://bucket/data.csv') +write.csv(data, 's3://bucket/output.csv') +``` + +### Node.js +```javascript +const fs = require('fs'); + +// Direct file system operations +const data = fs.readFileSync('s3://bucket/config.json'); +fs.writeFileSync('s3://bucket/output.json', JSON.stringify(result)); +``` + +## Performance Characteristics + +### Access Method Performance + +| Method | Throughput | Latency | CPU Usage | Memory Usage | +|--------|------------|---------|-----------|-------------| +| **Direct Interception** | Highest | Lowest | Low | Low | +| **FUSE Mount** | Moderate | Moderate | Moderate | Moderate | +| **FlexMount** | High | Low | Low-Moderate | Low-Moderate | + +### File Operation Performance + +| Operation | Direct | FUSE | Notes | +|-----------|--------|------|-------| +| **Sequential Read** | Excellent | Good | Optimized for streaming | +| **Random Read** | Good | Fair | Caching helps small reads | +| 
**Sequential Write** | Excellent | Good | Buffered writes | +| **Random Write** | Good | Fair | May trigger uploads | +| **Metadata Operations** | Excellent | Good | Cached when possible | + +## Known Issues and Limitations + +### General Limitations + +1. **Symbolic Links** - Limited cross-boundary support +2. **Hard Links** - Not supported across cloud boundaries +3. **File Locking** - Advisory locking only +4. **Extended Attributes** - Limited support +5. **Special Files** - No device files, named pipes, sockets + +### Platform-Specific Issues + +#### Linux +- SELinux may require policy adjustments +- AppArmor profiles may need modification +- systemd services require special configuration + +#### Container Environments +- Requires privileged mode for full FUSE support +- Some container runtimes limit LD_PRELOAD usage +- Networking policies may affect cloud storage access + +## Troubleshooting Compatibility + +### Application Not Working + +1. **Check interception status:** + ```bash + echo $LD_PRELOAD # Should show cuno.so + ``` + +2. **Try FUSE mode:** + ```bash + cuno mount ~/cloud-storage + # Use ~/cloud-storage/s3/bucket/ instead of s3://bucket/ + ``` + +3. **Enable compatibility flags:** + ```bash + export CUNO_OPTIONS="+uricompat=myapp" + ``` + +### Permission Issues + +1. **Set appropriate ownership:** + ```bash + export CUNO_OPTIONS="uid=$(id -u) gid=$(id -g)" + ``` + +2. **Adjust file permissions:** + ```bash + export CUNO_OPTIONS="filemode=0664 dirmode=0775" + ``` + +### Container Issues + +1. **Enable privileged mode:** + ```bash + docker run --privileged myapp + ``` + +2. **Mount host Object Mount:** + ```bash + docker run -v ~/cloud:/mnt/cloud myapp + ``` + +This compatibility reference helps determine the best approach for your specific environment and applications. For unlisted applications, test with FUSE mode first, then try direct interception with appropriate compatibility flags. \ No newline at end of file diff --git a/app/(docs)/object-mount/reference/configuration.md b/app/(docs)/object-mount/reference/configuration.md new file mode 100644 index 000000000..524c24c05 --- /dev/null +++ b/app/(docs)/object-mount/reference/configuration.md @@ -0,0 +1,289 @@ +--- +title: "Configuration Reference" +docId: "object-mount-config-ref-001" +metadata: + title: "Object Mount Configuration Reference" + description: "Complete reference for Object Mount configuration parameters, environment variables, and advanced settings." +--- + +Complete reference for Object Mount configuration options and parameters. + +## Configuration Methods + +Object Mount can be configured through: + +1. **Environment Variables** - `CUNO_OPTIONS` +2. **Command Line Options** - `cuno -o "option=value"` +3. 
**Configuration Files** - Platform-specific locations
+
+## Environment Variables
+
+### Primary Configuration
+
+| Variable | Description | Example |
+|----------|-------------|---------|
+| `CUNO_OPTIONS` | Main configuration options string | `"uid=1000 gid=1000 filemode=0644"` |
+| `CUNO_DEBUG` | Enable debug output | `1` or `true` |
+| `LD_PRELOAD` | Library preload (set automatically) | `/path/to/cuno.so` |
+
+### Usage Examples
+
+```bash
+# Single option
+export CUNO_OPTIONS="uid=1000"
+
+# Multiple options (space-separated, quoted)
+export CUNO_OPTIONS="uid=1000 gid=1000 filemode=0644"
+
+# Debug mode
+export CUNO_DEBUG=1
+```
+
+## Core Configuration Options
+
+### File System Permissions
+
+| Option | Type | Default | Description |
+|--------|------|---------|-------------|
+| `uid=<uid>` | integer | current user | File owner user ID |
+| `gid=<gid>` | integer | current group | File owner group ID |
+| `filemode=<mode>` | octal | `0666` | Default file permissions |
+| `dirmode=<mode>` | octal | `0777` | Default directory permissions |
+
+**Examples:**
+```bash
+# Set all files to be owned by root with read-only permissions
+export CUNO_OPTIONS="uid=0 gid=0 filemode=0444 dirmode=0555"
+
+# Web server configuration
+export CUNO_OPTIONS="uid=33 gid=33 filemode=0664 dirmode=0775"
+```
+
+### Static Binary Handling
+
+| Option | Type | Default | Description |
+|--------|------|---------|-------------|
+| `+static` | flag | enabled | Enable static binary interception |
+| `-static` | flag | disabled | Disable static binary interception |
+
+**Usage:**
+```bash
+# Disable static binary support
+export CUNO_OPTIONS="-static"
+
+# Explicitly enable (default behavior)
+export CUNO_OPTIONS="+static"
+```
+
+### Cloud Root Configuration
+
+| Option | Type | Default | Description |
+|--------|------|---------|-------------|
+| `cloudroot=<path>` | string | `/cuno` | Custom cloud storage root path |
+| `cloudrootover` | flag | disabled | Override cloud root for FlexMount |
+
+**Examples:**
+```bash
+# Custom cloud root
+export CUNO_OPTIONS="cloudroot=/my-cloud-storage"
+# Access: /my-cloud-storage/s3/bucket/
+
+# FlexMount override
+export CUNO_OPTIONS="cloudrootover cloudroot=/home/user/mount"
+```
+
+### URI Compatibility
+
+| Option | Format | Description |
+|--------|--------|-------------|
+| `+uricompat` | flag | Enable default URI overrides |
+| `+uricompat=<apps>` | string | Enable for specific applications |
+| `+uricompat=<app>/<arg_index>/<match_value>` | string | Conditional URI override |
+
+**Default Supported Applications:**
+- `rsync`
+- `ffmpeg`
+- `tar`
+- `samtools`
+- `igv`
+- `fastQC`
+
+**Custom Application Examples:**
+```bash
+# Enable for custom applications
+export CUNO_OPTIONS="+uricompat=myapp:otherapp"
+
+# Java application with specific JAR
+export CUNO_OPTIONS="+uricompat=java/2/myapp.jar"
+
+# Python script with conditional matching
+export CUNO_OPTIONS="+uricompat=python/*/myscript.py"
+```
+
+## Advanced Configuration
+
+### Memory and Performance
+
+| Option | Type | Description |
+|--------|------|-------------|
+| `cache_size=<size>` | string | Set cache size (e.g., `1GB`, `500MB`) |
+| `max_connections=<count>` | integer | Maximum concurrent connections |
+
+### Network Configuration
+
+| Option | Type | Description |
+|--------|------|-------------|
+| `timeout=<value>` | integer | Network request timeout |
+| `retry_count=<count>` | integer | Number of retry attempts |
+
+### Debug and Logging
+
+| Option | Type | Description |
+|--------|------|-------------|
+| `debug=<level>` | integer | Debug verbosity level (0-5) |
+| `log_file=<path>` | string | Log
file path | + +## Access Mode Configuration + +### Core File Access Mode + +Default mode with basic file operations: + +```bash +# Default configuration (no special options needed) +cuno +``` + +**Characteristics:** +- Dynamic ownership (current user) +- Standard permissions (`0666` files, `0777` directories) +- No persistent metadata + +### POSIX File Access Mode + +Enable persistent file system metadata: + +```bash +# Enable POSIX mode during mount +cuno mount ~/mount-point --posix +``` + +**Capabilities:** +- Persistent file ownership +- Modifiable permissions via `chmod`, `chown` +- File timestamps via `touch` +- Extended attributes support + +## Configuration File Locations + +### Linux +- User config: `~/.config/cuno/config.yaml` +- System config: `/etc/cuno/config.yaml` + +### macOS +- User config: `~/Library/Preferences/cuno/config.yaml` +- System config: `/etc/cuno/config.yaml` + +### Windows +- User config: `%APPDATA%\cuno\config.yaml` +- System config: `%PROGRAMDATA%\cuno\config.yaml` + +## Configuration Examples + +### Development Environment +```bash +export CUNO_OPTIONS="uid=1000 gid=1000 filemode=0644 dirmode=0755 +uricompat=python:node:java" +cuno +``` + +### Production Web Server +```bash +export CUNO_OPTIONS="uid=33 gid=33 filemode=0644 dirmode=0755 cloudroot=/var/cloud" +cuno mount /var/www/cloud --root s3://web-assets/ +``` + +### Media Processing Workflow +```bash +export CUNO_OPTIONS="+uricompat=ffmpeg:handbrake:mkvtoolnix cache_size=2GB" +cuno +``` + +### Data Science Environment +```bash +export CUNO_OPTIONS="+uricompat=python:R:jupyter filemode=0664 dirmode=0775" +cuno +``` + +## FlexMount Configuration + +FlexMount combines direct interception with FUSE fallback: + +### Basic FlexMount Setup +```bash +# 1. Create FUSE mount +cuno mount ~/cloud-storage + +# 2. Use with Object Mount CLI +cuno -o "cloudrootover cloudroot=$(realpath ~/cloud-storage)" +``` + +### Advanced FlexMount Configuration +```bash +# Custom cloud root FlexMount +cuno -o "cloudroot=/alt-root" mount ~/storage --root /alt-root +cuno -o "cloudrootover cloudroot=$(realpath ~/storage)" +``` + +## Troubleshooting Configuration + +### Verify Current Configuration +```bash +# Check environment variables +env | grep CUNO + +# Test configuration +cuno run bash -c 'echo "Config test successful"' +``` + +### Common Configuration Issues + +**Permission Problems:** +```bash +# Fix: Set appropriate uid/gid +export CUNO_OPTIONS="uid=$(id -u) gid=$(id -g)" +``` + +**Path Resolution Issues:** +```bash +# Fix: Use absolute paths for cloudroot +export CUNO_OPTIONS="cloudroot=$(realpath ~/cloud-storage)" +``` + +**Application Compatibility:** +```bash +# Fix: Add specific application to uricompat +export CUNO_OPTIONS="+uricompat=myapp" +``` + +## Validation and Testing + +### Configuration Validation +```bash +# Test basic functionality +cuno run ls /cuno/ + +# Test specific path format +cuno run ls s3://test-bucket/ + +# Test FUSE mount +ls ~/mount-point/s3/test-bucket/ +``` + +### Performance Testing +```bash +# Benchmark file operations +time cuno run cp large-file.bin s3://bucket/ +time cuno run rsync -av directory/ s3://bucket/backup/ +``` + +This reference covers all Object Mount configuration options. For platform-specific configuration details, refer to the installation guides for your operating system. 
\ No newline at end of file diff --git a/app/(docs)/object-mount/tutorials/_meta.json b/app/(docs)/object-mount/tutorials/_meta.json new file mode 100644 index 000000000..9b5a1208c --- /dev/null +++ b/app/(docs)/object-mount/tutorials/_meta.json @@ -0,0 +1,9 @@ +{ + "title": "Tutorials", + "nav": [ + { + "title": "Your first mount", + "id": "your-first-mount" + } + ] +} \ No newline at end of file diff --git a/app/(docs)/object-mount/tutorials/your-first-mount.md b/app/(docs)/object-mount/tutorials/your-first-mount.md new file mode 100644 index 000000000..281403f2e --- /dev/null +++ b/app/(docs)/object-mount/tutorials/your-first-mount.md @@ -0,0 +1,299 @@ +--- +title: Your first mount +docId: your-first-object-mount +metadata: + title: Your First Object Mount Tutorial + description: Complete 15-minute hands-on tutorial to mount and access Storj files using Object Mount with step-by-step instructions. +--- + +This tutorial walks you through mounting your first Storj bucket as a filesystem using Object Mount. By the end, you'll understand how to seamlessly access cloud storage as if it were local files. + +## What you'll build + +In this 15-minute hands-on tutorial, you'll: + +- Install Object Mount on your Linux system +- Configure credentials to access your Storj storage +- Mount a bucket as a local filesystem +- Create, edit, and manage files directly in cloud storage +- Understand the performance characteristics and best practices + +**Expected time to complete**: 15-20 minutes + +## Prerequisites + +Before starting, ensure you have: + +- A Linux system (Ubuntu 18.04+, Debian 9+, RHEL 7+, or CentOS 7+) +- Administrative privileges to install packages +- A Storj account with S3-compatible credentials +- At least one existing bucket with some test files +- Basic command-line familiarity + +If you need to set up credentials or buckets, complete the [Getting Started guide](docId:AsyYcUJFbO1JI8-Tu8tW3) first. + +## Step 1: Install Object Mount + +Choose the installation method for your Linux distribution: + +{% tabs %} + +{% tab label="Ubuntu/Debian" %} + +```shell +# Download and install the package +curl -L https://github.com/storj/edge/releases/latest/download/object-mount_linux_amd64.deb -o object-mount.deb +sudo dpkg -i object-mount.deb +``` + +{% /tab %} + +{% tab label="RHEL/CentOS" %} + +```shell +# Download and install the package +curl -L https://github.com/storj/edge/releases/latest/download/object-mount_linux_amd64.rpm -o object-mount.rpm +sudo rpm -i object-mount.rpm +``` + +{% /tab %} + +{% tab label="Generic Linux" %} + +```shell +# Download and extract the binary +curl -L https://github.com/storj/edge/releases/latest/download/object-mount_linux_amd64.tar.gz -o object-mount.tar.gz +tar -xzf object-mount.tar.gz +sudo mv object-mount /usr/local/bin/ +sudo chmod +x /usr/local/bin/object-mount +``` + +{% /tab %} + +{% /tabs %} + +**Expected outcome**: Object Mount should be installed and available in your PATH. 
Verify with: + +```shell +object-mount --version +``` + +## Step 2: Configure your credentials + +Create a configuration file with your Storj credentials: + +```shell +# Create config directory +mkdir -p ~/.config/object-mount + +# Create configuration file +cat > ~/.config/object-mount/config.yaml << EOF +credentials: + access_key_id: "your_access_key_here" + secret_access_key: "your_secret_key_here" + endpoint: "https://gateway.storjshare.io" + +# Optional performance settings +cache: + directory: "/tmp/object-mount-cache" + size: "1GB" + +logging: + level: "info" +EOF +``` + +Replace `your_access_key_here` and `your_secret_key_here` with your actual Storj S3 credentials. + +**Expected outcome**: Your configuration file should be created successfully. Test connectivity: + +```shell +object-mount test-connection +``` + +## Step 3: Create a mount point + +Prepare a directory where your bucket will appear: + +```shell +# Create mount directory +mkdir -p ~/storj-mount + +# Verify the directory is empty +ls -la ~/storj-mount +``` + +**Expected outcome**: You should see an empty directory that will serve as your mount point. + +## Step 4: Mount your bucket + +Now mount your Storj bucket as a filesystem: + +```shell +# Mount bucket (replace 'my-bucket' with your actual bucket name) +object-mount mount my-bucket ~/storj-mount + +# Verify mount succeeded +mount | grep object-mount +``` + +You should see output indicating the mount is active. + +**Expected outcome**: Your bucket is now mounted and accessible as a local directory. The command should complete without errors. + +## Step 5: Explore your mounted storage + +Navigate to your mount point and explore: + +```shell +# Change to mount directory +cd ~/storj-mount + +# List files (should show your bucket contents) +ls -la + +# Check filesystem type +df -h ~/storj-mount +``` + +**Expected outcome**: You should see the files and directories from your Storj bucket listed as if they were local files. + +## Step 6: Create and edit files + +Now let's create and modify files directly in cloud storage: + +```shell +# Create a new file +echo "Hello from Object Mount!" > test-file.txt + +# View the file +cat test-file.txt + +# Edit the file with your preferred editor +nano test-file.txt # or vim, emacs, etc. +``` + +Add some additional content and save the file. + +**Expected outcome**: You should be able to create, view, and edit files seamlessly. The changes are automatically synced to your Storj storage. + +## Step 7: Test file operations + +Perform various file operations to understand Object Mount capabilities: + +```shell +# Create a directory +mkdir project-files + +# Copy files +cp test-file.txt project-files/copy-of-test.txt + +# Move/rename files +mv test-file.txt renamed-test.txt + +# Check file permissions +ls -la renamed-test.txt + +# Create a symbolic link +ln -s renamed-test.txt link-to-test.txt +``` + +**Expected outcome**: All standard filesystem operations should work normally, with changes reflected in your Storj storage. + +## Step 8: Monitor performance + +Open a second terminal and monitor Object Mount activity: + +```shell +# In second terminal - monitor mount activity +object-mount status + +# Check cache usage +du -sh /tmp/object-mount-cache + +# Monitor real-time activity (if available) +object-mount logs --follow +``` + +Try copying a larger file in your first terminal and watch the activity in the second. + +**Expected outcome**: You should see Object Mount efficiently managing data transfer and caching operations. 
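+
+If there isn't much activity to observe yet, you can generate some yourself. This is a sketch that assumes the mount point and cache directory configured earlier in this tutorial:
+
+```shell
+# Create a 100MB file of random data to use as test load
+dd if=/dev/urandom of=/tmp/load-test.bin bs=1M count=100
+
+# Copy it into the mounted bucket (first terminal)
+cp /tmp/load-test.bin ~/storj-mount/load-test.bin
+
+# Watch the cache grow while the copy runs (second terminal)
+watch -n 2 du -sh /tmp/object-mount-cache
+```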
+ +## Step 9: Understand the object storage integration + +Verify that your files are actually stored in Storj: + +```shell +# In a third terminal, check your bucket using CLI tools +# (if you have rclone or aws cli configured) +rclone ls storj:my-bucket + +# Or use the Storj Console web interface +# Navigate to your bucket and verify files are there +``` + +**Expected outcome**: Files created through Object Mount should be visible in your Storj bucket through other access methods. + +## Step 10: Unmount and cleanup + +When finished, properly unmount your storage: + +```shell +# Change out of mount directory +cd ~ + +# Unmount the filesystem +object-mount unmount ~/storj-mount + +# Verify unmount +ls ~/storj-mount +# (should be empty) + +# Clean up cache if desired +rm -rf /tmp/object-mount-cache +``` + +**Expected outcome**: The mount should be cleanly removed and the mount directory should be empty. + +## What you've accomplished + +Congratulations! You've successfully used Object Mount to: + +- Install and configure Object Mount for Storj access +- Mount cloud storage as a local filesystem +- Perform standard file operations on cloud-stored data +- Experience the seamless integration between POSIX applications and object storage +- Monitor performance and understand caching behavior + +## Understanding what happened + +**Object Mount magic**: Object Mount intercepted your filesystem calls and translated them to object storage operations. When you created `test-file.txt`, it became an object in your Storj bucket. When you edited it, Object Mount optimized the process using caching and prediction. + +**Performance characteristics**: +- **Reads**: Very fast due to intelligent caching +- **Writes**: Optimized with write-behind caching +- **Metadata operations**: Cached for performance +- **Large files**: Handled efficiently with streaming and partial reads + +**POSIX compliance**: Object Mount provides NFS-equivalent consistency guarantees while maintaining compatibility with all standard filesystem operations. + +## What's next + +Now that you understand the basics of Object Mount: + +### Explore Advanced Features +- [Set up Object Mount Fusion](docId:Xaegoh6iedietahf) for enhanced performance with frequent writes +- [Configure POSIX permissions and metadata](#) for multi-user environments +- [Optimize performance settings](#) for your specific workload + +### Production Deployment +- [Install Object Mount in containerized environments](#) +- [Set up monitoring and logging](#) +- [Configure high-availability mounts](#) + +### Application Integration +- [Use Object Mount with media editing workflows](#) +- [Integrate with data processing pipelines](#) +- [Set up development environments with cloud storage](#) + +Ready to explore media workflows? Check out [Object Mount for video editing](docId:media-workflows) to see how to edit large media files directly from cloud storage. 
\ No newline at end of file From 58937b07a81098ddc6976f76f81896dd102c29db Mon Sep 17 00:00:00 2001 From: onionjake <113088+onionjake@users.noreply.github.com> Date: Fri, 29 Aug 2025 13:47:05 -0600 Subject: [PATCH 2/8] Next phase of adopting Diataxis --- app/(docs)/dcs/how-to/_meta.json | 20 + app/(docs)/dcs/how-to/migrate-from-s3.md | 273 ++++++ .../dcs/how-to/optimize-upload-performance.md | 175 ++++ app/(docs)/dcs/how-to/setup-bucket-logging.md | 171 ++++ .../dcs/how-to/setup-object-versioning.md | 126 +++ app/(docs)/dcs/how-to/use-presigned-urls.md | 168 ++++ app/(docs)/dcs/tutorials/_meta.json | 13 + .../dcs/tutorials/build-your-first-app.md | 827 ++++++++++++++++++ .../tutorials/your-first-week-with-storj.md | 577 ++++++++++++ app/(docs)/node/how-to/_meta.json | 12 + .../node/how-to/fix-database-corruption.md | 267 ++++++ .../node/how-to/monitor-node-performance.md | 405 +++++++++ app/(docs)/node/how-to/setup-remote-access.md | 382 ++++++++ app/(docs)/object-mount/how-to/_meta.json | 25 + .../how-to/configure-posix-permissions.md | 204 +++++ .../how-to/install-debian-ubuntu.md | 181 ++++ .../how-to/install-rhel-centos.md | 280 ++++++ .../how-to/optimize-large-files.md | 255 ++++++ .../how-to/troubleshoot-mount-issues.md | 286 ++++++ 19 files changed, 4647 insertions(+) create mode 100644 app/(docs)/dcs/how-to/migrate-from-s3.md create mode 100644 app/(docs)/dcs/how-to/optimize-upload-performance.md create mode 100644 app/(docs)/dcs/how-to/setup-bucket-logging.md create mode 100644 app/(docs)/dcs/how-to/setup-object-versioning.md create mode 100644 app/(docs)/dcs/how-to/use-presigned-urls.md create mode 100644 app/(docs)/dcs/tutorials/_meta.json create mode 100644 app/(docs)/dcs/tutorials/build-your-first-app.md create mode 100644 app/(docs)/dcs/tutorials/your-first-week-with-storj.md create mode 100644 app/(docs)/node/how-to/fix-database-corruption.md create mode 100644 app/(docs)/node/how-to/monitor-node-performance.md create mode 100644 app/(docs)/node/how-to/setup-remote-access.md create mode 100644 app/(docs)/object-mount/how-to/_meta.json create mode 100644 app/(docs)/object-mount/how-to/configure-posix-permissions.md create mode 100644 app/(docs)/object-mount/how-to/install-debian-ubuntu.md create mode 100644 app/(docs)/object-mount/how-to/install-rhel-centos.md create mode 100644 app/(docs)/object-mount/how-to/optimize-large-files.md create mode 100644 app/(docs)/object-mount/how-to/troubleshoot-mount-issues.md diff --git a/app/(docs)/dcs/how-to/_meta.json b/app/(docs)/dcs/how-to/_meta.json index 1a7157edd..b87daf07d 100644 --- a/app/(docs)/dcs/how-to/_meta.json +++ b/app/(docs)/dcs/how-to/_meta.json @@ -12,6 +12,26 @@ { "title": "Configure CORS", "id": "configure-cors" + }, + { + "title": "Set up object versioning", + "id": "setup-object-versioning" + }, + { + "title": "Use presigned URLs", + "id": "use-presigned-urls" + }, + { + "title": "Optimize upload performance", + "id": "optimize-upload-performance" + }, + { + "title": "Set up bucket logging", + "id": "setup-bucket-logging" + }, + { + "title": "Migrate from AWS S3", + "id": "migrate-from-s3" } ] } \ No newline at end of file diff --git a/app/(docs)/dcs/how-to/migrate-from-s3.md b/app/(docs)/dcs/how-to/migrate-from-s3.md new file mode 100644 index 000000000..618f8121b --- /dev/null +++ b/app/(docs)/dcs/how-to/migrate-from-s3.md @@ -0,0 +1,273 @@ +--- +title: Migrate from AWS S3 +docId: migrate-from-s3-guide +metadata: + title: How to Migrate from AWS S3 to Storj DCS + description: Complete guide to migrate your data and 
applications from Amazon S3 to Storj DCS +--- + +Migrate your data and applications from Amazon S3 to Storj DCS with minimal disruption to your workflows. + +## Prerequisites + +- AWS S3 buckets and data to migrate +- AWS CLI or S3-compatible tools (Rclone recommended) +- Storj DCS account with project set up +- S3-compatible credentials for Storj DCS + +## Migration planning + +### Assess your current setup + +1. **Inventory your S3 buckets**: List all buckets and estimate data volumes +2. **Document S3 features in use**: Versioning, lifecycle policies, CORS, etc. +3. **Review access patterns**: Identify high-traffic vs. archival data +4. **Check integrations**: Note applications and services using S3 + +### Create migration timeline + +- **Small datasets (< 1TB)**: Can typically migrate in hours +- **Medium datasets (1-10TB)**: Plan for 1-3 days +- **Large datasets (> 10TB)**: May require 1-2 weeks with parallel transfers + +## Set up Storj DCS + +### Create destination buckets + +Match your S3 bucket structure in Storj: + +```bash +# Create buckets using uplink CLI +uplink mb sj://production-data +uplink mb sj://staging-assets +uplink mb sj://backup-files +``` + +### Configure equivalent features + +Enable features that match your S3 setup: +- **Object versioning**: [Set up versioning](docId:setup-object-vers1) if used in S3 +- **CORS policies**: [Configure CORS](docId:configure-cors) for web applications +- **Bucket logging**: [Request logging](docId:setup-bucket-logging) if needed + +## Choose migration method + +### Method 1: Rclone (recommended) + +Best for most migrations due to parallel transfers and resume capability. + +#### Configure Rclone for both providers + +```bash +# Configure AWS S3 source +rclone config create aws-source s3 \ + provider=AWS \ + access_key_id=YOUR_AWS_ACCESS_KEY \ + secret_access_key=YOUR_AWS_SECRET_KEY \ + region=us-east-1 + +# Configure Storj destination +rclone config create storj-dest s3 \ + provider=Other \ + access_key_id=YOUR_STORJ_ACCESS_KEY \ + secret_access_key=YOUR_STORJ_SECRET_KEY \ + endpoint=https://gateway.storjshare.io +``` + +#### Perform the migration + +```bash +# Migrate single bucket with progress tracking +rclone copy aws-source:source-bucket storj-dest:dest-bucket \ + --progress --stats 30s \ + --transfers 4 \ + --s3-chunk-size 64M \ + --checksum + +# Migrate multiple buckets +rclone copy aws-source: storj-dest: \ + --progress --stats 30s \ + --transfers 2 \ + --s3-chunk-size 64M \ + --exclude "*.tmp" +``` + +### Method 2: AWS CLI to Storj + +Good for scripted migrations and AWS CLI users. 
+
+#### Set up dual configuration
+
+```bash
+# Configure AWS CLI with Storj profile
+aws configure set profile.storj.s3.endpoint_url https://gateway.storjshare.io
+aws configure set profile.storj.aws_access_key_id YOUR_STORJ_ACCESS_KEY
+aws configure set profile.storj.aws_secret_access_key YOUR_STORJ_SECRET_KEY
+```
+
+#### Migrate data
+
+The AWS CLI cannot copy directly between two different endpoints in one command, so stage the data locally (or use Rclone, as in Method 1):
+
+```bash
+# Pull from AWS into a local staging directory
+aws s3 sync s3://aws-source-bucket ./migration-staging --exclude "*.tmp"
+
+# Push the staged data to Storj
+aws s3 sync ./migration-staging s3://storj-dest-bucket \
+  --profile storj \
+  --endpoint-url https://gateway.storjshare.io
+```
+
+## Optimize migration performance
+
+### Large file transfers
+
+```bash
+# Use maximum parallelism for large files
+rclone copy aws-source:large-files storj-dest:large-files \
+  --transfers 2 \
+  --s3-upload-concurrency 32 \
+  --s3-chunk-size 64M \
+  --progress
+```
+
+### Many small files
+
+```bash
+# Increase concurrent transfers for small files
+rclone copy aws-source:small-files storj-dest:small-files \
+  --transfers 8 \
+  --s3-chunk-size 64M \
+  --progress
+```
+
+### Resume interrupted transfers
+
+```bash
+# Rclone automatically resumes when re-run with the same command
+rclone copy aws-source:bucket storj-dest:bucket \
+  --progress \
+  --transfers 4
+```
+
+## Update applications
+
+### Change S3 endpoints
+
+Update your application configuration to use Storj:
+
+```python
+import boto3
+
+# Before (AWS S3)
+s3_client = boto3.client('s3', region_name='us-east-1')
+
+# After (Storj DCS)
+s3_client = boto3.client(
+    's3',
+    endpoint_url='https://gateway.storjshare.io',
+    aws_access_key_id='your_storj_access_key',
+    aws_secret_access_key='your_storj_secret_key'
+)
+```
+
+### Update SDK configurations
+
+Most S3-compatible SDKs only need endpoint URL changes:
+
+```javascript
+// Node.js AWS SDK v3
+import { S3Client } from "@aws-sdk/client-s3";
+
+const s3Client = new S3Client({
+  endpoint: "https://gateway.storjshare.io",
+  credentials: {
+    accessKeyId: "your_storj_access_key",
+    secretAccessKey: "your_storj_secret_key"
+  },
+  region: "us-east-1" // Required by the SDK but not used by Storj
+});
+```
+
+## Verification
+
+### Validate data integrity
+
+```bash
+# Compare object counts (each response counts up to 1,000 keys per page)
+aws s3api list-objects-v2 --bucket aws-source-bucket | jq '.KeyCount'
+aws s3api list-objects-v2 --bucket storj-dest-bucket \
+  --profile storj --endpoint-url https://gateway.storjshare.io | jq '.KeyCount'
+
+# Verify file checksums with rclone
+rclone check aws-source:bucket storj-dest:bucket --one-way
+```
+
+### Test application functionality
+
+1. **Update staging environment**: Test applications against Storj endpoints
+2. **Verify uploads/downloads**: Confirm all operations work correctly
+3. **Check performance**: Monitor transfer speeds and latency
+4. **Test error handling**: Ensure graceful handling of any compatibility issues
+
+## Production cutover
+
+### Gradual migration approach
+
+1. **Phase 1**: Migrate archival/backup data first
+2. **Phase 2**: Migrate staging environments
+3. **Phase 3**: Switch production traffic during low-usage periods
+
+### DNS and load balancer updates
+
+For applications using custom domains:
+- Update DNS CNAME records to point to Storj endpoints
+- Modify load balancer configurations
+- Update CDN origin settings if applicable
+
+## Post-migration cleanup
+
+### Monitor performance
+
+Track key metrics for the first few weeks:
+- Transfer speeds and latency
+- Error rates and failed requests
+- Storage costs compared to S3
+
+### Decommission AWS resources
+
+After successful migration:
+1. **Backup verification**: Ensure all data migrated correctly
+2. **Stop S3 lifecycle policies**: Prevent unexpected deletions
+3. **Delete S3 buckets**: Remove old buckets to stop billing
+4. **Clean up IAM roles**: Remove unused S3 access policies
+
+## Troubleshooting
+
+**Slow migration speeds**:
+- Increase `--transfers` and `--s3-upload-concurrency`
+- Check bandwidth limitations
+- Consider migrating during off-peak hours
+
+**Authentication errors**:
+- Verify Storj S3 credentials are correct
+- Ensure endpoint URL uses `https://gateway.storjshare.io`
+- Check that access grants have proper permissions
+
+**Application compatibility issues**:
+- Review [S3 API compatibility](docId:your-s3-compat-doc) documentation
+- Test specific S3 features your application uses
+- Contact Storj support for compatibility questions
+
+## Cost optimization
+
+### Compare ongoing costs
+
+Monitor your new cost structure:
+- **Storage**: $4/TB/month vs. S3 pricing
+- **Bandwidth**: $7/TB egress vs. S3 data transfer costs
+- **Operations**: No per-request charges vs. S3 request pricing
+
+### Implement cost controls
+
+- Set up billing alerts
+- Monitor usage patterns
+- Optimize data lifecycle policies
+
+## Next steps
+
+- Set up [performance monitoring](docId:your-monitoring-guide) for ongoing optimization
+- Configure [backup strategies](docId:your-backup-guide) for critical data
+- Learn about [advanced Storj features](docId:your-advanced-guide) to maximize benefits
\ No newline at end of file
diff --git a/app/(docs)/dcs/how-to/optimize-upload-performance.md b/app/(docs)/dcs/how-to/optimize-upload-performance.md
new file mode 100644
index 000000000..82b504e5f
--- /dev/null
+++ b/app/(docs)/dcs/how-to/optimize-upload-performance.md
@@ -0,0 +1,175 @@
+---
+title: Optimize upload performance
+docId: optimize-upload-perf
+metadata:
+  title: How to Optimize Upload Performance - Storj DCS
+  description: Improve file upload speeds using parallel transfers with Rclone and Uplink CLI
+---
+
+Optimize your upload performance to Storj DCS using parallel transfers and proper configuration settings.
+ +## Prerequisites + +- Storj DCS account with project and bucket set up +- Rclone configured with Storj (recommended for multiple files) +- OR Uplink CLI installed (for single large files) +- Files ready for upload +- Sufficient RAM for parallel transfers + +## Choose the right tool + +**For multiple small-to-medium files (< 1GB each)**: Use Rclone with `--transfers` flag +**For single large files (> 1GB)**: Use Uplink CLI with `--parallelism` flag +**For mixed workloads**: Start with Rclone + +## Optimize single large file uploads + +### Calculate optimal settings + +For a large file, determine concurrency based on file size: +- File size ÷ 64MB = maximum concurrency segments +- Each segment uses ~64MB of RAM during transfer + +Example for 1GB file: 1024MB ÷ 64MB = 16 segments maximum + +### Upload with Rclone + +```bash +# Upload 1GB file with optimal parallelism +rclone copy --progress \ + --s3-upload-concurrency 16 \ + --s3-chunk-size 64M \ + large-file.zip remote:bucket +``` + +### Upload with Uplink CLI + +```bash +# Upload with native Storj performance +uplink cp large-file.zip sj://bucket/large-file.zip --parallelism 4 +``` + +## Optimize multiple file uploads + +### Upload multiple files simultaneously + +```bash +# Upload 4 files at once with Rclone +rclone copy --progress \ + --transfers 4 \ + --s3-upload-concurrency 16 \ + --s3-chunk-size 64M \ + /local/folder remote:bucket +``` + +### Calculate memory usage + +Memory usage = transfers × concurrency × chunk size +- 4 transfers × 16 concurrency × 64MB = 4GB RAM required + +## Configuration examples + +### Small files (< 100MB) +```bash +rclone copy --progress \ + --transfers 10 \ + --s3-chunk-size 64M \ + /local/photos remote:bucket +``` + +### Medium files (100MB - 1GB) +```bash +rclone copy --progress \ + --transfers 4 \ + --s3-upload-concurrency 8 \ + --s3-chunk-size 64M \ + /local/videos remote:bucket +``` + +### Large files (> 1GB) +```bash +rclone copy --progress \ + --transfers 2 \ + --s3-upload-concurrency 32 \ + --s3-chunk-size 64M \ + /local/archives remote:bucket +``` + +## Monitor and adjust performance + +### Watch transfer progress + +Add monitoring flags to see performance: +```bash +rclone copy --progress --stats 30s \ + --transfers 4 \ + --s3-upload-concurrency 16 \ + --s3-chunk-size 64M \ + /local/folder remote:bucket +``` + +### Test different settings + +1. Start with conservative settings +2. Monitor RAM and CPU usage +3. Gradually increase concurrency/transfers +4. Find optimal balance for your system + +## Verification + +1. **Check upload completion**: Verify all files appear in your bucket +2. **Monitor system resources**: Ensure RAM/CPU usage stays manageable +3. **Measure throughput**: Compare upload speeds with different settings +4. 
+**Test large files**: Confirm segment parallelism works correctly
+
+## Troubleshooting
+
+**Out of memory errors**:
+- Reduce `--transfers` or `--s3-upload-concurrency`
+- Monitor RAM usage during transfers
+- Consider a smaller `--s3-chunk-size` for very memory-limited systems
+
+**Slow upload speeds**:
+- Increase concurrency if you have available RAM
+- Check network bandwidth limitations
+- Try Uplink CLI for single large files
+
+**Transfer failures**:
+- Reduce parallelism settings and retry
+- Check network stability
+- Verify bucket permissions and access
+
+## Advanced optimization
+
+### System-specific tuning
+
+Calculate optimal settings for your system:
+
+```bash
+# Check available RAM
+free -h
+
+# Monitor transfer performance
+htop
+
+# Test network bandwidth
+speedtest-cli
+```
+
+### Batch operations
+
+For regular uploads, create scripts with optimized settings:
+
+```bash
+#!/bin/bash
+# upload-optimized.sh
+rclone copy --progress \
+  --transfers 6 \
+  --s3-upload-concurrency 12 \
+  --s3-chunk-size 64M \
+  "$1" remote:bucket
+```
+
+## Next steps
+
+- Learn about [download performance optimization](docId:optimize-download-perf)
+- Set up [monitoring and analytics](docId:setup-bucket-logging) for transfer metrics
+- Configure [automated sync](docId:use-rclone) for ongoing file management
\ No newline at end of file
diff --git a/app/(docs)/dcs/how-to/setup-bucket-logging.md b/app/(docs)/dcs/how-to/setup-bucket-logging.md
new file mode 100644
index 000000000..8e07f83fa
--- /dev/null
+++ b/app/(docs)/dcs/how-to/setup-bucket-logging.md
@@ -0,0 +1,171 @@
+---
+title: Set up bucket logging
+docId: setup-bucket-logging
+metadata:
+  title: How to Set Up Bucket Logging - Storj DCS
+  description: Enable server access logging for your Storj DCS buckets to track all requests and operations
+---
+
+Enable bucket logging to track all access requests to your buckets. This feature is available upon request and provides detailed logs in S3 server access log format.
+
+## Prerequisites
+
+- Active Storj DCS project
+- Target bucket to monitor
+- Destination bucket for storing logs
+- Write-only access grant for the destination bucket
+
+## Request bucket logging activation
+
+### Submit support request
+
+1. Go to [Storj DCS Support](https://supportdcs.storj.io/hc/en-us/requests/new?ticket_form_id=360000379291)
+2. Select "Enable Bucket Logging" as the subject
+3. Allow up to two weeks for processing
+
+### Prepare required information
+
+Gather the following details for your request:
+
+**Source bucket information:**
+- Satellite (AP1, EU1, or US1)
+- Project name
+- Bucket name(s) to monitor
+
+**Destination bucket information:**
+- Destination project name
+- Destination bucket name
+- Optional: Key prefix for log files
+- Write-only access grant (see next section)
+
+## Create write-only access grant
+
+### Generate the access grant
+
+1. **Open Storj console**: Log in to your satellite UI
+2. **Create new access**: Click "New Access Key" → "Access Grant"
+3. **Name the grant**: Use a descriptive name like "bucket-logging-destination"
+
+### Configure advanced options
+
+4. **Select Advanced Options**: Click "Advanced Options" on the second screen
+5. **Set encryption passphrase**: Enter a secure passphrase
+
+{% callout type="warning" %}
+**Important:** Save this passphrase securely. You'll need it to decrypt log data later.
+{% /callout %}
+
+### Set permissions
+
+6. **Choose Write Only**: Select "Write Only" permissions
+7. **Limit to destination bucket**: Specify the exact bucket for logs
+8.
**Set no expiration**: Select "No Expiration" to ensure continuous logging + +### Complete access grant creation + +9. **Review settings**: Verify all selections are correct +10. **Create access**: Click "Create Access" to generate the grant +11. **Save the access grant**: Copy and securely store the generated access grant string + +## Submit your logging request + +Include this information in your support ticket: + +``` +Subject: Enable Bucket Logging + +Source Bucket Details: +- Satellite: US1 +- Project Name: my-production-project +- Bucket Name: my-monitored-bucket + +Destination Details: +- Destination Project: my-logging-project +- Destination Bucket: access-logs-bucket +- Prefix (optional): prod-logs/ +- Write-only Access Grant: [paste your access grant here] +``` + +## Verification + +After logging is enabled by Storj support: + +### Check for log files + +1. **Wait for activity**: Perform some operations on your monitored bucket +2. **Check destination bucket**: Look for log files in your destination bucket +3. **Verify log format**: Confirm logs follow the expected naming pattern: + ``` + [prefix]YYYY-MM-DD-hh-mm-ss-[UniqueString] + ``` + +### Download and examine logs + +```bash +# Download recent log files +uplink cp sj://logs-bucket/prod-logs/ ./logs/ --recursive + +# Examine log content (example format) +cat 2024-08-29-15-30-45-ABC123.log +``` + +## Understanding log format + +Log entries follow [Amazon S3 Server Access Log Format](https://docs.aws.amazon.com/AmazonS3/latest/userguide/LogFormat.html) with these key fields: + +- **Timestamp**: When the request occurred +- **Remote IP**: Client IP address +- **Requester**: Authenticated user ID +- **Request ID**: Unique identifier for the request +- **Operation**: API operation performed (GET, PUT, DELETE, etc.) +- **Key**: Object key accessed +- **HTTP Status**: Response code (200, 404, etc.) +- **Bytes Sent**: Size of response +- **User Agent**: Client application identifier + +### Example log entry + +``` +project-id bucket-name [29/Aug/2024:15:30:45 +0000] 192.168.1.100 +user-id ABC123 GetObject myfile.pdf "GET /bucket-name/myfile.pdf HTTP/1.1" +200 - 1024 - - - "curl/7.68.0" - request-signature SigV4 +``` + +## Troubleshooting + +**No log files appearing**: +- Verify bucket logging was activated by support +- Confirm the access grant has write permissions to destination bucket +- Check that monitored bucket has actual activity + +**Cannot decrypt log files**: +- Ensure you're using the correct encryption passphrase from access grant creation +- Verify the access grant hasn't expired + +**Access denied errors**: +- Confirm the write-only access grant is valid +- Check that destination bucket exists and is accessible +- Verify project permissions + +## Monitor and manage logs + +### Log rotation and storage + +Logs are automatically created with timestamps. 
Consider: +- Setting up lifecycle policies for log retention +- Monitoring storage costs for log accumulation +- Implementing automated log processing pipelines + +### Analyze access patterns + +Use logs to: +- Monitor access frequency and patterns +- Identify unusual access attempts +- Track bandwidth usage per client +- Audit compliance requirements + +## Next steps + +- Set up [automated log processing](docId:your-log-processing-guide) +- Configure [monitoring alerts](docId:your-monitoring-guide) for unusual access patterns +- Learn about [security best practices](docId:your-security-guide) for log management \ No newline at end of file diff --git a/app/(docs)/dcs/how-to/setup-object-versioning.md b/app/(docs)/dcs/how-to/setup-object-versioning.md new file mode 100644 index 000000000..edf15cc83 --- /dev/null +++ b/app/(docs)/dcs/how-to/setup-object-versioning.md @@ -0,0 +1,126 @@ +--- +title: Set up object versioning +docId: setup-object-vers1 +metadata: + title: How to Set Up Object Versioning - Storj DCS + description: Step-by-step guide to enable object versioning on your Storj DCS buckets for data protection and recovery +--- + +Set up object versioning to preserve, retrieve, and restore every version of every object in your bucket. This adds data protection against accidental deletions and overwrites. + +## Prerequisites + +- Storj DCS account with active project +- S3-compatible credentials (access key and secret key) +- Bucket where you want to enable versioning +- S3-compatible tool or SDK (AWS CLI, boto3, etc.) + +## Enable versioning on a bucket + +### Using AWS CLI + +Configure your credentials and enable versioning: + +```bash +# Configure credentials (one time setup) +aws configure set aws_access_key_id YOUR_ACCESS_KEY +aws configure set aws_secret_access_key YOUR_SECRET_KEY +aws configure set default.region us-east-1 + +# Enable versioning on bucket +aws s3api put-bucket-versioning \ + --bucket YOUR_BUCKET_NAME \ + --versioning-configuration Status=Enabled \ + --endpoint-url https://gateway.storjshare.io +``` + +### Using Python (boto3) + +```python +import boto3 + +# Create S3 client +s3 = boto3.client( + 's3', + aws_access_key_id='YOUR_ACCESS_KEY', + aws_secret_access_key='YOUR_SECRET_KEY', + endpoint_url='https://gateway.storjshare.io' +) + +# Enable versioning +s3.put_bucket_versioning( + Bucket='YOUR_BUCKET_NAME', + VersioningConfiguration={ + 'Status': 'Enabled' + } +) +``` + +## Verify versioning is enabled + +Check the versioning status of your bucket: + +```bash +# Using AWS CLI +aws s3api get-bucket-versioning \ + --bucket YOUR_BUCKET_NAME \ + --endpoint-url https://gateway.storjshare.io +``` + +Expected output: +```json +{ + "Status": "Enabled" +} +``` + +## Upload versioned objects + +Once versioning is enabled, each object upload creates a new version: + +```bash +# Upload the same file multiple times +echo "Version 1" > test-file.txt +aws s3 cp test-file.txt s3://YOUR_BUCKET_NAME/ --endpoint-url https://gateway.storjshare.io + +echo "Version 2" > test-file.txt +aws s3 cp test-file.txt s3://YOUR_BUCKET_NAME/ --endpoint-url https://gateway.storjshare.io +``` + +List all versions: +```bash +aws s3api list-object-versions \ + --bucket YOUR_BUCKET_NAME \ + --endpoint-url https://gateway.storjshare.io +``` + +## Suspend versioning + +To stop creating new versions while keeping existing ones: + +```bash +aws s3api put-bucket-versioning \ + --bucket YOUR_BUCKET_NAME \ + --versioning-configuration Status=Suspended \ + --endpoint-url https://gateway.storjshare.io +``` 
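+
+## Restore a previous version
+
+To recover an earlier copy of an object, look up its version ID and download that specific version. The commands below are a sketch: `YOUR_VERSION_ID` is a placeholder you'd replace with a real ID copied from the `list-object-versions` output.
+
+```bash
+# Find the version ID of the copy you want
+aws s3api list-object-versions \
+  --bucket YOUR_BUCKET_NAME \
+  --prefix test-file.txt \
+  --endpoint-url https://gateway.storjshare.io
+
+# Download that specific version to a local file
+aws s3api get-object \
+  --bucket YOUR_BUCKET_NAME \
+  --key test-file.txt \
+  --version-id YOUR_VERSION_ID \
+  --endpoint-url https://gateway.storjshare.io \
+  restored-test-file.txt
+```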
+ +## Verification + +1. **Check versioning status**: Run `get-bucket-versioning` and confirm `Status: "Enabled"` +2. **Upload test files**: Upload the same filename twice and verify multiple versions exist +3. **List versions**: Use `list-object-versions` to see all versions of your objects + +## Troubleshooting + +**"Versioning cannot be enabled" error**: The bucket was created before versioning support. Create a new bucket to use versioning. + +**No versions appearing**: Ensure you're using the S3-compatible gateway endpoint (`https://gateway.storjshare.io`) in your commands. + +**Cost concerns**: Each object version is stored separately and incurs storage costs. Monitor your usage and implement lifecycle policies to manage older versions. + +## Next steps + +- [Configure Object Lock](docId:gjrGzPNnhpYrAGTTAUaj) for additional protection +- Learn about [object lifecycle management](docId:your-lifecycle-doc) +- Set up [bucket logging](docId:your-logging-doc) to track version access \ No newline at end of file diff --git a/app/(docs)/dcs/how-to/use-presigned-urls.md b/app/(docs)/dcs/how-to/use-presigned-urls.md new file mode 100644 index 000000000..f6fa25852 --- /dev/null +++ b/app/(docs)/dcs/how-to/use-presigned-urls.md @@ -0,0 +1,168 @@ +--- +title: Use presigned URLs +docId: use-presigned-urls1 +metadata: + title: How to Use Presigned URLs - Storj DCS + description: Create presigned URLs to allow unauthenticated access to your Storj objects for uploads and downloads +--- + +Create presigned URLs to enable unauthenticated users to upload or download objects without exposing your credentials. + +## Prerequisites + +- Storj DCS account with S3-compatible credentials +- Python 3.x installed +- boto3 library (`pip install boto3`) +- Target bucket already created + +## Create presigned URL for uploads + +### Set up the upload script + +Create `create_upload_url.py`: + +```python +import boto3 + +# Configure your credentials +ACCESS_KEY = "your_access_key_here" +SECRET_KEY = "your_secret_key_here" +ENDPOINT_URL = "https://gateway.storjshare.io" +BUCKET_NAME = "your-bucket-name" + +# Create S3 client +session = boto3.session.Session() +s3 = session.client( + service_name="s3", + aws_access_key_id=ACCESS_KEY, + aws_secret_access_key=SECRET_KEY, + endpoint_url=ENDPOINT_URL +) + +# Generate presigned URL for upload (valid for 1 hour) +upload_url = s3.generate_presigned_url( + 'put_object', + Params={ + "Bucket": BUCKET_NAME, + "Key": "uploads/my-file.txt" # Path where file will be stored + }, + ExpiresIn=3600 # URL expires in 1 hour +) + +print("Upload URL:", upload_url) +``` + +### Run the script + +```bash +python3 create_upload_url.py +``` + +### Use the presigned URL + +Upload a file using curl: + +```bash +curl -X PUT \ + --upload-file local-file.txt \ + "YOUR_GENERATED_PRESIGNED_URL" +``` + +## Create presigned URL for downloads + +### Set up the download script + +Create `create_download_url.py`: + +```python +import boto3 + +# Configure your credentials +ACCESS_KEY = "your_access_key_here" +SECRET_KEY = "your_secret_key_here" +ENDPOINT_URL = "https://gateway.storjshare.io" +BUCKET_NAME = "your-bucket-name" + +# Create S3 client +session = boto3.session.Session() +s3 = session.client( + service_name="s3", + aws_access_key_id=ACCESS_KEY, + aws_secret_access_key=SECRET_KEY, + endpoint_url=ENDPOINT_URL +) + +# Generate presigned URL for download (valid for 1 hour) +download_url = s3.generate_presigned_url( + 'get_object', + Params={ + "Bucket": BUCKET_NAME, + "Key": "path/to/your/file.txt" # 
Existing file path + }, + ExpiresIn=3600 # URL expires in 1 hour +) + +print("Download URL:", download_url) +``` + +### Use the download URL + +Download the file: + +```bash +curl -o downloaded-file.txt "YOUR_GENERATED_PRESIGNED_URL" +``` + +Or share the URL directly with users to download in their browser. + +## Customize expiration time + +Set different expiration periods based on your needs: + +```python +# 15 minutes +ExpiresIn=900 + +# 24 hours +ExpiresIn=86400 + +# 7 days +ExpiresIn=604800 +``` + +## Verification + +1. **Generate URL**: Run your script and confirm it outputs a valid URL +2. **Test upload**: Use curl to upload a file with the presigned upload URL +3. **Check object**: Verify the object appears in your bucket +4. **Test download**: Generate a download URL and verify you can retrieve the file +5. **Test expiration**: Wait for URL to expire and confirm it no longer works + +## Troubleshooting + +**"Access Denied" errors**: +- Verify your S3 credentials have proper permissions +- Check that the bucket name is correct +- Ensure you're using the correct endpoint URL + +**"URL expired" errors**: +- Generate a new presigned URL +- Increase the `ExpiresIn` value if needed + +**Upload fails**: +- Verify the object key path is valid +- Ensure the bucket allows uploads +- Check that the file size is within limits + +## Security considerations + +- URLs contain temporary credentials in query parameters +- Share URLs only over secure channels (HTTPS) +- Use appropriate expiration times (shorter is more secure) +- Monitor bucket access logs for unauthorized usage + +## Next steps + +- Learn about [Storj Linkshare](docId:sN2GhYgGUtqBVF65GhKEa) as an alternative sharing method +- Set up [static website hosting](docId:GkgE6Egi02wRZtyryFyPz) for public file sharing +- Configure [bucket CORS settings](docId:configure-cors) for web applications \ No newline at end of file diff --git a/app/(docs)/dcs/tutorials/_meta.json b/app/(docs)/dcs/tutorials/_meta.json new file mode 100644 index 000000000..173bb3e2d --- /dev/null +++ b/app/(docs)/dcs/tutorials/_meta.json @@ -0,0 +1,13 @@ +{ + "title": "Tutorials", + "nav": [ + { + "title": "Your first week with Storj", + "id": "your-first-week-with-storj" + }, + { + "title": "Build your first app", + "id": "build-your-first-app" + } + ] +} \ No newline at end of file diff --git a/app/(docs)/dcs/tutorials/build-your-first-app.md b/app/(docs)/dcs/tutorials/build-your-first-app.md new file mode 100644 index 000000000..a52833db0 --- /dev/null +++ b/app/(docs)/dcs/tutorials/build-your-first-app.md @@ -0,0 +1,827 @@ +--- +title: Build your first app +docId: build-first-app-storj +metadata: + title: Build Your First App with Storj DCS + description: 30-minute tutorial to build a simple file sharing web application using Storj DCS +--- + +Build a simple file sharing web application using Storj DCS in just 30 minutes. Perfect for developers new to Storj. + +## What you'll build + +A web application that allows users to: +- Upload files to Storj DCS +- View uploaded files +- Share files with generated links +- Delete files when needed + +**Time to complete**: 30 minutes +**Skill level**: Beginner developer +**Prerequisites**: Basic HTML/JavaScript knowledge, Node.js installed + +## Prerequisites + +- Node.js 16+ installed +- Storj DCS account with project created +- Text editor or IDE +- Web browser + +## Step 1: Set up Storj credentials + +### Create S3-compatible access + +1. **Log in to Storj console** +2. **Go to Access page** +3. 
**Click "Create S3 Credentials"** +4. **Configure access**: + - Name: "file-share-app" + - Permissions: All + - Buckets: All buckets + - Expiration: None (for tutorial) +5. **Save Access Key and Secret Key** + +### Create a bucket + +6. **Go to Buckets page** +7. **Click "Create Bucket"** +8. **Name it**: "file-share-demo" +9. **Choose your preferred region** +10. **Create the bucket** + +**Expected result**: You have S3-compatible credentials and a bucket ready. + +## Step 2: Set up the project + +### Initialize Node.js project + +```bash +# Create project directory +mkdir storj-file-share +cd storj-file-share + +# Initialize npm project +npm init -y + +# Install dependencies +npm install express multer aws-sdk cors dotenv +npm install --save-dev nodemon +``` + +### Create project structure + +```bash +# Create directories +mkdir public uploads + +# Create files +touch server.js .env +touch public/index.html public/style.css public/app.js +``` + +**Expected result**: Project structure is set up with required dependencies. + +## Step 3: Configure environment + +### Set up environment variables + +Create `.env` file: +```env +STORJ_ACCESS_KEY=your_access_key_here +STORJ_SECRET_KEY=your_secret_key_here +STORJ_BUCKET=file-share-demo +STORJ_ENDPOINT=https://gateway.storjshare.io +PORT=3000 +``` + +**Replace the credentials** with your actual Storj S3 credentials from Step 1. + +**Expected result**: Environment variables are configured securely. + +## Step 4: Build the server + +### Create the Express server + +Create `server.js`: +```javascript +const express = require('express'); +const multer = require('multer'); +const AWS = require('aws-sdk'); +const cors = require('cors'); +const path = require('path'); +require('dotenv').config(); + +const app = express(); +const PORT = process.env.PORT || 3000; + +// Configure AWS SDK for Storj +const s3 = new AWS.S3({ + accessKeyId: process.env.STORJ_ACCESS_KEY, + secretAccessKey: process.env.STORJ_SECRET_KEY, + endpoint: process.env.STORJ_ENDPOINT, + s3ForcePathStyle: true, + signatureVersion: 'v4', + region: 'us-east-1' +}); + +// Middleware +app.use(cors()); +app.use(express.json()); +app.use(express.static('public')); +app.use('/uploads', express.static('uploads')); + +// Configure multer for file uploads +const upload = multer({ + dest: 'uploads/', + limits: { fileSize: 10 * 1024 * 1024 } // 10MB limit +}); + +// Routes +app.get('/', (req, res) => { + res.sendFile(path.join(__dirname, 'public', 'index.html')); +}); + +// Upload file to Storj +app.post('/upload', upload.single('file'), async (req, res) => { + if (!req.file) { + return res.status(400).json({ error: 'No file uploaded' }); + } + + try { + const fileContent = require('fs').readFileSync(req.file.path); + const fileName = `${Date.now()}-${req.file.originalname}`; + + const params = { + Bucket: process.env.STORJ_BUCKET, + Key: fileName, + Body: fileContent, + ContentType: req.file.mimetype + }; + + const result = await s3.upload(params).promise(); + + // Clean up local file + require('fs').unlinkSync(req.file.path); + + res.json({ + success: true, + fileName: fileName, + url: result.Location, + size: req.file.size + }); + } catch (error) { + console.error('Upload error:', error); + res.status(500).json({ error: 'Upload failed' }); + } +}); + +// List files +app.get('/files', async (req, res) => { + try { + const params = { + Bucket: process.env.STORJ_BUCKET, + MaxKeys: 50 + }; + + const data = await s3.listObjectsV2(params).promise(); + + const files = data.Contents.map(file => ({ + name: 
file.Key, + size: file.Size, + lastModified: file.LastModified + })); + + res.json({ files }); + } catch (error) { + console.error('List files error:', error); + res.status(500).json({ error: 'Failed to list files' }); + } +}); + +// Generate shareable link +app.post('/share/:fileName', async (req, res) => { + try { + const params = { + Bucket: process.env.STORJ_BUCKET, + Key: req.params.fileName, + Expires: 60 * 60 * 24 * 7 // 1 week + }; + + const url = s3.getSignedUrl('getObject', params); + + res.json({ + shareUrl: url, + expires: '7 days' + }); + } catch (error) { + console.error('Share link error:', error); + res.status(500).json({ error: 'Failed to generate share link' }); + } +}); + +// Delete file +app.delete('/files/:fileName', async (req, res) => { + try { + const params = { + Bucket: process.env.STORJ_BUCKET, + Key: req.params.fileName + }; + + await s3.deleteObject(params).promise(); + + res.json({ success: true }); + } catch (error) { + console.error('Delete error:', error); + res.status(500).json({ error: 'Failed to delete file' }); + } +}); + +app.listen(PORT, () => { + console.log(`Server running at http://localhost:${PORT}`); +}); +``` + +**Expected result**: Server is configured with all necessary API endpoints. + +## Step 5: Build the frontend + +### Create the HTML structure + +Create `public/index.html`: +```html + + + + + + Storj File Share + + + +
+    <div class="container">
+        <header>
+            <h1>🚀 Storj File Share</h1>
+            <p>Upload, share, and manage your files with decentralized storage</p>
+        </header>
+
+        <section class="upload-section">
+            <div class="upload-area" id="uploadArea">
+                <div class="upload-icon">📁</div>
+                <p>Drag & drop files here or click to browse</p>
+                <input type="file" id="fileInput" multiple>
+            </div>
+            <div class="progress-bar" id="progressBar" style="display: none;">
+                <div class="progress-fill"></div>
+            </div>
+        </section>
+
+        <section class="files-section">
+            <h2>Your Files</h2>
+            <div id="filesList">
+                <div class="loading">Loading files...</div>
+            </div>
+        </section>
+    </div>
+
+    <div class="modal" id="shareModal" style="display: none;">
+        <div class="modal-content">
+            <span class="close" onclick="closeShareModal()">&times;</span>
+            <h3>Share file</h3>
+            <div class="link-container">
+                <input type="text" id="shareLink" readonly>
+                <button class="btn btn-share" onclick="copyLink()">Copy</button>
+            </div>
+        </div>
+    </div>
+
+    <script src="app.js"></script>
+ + + + + + +``` + +### Add CSS styling + +Create `public/style.css`: +```css +* { + margin: 0; + padding: 0; + box-sizing: border-box; +} + +body { + font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif; + background: linear-gradient(135deg, #667eea 0%, #764ba2 100%); + min-height: 100vh; + color: #333; +} + +.container { + max-width: 800px; + margin: 0 auto; + padding: 20px; +} + +header { + text-align: center; + margin-bottom: 40px; + color: white; +} + +header h1 { + font-size: 2.5rem; + margin-bottom: 10px; +} + +.upload-section { + background: white; + padding: 30px; + border-radius: 12px; + box-shadow: 0 8px 32px rgba(0,0,0,0.1); + margin-bottom: 30px; +} + +.upload-area { + border: 3px dashed #ddd; + border-radius: 8px; + padding: 60px 20px; + text-align: center; + cursor: pointer; + transition: all 0.3s ease; +} + +.upload-area:hover { + border-color: #667eea; + background-color: #f8f9ff; +} + +.upload-area.dragover { + border-color: #667eea; + background-color: #f0f2ff; +} + +.upload-icon { + font-size: 3rem; + margin-bottom: 15px; +} + +#fileInput { + display: none; +} + +.progress-bar { + width: 100%; + height: 8px; + background-color: #f0f0f0; + border-radius: 4px; + margin-top: 20px; + overflow: hidden; +} + +.progress-fill { + height: 100%; + background-color: #667eea; + width: 0%; + transition: width 0.3s ease; +} + +.files-section { + background: white; + padding: 30px; + border-radius: 12px; + box-shadow: 0 8px 32px rgba(0,0,0,0.1); +} + +.files-section h2 { + margin-bottom: 20px; + color: #333; +} + +.file-item { + display: flex; + justify-content: space-between; + align-items: center; + padding: 15px; + border: 1px solid #eee; + border-radius: 8px; + margin-bottom: 10px; + transition: all 0.3s ease; +} + +.file-item:hover { + box-shadow: 0 2px 8px rgba(0,0,0,0.1); +} + +.file-info { + flex-grow: 1; +} + +.file-name { + font-weight: 600; + margin-bottom: 5px; +} + +.file-details { + font-size: 0.9rem; + color: #666; +} + +.file-actions { + display: flex; + gap: 10px; +} + +.btn { + padding: 8px 16px; + border: none; + border-radius: 4px; + cursor: pointer; + font-size: 0.9rem; + transition: all 0.3s ease; +} + +.btn-share { + background-color: #667eea; + color: white; +} + +.btn-delete { + background-color: #ff6b6b; + color: white; +} + +.btn:hover { + transform: translateY(-1px); + box-shadow: 0 2px 8px rgba(0,0,0,0.2); +} + +.modal { + position: fixed; + z-index: 1000; + left: 0; + top: 0; + width: 100%; + height: 100%; + background-color: rgba(0,0,0,0.5); +} + +.modal-content { + background-color: white; + margin: 15% auto; + padding: 20px; + border-radius: 8px; + width: 90%; + max-width: 500px; + position: relative; +} + +.close { + position: absolute; + right: 15px; + top: 15px; + font-size: 24px; + cursor: pointer; +} + +.link-container { + display: flex; + gap: 10px; + margin-top: 10px; +} + +.link-container input { + flex-grow: 1; + padding: 10px; + border: 1px solid #ddd; + border-radius: 4px; +} + +.loading { + text-align: center; + padding: 40px; + color: #666; +} + +.empty-state { + text-align: center; + padding: 40px; + color: #666; +} +``` + +### Add JavaScript functionality + +Create `public/app.js`: +```javascript +class FileShareApp { + constructor() { + this.uploadArea = document.getElementById('uploadArea'); + this.fileInput = document.getElementById('fileInput'); + this.filesList = document.getElementById('filesList'); + this.progressBar = document.getElementById('progressBar'); + this.shareModal = document.getElementById('shareModal'); + 
this.shareLink = document.getElementById('shareLink'); + + this.initializeEventListeners(); + this.loadFiles(); + } + + initializeEventListeners() { + // Upload area click + this.uploadArea.addEventListener('click', () => { + this.fileInput.click(); + }); + + // File input change + this.fileInput.addEventListener('change', (e) => { + this.handleFiles(e.target.files); + }); + + // Drag and drop + this.uploadArea.addEventListener('dragover', (e) => { + e.preventDefault(); + this.uploadArea.classList.add('dragover'); + }); + + this.uploadArea.addEventListener('dragleave', () => { + this.uploadArea.classList.remove('dragover'); + }); + + this.uploadArea.addEventListener('drop', (e) => { + e.preventDefault(); + this.uploadArea.classList.remove('dragover'); + this.handleFiles(e.dataTransfer.files); + }); + } + + async handleFiles(files) { + for (let file of files) { + await this.uploadFile(file); + } + this.loadFiles(); + } + + async uploadFile(file) { + const formData = new FormData(); + formData.append('file', file); + + this.showProgress(); + + try { + const response = await fetch('/upload', { + method: 'POST', + body: formData + }); + + const result = await response.json(); + + if (result.success) { + this.showNotification('File uploaded successfully!', 'success'); + } else { + this.showNotification('Upload failed: ' + result.error, 'error'); + } + } catch (error) { + this.showNotification('Upload error: ' + error.message, 'error'); + } finally { + this.hideProgress(); + } + } + + async loadFiles() { + try { + const response = await fetch('/files'); + const data = await response.json(); + + this.renderFiles(data.files); + } catch (error) { + console.error('Failed to load files:', error); + this.filesList.innerHTML = '
<div class="loading">Failed to load files</div>
'; + } + } + + renderFiles(files) { + if (files.length === 0) { + this.filesList.innerHTML = '
<div class="empty-state">No files uploaded yet</div>
'; + return; + } + + this.filesList.innerHTML = files.map(file => ` +
+                <div class="file-item">
+                    <div class="file-info">
+                        <div class="file-name">${file.name}</div>
+                        <div class="file-details">
+                            ${this.formatFileSize(file.size)} •
+                            ${new Date(file.lastModified).toLocaleDateString()}
+                        </div>
+                    </div>
+                    <div class="file-actions">
+                        <button class="btn btn-share" onclick="app.shareFile('${file.name}')">Share</button>
+                        <button class="btn btn-delete" onclick="app.deleteFile('${file.name}')">Delete</button>
+                    </div>
+                </div>
+ `).join(''); + } + + async shareFile(fileName) { + try { + const response = await fetch(`/share/${fileName}`, { + method: 'POST' + }); + + const data = await response.json(); + + this.shareLink.value = data.shareUrl; + this.shareModal.style.display = 'block'; + } catch (error) { + this.showNotification('Failed to generate share link', 'error'); + } + } + + async deleteFile(fileName) { + if (!confirm('Are you sure you want to delete this file?')) { + return; + } + + try { + const response = await fetch(`/files/${fileName}`, { + method: 'DELETE' + }); + + if (response.ok) { + this.showNotification('File deleted successfully', 'success'); + this.loadFiles(); + } else { + this.showNotification('Failed to delete file', 'error'); + } + } catch (error) { + this.showNotification('Delete error: ' + error.message, 'error'); + } + } + + showProgress() { + this.progressBar.style.display = 'block'; + this.progressBar.querySelector('.progress-fill').style.width = '100%'; + } + + hideProgress() { + setTimeout(() => { + this.progressBar.style.display = 'none'; + this.progressBar.querySelector('.progress-fill').style.width = '0%'; + }, 1000); + } + + formatFileSize(bytes) { + if (bytes === 0) return '0 Bytes'; + const k = 1024; + const sizes = ['Bytes', 'KB', 'MB', 'GB']; + const i = Math.floor(Math.log(bytes) / Math.log(k)); + return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i]; + } + + showNotification(message, type = 'info') { + // Simple notification - you could enhance this with a proper notification library + const notification = document.createElement('div'); + notification.className = `notification ${type}`; + notification.textContent = message; + notification.style.cssText = ` + position: fixed; + top: 20px; + right: 20px; + padding: 15px 20px; + border-radius: 4px; + color: white; + z-index: 1001; + background-color: ${type === 'error' ? '#ff6b6b' : '#51cf66'}; + `; + + document.body.appendChild(notification); + + setTimeout(() => { + notification.remove(); + }, 3000); + } +} + +// Global functions for modal +function closeShareModal() { + document.getElementById('shareModal').style.display = 'none'; +} + +function copyLink() { + const shareLink = document.getElementById('shareLink'); + shareLink.select(); + document.execCommand('copy'); + app.showNotification('Link copied to clipboard!', 'success'); +} + +// Initialize app +const app = new FileShareApp(); +``` + +**Expected result**: Complete frontend with upload, view, share, and delete functionality. + +## Step 6: Test your application + +### Start the application + +```bash +# Add start script to package.json +npm pkg set scripts.start="node server.js" +npm pkg set scripts.dev="nodemon server.js" + +# Start the development server +npm run dev +``` + +### Test all features + +1. **Open browser** to `http://localhost:3000` +2. **Test file upload**: + - Drag and drop a file + - OR click to browse and select a file + - Verify upload success notification +3. **Test file listing**: + - Check that uploaded file appears in the list + - Verify file size and date are displayed +4. **Test file sharing**: + - Click "Share" button + - Copy the generated link + - Test link in private browser window +5. **Test file deletion**: + - Click "Delete" button + - Confirm deletion + - Verify file is removed from list + +**Expected result**: All features working correctly with Storj storage. 
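+
+If you'd rather exercise the API from the command line, a quick smoke test of the routes defined in `server.js` might look like the following. The timestamped file name is an example; the real name comes from the `/files` response, since the server prefixes uploads with `Date.now()`:
+
+```bash
+# Upload through the Express endpoint (the form field must be named "file")
+echo "hello storj" > smoke-test.txt
+curl -F "file=@smoke-test.txt" http://localhost:3000/upload
+
+# List stored files and note the generated name
+curl http://localhost:3000/files
+
+# Generate a share link, then delete the file (replace the example name)
+curl -X POST http://localhost:3000/share/1700000000000-smoke-test.txt
+curl -X DELETE http://localhost:3000/files/1700000000000-smoke-test.txt
+```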
+
+## Verification checklist
+
+- [ ] Application starts without errors
+- [ ] Files can be uploaded via drag & drop
+- [ ] Files can be uploaded via file browser
+- [ ] Uploaded files appear in the file list
+- [ ] Share links are generated and work
+- [ ] Files can be deleted successfully
+- [ ] UI is responsive and user-friendly
+
+## What you've learned
+
+### Technical skills
+
+- **Storj integration**: Using S3-compatible API with AWS SDK
+- **File upload handling**: Multer middleware for multipart uploads
+- **Frontend development**: Modern JavaScript with drag & drop
+- **API design**: RESTful endpoints for file operations
+- **Security**: Environment variables and signed URLs
+
+### Storj concepts
+
+- **S3 compatibility**: Using standard S3 tools and libraries
+- **Presigned URLs**: Secure file sharing without credentials
+- **Bucket organization**: Organizing files in cloud storage
+- **Access management**: Different types of credentials and permissions
+
+## Next steps
+
+### Enhance your application
+
+1. **Add user authentication**:
+   - Implement user registration/login
+   - User-specific file storage
+   - Access control per user
+
+2. **Improve file management**:
+   - File versioning support
+   - Bulk operations (upload/delete multiple files)
+   - File search and filtering
+
+3. **Add advanced features**:
+   - Image thumbnail generation
+   - File type restrictions
+   - Upload progress for large files
+   - File encryption/decryption
+
+### Learn more Storj features
+
+- [Object versioning](docId:setup-object-vers1) for file history
+- [CORS configuration](docId:configure-cors) for web applications
+- [Performance optimization](docId:optimize-upload-perf) for large files
+- [Multi-region deployment](docId:setup-multi-region-storage) for global apps
+
+### Deploy your application
+
+- Deploy to Vercel, Netlify, or Heroku
+- Set up production environment variables
+- Configure proper error handling and logging
+- Add monitoring and analytics
+
+Congratulations! You've built your first application with Storj DCS. You now understand how to integrate decentralized storage into web applications and can build more complex projects.
\ No newline at end of file
diff --git a/app/(docs)/dcs/tutorials/your-first-week-with-storj.md b/app/(docs)/dcs/tutorials/your-first-week-with-storj.md
new file mode 100644
index 000000000..86b65a188
--- /dev/null
+++ b/app/(docs)/dcs/tutorials/your-first-week-with-storj.md
@@ -0,0 +1,577 @@
+---
+title: Your first week with Storj
+docId: first-week-storj-tutorial
+metadata:
+  title: Your First Week with Storj DCS - Complete Beginner Tutorial
+  description: Comprehensive 7-day tutorial to master Storj DCS fundamentals, from account setup to advanced features
+---
+
+Master Storj DCS in your first week with this comprehensive tutorial that takes you from complete beginner to confident user.
+
+## What you'll build
+
+By the end of this tutorial, you'll have:
+- A fully configured Storj DCS account
+- Multiple buckets with different access levels
+- Uploaded and organized files using multiple methods
+- A simple web application that uses Storj for storage
+- Automated backup processes
+- Understanding of costs and optimization
+
+**Time to complete**: 7 days, 1-2 hours per day
+**Skill level**: Complete beginner
+**Prerequisites**: Computer with internet access
+
+## Day 1: Account setup and first upload
+
+### Create your Storj account
+
+1. **Go to [storj.io](https://storj.io)** and click "Get Started"
+2.
**Choose your plan**: + - Free tier: 25GB storage, 25GB bandwidth/month + - Pro tier: Pay-as-you-use pricing +3. **Complete signup** with email verification +4. **Create your first project** named "learning-storj" + +### Set up access credentials + +5. **Go to Access page** in the console +6. **Click "Create Access Grant"** +7. **Name it**: "learning-access" +8. **Select permissions**: All permissions for learning +9. **Choose buckets**: All buckets +10. **Set no expiration** for this tutorial +11. **Generate and save** your access grant securely + +### Create your first bucket + +12. **Go to Buckets page** +13. **Click "Create Bucket"** +14. **Name it**: "my-first-bucket" +15. **Choose a region** close to you +16. **Create the bucket** + +### Make your first upload + +17. **Click on your bucket** to open it +18. **Click "Upload"** +19. **Select a small file** (image, document, etc.) +20. **Complete the upload** +21. **View your file** in the browser + +**Expected outcome**: You have a working Storj account with your first uploaded file. + +### Day 1 verification + +- [ ] Account created and verified +- [ ] Project and access grant created +- [ ] First bucket created and accessible +- [ ] File successfully uploaded and viewable +- [ ] Access grant saved securely + +## Day 2: Command line basics + +### Install Uplink CLI + +**Windows**: +```powershell +# Download from GitHub releases +Invoke-WebRequest https://github.com/storj/storj/releases/latest/download/uplink_windows_amd64.exe -OutFile uplink.exe +``` + +**macOS**: +```bash +brew install uplink +``` + +**Linux**: +```bash +curl -L https://github.com/storj/storj/releases/latest/download/uplink_linux_amd64.zip -o uplink.zip +unzip uplink.zip +sudo mv uplink /usr/local/bin/ +``` + +### Configure Uplink + +1. **Set up access**: + ```bash + uplink access import main-access YOUR_ACCESS_GRANT_HERE + ``` + +2. **Test connection**: + ```bash + uplink ls + ``` + You should see your "my-first-bucket" + +### Practice basic commands + +3. **List bucket contents**: + ```bash + uplink ls sj://my-first-bucket + ``` + +4. **Upload via command line**: + ```bash + uplink cp /local/path/to/file.txt sj://my-first-bucket/uploaded-via-cli.txt + ``` + +5. **Download a file**: + ```bash + uplink cp sj://my-first-bucket/uploaded-via-cli.txt ./downloaded-file.txt + ``` + +6. **Create a new bucket**: + ```bash + uplink mb sj://cli-bucket + ``` + +**Expected outcome**: You can manage Storj storage from the command line. + +### Day 2 verification + +- [ ] Uplink CLI installed and configured +- [ ] Can list buckets and files from command line +- [ ] Successfully uploaded file via CLI +- [ ] Successfully downloaded file via CLI +- [ ] Created new bucket via CLI + +## Day 3: S3 compatibility and third-party tools + +### Get S3-compatible credentials + +1. **In Storj console, go to Access page** +2. **Click "Create S3 Credentials"** +3. **Name**: "s3-compatible-access" +4. **Choose permissions and buckets** +5. **Generate credentials** +6. **Copy Access Key and Secret Key** + +### Configure AWS CLI + +7. **Install AWS CLI** ([instructions](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)) + +8. **Configure AWS CLI for Storj**: + ```bash + aws configure set aws_access_key_id YOUR_STORJ_ACCESS_KEY + aws configure set aws_secret_access_key YOUR_STORJ_SECRET_KEY + aws configure set default.region us-east-1 + ``` + +### Test S3 operations + +9. **List buckets**: + ```bash + aws s3 ls --endpoint-url https://gateway.storjshare.io + ``` + +10. 
**Upload file**:
    ```bash
    aws s3 cp test-file.txt s3://my-first-bucket/ --endpoint-url https://gateway.storjshare.io
    ```

11. **Create and sync a directory**:
    ```bash
    mkdir local-folder
    echo "test content" > local-folder/test.txt
    aws s3 sync local-folder/ s3://cli-bucket/sync-test/ --endpoint-url https://gateway.storjshare.io
    ```

### Try a GUI tool (Cyberduck)

12. **Download and install** [Cyberduck](https://cyberduck.io)
13. **Create new connection**:
    - Protocol: Amazon S3
    - Server: gateway.storjshare.io
    - Access Key ID: Your Storj access key
    - Secret Access Key: Your Storj secret key
14. **Connect and browse** your buckets
15. **Upload/download files** using the GUI

**Expected outcome**: You can use standard S3 tools with Storj.

### Day 3 verification

- [ ] S3-compatible credentials created
- [ ] AWS CLI configured for Storj
- [ ] Successfully used S3 commands with Storj
- [ ] GUI tool (Cyberduck) connected and working
- [ ] Comfortable with multiple access methods

## Day 4: Web integration and sharing

### Create shareable links

1. **In Storj console, select a file**
2. **Click "Share"**
3. **Configure sharing options**:
   - Link expires: Set to 1 week
   - Password protection: Optional
4. **Generate and copy link**
5. **Test link** in private browser window

### Set up CORS for web applications

6. **Go to bucket settings**
7. **Configure CORS policy**:
   ```json
   [
     {
       "allowedHeaders": ["*"],
       "allowedMethods": ["GET", "POST", "PUT"],
       "allowedOrigins": ["http://localhost:3000"],
       "exposeHeaders": ["ETag"]
     }
   ]
   ```

### Build a simple web upload form

8. **Create a simple HTML page** (`upload-demo.html`). A minimal version sends the chosen file with a presigned PUT URL (generate the URL server-side, for example with the AWS SDK; never embed credentials in the page):

   ```html
   <!DOCTYPE html>
   <html>
   <head>
     <title>Storj Upload Demo</title>
   </head>
   <body>
     <h1>Upload to Storj</h1>
     <input type="file" id="fileInput" />
     <button id="uploadButton">Upload</button>
     <script>
       // Paste a presigned PUT URL here (placeholder, not a real endpoint).
       const UPLOAD_URL = 'PASTE_PRESIGNED_PUT_URL_HERE';

       document.getElementById('uploadButton').addEventListener('click', async () => {
         const file = document.getElementById('fileInput').files[0];
         if (!file) {
           alert('Choose a file first');
           return;
         }
         const response = await fetch(UPLOAD_URL, { method: 'PUT', body: file });
         alert(response.ok ? 'Upload succeeded!' : 'Upload failed: ' + response.status);
       });
     </script>
   </body>
   </html>
   ```

9. **Open the page in a browser** and test the upload. Serve it from `http://localhost:3000` (for example with `python3 -m http.server 3000`) so the request matches the CORS origin configured above.

**Expected outcome**: You can share files and integrate Storj into web applications.

### Day 4 verification

- [ ] Created and tested shareable links
- [ ] CORS configured for web access
- [ ] Simple web upload form working
- [ ] Understanding of web integration possibilities

## Day 5: Organization and management

### Organize your data structure

1. **Plan your folder structure**:
   - `personal/photos/`
   - `personal/documents/`
   - `work/projects/`
   - `backups/`

2. **Create organized buckets**:
   ```bash
   uplink mb sj://personal-files
   uplink mb sj://work-files
   uplink mb sj://automated-backups
   ```

3. **Upload files to organized structure**:
   ```bash
   uplink cp family-photo.jpg sj://personal-files/photos/
   uplink cp resume.pdf sj://work-files/documents/
   ```

### Set up versioning

4. **Enable versioning** on an important bucket:
   ```bash
   aws s3api put-bucket-versioning \
     --bucket work-files \
     --versioning-configuration Status=Enabled \
     --endpoint-url https://gateway.storjshare.io
   ```

5. **Test versioning** by uploading the same file twice:
   ```bash
   echo "Version 1" > test-versioning.txt
   aws s3 cp test-versioning.txt s3://work-files/ --endpoint-url https://gateway.storjshare.io

   echo "Version 2" > test-versioning.txt
   aws s3 cp test-versioning.txt s3://work-files/ --endpoint-url https://gateway.storjshare.io
   ```

6. **List versions**:
   ```bash
   aws s3api list-object-versions \
     --bucket work-files \
     --endpoint-url https://gateway.storjshare.io
   ```

### Create lifecycle management strategy

7. **Plan data lifecycle**:
   - Active files: Keep in main buckets
   - Archive files: Move to archive bucket after 6 months
   - Temporary files: Auto-delete after 30 days

8. **Document your data management strategy**

**Expected outcome**: Your data is well-organized with proper versioning and lifecycle planning.

### Day 5 verification

- [ ] Data organized into logical bucket structure
- [ ] Object versioning enabled and tested
- [ ] Understanding of lifecycle management
- [ ] Documentation of your data strategy

## Day 6: Automation and scripting

### Create backup automation script

1. **Create backup script** (`daily-backup.sh`):
   ```bash
   #!/bin/bash

   # Configuration
   BACKUP_BUCKET="automated-backups"
   LOCAL_DIRS=("/home/user/documents" "/home/user/photos")
   DATE=$(date +%Y-%m-%d)

   # Create date-based folder
   for dir in "${LOCAL_DIRS[@]}"; do
     DIR_NAME=$(basename "$dir")
     echo "Backing up $dir to $BACKUP_BUCKET/$DATE/$DIR_NAME/"

     uplink cp --recursive "$dir/" "sj://$BACKUP_BUCKET/$DATE/$DIR_NAME/"

     if [ $? -eq 0 ]; then
       echo "Backup of $dir completed successfully"
     else
       echo "Backup of $dir failed"
     fi
   done

   echo "Daily backup completed at $(date)"
   ```

2. **Test the backup script**:
   ```bash
   chmod +x daily-backup.sh
   ./daily-backup.sh
   ```

3. **Set up automated scheduling** (cron on Linux/macOS):
   ```bash
   crontab -e
   # Add line: 0 2 * * * /path/to/daily-backup.sh
   ```

### Create sync script for active files

4. **Create sync script** (`sync-work-files.sh`):
   ```bash
   #!/bin/bash

   LOCAL_WORK="/home/user/work-projects"
   REMOTE_BUCKET="work-files"

   echo "Syncing work files..."
   uplink sync "$LOCAL_WORK/" "sj://$REMOTE_BUCKET/current-projects/"

   if [ $?
-eq 0 ]; then + echo "Sync completed successfully at $(date)" + else + echo "Sync failed at $(date)" + fi + ``` + +### Monitor usage and costs + +5. **Create usage monitoring script**: + ```bash + #!/bin/bash + + echo "=== Storj Usage Report ===" + echo "Date: $(date)" + echo + + echo "Buckets and sizes:" + for bucket in $(uplink ls | grep -v "CREATED" | awk '{print $4}'); do + if [ ! -z "$bucket" ]; then + size=$(uplink ls --recursive sj://$bucket | tail -1 | awk '{print $1}') + echo "$bucket: $size" + fi + done + ``` + +**Expected outcome**: You have automated backup and sync processes running. + +### Day 6 verification + +- [ ] Backup automation script created and tested +- [ ] Sync script for active files working +- [ ] Scheduled tasks configured +- [ ] Usage monitoring in place + +## Day 7: Cost optimization and advanced features + +### Analyze your usage + +1. **Check billing information** in Storj console +2. **Review storage and bandwidth usage** +3. **Identify cost optimization opportunities** + +### Implement cost optimization + +4. **Set up efficient data organization**: + - Keep frequently accessed files in main buckets + - Move archives to separate buckets + - Delete unnecessary temporary files + +5. **Optimize transfer patterns**: + ```bash + # Use efficient batch operations instead of individual file transfers + uplink cp --recursive local-folder/ sj://bucket/folder/ + + # Use compression for large files when possible + tar -czf archive.tar.gz large-folder/ + uplink cp archive.tar.gz sj://bucket/archives/ + ``` + +### Explore advanced features + +6. **Set up object lock** (if available): + ```bash + aws s3api put-object-lock-configuration \ + --bucket important-files \ + --object-lock-configuration ObjectLockEnabled=Enabled \ + --endpoint-url https://gateway.storjshare.io + ``` + +7. **Configure bucket notifications** (using webhooks if available) + +8. **Test presigned URLs**: + ```python + import boto3 + + s3 = boto3.client('s3', + endpoint_url='https://gateway.storjshare.io', + aws_access_key_id='your-access-key', + aws_secret_access_key='your-secret-key' + ) + + # Generate presigned URL + url = s3.generate_presigned_url( + 'get_object', + Params={'Bucket': 'my-first-bucket', 'Key': 'shared-file.txt'}, + ExpiresIn=3600 # 1 hour + ) + print(f"Presigned URL: {url}") + ``` + +### Plan your ongoing usage + +9. **Document your Storj setup**: + - Access credentials and their purposes + - Bucket organization and policies + - Automation scripts and schedules + - Cost optimization strategies + +10. **Create maintenance checklist**: + - Weekly: Review usage and costs + - Monthly: Clean up unnecessary files + - Quarterly: Review and optimize data organization + +**Expected outcome**: You have an optimized, automated Storj setup ready for production use. 
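The "temporary files auto-delete after 30 days" policy planned on Day 5 has no built-in lifecycle rule in Uplink, so a scheduled cleanup job is one way to approximate it. Here is a sketch, assuming the dated `sj://automated-backups/YYYY-MM-DD/` layout created by `daily-backup.sh` on Day 6 (verify the `uplink ls` output format and `uplink rm` flags against your CLI version before relying on it):

```bash
#!/bin/bash
# prune-old-backups.sh -- remove dated backup prefixes older than 30 days.
# Sketch only: assumes prefixes named YYYY-MM-DD as created by daily-backup.sh.

CUTOFF=$(date -d "-30 days" +%Y-%m-%d)   # GNU date; use gdate on macOS

uplink ls sj://automated-backups/ | awk '{print $NF}' | tr -d '/' | while read -r day; do
  # Only act on tokens that look like dates; ISO dates compare correctly as strings
  if [[ "$day" =~ ^[0-9]{4}-[0-9]{2}-[0-9]{2}$ && "$day" < "$CUTOFF" ]]; then
    echo "Pruning backups from $day"
    uplink rm --recursive "sj://automated-backups/$day/"
  fi
done
```

A monthly cron entry such as `0 3 1 * * /path/to/prune-old-backups.sh` keeps this running without manual effort.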
+ +### Day 7 verification + +- [ ] Usage and costs analyzed +- [ ] Cost optimization implemented +- [ ] Advanced features explored +- [ ] Documentation completed +- [ ] Maintenance plan established + +## Tutorial completion checklist + +After completing all 7 days: + +### Technical achievements + +- [ ] Storj account set up and configured +- [ ] Multiple access methods working (console, CLI, S3-compatible) +- [ ] Data organized in logical bucket structure +- [ ] Automation scripts created and scheduled +- [ ] Web integration demonstrated +- [ ] Cost optimization implemented + +### Knowledge gained + +- [ ] Understanding of Storj's architecture and benefits +- [ ] Familiarity with different access methods +- [ ] Knowledge of S3 compatibility and third-party tools +- [ ] Web integration capabilities +- [ ] Data organization best practices +- [ ] Automation and scripting skills +- [ ] Cost optimization strategies + +### Production readiness + +- [ ] Secure credential management +- [ ] Backup and sync processes automated +- [ ] Monitoring and alerting configured +- [ ] Documentation completed +- [ ] Maintenance procedures established + +## What's next + +Now that you've mastered the basics: + +1. **Explore specific use cases**: + - [Build your first app](docId:build-your-first-app) with Storj integration + - [Set up multi-region storage](docId:setup-multi-region-storage) for global applications + - [Migrate from AWS S3](docId:migrate-from-s3) to Storj + +2. **Dive deeper into features**: + - [Optimize upload performance](docId:optimize-upload-performance) + - [Set up object versioning](docId:setup-object-versioning) + - [Configure CORS](docId:configure-cors) for web applications + +3. **Join the community**: + - [Storj Forum](https://forum.storj.io) + - [Discord Community](https://discord.gg/storj) + - [Documentation](https://docs.storj.io) + +Congratulations on completing your first week with Storj! You're now ready to build amazing applications with decentralized cloud storage. \ No newline at end of file diff --git a/app/(docs)/node/how-to/_meta.json b/app/(docs)/node/how-to/_meta.json index f868d38e6..d256ff53a 100644 --- a/app/(docs)/node/how-to/_meta.json +++ b/app/(docs)/node/how-to/_meta.json @@ -12,6 +12,18 @@ { "title": "Troubleshoot offline node", "id": "troubleshoot-offline-node" + }, + { + "title": "Fix database corruption", + "id": "fix-database-corruption" + }, + { + "title": "Set up remote access", + "id": "setup-remote-access" + }, + { + "title": "Monitor node performance", + "id": "monitor-node-performance" } ] } \ No newline at end of file diff --git a/app/(docs)/node/how-to/fix-database-corruption.md b/app/(docs)/node/how-to/fix-database-corruption.md new file mode 100644 index 000000000..2839d460e --- /dev/null +++ b/app/(docs)/node/how-to/fix-database-corruption.md @@ -0,0 +1,267 @@ +--- +title: Fix database corruption +docId: fix-database-corruption +metadata: + title: How to Fix Database Corruption - Storage Node + description: Repair SQLite database corruption on your Storj storage node +--- + +Fix SQLite database corruption that can occur during unplanned shutdowns or system issues. 
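Before working through the full procedure, a single command (explained step by step below) tells you whether a given database is damaged; a healthy file answers `ok`:

```bash
# Quick pre-check: a healthy database prints "ok"
# (adjust the path to your storage location)
sqlite3 /path/to/storage/bandwidth.db "PRAGMA integrity_check;"
```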
+ +## Prerequisites + +- Storage node that's experiencing database corruption +- Administrative access to your system +- SQLite 3.25.2 or later +- Backup space for database files + +## Identify database corruption + +### Check for corruption errors + +Look for these error messages in your storage node logs: +- "database disk image is malformed" +- Database integrity check failures +- SQLite corruption errors + +### Stop the storage node + +**Docker installation**: +```bash +docker stop storagenode +``` + +**Windows GUI**: Use the application interface to stop the node + +**Linux service**: +```bash +sudo systemctl stop storagenode +``` + +## Install SQLite tools + +### Option 1: Use Docker (recommended) + +```bash +# Docker-based SQLite (x86_64 only) +# Replace ${PWD} with absolute path to database location +docker run --rm -it --mount type=bind,source=${PWD},destination=/data sstc/sqlite3 \ + find . -maxdepth 1 -iname "*.db" -print0 -exec sqlite3 '{}' 'PRAGMA integrity_check;' ';' +``` + +### Option 2: Install SQLite directly + +**Linux (Debian/Ubuntu)**: +```bash +sudo apt update && sudo apt install sqlite3 -y +``` + +**Linux (CentOS/RHEL)**: +```bash +sudo yum install sqlite -y +``` + +**Windows**: Download from [SQLite Downloads](https://www.sqlite.org/download.html) + +**Verify version**: +```bash +sqlite3 --version +# Should be 3.25.2 or later +``` + +## Check database integrity + +### Locate database files + +Find your storage node's database files: +- **Docker**: In your mounted storage directory +- **Windows GUI**: Usually `C:\Program Files\Storj\Storage Node\storage\` +- **Linux**: Configured in your `config.yaml` file + +Common database files: +- `bandwidth.db` +- `piece_expiration.db` +- `piece_info.db` +- `reputation.db` +- `satellites.db` +- `storage_usage.db` + +### Check all databases + +**Linux/macOS**: +```bash +find /path/to/storage/ -maxdepth 1 -iname "*.db" -print0 -exec sqlite3 '{}' 'PRAGMA integrity_check;' ';' +``` + +**Windows PowerShell**: +```powershell +Get-ChildItem X:\storagenode\storage\*.db -File | %{$_.Name + " " + $(sqlite3.exe $_.FullName "PRAGMA integrity_check;")} +``` + +**Expected output**: "ok" for healthy databases + +## Repair corrupted databases + +### Create database backups + +```bash +# Back up corrupted database +cp /storage/bandwidth.db /storage/bandwidth.db.bak +``` + +### Extract data from corrupted database + +**Using Docker**: +```bash +# Open shell in container +docker run --rm -it --mount type=bind,source=/path/to/storage,destination=/storage sstc/sqlite3 sh +``` + +**Direct SQLite access**: +```bash +sqlite3 /storage/bandwidth.db +``` + +### Dump database contents + +In the SQLite prompt: +```sql +.mode insert +.output /storage/dump_all.sql +.dump +.exit +``` + +### Clean the dump file + +**Linux/Docker**: +```bash +{ echo "PRAGMA synchronous = OFF ;"; cat /storage/dump_all.sql; } | \ + grep -v -e TRANSACTION -e ROLLBACK -e COMMIT >/storage/dump_all_notrans.sql +``` + +**Windows PowerShell**: +```powershell +$(echo "PRAGMA synchronous = OFF ;"; Get-Content dump_all.sql) | \ + Select-String -NotMatch "TRANSACTION|ROLLBACK|COMMIT" | \ + Set-Content -Encoding utf8 dump_all_notrans.sql +``` + +### Recreate the database + +```bash +# Remove corrupted database (backup exists!) 
+rm /storage/bandwidth.db + +# Recreate database from dump +sqlite3 /storage/bandwidth.db ".read /storage/dump_all_notrans.sql" +``` + +### Verify repair + +```bash +# Check file size (should be > 0) +ls -l /storage/bandwidth.db + +# Test integrity +sqlite3 /storage/bandwidth.db "PRAGMA integrity_check;" +``` + +## Handle complete corruption + +If the repair process fails: + +### Follow database replacement guide + +For databases that cannot be repaired, you'll lose historical statistics but can continue operating: + +1. **Stop the storage node** +2. **Move corrupted database**: `mv corrupted.db corrupted.db.backup` +3. **Start storage node**: It will create a new, empty database +4. **Monitor for proper operation** + +{% callout type="warning" %} +**Important**: Complete database replacement will reset your node's statistics but won't affect stored data or earnings. +{% /callout %} + +## Restart storage node + +After successful repairs: + +**Docker**: +```bash +docker start storagenode +``` + +**Windows GUI**: Start through the application + +**Linux service**: +```bash +sudo systemctl start storagenode +``` + +## Prevent future corruption + +### Use proper shutdown procedures + +- Always stop storage nodes gracefully before system shutdown +- Avoid force-killing processes or containers +- Use UPS for power protection + +### Check storage configuration + +**Avoid incompatible storage**: +- Don't use NFS or SMB for database storage +- Only network protocol that works with SQLite is iSCSI +- Use local storage or direct-attached drives when possible + +**External drive considerations**: +- Ensure adequate power supply for USB drives +- Avoid drives that spin down during operation +- Prefer internal drives for database storage + +### Update your setup + +**Windows users**: +- Consider migrating to Windows GUI instead of Docker +- Disable write cache on external drives + +**Docker users**: +- Use current docker run command from documentation +- Ensure proper volume mounting + +**Unraid users**: +- Update to latest platform version (6.8.0+) +- Or use stable version 6.6.7 if corruption persists + +## Verification + +- [ ] Storage node starts without database errors +- [ ] Dashboard shows correct information +- [ ] No corruption errors in logs +- [ ] Node successfully communicates with satellites +- [ ] Backup procedures are in place + +## Troubleshooting + +**Repair fails with "database is locked"**: +- Ensure storage node is completely stopped +- Check for background processes accessing the database +- Restart the system if necessary + +**New database is empty after repair**: +- This is expected for completely corrupted databases +- Statistics will rebuild over time +- Stored data and earnings are not affected + +**Repeated corruption**: +- Check storage hardware health +- Review power management settings +- Consider storage configuration changes +- Monitor system logs for hardware issues + +## Next steps + +- Set up [automated monitoring](docId:monitor-node-performance) to detect issues early +- Learn about [node updates](docId:handle-node-updates) to prevent corruption during updates +- Configure [proper backup strategies](docId:your-backup-guide) for critical data \ No newline at end of file diff --git a/app/(docs)/node/how-to/monitor-node-performance.md b/app/(docs)/node/how-to/monitor-node-performance.md new file mode 100644 index 000000000..8cf34e2a8 --- /dev/null +++ b/app/(docs)/node/how-to/monitor-node-performance.md @@ -0,0 +1,405 @@ +--- +title: Monitor node performance +docId: 
monitor-node-performance +metadata: + title: How to Monitor Storage Node Performance + description: Set up monitoring and alerting for your Storj storage node health and performance +--- + +Monitor your storage node's performance, health metrics, and operational status to ensure optimal operation and early problem detection. + +## Prerequisites + +- Running storage node with dashboard access +- Basic understanding of storage node metrics +- Command line access to your node system +- Optional: Monitoring tools installation permissions + +## Monitor basic node health + +### Use the web dashboard + +**Access dashboard**: +- Local: `http://localhost:14002` +- Remote: Set up [remote access](docId:setup-remote-access) + +**Key metrics to monitor**: +- **Online score**: Should stay above 95% +- **Audit score**: Should remain above 95% +- **Suspension score**: Should stay at 100% +- **Available disk space**: Monitor free space remaining +- **Bandwidth usage**: Track ingress/egress patterns +- **Earnings**: Monitor payout trends + +### Check node status via command line + +**Docker installation**: +```bash +# Check container status +docker ps -f name=storagenode + +# View recent logs +docker logs --tail 100 storagenode + +# Follow live logs +docker logs -f storagenode +``` + +**Windows GUI**: +- Use the built-in dashboard and logs viewer +- Check Windows Event Viewer for system events + +**Linux binary**: +```bash +# Check service status +sudo systemctl status storagenode + +# View logs +sudo journalctl -u storagenode -f +``` + +## Set up log monitoring + +### Monitor critical log events + +**Key log patterns to watch**: +```bash +# Connection issues +grep -i "dial" /path/to/node/logs + +# Database errors +grep -i "database\|sqlite" /path/to/node/logs + +# Disk space warnings +grep -i "disk\|space" /path/to/node/logs + +# Audit failures +grep -i "audit.*failed" /path/to/node/logs + +# Suspension warnings +grep -i "suspend" /path/to/node/logs +``` + +### Automated log monitoring script + +Create `monitor-node.sh`: +```bash +#!/bin/bash +LOG_FILE="/path/to/storagenode.log" +EMAIL="your-email@example.com" + +# Check for critical errors in last 100 lines +ERRORS=$(tail -100 "$LOG_FILE" | grep -i "error\|failed\|suspend\|disqualify") + +if [ ! 
-z "$ERRORS" ]; then + echo "Critical errors detected in storage node:" | mail -s "Storage Node Alert" "$EMAIL" + echo "$ERRORS" | mail -s "Storage Node Error Details" "$EMAIL" +fi + +# Check disk space +DISK_USAGE=$(df /path/to/storage | awk 'NR==2 {print $5}' | sed 's/%//') +if [ "$DISK_USAGE" -gt 90 ]; then + echo "Disk usage is ${DISK_USAGE}% - critical level reached" | \ + mail -s "Storage Node Disk Alert" "$EMAIL" +fi +``` + +### Set up log rotation + +**Linux logrotate configuration** (`/etc/logrotate.d/storagenode`): +``` +/path/to/storagenode.log { + daily + rotate 30 + compress + delaycompress + missingok + notifempty + copytruncate +} +``` + +## Monitor system resources + +### Track resource usage + +**CPU and Memory monitoring**: +```bash +# Check current resource usage +top -p $(pgrep storagenode) + +# Get storage node process stats +ps aux | grep storagenode + +# Monitor over time +htop -p $(pgrep storagenode) +``` + +**Disk I/O monitoring**: +```bash +# Monitor disk I/O +iostat -x 5 + +# Check specific storage device +iostat -x /dev/sda 5 + +# Monitor disk space +watch -n 30 'df -h /path/to/storage' +``` + +**Network monitoring**: +```bash +# Monitor network connections +netstat -tuln | grep 28967 + +# Track bandwidth usage +iftop -i eth0 + +# Monitor specific ports +ss -tuln | grep -E '(14002|28967)' +``` + +### Create resource monitoring script + +Create `resource-monitor.sh`: +```bash +#!/bin/bash +NODE_PID=$(pgrep storagenode) + +if [ -z "$NODE_PID" ]; then + echo "$(date): Storage node process not found!" >> /var/log/node-monitor.log + exit 1 +fi + +# Get resource usage +CPU_USAGE=$(ps -p $NODE_PID -o %cpu --no-headers) +MEM_USAGE=$(ps -p $NODE_PID -o %mem --no-headers) +DISK_FREE=$(df /path/to/storage | awk 'NR==2 {print $4}') + +# Log metrics +echo "$(date): CPU: ${CPU_USAGE}% | Memory: ${MEM_USAGE}% | Disk Free: ${DISK_FREE}KB" \ + >> /var/log/node-monitor.log + +# Alert thresholds +if (( $(echo "$CPU_USAGE > 80" | bc -l) )); then + echo "High CPU usage: ${CPU_USAGE}%" | mail -s "Node CPU Alert" your-email@example.com +fi + +if (( $(echo "$MEM_USAGE > 90" | bc -l) )); then + echo "High memory usage: ${MEM_USAGE}%" | mail -s "Node Memory Alert" your-email@example.com +fi +``` + +## Set up automated health checks + +### Node connectivity check + +Create `health-check.sh`: +```bash +#!/bin/bash +DASHBOARD_URL="http://localhost:14002" +NODE_ADDRESS="your-node-external-address:28967" + +# Check dashboard availability +if ! curl -s "$DASHBOARD_URL" > /dev/null; then + echo "$(date): Dashboard unreachable" >> /var/log/health-check.log + echo "Storage node dashboard is down" | mail -s "Node Dashboard Alert" your-email@example.com +fi + +# Check external port connectivity (requires external monitoring) +# This would need to be run from external server +# if ! 
nc -z -v $NODE_ADDRESS 2>/dev/null; then +# echo "External port not reachable" +# fi + +# Check recent successful connections in logs +RECENT_SUCCESS=$(tail -1000 /path/to/node/logs | grep -c "download started\|upload started") +if [ "$RECENT_SUCCESS" -eq 0 ]; then + echo "$(date): No recent activity detected" >> /var/log/health-check.log +fi +``` + +### Audit score monitoring + +Create `audit-monitor.sh`: +```bash +#!/bin/bash +API_ENDPOINT="http://localhost:14002/api/sno" + +# Get audit scores from dashboard API +AUDIT_DATA=$(curl -s "$API_ENDPOINT") + +# Extract audit scores (requires jq) +AUDIT_SCORE=$(echo "$AUDIT_DATA" | jq '.audits.score') +SUSPENSION_SCORE=$(echo "$AUDIT_DATA" | jq '.audits.suspensionScore') + +# Alert on low scores +if (( $(echo "$AUDIT_SCORE < 0.95" | bc -l) )); then + echo "Audit score dropped to $AUDIT_SCORE" | \ + mail -s "Critical: Low Audit Score" your-email@example.com +fi + +if (( $(echo "$SUSPENSION_SCORE < 1.0" | bc -l) )); then + echo "Suspension score is $SUSPENSION_SCORE" | \ + mail -s "Critical: Suspension Risk" your-email@example.com +fi +``` + +## Schedule monitoring tasks + +### Set up cron jobs + +Edit crontab: +```bash +crontab -e +``` + +Add monitoring schedules: +```bash +# Check health every 15 minutes +*/15 * * * * /path/to/health-check.sh + +# Monitor resources every 5 minutes +*/5 * * * * /path/to/resource-monitor.sh + +# Check for critical log events every hour +0 * * * * /path/to/monitor-node.sh + +# Check audit scores every hour +30 * * * * /path/to/audit-monitor.sh + +# Daily summary report +0 9 * * * /path/to/daily-report.sh +``` + +### Windows Task Scheduler + +For Windows nodes, set up scheduled tasks: +```powershell +# Create scheduled task for monitoring +$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) -RepetitionInterval (New-TimeSpan -Minutes 15) +$action = New-ScheduledTaskAction -Execute "PowerShell" -Argument "-File C:\path\to\monitor-script.ps1" +Register-ScheduledTask -TaskName "StorageNodeMonitor" -Trigger $trigger -Action $action +``` + +## Advanced monitoring with external tools + +### Using Grafana and Prometheus + +**Install Prometheus node exporter**: +```bash +# Download and install node exporter +wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz +tar xvfz node_exporter-*.*-amd64.tar.gz +sudo mv node_exporter-*.*-amd64/node_exporter /usr/local/bin/ +``` + +**Create Storj-specific metrics script**: +```bash +#!/bin/bash +# storj-metrics.sh +METRICS_FILE="/var/lib/node_exporter/textfile_collector/storj.prom" + +# Get node data from API +NODE_DATA=$(curl -s http://localhost:14002/api/sno) + +# Extract metrics and write to Prometheus format +echo "# HELP storj_audit_score Current audit score" > "$METRICS_FILE" +echo "# TYPE storj_audit_score gauge" >> "$METRICS_FILE" +echo "storj_audit_score $(echo $NODE_DATA | jq '.audits.score')" >> "$METRICS_FILE" + +echo "# HELP storj_suspension_score Current suspension score" >> "$METRICS_FILE" +echo "# TYPE storj_suspension_score gauge" >> "$METRICS_FILE" +echo "storj_suspension_score $(echo $NODE_DATA | jq '.audits.suspensionScore')" >> "$METRICS_FILE" +``` + +### Using Nagios/Icinga + +Create Nagios check script: +```bash +#!/bin/bash +# check_storj_node.sh +DASHBOARD_URL="http://localhost:14002/api/sno" + +# Get node status +RESPONSE=$(curl -s "$DASHBOARD_URL") +if [ $? 
-ne 0 ]; then + echo "CRITICAL - Cannot connect to dashboard" + exit 2 +fi + +# Check audit score +AUDIT_SCORE=$(echo "$RESPONSE" | jq '.audits.score') +if (( $(echo "$AUDIT_SCORE < 0.95" | bc -l) )); then + echo "CRITICAL - Audit score is $AUDIT_SCORE" + exit 2 +elif (( $(echo "$AUDIT_SCORE < 0.98" | bc -l) )); then + echo "WARNING - Audit score is $AUDIT_SCORE" + exit 1 +fi + +echo "OK - Node is healthy, audit score: $AUDIT_SCORE" +exit 0 +``` + +## Create alerting rules + +### Email alerts setup + +Configure mail system: +```bash +# Install mail utilities +sudo apt install mailutils postfix + +# Configure for external SMTP (optional) +sudo dpkg-reconfigure postfix +``` + +### Slack/Discord notifications + +Create webhook notification script: +```bash +#!/bin/bash +# send-alert.sh +WEBHOOK_URL="your-slack-webhook-url" +MESSAGE="$1" + +curl -X POST -H 'Content-type: application/json' \ + --data "{\"text\":\"Storage Node Alert: $MESSAGE\"}" \ + "$WEBHOOK_URL" +``` + +### SMS alerts (using Twilio) + +```bash +#!/bin/bash +# sms-alert.sh +ACCOUNT_SID="your-twilio-sid" +AUTH_TOKEN="your-twilio-token" +FROM="+1234567890" # Your Twilio number +TO="+0987654321" # Your phone number +MESSAGE="$1" + +curl -X POST "https://api.twilio.com/2010-04-01/Accounts/$ACCOUNT_SID/Messages.json" \ + --data-urlencode "From=$FROM" \ + --data-urlencode "To=$TO" \ + --data-urlencode "Body=Storage Node Alert: $MESSAGE" \ + -u "$ACCOUNT_SID:$AUTH_TOKEN" +``` + +## Verification checklist + +- [ ] Dashboard is accessible and showing current data +- [ ] Log monitoring detects critical events +- [ ] Resource monitoring tracks CPU, memory, and disk usage +- [ ] Health checks verify node connectivity +- [ ] Audit score monitoring is working +- [ ] Automated alerts are configured and tested +- [ ] Monitoring scripts are scheduled to run regularly +- [ ] Alert notifications reach you successfully + +## Next steps + +- Set up [node optimization](docId:optimize-node-performance) based on monitoring insights +- Learn about [handling node updates](docId:handle-node-updates) safely +- Configure [backup and disaster recovery](docId:your-backup-guide) procedures \ No newline at end of file diff --git a/app/(docs)/node/how-to/setup-remote-access.md b/app/(docs)/node/how-to/setup-remote-access.md new file mode 100644 index 000000000..dbe9c55e8 --- /dev/null +++ b/app/(docs)/node/how-to/setup-remote-access.md @@ -0,0 +1,382 @@ +--- +title: Set up remote access +docId: setup-remote-access +metadata: + title: How to Set Up Remote Access - Storage Node Dashboard + description: Configure secure remote access to your storage node dashboard using SSH tunneling +--- + +Set up secure remote access to your storage node dashboard from anywhere using SSH tunneling and port forwarding. 
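The whole technique reduces to a single SSH command, shown up front as a sketch (substitute your own username and host); the rest of this guide covers the setup and hardening around it:

```bash
# Forward the node's local-only dashboard (port 14002) to your machine,
# then browse http://localhost:14002 locally.
ssh -L 14002:localhost:14002 username@your-node-server
```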
+ +## Prerequisites + +- Storage node with web dashboard enabled +- SSH server installed on the node system +- SSH client on your remote device +- Basic networking knowledge +- Router access for port forwarding (optional) + +## Enable the web dashboard + +### Configure dashboard access + +**Docker installation**: +Ensure dashboard port is exposed in your docker run command: +```bash +docker run -d --restart unless-stopped -p 14002:14002 \ + -e WALLET="your-wallet-address" \ + -e EMAIL="your-email" \ + -e ADDRESS="your-external-address:28967" \ + -e STORAGE="2TB" \ + --mount type=bind,source="path-to-identity",destination=/app/identity \ + --mount type=bind,source="path-to-storage",destination=/app/config \ + --name storagenode storjlabs/storagenode:latest +``` + +**Windows GUI**: Dashboard is automatically available at `http://localhost:14002` + +**Linux binary**: Configure in `config.yaml`: +```yaml +console.address: 127.0.0.1:14002 +``` + +## Install SSH server + +### Linux (Ubuntu/Debian) + +```bash +# Install SSH server +sudo apt update && sudo apt install openssh-server -y + +# Enable and start SSH service +sudo systemctl enable ssh +sudo systemctl start ssh + +# Check SSH status +sudo systemctl status ssh +``` + +### Linux (CentOS/RHEL) + +```bash +# Install SSH server +sudo yum install openssh-server -y + +# Enable and start SSH service +sudo systemctl enable sshd +sudo systemctl start sshd + +# Check SSH status +sudo systemctl status sshd +``` + +### Windows + +Enable OpenSSH Server: +```powershell +# Install OpenSSH Server (Windows 10+) +Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0 + +# Start SSH service +Start-Service sshd + +# Set to auto-start +Set-Service -Name sshd -StartupType 'Automatic' +``` + +### macOS + +```bash +# Enable SSH (Remote Login) +sudo systemsetup -setremotelogin on + +# Check status +sudo systemsetup -getremotelogin +``` + +## Configure SSH security + +### Generate SSH key pair + +On your client device: + +```bash +# Generate SSH key pair +ssh-keygen -t rsa -b 4096 -C "storagenode-access" + +# Accept default location or specify custom path +# Enter passphrase for added security (optional but recommended) +``` + +### Copy public key to server + +**Linux/macOS**: +```bash +# Copy public key to server +ssh-copy-id -i ~/.ssh/id_rsa.pub username@your-node-server + +# Alternative method if ssh-copy-id not available +cat ~/.ssh/id_rsa.pub | ssh username@your-node-server "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys" +``` + +**Windows**: +```powershell +# Copy public key content +Get-Content ~/.ssh/id_rsa.pub | ssh username@your-node-server "cat >> ~/.ssh/authorized_keys" +``` + +### Secure SSH configuration + +Edit SSH server configuration: + +**Linux**: Edit `/etc/ssh/sshd_config` +**Windows**: Edit `%programdata%\ssh\sshd_config` + +```bash +# Recommended security settings +PubkeyAuthentication yes +PasswordAuthentication no +PermitRootLogin no +Port 2222 # Use non-standard port for additional security +MaxAuthTries 3 +``` + +Restart SSH service: +```bash +# Linux +sudo systemctl restart ssh + +# Windows +Restart-Service sshd +``` + +## Set up local SSH tunneling + +### Basic port forwarding + +From your client device: + +```bash +# Forward local port 14002 to remote dashboard +ssh -L 14002:localhost:14002 username@your-node-server + +# Keep connection open and access dashboard at: +# http://localhost:14002 +``` + +### Advanced tunneling options + +**Background tunnel**: +```bash +# Run 
tunnel in background +ssh -f -N -L 14002:localhost:14002 username@your-node-server + +# Kill background tunnel later +pkill -f "ssh.*14002:localhost:14002" +``` + +**Auto-reconnect tunnel**: +```bash +# Create persistent tunnel with autossh (install autossh first) +autossh -M 20000 -f -N -L 14002:localhost:14002 username@your-node-server +``` + +**Multiple port forwarding**: +```bash +# Forward multiple services +ssh -L 14002:localhost:14002 -L 8080:localhost:8080 username@your-node-server +``` + +## Configure router port forwarding + +### Set up external access (optional) + +For direct external access without local tunneling: + +1. **Log in to your router's admin interface** +2. **Navigate to Port Forwarding settings** +3. **Create new forwarding rule**: + - External Port: 2222 (custom SSH port) + - Internal Port: 2222 + - Internal IP: Your node server's local IP + - Protocol: TCP +4. **Save and apply settings** + +**Security note**: Only forward SSH port, NOT the dashboard port directly. + +## Mobile access setup + +### Using Termius (Android/iOS) + +**Install Termius app** and configure: + +1. **Create new host**: + - Hostname: Your public IP or domain + - Port: 2222 (your SSH port) + - Username: Your SSH username + +2. **Add your SSH key**: + - Go to Keychain + - Import or generate SSH key + - Associate key with your host + +3. **Set up port forwarding**: + - Go to Port Forwarding + - Create new rule: + - Port forward from: 14002 + - Host to: 127.0.0.1 + - Port to: 14002 + - Address: 127.0.0.1 + +4. **Connect and access dashboard**: + - Connect to your host + - Enable port forwarding + - Open browser to `http://localhost:14002` + +## Create connection scripts + +### Automated connection script + +**Linux/macOS** (`connect-node.sh`): +```bash +#!/bin/bash +echo "Connecting to storage node dashboard..." +ssh -L 14002:localhost:14002 username@your-node-server -N & +SSH_PID=$! + +echo "SSH tunnel established (PID: $SSH_PID)" +echo "Dashboard available at: http://localhost:14002" +echo "Press Ctrl+C to disconnect" + +# Wait for user interrupt +trap "kill $SSH_PID; exit" INT +wait $SSH_PID +``` + +**Windows** (`connect-node.bat`): +```batch +@echo off +echo Connecting to storage node dashboard... +start "SSH Tunnel" ssh -L 14002:localhost:14002 username@your-node-server -N +echo Dashboard available at: http://localhost:14002 +echo Press any key to disconnect... +pause +taskkill /F /IM ssh.exe +``` + +Make scripts executable: +```bash +chmod +x connect-node.sh +``` + +## Test and verify access + +### Verify SSH connection + +```bash +# Test SSH connection +ssh username@your-node-server + +# Should connect without password (using key) +``` + +### Test dashboard access + +1. **Establish SSH tunnel** +2. **Open browser to `http://localhost:14002`** +3. **Verify dashboard loads correctly** +4. 
**Check all dashboard functions work** + +### Test from different networks + +- Connect from different WiFi networks +- Test mobile access +- Verify connection stability + +## Troubleshooting + +### SSH connection issues + +**Permission denied**: +```bash +# Check SSH key permissions +chmod 600 ~/.ssh/id_rsa +chmod 644 ~/.ssh/id_rsa.pub + +# Verify public key is on server +ssh username@server "cat ~/.ssh/authorized_keys" +``` + +**Connection timeout**: +- Check firewall settings on server +- Verify SSH service is running +- Confirm correct IP address and port + +**Port forwarding not working**: +```bash +# Check if port is in use locally +netstat -an | grep 14002 + +# Try different local port +ssh -L 14003:localhost:14002 username@server +``` + +### Dashboard access issues + +**Dashboard not loading**: +- Verify storage node is running +- Check dashboard is enabled in configuration +- Confirm port 14002 is correct + +**Partial functionality**: +- Check browser console for errors +- Try different browser +- Clear browser cache + +### Mobile app issues + +**Termius connection fails**: +- Verify host configuration +- Check SSH key is properly imported +- Test connection without port forwarding first + +**Port forwarding doesn't work**: +- Ensure rule is enabled +- Check local port isn't in use +- Try refreshing the connection + +## Security best practices + +### Secure your setup + +- Use strong SSH key passphrases +- Disable password authentication +- Use non-standard SSH ports +- Implement fail2ban for brute force protection +- Regular security updates + +### Monitor access + +```bash +# Monitor SSH login attempts +sudo tail -f /var/log/auth.log | grep ssh + +# Check active SSH connections +who +``` + +### Firewall configuration + +```bash +# Allow SSH on custom port +sudo ufw allow 2222/tcp + +# Block dashboard port from external access +sudo ufw deny 14002/tcp +``` + +## Next steps + +- Set up [monitoring and alerts](docId:monitor-node-performance) for your storage node +- Learn about [node optimization](docId:optimize-node-performance) for better performance +- Configure [automated backups](docId:your-backup-guide) for important node data \ No newline at end of file diff --git a/app/(docs)/object-mount/how-to/_meta.json b/app/(docs)/object-mount/how-to/_meta.json new file mode 100644 index 000000000..330aa0148 --- /dev/null +++ b/app/(docs)/object-mount/how-to/_meta.json @@ -0,0 +1,25 @@ +{ + "title": "How-to Guides", + "nav": [ + { + "title": "Install on Debian/Ubuntu", + "id": "install-debian-ubuntu" + }, + { + "title": "Install on RHEL/CentOS", + "id": "install-rhel-centos" + }, + { + "title": "Configure POSIX permissions", + "id": "configure-posix-permissions" + }, + { + "title": "Optimize for large files", + "id": "optimize-large-files" + }, + { + "title": "Troubleshoot mount issues", + "id": "troubleshoot-mount-issues" + } + ] +} \ No newline at end of file diff --git a/app/(docs)/object-mount/how-to/configure-posix-permissions.md b/app/(docs)/object-mount/how-to/configure-posix-permissions.md new file mode 100644 index 000000000..14ac313d8 --- /dev/null +++ b/app/(docs)/object-mount/how-to/configure-posix-permissions.md @@ -0,0 +1,204 @@ +--- +title: Configure POSIX permissions +docId: configure-posix-permissions +metadata: + title: How to Configure POSIX Permissions - Object Mount + description: Enable and configure POSIX mode for traditional file system permissions on Object Mount +--- + +Configure POSIX permissions to enable traditional Unix-style file permissions, ownership, 
and metadata on your Object Mount filesystems. + +## Prerequisites + +- Object Mount installed and activated +- Write access to the target bucket +- S3 credentials with `PutObject` and `DeleteObject` permissions +- Understanding of when POSIX mode is needed + +## When to use POSIX mode + +**Enable POSIX mode if you need**: +- Applications that check file permissions +- Symbolic link support +- Fine-grained user/group permissions +- Shared filesystem access across users +- Traditional Unix file metadata (mtime, ctime, ownership) + +**Skip POSIX mode if**: +- Using read-only credentials (POSIX mode will fail) +- Only need basic file browsing and download +- Working with simple media workflows +- Want maximum performance (POSIX adds overhead) + +## Enable POSIX mode on new mount + +### Create POSIX-enabled mount + +1. **Open Object Mount application** +2. **Go to Mounts tab** +3. **Click "Create new mount"** +4. **Configure basic settings**: + - Mount name: Choose descriptive name + - Credentials: Select your S3 credentials + - Bucket: Choose target bucket +5. **Enable POSIX mode**: Check "Enable POSIX mode" checkbox +6. **Save and activate mount** + +### Verify POSIX activation + +```bash +# Check mount options +mount | grep cuno + +# Verify POSIX metadata file exists in bucket +# (This is created automatically at bucket root) +``` + +## Configure POSIX permissions + +### Set default permissions + +POSIX mode creates files and directories with default permissions: +- **Files**: 644 (owner read/write, group/others read) +- **Directories**: 755 (owner read/write/execute, group/others read/execute) + +### Customize permission behavior + +In Object Mount preferences: +1. **Go to Preferences > Advanced Settings** +2. **Configure POSIX options**: + - Default file permissions + - Default directory permissions + - User/group mapping behavior + +### Handle existing files + +Files uploaded without POSIX mode: +- Will inherit default POSIX permissions when accessed +- Metadata gets added automatically on first access +- Original files remain unchanged in cloud storage + +## Manage file ownership + +### Set user/group ownership + +```bash +# Change ownership (if mount supports it) +chown user:group /path/to/mounted/file + +# Change group only +chgrp group /path/to/mounted/file +``` + +### Configure user mapping + +For multi-user environments: +1. **Map cloud storage users to local system users** +2. **Configure in Object Mount preferences** +3. 
**Set consistent UID/GID mappings across systems** + +## Working with symbolic links + +### Create symbolic links + +```bash +# Create symbolic link on mounted filesystem +ln -s /target/file /path/to/mount/symlink + +# Verify link creation +ls -la /path/to/mount/ +``` + +### Limitations + +- Symbolic links work within the mounted filesystem +- Cross-filesystem links may not work as expected +- Some cloud storage providers have symlink limitations + +## Troubleshooting POSIX mode + +### Mount fails to activate + +**Check write permissions**: +```bash +# Test bucket write access +echo "test" > /tmp/testfile +# Try to upload test file to verify write access +``` + +**Common causes**: +- Read-only credentials (POSIX requires write access) +- Bucket permissions don't allow object creation +- Network connectivity issues + +### Permission errors + +**"Operation not permitted" errors**: +- Verify POSIX mode is enabled on the mount +- Check that metadata file exists in bucket root +- Ensure credentials have adequate permissions + +**Files show wrong ownership**: +- Check user/group mapping configuration +- Verify UID/GID settings in Object Mount preferences + +### Performance issues + +**POSIX mode adds overhead**: +- Each file access requires metadata lookup +- Consider disabling if not needed +- Use caching settings to improve performance + +## Advanced POSIX configuration + +### Optimize metadata caching + +1. **Enable metadata cache** in Preferences +2. **Set appropriate cache duration**: + - Longer for read-heavy workloads + - Shorter for frequently changing files +3. **Monitor cache performance** + +### Configure for shared access + +For team environments: +- Set consistent POSIX permissions across all mounts +- Configure shared user/group mappings +- Implement access control policies + +### Integration with backup tools + +Many backup tools expect POSIX metadata: +```bash +# rsync with POSIX metadata preservation +rsync -avX /local/files/ /mounted/storage/ + +# tar with extended attributes +tar --xattrs -czf backup.tar.gz /mounted/files/ +``` + +## Disable POSIX mode + +### Remove POSIX from existing mount + +1. **Unmount the filesystem** +2. **Edit mount configuration** +3. **Uncheck "Enable POSIX mode"** +4. **Remount filesystem** + +**Note**: Metadata files remain in cloud storage but won't be actively used. 
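A quick permissions round trip confirms the first few items in the checklist below (paths are examples; run it on a POSIX-enabled mount):

```bash
# Round-trip check that POSIX metadata actually persists on the mount
touch /path/to/mount/perm-test
chmod 640 /path/to/mount/perm-test
ls -l /path/to/mount/perm-test            # expect: -rw-r-----
ln -s perm-test /path/to/mount/perm-link  # symlink support check
ls -la /path/to/mount/ | grep perm-link
```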
+ +## Verification checklist + +- [ ] Mount created with POSIX mode enabled +- [ ] Write access confirmed for target bucket +- [ ] File permissions work as expected +- [ ] Ownership changes are preserved +- [ ] Symbolic links function correctly +- [ ] Performance is acceptable for your use case + +## Next steps + +- [Optimize for large files](docId:optimize-large-files) with POSIX-aware applications +- [Troubleshoot mount issues](docId:troubleshoot-mount-issues) if problems arise +- Learn about [media workflow integration](docId:your-media-guide) with POSIX-enabled mounts \ No newline at end of file diff --git a/app/(docs)/object-mount/how-to/install-debian-ubuntu.md b/app/(docs)/object-mount/how-to/install-debian-ubuntu.md new file mode 100644 index 000000000..51018e640 --- /dev/null +++ b/app/(docs)/object-mount/how-to/install-debian-ubuntu.md @@ -0,0 +1,181 @@ +--- +title: Install on Debian/Ubuntu +docId: install-debian-ubuntu +metadata: + title: How to Install Object Mount on Debian/Ubuntu + description: Step-by-step installation guide for Object Mount on Debian and Ubuntu systems +--- + +Install Object Mount on Debian or Ubuntu systems to mount cloud storage as a local filesystem. + +## Prerequisites + +- Debian 10+ or Ubuntu 18.04+ +- Administrative privileges (sudo access) +- Active internet connection +- Storj DCS account with S3-compatible credentials + +## Download and install + +### Download the installer package + +```bash +# Download the latest Debian package +wget https://github.com/cunoFS/cunoFS/releases/latest/download/cuno_amd64_glibc_deb.run +``` + +### Extract the installation files + +```bash +# Make the installer executable and run it +chmod +x cuno_amd64_glibc_deb.run + +# Extract files (accept EULA interactively) +sh cuno_amd64_glibc_deb.run + +# OR accept EULA automatically +CUNO_INSTALL_ACCEPT_EULA="yes" sh cuno_amd64_glibc_deb.run +``` + +### Install the package + +```bash +# Update package list +sudo apt update + +# Install Object Mount and dependencies +# Note: Use ./ to install local package +cd cuno_*_amd64_glibc/ +sudo apt install ./cuno_*_amd64.deb +``` + +## Activate Object Mount + +### Choose activation method + +During installation, you'll be prompted to activate Object Mount: + +**Option 1: Start free trial** +```bash +# Set environment variable for automatic trial activation +CUNO_INSTALL_LICENSE="trial" sudo apt install ./cuno_*_amd64.deb +``` + +**Option 2: Use license key** +```bash +# For existing license key +CUNO_INSTALL_LICENSE="your-license-key-here" sudo apt install ./cuno_*_amd64.deb +``` + +**Option 3: Skip activation** +```bash +# Skip activation during install (activate later in app) +CUNO_INSTALL_LICENSE="none" sudo apt install ./cuno_*_amd64.deb +``` + +### Set environment variables + +```bash +# Add to your shell profile (~/.bashrc or ~/.zshrc) +echo 'export CUNO_ROOT=/opt/cuno' >> ~/.bashrc + +# Reload your shell configuration +source ~/.bashrc +``` + +## Launch Object Mount + +### Start the application + +```bash +# Launch Object Mount GUI +object-mount + +# OR launch from command line +cuno +``` + +### Verify installation + +Check that Object Mount is properly installed: + +```bash +# Check version +cuno --version + +# Check installation directory +ls -la /opt/cuno/ +``` + +## Configure credentials + +### Add Storj DCS credentials + +1. **Open Object Mount application** +2. **Go to Credentials tab** +3. **Click "Import Credentials"** +4. **Select "S3-Compatible"** +5. 
**Enter your details**: + - Access Key: Your Storj access key + - Secret Key: Your Storj secret key + - Endpoint: `https://gateway.storjshare.io` + - Region: `us-east-1` + +### Test connection + +1. **Go to Mounts tab** +2. **Create new mount** +3. **Select your credentials** +4. **Choose a bucket** +5. **Test the connection** + +## Verification + +1. **Check service status**: Verify Object Mount service is running +2. **Test mount creation**: Create a test mount with your credentials +3. **Browse files**: Navigate mounted storage in file manager +4. **Verify read/write**: Test file operations if you have write access + +## Troubleshooting + +**Installation fails with dependency errors**: +```bash +# Install missing dependencies manually +sudo apt install -f +``` + +**Permission denied during install**: +```bash +# Ensure you're using sudo +sudo apt install ./cuno_*_amd64.deb +``` + +**Application won't start**: +```bash +# Check if required dependencies are installed +dpkg -l | grep cuno + +# Check system logs for errors +journalctl -u cuno* +``` + +**Mount not appearing**: +- Verify credentials are correct +- Check that buckets are accessible +- Ensure FUSE is available: `sudo apt install fuse` + +## Uninstall (if needed) + +```bash +# Remove Object Mount package +sudo apt remove cuno + +# Remove configuration files (optional) +rm -rf ~/.config/Object Mount/ +``` + +## Next steps + +- [Configure POSIX permissions](docId:configure-posix-permissions) for advanced file system compatibility +- [Optimize for large files](docId:optimize-large-files) if working with media workflows +- [Set up Docker container](docId:setup-docker-container) for containerized environments \ No newline at end of file diff --git a/app/(docs)/object-mount/how-to/install-rhel-centos.md b/app/(docs)/object-mount/how-to/install-rhel-centos.md new file mode 100644 index 000000000..194a69f1c --- /dev/null +++ b/app/(docs)/object-mount/how-to/install-rhel-centos.md @@ -0,0 +1,280 @@ +--- +title: Install on RHEL/CentOS +docId: install-rhel-centos +metadata: + title: How to Install Object Mount on RHEL/CentOS + description: Step-by-step installation guide for Object Mount on Red Hat Enterprise Linux and CentOS systems +--- + +Install Object Mount on Red Hat Enterprise Linux (RHEL) or CentOS systems to mount cloud storage as a local filesystem. 
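For experienced administrators, the whole installation condenses to the commands below; each step, including EULA handling and licensing options, is explained in the sections that follow:

```bash
# Condensed install sequence (details below)
wget https://github.com/cunoFS/cunoFS/releases/latest/download/cuno_amd64_glibc_rpm.run
CUNO_INSTALL_ACCEPT_EULA="yes" sh cuno_amd64_glibc_rpm.run
cd cuno_*_amd64_glibc/
sudo dnf localinstall cuno_*_amd64.rpm
```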
+ +## Prerequisites + +- RHEL 8+ or CentOS 8+ +- Administrative privileges (sudo access) +- Active internet connection +- Storj DCS account with S3-compatible credentials + +## Download and install + +### Download the installer package + +```bash +# Download the latest RPM-compatible package +wget https://github.com/cunoFS/cunoFS/releases/latest/download/cuno_amd64_glibc_rpm.run +``` + +### Extract the installation files + +```bash +# Make the installer executable and run it +chmod +x cuno_amd64_glibc_rpm.run + +# Extract files (accept EULA interactively) +sh cuno_amd64_glibc_rpm.run + +# OR accept EULA automatically +CUNO_INSTALL_ACCEPT_EULA="yes" sh cuno_amd64_glibc_rpm.run +``` + +### Install system dependencies + +```bash +# Update package list +sudo yum update + +# Install FUSE and other dependencies +sudo yum install fuse fuse-libs + +# For RHEL 8+/CentOS 8+, you may need EPEL repository +sudo yum install epel-release +``` + +### Install the package + +```bash +# Navigate to extracted directory +cd cuno_*_amd64_glibc/ + +# Install Object Mount using yum/dnf +sudo yum localinstall cuno_*_amd64.rpm + +# OR use dnf on newer systems +sudo dnf localinstall cuno_*_amd64.rpm +``` + +## Activate Object Mount + +### Choose activation method + +During installation, you'll be prompted to activate Object Mount: + +**Option 1: Start free trial** +```bash +# Set environment variable for automatic trial activation +CUNO_INSTALL_LICENSE="trial" sudo yum localinstall cuno_*_amd64.rpm +``` + +**Option 2: Use license key** +```bash +# For existing license key +CUNO_INSTALL_LICENSE="your-license-key-here" sudo yum localinstall cuno_*_amd64.rpm +``` + +**Option 3: Skip activation** +```bash +# Skip activation during install (activate later in app) +CUNO_INSTALL_LICENSE="none" sudo yum localinstall cuno_*_amd64.rpm +``` + +### Set environment variables + +```bash +# Add to your shell profile (~/.bashrc or ~/.zshrc) +echo 'export CUNO_ROOT=/opt/cuno' >> ~/.bashrc + +# Reload your shell configuration +source ~/.bashrc +``` + +## Configure firewall and SELinux + +### Adjust firewall settings + +```bash +# Allow FUSE filesystem access (if needed) +sudo firewall-cmd --permanent --add-service=nfs +sudo firewall-cmd --reload + +# OR disable firewall temporarily for testing +sudo systemctl stop firewalld +``` + +### Handle SELinux policies + +```bash +# Check SELinux status +getenforce + +# If SELinux is enforcing, you may need to allow FUSE access +sudo setsebool -P use_fusefs_home_dirs on + +# Create custom SELinux policy if needed (advanced) +# Consult your security administrator for production systems +``` + +## Launch Object Mount + +### Start the application + +```bash +# Launch Object Mount GUI (if display available) +object-mount + +# OR launch from command line +cuno + +# Check if running as service +systemctl status cuno +``` + +### Verify installation + +Check that Object Mount is properly installed: + +```bash +# Check version +cuno --version + +# Check installation directory +ls -la /opt/cuno/ + +# Verify FUSE module is loaded +lsmod | grep fuse +``` + +## Configure credentials + +### Add Storj DCS credentials + +1. **Open Object Mount application** +2. **Go to Credentials tab** +3. **Click "Import Credentials"** +4. **Select "S3-Compatible"** +5. **Enter your details**: + - Access Key: Your Storj access key + - Secret Key: Your Storj secret key + - Endpoint: `https://gateway.storjshare.io` + - Region: `us-east-1` + +### Test connection + +1. **Go to Mounts tab** +2. **Create new mount** +3. 
**Select your credentials** +4. **Choose a bucket** +5. **Test the connection** + +## Handle RHEL/CentOS-specific issues + +### User permissions + +```bash +# Add your user to the fuse group +sudo usermod -a -G fuse $USER + +# Log out and back in, or use newgrp +newgrp fuse +``` + +### Mount point permissions + +```bash +# Create mount directory with proper permissions +sudo mkdir -p /mnt/object-mount +sudo chown $USER:$USER /mnt/object-mount +sudo chmod 755 /mnt/object-mount +``` + +### Service configuration + +```bash +# Enable Object Mount service (if available) +sudo systemctl enable cuno + +# Start the service +sudo systemctl start cuno + +# Check service status +sudo systemctl status cuno +``` + +## Verification + +1. **Check service status**: Verify Object Mount service is running +2. **Test mount creation**: Create a test mount with your credentials +3. **Browse files**: Navigate mounted storage in file manager +4. **Verify read/write**: Test file operations if you have write access + +## Troubleshooting + +**Installation fails with dependency errors**: +```bash +# Install missing dependencies manually +sudo yum install -y fuse fuse-libs + +# Check for conflicting packages +rpm -qa | grep fuse +``` + +**Permission denied during install**: +```bash +# Ensure you're using sudo +sudo yum localinstall cuno_*_amd64.rpm + +# Check file permissions +ls -la cuno_*_amd64.rpm +``` + +**FUSE module not loaded**: +```bash +# Load FUSE module manually +sudo modprobe fuse + +# Ensure the fuse module loads at boot +echo "fuse" | sudo tee -a /etc/modules-load.d/fuse.conf +``` + +**SELinux blocking mounts**: +```bash +# Check for SELinux denials +sudo ausearch -m avc -ts recent + +# Temporarily disable SELinux for testing +sudo setenforce 0 + +# Re-enable after testing +sudo setenforce 1 +``` + +**Mount not appearing**: +- Verify credentials are correct +- Check that buckets are accessible +- Ensure user is in fuse group +- Check firewall isn't blocking connections + +## Uninstall (if needed) + +```bash +# Remove Object Mount package +sudo yum remove cuno + +# Remove configuration files (optional; quote the path, which contains a space) +rm -rf ~/.config/"Object Mount"/ +``` + +## Next steps + +- [Configure POSIX permissions](docId:configure-posix-permissions) for advanced file system compatibility +- [Optimize for large files](docId:optimize-large-files) if working with media workflows +- [Set up Docker container](docId:setup-docker-container) for containerized environments \ No newline at end of file diff --git a/app/(docs)/object-mount/how-to/optimize-large-files.md b/app/(docs)/object-mount/how-to/optimize-large-files.md new file mode 100644 index 000000000..a3a0dec05 --- /dev/null +++ b/app/(docs)/object-mount/how-to/optimize-large-files.md @@ -0,0 +1,255 @@ +--- +title: Optimize for large files +docId: optimize-large-files +metadata: + title: How to Optimize Object Mount for Large Files + description: Configure Object Mount for optimal performance with large files and media workflows +--- + +Optimize Object Mount configuration for working with large files, media assets, and high-bandwidth workflows. + +## Prerequisites + +- Object Mount installed and activated +- Mount with write access (for caching and prefetch) +- Understanding of your file sizes and workflow patterns +- Adequate local storage for caching (recommended: 10-50GB) + +## Configure caching for large files + +### Enable data caching + +**Set up local data cache**: +1. **Go to Preferences > Cache Settings** +2. **Enable Data Cache**: Check the box +3.
**Set cache size**: Allocate 10-50GB based on available disk space +4. **Choose cache location**: Use fastest local storage (SSD preferred) +5. **Configure cache policies**: + - **Aggressive caching** for frequently accessed files + - **Conservative caching** to preserve local disk space + +### Optimize metadata caching + +**Configure metadata cache**: +1. **Enable Metadata Cache**: Check the box +2. **Set TTL (Time To Live)**: + - 15-30 minutes for active projects + - 5 minutes for frequently changing directories +3. **Cache directory listings**: Enable for faster browsing + +### Cache location best practices + +**Choose optimal cache directory**: +```bash +# macOS: Use high-speed SSD +~/Library/Caches/Object Mount/ + +# Linux: Place on fastest available storage +~/.cache/object-mount/ + +# Windows: Use SSD if available +C:\Users\%username%\AppData\Local\Object Mount\cache\ +``` + +## Configure connection optimization + +### Adjust connection pool settings + +**Optimize for high-bandwidth connections**: +1. **Go to Preferences > Advanced Settings** +2. **S3 Connection Pool Size**: + - **50-100**: Basic broadband (< 100 Mbps) + - **150-200**: High-speed fiber (1 Gbps+) + - **250-300**: Very high-speed connections (10 Gbps+) +3. **Concurrent transfers**: Match your bandwidth capacity + +### Enable file prefetching + +**Configure automatic prefetch**: +1. **Set environment variable**: `CUNO_OPTIONS=-filePrefetch` +2. **Add to shell profile** (macOS/Linux): + ```bash + echo 'export CUNO_OPTIONS="-filePrefetch"' >> ~/.bashrc + source ~/.bashrc + ``` +3. **Windows**: Set in System Environment Variables + +## Optimize transfer behavior + +### Configure multi-part uploads + +**For files larger than 64MB**: +1. **Enable multi-part uploads** in Advanced Settings +2. **Set chunk size**: + - **64MB**: Good balance for most connections + - **128MB**: For very high-speed connections + - **32MB**: For slower or unstable connections + +### Adjust timeout settings + +**Increase timeouts for large files**: +1. **Connection timeout**: 60-120 seconds +2. **Read timeout**: 300-600 seconds +3. 
**Write timeout**: 600-1200 seconds for very large files + +## Optimize for specific file types + +### Video and media files + +**Configuration for media workflows**: +```bash +# Set optimal options for media work +export CUNO_OPTIONS="-filePrefetch -chunkSize=128M -connectionPool=200" +``` + +**Media-specific settings**: +- **Enable aggressive caching** for active projects +- **Use local scratch disks** for rendering and temporary files +- **Keep media cache on local storage** (not on mounted storage) + +### Large datasets and archives + +**For scientific data, backups, archives**: +- **Conservative caching**: Only cache frequently accessed files +- **Higher connection pools**: Maximize parallel transfers +- **Longer metadata TTL**: Reduce API calls for static datasets + +### CAD and design files + +**For engineering and design workflows**: +- **Moderate caching**: Balance between performance and storage +- **Shorter timeouts**: Faster failure detection +- **Enable version-aware caching**: If files change frequently + +## Monitor and tune performance + +### Monitor transfer speeds + +**Track performance metrics**: +```bash +# Monitor network utilization +iftop -i your-interface # Linux/macOS +netstat -e # Windows + +# Check Object Mount process usage +top -p $(pgrep cuno) # Linux +Activity Monitor # macOS +Task Manager # Windows +``` + +### Identify bottlenecks + +**Common performance limiters**: +- **Network bandwidth**: Test with speed test tools +- **Local disk I/O**: Monitor disk queue depth and utilization +- **CPU usage**: Check if encryption/compression is CPU-bound +- **Memory usage**: Ensure adequate RAM for caching + +### Adjust based on usage patterns + +**Sequential access patterns** (video playback): +- Increase prefetch buffer size +- Use larger chunk sizes +- Enable aggressive read-ahead caching + +**Random access patterns** (browsing, thumbnails): +- Smaller chunk sizes +- More conservative caching +- Higher connection pool for parallel requests + +## Application-specific optimization + +### Creative applications (Adobe, Avid, etc.) + +**Optimize for NLE workflows**: +1. **Proxy workflows**: Use Object Mount for source media, local storage for proxies +2. **Media cache location**: Keep application caches on local fast storage +3. **Project files**: Store project files locally, media on mounted storage + +**Configuration example**: +```bash +# Optimized for video editing +export CUNO_OPTIONS="-filePrefetch -chunkSize=64M -connectionPool=150 -cacheSize=20GB" +``` + +### Database and structured data + +**For applications accessing databases**: +- **Disable POSIX mode** unless specifically required +- **Use smaller chunk sizes** for frequent small reads +- **Enable metadata caching** with shorter TTL + +## Handle very large files (> 10GB) + +### Special considerations + +**For extremely large files**: +1. **Monitor memory usage**: Large files can consume significant RAM +2. **Increase timeouts significantly**: Allow time for complete transfers +3. **Use wired connections**: Avoid WiFi for critical large file operations +4. 
**Plan for interruptions**: Ensure resume capability is working + +### Test large file operations + +**Before production use**: +```bash +# Test large file upload +time cp large-test-file.bin /mounted/storage/ + +# Test large file download +time cp /mounted/storage/large-file.bin /local/destination/ + +# Monitor during operation +iostat -x 1 # Linux +iostat 1 # macOS +``` + +## Troubleshoot performance issues + +### Common performance problems + +**Slow transfer speeds**: +- Check network bandwidth to cloud provider +- Verify cache directory is on fast storage +- Increase connection pool size gradually +- Test with different chunk sizes + +**High memory usage**: +- Reduce cache size allocation +- Lower connection pool count +- Disable prefetch temporarily +- Check for memory leaks in logs + +**Frequent timeouts**: +- Increase timeout values +- Check network stability +- Reduce concurrent operations +- Test during different times of day + +### Performance testing + +**Benchmark your configuration**: +```bash +# Create test files of different sizes +dd if=/dev/zero of=test-1gb.bin bs=1M count=1024 + +# Time uploads with different settings +time cp test-1gb.bin /mounted/storage/ +``` + +## Verification checklist + +- [ ] Data cache enabled and properly sized +- [ ] Metadata cache configured with appropriate TTL +- [ ] Connection pool optimized for your bandwidth +- [ ] File prefetch enabled for sequential access +- [ ] Chunk size appropriate for file sizes +- [ ] Timeouts set for worst-case scenarios +- [ ] Cache location on fastest available storage +- [ ] Performance tested with actual file sizes + +## Next steps + +- [Set up Docker container](docId:setup-docker-container) for containerized large file processing +- Learn about [media workflow integration](docId:your-media-guide) for video production +- Configure [monitoring and alerting](docId:your-monitoring-guide) for performance tracking \ No newline at end of file diff --git a/app/(docs)/object-mount/how-to/troubleshoot-mount-issues.md b/app/(docs)/object-mount/how-to/troubleshoot-mount-issues.md new file mode 100644 index 000000000..7ff50e49b --- /dev/null +++ b/app/(docs)/object-mount/how-to/troubleshoot-mount-issues.md @@ -0,0 +1,286 @@ +--- +title: Troubleshoot mount issues +docId: troubleshoot-mount-issues +metadata: + title: How to Troubleshoot Object Mount Issues + description: Solve common Object Mount problems including mounting failures, slow performance, and credential issues +--- + +Diagnose and resolve common Object Mount issues to ensure reliable cloud storage access. + +## Prerequisites + +- Object Mount installed and activated +- Access to log files and system information +- Basic troubleshooting permissions (ability to restart services) + +## Mount won't appear or activate + +### Check credentials and permissions + +**Verify credential configuration**: +1. **Go to Credentials tab** in Object Mount +2. **Test credentials** with "Test Connection" +3. **Confirm endpoint URL** is correct +4. 
**Check bucket accessibility** + +**For Storj DCS**: +- Endpoint: `https://gateway.storjshare.io` +- Region: `us-east-1` (required even though not used) +- Ensure bucket names are DNS-compliant (lowercase letters, numbers, and hyphens) + +### Verify system dependencies + +**Check required components**: + +On macOS: +```bash +# Verify macFUSE is installed +ls -la /Library/Filesystems/macfuse.fs/ + +# Check macFUSE version +/Library/Filesystems/macfuse.fs/Contents/Resources/load_macfuse +``` + +On Windows: +- Ensure WinFsp is installed from [official releases](https://github.com/winfsp/winfsp) +- Check Windows services for WinFsp components + +On Linux: +```bash +# Check FUSE availability +ls -la /dev/fuse +lsmod | grep fuse + +# Install FUSE if missing +sudo apt install fuse # Debian/Ubuntu +sudo yum install fuse # CentOS/RHEL +``` + +### Diagnose POSIX mode issues + +**Common POSIX problems**: +- POSIX mode enabled with read-only credentials +- Insufficient permissions for metadata file creation + +**Solutions**: +```bash +# Disable POSIX mode for read-only mounts +# In mount configuration, uncheck "Enable POSIX mode" + +# OR upgrade credentials to read-write access +``` + +## Mount is slow or freezes + +### Enable caching + +**Configure data and metadata caching**: +1. **Go to Preferences > Cache Settings** +2. **Enable data cache**: Set cache size (e.g., 1-5GB) +3. **Enable metadata cache**: Set reasonable TTL (e.g., 5-15 minutes) +4. **Choose SSD location** for cache directory + +### Optimize connection settings + +**Increase connection pool**: +1. **Go to Preferences > Advanced Settings** +2. **Set S3 connection pool**: + - 50-100 for basic usage + - 150-200 for high-bandwidth connections (1Gbps+) +3. **Enable file prefetch**: Set `CUNO_OPTIONS=-filePrefetch` + +### Check network performance + +```bash +# Test network latency to Storj +ping gateway.storjshare.io + +# Test bandwidth to cloud storage +# Upload/download test files to measure throughput + +# Check for network congestion +netstat -i # Interface statistics +iftop # Real-time bandwidth monitoring +``` + +## Credentials work elsewhere but fail in Object Mount + +### Configure S3-compatible settings + +**For non-AWS S3 providers**: +1. **Use S3-Compatible tab** when importing credentials +2. **Select provider** from dropdown (if available) +3. **Set explicit region** even if not used +4. **Verify endpoint reachability** + +```bash +# Test endpoint connectivity +curl -I https://gateway.storjshare.io + +# Test S3 API access +aws s3 ls --endpoint-url https://gateway.storjshare.io \ + --profile your-profile +``` + +### Handle custom authentication + +**For providers requiring special configuration**: +- Check provider documentation for required headers +- Verify signature version compatibility +- Configure custom authentication parameters in advanced settings + +## Files are stalling during operations + +### Optimize transfer settings + +**For large file transfers**: +1. **Enable multi-part uploads** in preferences +2. **Adjust chunk size** for your connection speed +3.
**Increase timeout values** for slow connections + +**Windows-specific solutions**: +- Use **"Fast Paste Here"** right-click option instead of standard Windows copy +- This bypasses Windows Explorer and uses optimized Object Mount transfers + +### Monitor transfer progress + +```bash +# Monitor network activity (Linux/macOS) +iftop -i your-interface + +# Check Object Mount processes +ps aux | grep cuno +top -p $(pgrep cuno) + +# Windows: Use Task Manager or Resource Monitor +``` + +## Enable detailed logging + +### Configure debug logging + +1. **Go to Preferences > Advanced Settings** +2. **Set Log Level to "debug" or "trace"** +3. **Reproduce the issue** +4. **Collect logs from**: + +**Log locations**: +- **macOS**: `~/Library/Application Support/Object Mount/cunoFS.log` +- **Windows**: `C:\Users\%username%\AppData\Local\Object Mount\cunofs.log` +- **Linux**: `~/.local/share/Object Mount/logs/` + +### Analyze log files + +**Common error patterns**: +```bash +# Search for authentication errors +grep -i "auth\|credential\|permission" cunoFS.log + +# Look for network issues +grep -i "timeout\|connection\|network" cunoFS.log + +# Check for FUSE/mount errors +grep -i "fuse\|mount\|filesystem" cunoFS.log +``` + +## Creative application compatibility issues + +### Handle "Leave Files in Place" problems + +**Common issues with NLE applications**: +- Applications verify write access even for read-only workflows +- POSIX mode conflicts with permission checks +- Mount paths are too long or contain special characters + +**Solutions**: +1. **Disable POSIX mode** for read-only media workflows +2. **Use shorter mount paths** closer to root directory +3. **Create proxy workflows** instead of direct mounting +4. **Test with read-write credentials** if available + +### Application-specific workarounds + +**For Avid Media Composer**: +- Keep mount paths short (avoid deep nested folders) +- Place scratch files on local storage, not mounted storage +- Test with small projects before production use + +**For Adobe Premiere Pro / After Effects**: +- Enable media cache on local storage +- Use proxy files for remote media when possible +- Monitor for memory usage spikes during scrubbing + +## System-level troubleshooting + +### Restart Object Mount service + +**Clean restart process**: +1. **Quit Object Mount** completely +2. **Clear any stuck mounts**: + ```bash + # macOS/Linux: Force unmount stuck filesystems + sudo umount -f /path/to/stuck/mount + + # Windows: Use Disk Management to disconnect network drives + ``` +3. **Clear cache** (if corruption suspected): + ```bash + rm -rf ~/Library/Application\ Support/Object\ Mount/cache/ # macOS + ``` +4. **Restart Object Mount** + +### Check system resources + +```bash +# Monitor memory usage +free -h # Linux +vm_stat # macOS + +# Check disk space for cache +df -h /path/to/cache/directory + +# Monitor CPU usage during operations +top / htop +``` + +## Advanced diagnostics + +### Test with minimal configuration + +1. **Create new test mount** with minimal settings +2. **Disable all optimization features** (caching, prefetch, etc.) +3. **Test basic functionality** (list, read small files) +4. 
**Gradually enable features** to isolate issues + +### Network troubleshooting + +```bash +# Test specific S3 operations +aws s3api head-bucket --bucket your-bucket \ + --endpoint-url https://gateway.storjshare.io + +# Monitor DNS resolution +nslookup gateway.storjshare.io + +# Check firewall/proxy settings +curl -v https://gateway.storjshare.io +``` + +## When to contact support + +**Gather this information before contacting support**: +- Operating system and version +- Object Mount version number +- Cloud storage provider and bucket details +- Complete error messages from logs +- Steps to reproduce the issue +- Screenshots of error dialogs + +**Contact Storj DCS support**: [Submit a support request](https://supportdcs.storj.io/hc/en-us/requests/new) + +## Next steps + +- [Optimize for large files](docId:optimize-large-files) once basic functionality is working +- Set up [performance monitoring](docId:your-monitoring-guide) for ongoing health checks +- Learn about [media workflow optimization](docId:your-media-guide) for creative applications \ No newline at end of file From df2a1b476e2f8f64efef8d8ff91b314030193403 Mon Sep 17 00:00:00 2001 From: onionjake <113088+onionjake@users.noreply.github.com> Date: Fri, 29 Aug 2025 14:36:14 -0600 Subject: [PATCH 3/8] Next phase adopting Diataxis --- .../concepts/security-and-encryption/page.md | 346 +++++++++++++++++ .../storj-architecture-overview/page.md | 290 ++++++++++++++ .../page.md | 141 +++++++ app/(docs)/node/concepts/_meta.json | 21 + .../node/concepts/network-participation.md | 316 +++++++++++++++ .../privacy-security-for-operators.md | 268 +++++++++++++ app/(docs)/node/concepts/reputation-system.md | 285 ++++++++++++++ .../node/concepts/storage-node-economics.md | 265 +++++++++++++ app/(docs)/object-mount/concepts/_meta.json | 21 + .../concepts/performance-characteristics.md | 362 ++++++++++++++++++ .../concepts/posix-compliance-explained.md | 258 +++++++++++++ .../concepts/when-to-use-fusion.md | 347 +++++++++++++++++ 12 files changed, 2920 insertions(+) create mode 100644 app/(docs)/learn/concepts/security-and-encryption/page.md create mode 100644 app/(docs)/learn/concepts/storj-architecture-overview/page.md create mode 100644 app/(docs)/learn/concepts/understanding-decentralized-storage/page.md create mode 100644 app/(docs)/node/concepts/_meta.json create mode 100644 app/(docs)/node/concepts/network-participation.md create mode 100644 app/(docs)/node/concepts/privacy-security-for-operators.md create mode 100644 app/(docs)/node/concepts/reputation-system.md create mode 100644 app/(docs)/node/concepts/storage-node-economics.md create mode 100644 app/(docs)/object-mount/concepts/_meta.json create mode 100644 app/(docs)/object-mount/concepts/performance-characteristics.md create mode 100644 app/(docs)/object-mount/concepts/posix-compliance-explained.md create mode 100644 app/(docs)/object-mount/concepts/when-to-use-fusion.md diff --git a/app/(docs)/learn/concepts/security-and-encryption/page.md b/app/(docs)/learn/concepts/security-and-encryption/page.md new file mode 100644 index 000000000..7a2d58479 --- /dev/null +++ b/app/(docs)/learn/concepts/security-and-encryption/page.md @@ -0,0 +1,346 @@ +--- +title: Security and Encryption +docId: security-encryption +metadata: + title: Security and Encryption in Storj - Complete Guide + description: Comprehensive explanation of Storj's security model, encryption implementation, key management, and privacy protections. +--- + +Security forms the foundation of Storj's value proposition. 
Understanding how encryption, access control, and privacy protections work helps you make informed decisions about data protection and compliance requirements. + +## Security Philosophy + +Storj implements security through a "zero-trust" architecture where no single party - including Storj Labs - can access your data without your explicit permission. This approach provides stronger security guarantees than traditional cloud storage models. + +### Core Security Principles + +**Default Encryption**: All data is encrypted client-side before leaving your device. There is no option to store unencrypted data on the network. + +**Zero-Knowledge Architecture**: Network operators, storage node operators, and Storj Labs cannot decrypt your data or recover meaningful metadata. + +**Client-Side Control**: You maintain complete control over encryption keys, access permissions, and data sharing decisions. + +**Open Source Transparency**: Core components are open source, enabling independent security audits and verification. + +## Encryption Implementation + +### Multi-Layer Encryption + +Storj implements multiple layers of encryption to protect different aspects of your data: + +**Content Encryption:** +- Each file encrypted with AES-256-GCM +- Unique random encryption key per file +- Built-in authentication detects tampering + +**Path Encryption:** +- File paths and names encrypted separately +- Prevents metadata analysis attacks +- Hierarchical key derivation enables efficient access management + +**Transport Encryption:** +- All network communication uses TLS 1.2+ +- Protection against network-level attacks +- Certificate validation prevents man-in-the-middle attacks + +### Encryption Process Flow + +When you store a file, encryption happens in specific stages: + +1. **Random Key Generation**: A cryptographically secure random key is generated for your specific file + +2. **Content Encryption**: Your file content is encrypted using AES-256-GCM with the random key + +3. **Key Derivation**: The random key is encrypted using a key derived from your root secret and the file path + +4. **Path Encryption**: The file path components are encrypted using hierarchically derived keys + +5.
**Secure Transmission**: Encrypted data and metadata are transmitted over TLS-protected connections + +## Key Management Architecture + +### Hierarchical Key Derivation + +Storj uses a sophisticated key hierarchy that balances security with practical usability: + +**Root Secret:** +- Master key stored only on client devices +- Never transmitted over the network +- Used to derive all other encryption keys + +**Path Keys:** +- Derived from root secret and path components +- Each directory level has its own key +- Enables efficient sharing of directory trees + +**Content Keys:** +- Random keys for actual file encryption +- Encrypted with path-derived keys +- Stored as encrypted metadata on Satellites + +### Key Derivation Process + +The key hierarchy works through cryptographic derivation: + +``` +Root Secret → Path Key (project) → Path Key (bucket) → Path Key (folder) → Content Key +``` + +This hierarchy provides several security benefits: + +- **Forward Security**: Higher-level keys cannot be derived from lower-level ones +- **Efficient Sharing**: Share access to directory trees without exposing other data +- **Key Rotation**: Individual keys can be changed without affecting the entire hierarchy + +### Access Grant System + +Access Grants package together API permissions and encryption keys for secure sharing: + +**Components:** +- **API Key**: Satellite authorization token with specific permissions +- **Encryption Store**: Collection of encryption keys for authorized paths +- **Satellite Address**: Network endpoint for metadata operations + +**Capabilities:** +- Time-limited access (expiration dates) +- Path-restricted access (specific files or folders) +- Operation-limited access (read-only, write-only, etc.) +- Revocable access (can be disabled remotely) + +## Cryptographic Algorithms + +### Encryption Algorithms + +**AES-256-GCM:** +- Industry-standard authenticated encryption +- 256-bit keys provide long-term security +- Built-in integrity protection +- Hardware acceleration available on most platforms + +**Alternative Implementation:** +- **Secretbox** (NaCl library): Salsa20 + Poly1305 +- Used in some client implementations +- Equivalent security level to AES-256-GCM + +### Key Derivation Functions + +**HMAC-based Derivation:** +- Uses cryptographic hash functions for key derivation +- One-way operation prevents key inference attacks +- Standardized implementation across client libraries + +**Path-based Derivation:** +- Each path component contributes to key derivation +- Creates unique keys for every file and directory +- Enables fine-grained access control + +## Privacy Protections + +### Data Privacy + +**Content Privacy:** +- File contents never visible to storage nodes or Satellites +- Erasure coding splits encrypted data into meaningless pieces +- No single storage location contains reconstructable data + +**Metadata Privacy:** +- File names and paths encrypted client-side +- File sizes and timestamps protected through aggregation +- Usage patterns obscured through traffic analysis resistance + +**Communication Privacy:** +- No correlation between user identity and stored data +- Satellite operators see only encrypted metadata +- Network analysis cannot reveal user behavior patterns + +### Network-Level Privacy + +**IP Address Protection:** +- Optional use of VPNs or Tor for additional anonymity +- Direct peer-to-peer connections to storage nodes +- No centralized traffic monitoring points + +**Traffic Analysis Resistance:** +- Padding and timing randomization +- Decoy traffic 
options in some implementations +- Distributed access patterns across multiple nodes + +## Access Control Security + +### Capability-Based Access Control + +Storj implements capability-based security where possession of valid credentials grants specific access rights: + +**Access Grant Capabilities:** +- Self-contained authorization tokens +- Cannot be forged or replicated without private key access +- Specific permissions encoded cryptographically + +**Caveat System:** +- Fine-grained restrictions on access grants +- Time-based limitations +- Path-based restrictions +- Operation-specific permissions + +### Delegation and Sharing + +**Secure Delegation:** +- Create restricted access grants from existing ones +- No need to share root credentials +- Hierarchical permission inheritance + +**Revocation Mechanisms:** +- Satellite-level revocation for compromised access grants +- Time-based expiration for temporary access +- Path-specific revocation for granular control + +## Threat Model and Protections + +### Threat Scenarios + +**Compromised Storage Nodes:** +- Encrypted pieces provide no useful information +- Erasure coding prevents reconstruction from partial data +- Geographic distribution limits exposure scope + +**Compromised Satellites:** +- Encrypted metadata prevents data access +- No encryption keys stored on Satellites +- Users can migrate to alternative Satellites + +**Network Attacks:** +- TLS encryption protects data in transit +- End-to-end encryption prevents intermediate tampering +- Distributed architecture eliminates single points of failure + +**Government Surveillance:** +- Zero-knowledge architecture prevents mass surveillance +- No decrypt capabilities at service provider level +- International distribution complicates jurisdiction issues + +### Attack Mitigations + +**Cryptographic Attacks:** +- Use of proven, standardized encryption algorithms +- Regular security audits and updates +- Forward security through proper key rotation + +**Implementation Attacks:** +- Open source code enables security review +- Multiple independent client implementations +- Regular penetration testing and bug bounty programs + +**Side-Channel Attacks:** +- Timing attack mitigations in cryptographic operations +- Traffic analysis resistance through padding and randomization +- Hardware security module support for key storage + +## Compliance and Regulatory Considerations + +### Data Protection Regulations + +**GDPR Compliance:** +- Client-side encryption supports data minimization principles +- Zero-knowledge architecture simplifies compliance requirements +- User control over encryption keys supports data portability rights + +**CCPA and Similar Laws:** +- Encrypted storage prevents unauthorized data analysis +- User control over access enables privacy rights enforcement +- Minimal metadata collection reduces compliance scope + +### Industry Standards + +**SOC 2 Compliance:** +- Security controls documented and audited +- Availability and processing integrity protections +- Confidentiality through encryption and access controls + +**ISO 27001 Framework:** +- Information security management system implementation +- Risk assessment and management processes +- Continuous security monitoring and improvement + +## Security Best Practices + +### For Users + +**Key Management:** +- Store root secrets securely offline when possible +- Use hardware security modules for high-value applications +- Implement proper backup and recovery procedures + +**Access Control:** +- Use principle of least privilege 
for access grants +- Regularly review and rotate access credentials +- Monitor access patterns for anomalies + +**Operational Security:** +- Keep client software updated +- Use secure communication channels for credential sharing +- Implement proper incident response procedures + +### For Developers + +**Integration Security:** +- Validate all inputs and handle errors properly +- Use secure coding practices throughout applications +- Implement proper logging without exposing sensitive data + +**Key Handling:** +- Never log or persist encryption keys unnecessarily +- Use secure memory handling for cryptographic operations +- Implement proper key derivation and storage practices + +## Security Audits and Transparency + +### Independent Audits + +Storj undergoes regular security audits by independent third-party firms: + +- **Code Reviews**: Comprehensive analysis of cryptographic implementations +- **Penetration Testing**: Simulated attacks against network and client software +- **Architecture Reviews**: Evaluation of overall security design and threat model + +### Bug Bounty Programs + +Continuous security improvement through responsible disclosure: + +- **Researcher Rewards**: Financial incentives for security vulnerability discovery +- **Coordinated Disclosure**: Responsible handling of security issues +- **Public Reporting**: Transparent communication about security improvements + +### Open Source Benefits + +Open source development provides security advantages: + +- **Community Review**: Thousands of developers can examine code for issues +- **Rapid Response**: Quick patches and updates for discovered vulnerabilities +- **Transparency**: No hidden backdoors or surveillance capabilities + +## Future Security Enhancements + +### Planned Improvements + +**Quantum Resistance:** +- Research into post-quantum cryptographic algorithms +- Migration strategies for quantum-safe encryption +- Timeline alignment with NIST standardization efforts + +**Enhanced Privacy:** +- Additional metadata protection mechanisms +- Improved traffic analysis resistance +- Anonymous payment and account management options + +**Advanced Access Control:** +- Multi-party access control mechanisms +- Smart contract integration for automated access management +- Biometric and hardware-based authentication options + +Understanding Storj's comprehensive security model enables you to make informed decisions about data protection, compliance requirements, and integration approaches for your specific security needs. + +## Related Concepts + +- [Understanding Decentralized Storage](docId:understand-decent-stor) - Foundation concepts behind secure distributed storage +- [Storj Architecture Overview](docId:storj-arch-overview) - How security integrates with network architecture +- [Access Management](docId:capability-based-access-control) - Detailed access control mechanisms \ No newline at end of file diff --git a/app/(docs)/learn/concepts/storj-architecture-overview/page.md b/app/(docs)/learn/concepts/storj-architecture-overview/page.md new file mode 100644 index 000000000..51c81330e --- /dev/null +++ b/app/(docs)/learn/concepts/storj-architecture-overview/page.md @@ -0,0 +1,290 @@ +--- +title: Storj Architecture Overview +docId: storj-arch-overview +metadata: + title: Storj Network Architecture - Technical Overview + description: Comprehensive overview of Storj's distributed storage architecture, peer classes, data flow, and how the network components work together. 
+--- + +Storj's architecture implements decentralized storage through a sophisticated network of specialized components called peer classes. Understanding this architecture helps you make informed decisions about how to integrate and optimize Storj for your use cases. + +## Network Architecture Overview + +The Storj network consists of three primary peer classes that work together to provide secure, distributed storage: + +- **Storage Nodes** - Store encrypted data pieces and earn cryptocurrency rewards +- **Satellites** - Coordinate the network and manage metadata +- **Uplinks** - Client applications that store and retrieve data + +This separation of concerns creates a robust, scalable system where no single component can compromise the security or availability of your data. + +## Peer Classes Explained + +### Storage Nodes + +Storage Nodes form the foundation of the distributed storage network. They are operated by individuals and organizations worldwide who contribute storage capacity and bandwidth in exchange for cryptocurrency payments. + +**Key Characteristics:** +- Store encrypted, erasure-coded data pieces +- Cannot decrypt or reconstruct complete files +- Operate independently with minimal coordination +- Earn revenue based on storage duration and bandwidth usage + +**Quality Requirements:** +- Reliable internet connectivity +- Consistent uptime expectations +- Adequate storage capacity +- Geographic and network diversity + +### Satellites + +Satellites serve as coordination hubs for the network, handling metadata management, access control, and network operations without ever accessing your actual data. + +**Primary Responsibilities:** +- User account and project management +- API credential and access grant generation +- Node discovery and reputation tracking +- Audit and repair coordination +- Usage tracking and billing +- Garbage collection and maintenance + +**Trust Model:** +- Users choose which Satellite to trust with metadata +- Multiple Satellite options provide choice and redundancy +- Open source implementation allows independent operation + +### Uplinks + +Uplinks represent any application or service that stores or retrieves data from the network. This includes command-line tools, applications using libuplink, and S3-compatible gateways. + +**Core Functions:** +- Client-side encryption and decryption +- Erasure encoding and decoding +- Direct communication with Storage Nodes +- Access control and key management + +**Implementation Options:** +- **Native Integration** - Using libuplink directly +- **S3-Compatible Gateway** - Standard S3 API access +- **Command Line Interface** - Terminal-based operations + +## Data Flow Architecture + +### Upload Process + +When you upload a file to Storj, several coordinated steps ensure security and reliability: + +1. **Client-Side Encryption**: Your Uplink encrypts the entire file using strong encryption before any network transmission + +2. **Segmentation**: Large files are divided into segments (typically 64MB each) for efficient processing + +3. **Erasure Encoding**: Each segment is mathematically encoded into many pieces, where only a subset is needed to reconstruct the original segment + +4. **Node Selection**: The Satellite provides a list of healthy, diverse Storage Nodes for storing pieces + +5. **Distributed Storage**: Pieces are uploaded directly to Storage Nodes across the network + +6. 
**Metadata Recording**: The Satellite records encrypted metadata about where pieces are stored + +### Download Process + +Retrieving data reverses this process with built-in performance optimizations: + +1. **Metadata Retrieval**: Your Uplink requests file location information from the Satellite + +2. **Parallel Downloads**: Multiple pieces are downloaded simultaneously from different Storage Nodes + +3. **Redundancy Optimization**: Downloads stop as soon as enough pieces are retrieved for reconstruction + +4. **Reconstruction**: Erasure decoding rebuilds the original segments from the downloaded pieces + +5. **Decryption**: Your Uplink decrypts the reconstructed data using your encryption keys + +## Security Architecture + +### Encryption Layers + +Storj implements multiple layers of encryption to ensure comprehensive data protection: + +**File-Level Encryption:** +- Each file encrypted with a unique random key +- Industry-standard AES-256-GCM encryption +- Keys derived from your root secret and file path + +**Path Encryption:** +- Object keys (file paths) are encrypted +- Prevents metadata analysis attacks +- Hierarchical key derivation for efficient access management + +**Transport Security:** +- All network communications use TLS +- End-to-end encryption from client to storage + +### Zero-Knowledge Design + +The network architecture ensures that no single party can access your data without your explicit permission: + +- **Storage Nodes** see only encrypted, meaningless pieces +- **Satellites** see only encrypted metadata and cannot decrypt file content +- **Network Operators** cannot access user data or meaningful metadata + +### Access Control + +Sophisticated access management provides granular control without compromising security: + +**Access Grants:** +- Combine API keys, encryption keys, and permissions +- Enable secure sharing without exposing root credentials +- Support time-limited and scope-restricted access + +**Hierarchical Permissions:** +- Path-based access control +- Operation-specific restrictions (read, write, delete, list) +- Bucket and project-level isolation + +## Network Operations + +### Reputation System + +Storage Nodes build reputation through consistent performance: + +- **Uptime Tracking** - Availability and responsiveness monitoring +- **Audit Compliance** - Regular verification of data integrity +- **Data Transfer Performance** - Speed and reliability metrics + +Higher reputation nodes receive more data storage opportunities and higher payment rates. 
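For a concrete view of these signals, a storage node's local web dashboard also serves them as JSON over a small HTTP API. A minimal sketch, assuming the default dashboard port (14002) and the `/api/sno/satellites` endpoint exposed by current storage node releases:

```bash
# Query per-satellite reputation data (audit and online scores)
# from the storage node's local dashboard API
curl -s http://localhost:14002/api/sno/satellites | jq .
```

Falling audit or online scores are an early warning that a node may receive less data or face suspension.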
+ +### Repair and Maintenance + +The network automatically maintains data integrity and availability: + +**Proactive Monitoring:** +- Continuous health checks of stored pieces +- Detection of node failures or departures +- Predictive identification of at-risk data + +**Automated Repair:** +- Recreation of lost pieces when redundancy falls below thresholds +- Placement of repaired pieces on healthy, diverse nodes +- Maintenance of geographic and network diversity + +### Economic Model + +The network operates through market-based economics: + +**Storage Node Incentives:** +- Payment for data storage over time +- Bandwidth compensation for uploads and downloads +- Performance bonuses for reliable operation + +**User Costs:** +- Competitive pricing through market competition +- Pay-only-for-usage model +- Predictable pricing without hidden fees + +## Performance Characteristics + +### Scalability + +The distributed architecture scales horizontally: + +- **Node Network Growth** - More nodes increase capacity and performance +- **Geographic Expansion** - Global distribution reduces latency +- **Parallel Processing** - Multiple simultaneous operations across the network + +### Reliability + +Multiple design elements ensure high reliability: + +- **No Single Points of Failure** - Distributed architecture eliminates critical dependencies +- **Redundancy Management** - Automated maintenance of data availability +- **Graceful Degradation** - Performance degrades gradually rather than failing completely + +### Efficiency + +Optimizations throughout the architecture improve resource utilization: + +- **Erasure Coding** - Better storage efficiency than replication +- **Long-Tail Cancellation** - Slow transfers are cut off once enough pieces succeed +- **Intelligent Routing** - Optimal path selection for data transfer + +## Integration Patterns + +### Application Integration + +**Direct Integration:** +- Use libuplink for custom applications +- Full control over encryption and access management +- Optimal performance for application-specific needs + +**S3-Compatible Integration:** +- Drop-in replacement for existing S3-based applications +- Familiar APIs and tooling +- Simplified migration from centralized storage + +**Hybrid Approaches:** +- Combine direct integration with S3 compatibility +- Use appropriate method for each use case +- Leverage strengths of both approaches + +## Comparison with Traditional Architecture + +### Centralized Storage Architecture + +Traditional cloud storage relies on: +- Large data centers with massive infrastructure +- Centralized control and management +- Replication across limited geographic regions +- Trust-based security models + +### Storj's Distributed Architecture + +In contrast, Storj provides: +- Thousands of independent storage nodes +- Decentralized coordination through Satellites +- Global distribution with mathematical redundancy +- Zero-trust security with client-side encryption + +## Technical Specifications + +### Current Network Parameters + +- **Erasure Coding**: 29 pieces required, 80 pieces stored, 130 pieces attempted upload +- **Segment Size**: 64MB maximum +- **Encryption**: AES-256-GCM with client-side key generation +- **Node Selection**: Geographic and network diversity requirements + +### Network Statistics + +The Storj network operates thousands of storage nodes across six continents, providing: +- Petabytes of distributed storage capacity +- Sub-second global response times +- 99.95%+ availability and eleven-nines (99.999999999%) durability through erasure coding +- Enterprise-grade performance and
reliability + +## Future Architecture Evolution + +### Planned Enhancements + +**Performance Improvements:** +- Advanced caching and content delivery optimizations +- Enhanced geographic routing algorithms +- Improved parallel processing capabilities + +**Feature Expansion:** +- Additional S3-compatible features +- Enhanced access control granularity +- Integration with emerging Web3 protocols + +**Network Growth:** +- Expansion to new geographic regions +- Support for additional storage node types +- Integration with edge computing infrastructure + +Understanding Storj's architecture enables you to make informed decisions about integration approaches, performance optimization, and security considerations for your specific use cases. + +## Related Concepts + +- [Understanding Decentralized Storage](docId:understand-decent-stor) - Foundational concepts behind distributed storage +- [Security and Encryption](docId:security-encryption) - Detailed cryptographic protections +- [File Redundancy](docId:file-redundancy) - How erasure coding provides better durability \ No newline at end of file diff --git a/app/(docs)/learn/concepts/understanding-decentralized-storage/page.md b/app/(docs)/learn/concepts/understanding-decentralized-storage/page.md new file mode 100644 index 000000000..6d4a688a9 --- /dev/null +++ b/app/(docs)/learn/concepts/understanding-decentralized-storage/page.md @@ -0,0 +1,141 @@ +--- +title: Understanding Decentralized Storage +docId: understand-decent-stor +metadata: + title: Understanding Decentralized Storage - Core Concepts + description: Learn the fundamental principles behind decentralized storage systems, how they differ from traditional cloud storage, and why they provide better security and privacy. +--- + +Decentralized storage represents a fundamental shift from traditional centralized cloud storage models. Understanding how it works and why it matters is essential for making informed decisions about data storage and protection. + +## What is Decentralized Storage + +Decentralized storage distributes your data across a global network of independent storage nodes rather than storing it in a single company's data centers. This approach eliminates single points of failure and reduces dependency on any one organization. + +### Key Characteristics + +**Distributed Architecture**: Your files are broken into encrypted pieces and distributed across many storage nodes worldwide. No single node contains enough information to reconstruct your data. + +**Node Independence**: Storage nodes are operated by independent individuals and organizations. This diversity prevents any single entity from controlling or accessing your data. + +**Client-Side Control**: You maintain complete control over encryption keys and access permissions. Even the network operators cannot access your data without your explicit permission. + +## How Decentralized Storage Differs from Traditional Cloud + +Traditional cloud storage relies on centralized infrastructure owned and operated by a single company. Your data lives in their data centers under their control. 
+ +### Traditional Centralized Model + +- **Single Point of Control**: One company controls all aspects of data storage and access +- **Trust Requirements**: You must trust the provider with both data custody and privacy +- **Vendor Lock-in**: Difficult to migrate data and services to alternatives +- **Limited Transparency**: Proprietary systems with limited visibility into operations + +### Decentralized Model + +- **Distributed Control**: No single entity controls the entire system +- **Trustless Operation**: You don't need to trust individual storage providers +- **Open Standards**: Based on open protocols and verifiable operations +- **Increased Resilience**: Multiple independent failure domains + +## Benefits of Decentralization + +### Enhanced Privacy and Security + +**Default Encryption**: All data is encrypted client-side before leaving your device. Storage nodes never see your data in plain text. + +**Zero-Knowledge Architecture**: Network operators cannot access your data, metadata, or usage patterns without your explicit permission. + +**Reduced Attack Surface**: No centralized honeypots of data that attract large-scale attacks. + +### Improved Durability and Availability + +**Geographic Distribution**: Data pieces are stored across diverse geographic locations, reducing regional failure risks. + +**Redundancy Without Replication**: Erasure coding provides better durability than traditional replication while using less storage space. + +**Self-Healing Network**: Automatic repair mechanisms ensure data remains available even when storage nodes go offline. + +### Economic Advantages + +**Competitive Pricing**: Market-driven pricing from competing storage providers typically offers better rates than centralized services. + +**No Vendor Lock-in**: Standards-based access allows easy migration between providers or protocols. + +**Efficient Resource Utilization**: Leverages existing unused storage capacity rather than building dedicated data centers. + +## Technical Foundation + +### Erasure Coding + +Traditional cloud storage typically uses replication for redundancy - storing multiple complete copies of your data. Decentralized storage uses erasure coding, which is more efficient: + +- **Mathematical Redundancy**: Data is mathematically encoded so that losing some pieces doesn't mean losing access to your files +- **Storage Efficiency**: Requires significantly less storage space than replication for equivalent durability +- **Fault Tolerance**: Can withstand many simultaneous node failures without data loss + +### Cryptographic Security + +**End-to-End Encryption**: Data is encrypted on your device before transmission and remains encrypted throughout storage and retrieval. + +**Key Derivation**: Sophisticated key management ensures different encryption keys for different data, limiting exposure from any single key compromise. + +**Verifiable Operations**: Cryptographic proofs allow verification that data is being stored and maintained properly without revealing the data itself. + +## Real-World Implications + +### For Individuals + +**Enhanced Privacy**: Your personal files, photos, and documents remain completely private, even from the storage service provider. + +**Censorship Resistance**: No single entity can restrict access to your data for political or commercial reasons. + +**Cost Savings**: Competitive market pricing typically offers better value than traditional cloud storage. 
+ +### For Organizations + +**Regulatory Compliance**: Client-side encryption and zero-knowledge architecture simplify compliance with data protection regulations. + +**Business Continuity**: Distributed storage reduces risks from provider outages, policy changes, or business failures. + +**Transparent Operations**: Open protocols and verifiable operations provide better visibility into how data is handled. + +## Addressing Common Concerns + +### Performance Considerations + +Modern decentralized storage networks achieve performance comparable to or better than traditional cloud storage through: + +- **Global Distribution**: Data retrieval from geographically optimal locations +- **Parallel Operations**: Simultaneous access to multiple storage nodes +- **Intelligent Routing**: Automatic selection of fastest available nodes + +### Complexity Management + +While the underlying technology is sophisticated, user interfaces abstract this complexity: + +- **Familiar APIs**: S3-compatible interfaces work with existing tools and applications +- **Simple Installation**: Command-line and graphical tools provide easy setup +- **Automatic Management**: Background processes handle technical details transparently + +## The Future of Storage + +Decentralized storage represents the evolution of data storage toward greater privacy, security, and user control. As data becomes increasingly valuable and privacy concerns grow, decentralized architectures provide a sustainable alternative to surveillance capitalism models. + +### Emerging Trends + +**Integration with Web3**: Decentralized storage serves as infrastructure for blockchain applications and decentralized autonomous organizations. + +**Edge Computing**: Distributed storage naturally complements edge computing architectures for reduced latency. + +**Data Sovereignty**: Growing emphasis on national and individual data sovereignty favors decentralized approaches. + +## Getting Started + +Understanding these concepts provides the foundation for effectively using decentralized storage. The practical benefits - privacy, security, durability, and cost-effectiveness - make it an attractive option for both personal and business use. 
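For a hands-on feel, the Uplink CLI makes the model tangible in a few commands. A minimal sketch, assuming you have installed `uplink` and configured an access grant with `uplink setup` (bucket and file names here are placeholders):

```bash
# Create a bucket; everything stored in it is encrypted client-side
uplink mb sj://my-first-bucket

# Upload a file; it is erasure-coded and distributed across storage nodes
uplink cp ~/vacation.jpg sj://my-first-bucket/

# List and retrieve the object from anywhere with the same access grant
uplink ls sj://my-first-bucket
uplink cp sj://my-first-bucket/vacation.jpg ~/restored.jpg
```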
+ +## Related Concepts + +- [Storj Architecture Overview](docId:storj-arch-overview) - Technical details of how Storj implements decentralized storage +- [Security and Encryption](docId:security-encryption) - Deep dive into cryptographic protections +- [Access Management](docId:access-management-at-the-edge) - How access control works in decentralized systems \ No newline at end of file diff --git a/app/(docs)/node/concepts/_meta.json b/app/(docs)/node/concepts/_meta.json new file mode 100644 index 000000000..75d3b95fc --- /dev/null +++ b/app/(docs)/node/concepts/_meta.json @@ -0,0 +1,21 @@ +{ + "title": "Concepts", + "nav": [ + { + "title": "Storage Node Economics", + "id": "storage-node-economics" + }, + { + "title": "Reputation System", + "id": "reputation-system" + }, + { + "title": "Network Participation", + "id": "network-participation" + }, + { + "title": "Privacy and Security for Operators", + "id": "privacy-security-for-operators" + } + ] +} \ No newline at end of file diff --git a/app/(docs)/node/concepts/network-participation.md b/app/(docs)/node/concepts/network-participation.md new file mode 100644 index 000000000..94ef2ac6c --- /dev/null +++ b/app/(docs)/node/concepts/network-participation.md @@ -0,0 +1,316 @@ +--- +title: Network Participation +docId: network-participation +metadata: + title: Storage Node Network Participation - Technical Requirements and Protocols + description: Technical explanation of how Storage Nodes participate in the Storj network, including protocols, requirements, and interaction patterns. +--- + +Storage Node network participation involves complex technical interactions between nodes, satellites, and customers. Understanding these protocols and requirements is essential for successful node operation and troubleshooting network-related issues. + +## Network Architecture and Protocols + +### Distributed Network Design + +The Storj network operates as a distributed system where Storage Nodes participate as independent peers, coordinated by satellites but not controlled by any central authority. 
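A practical consequence of this peer model, covered in detail under the connectivity requirements below, is that every node must be directly reachable from the public internet. A quick reachability sketch, using a placeholder hostname and the default node port:

```bash
# From a machine outside your network, check the node's contact port
nc -vz mynode.example.com 28967

# On the node itself, confirm the process is listening (TCP and UDP)
ss -tulnp | grep 28967
```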
+ +**Peer-to-Peer Elements:** +- Direct data transfer between customers and Storage Nodes +- Independent node operation without central coordination +- Distributed storage and retrieval across global network + +**Coordination Elements:** +- Satellites manage metadata and coordinate operations +- Network-wide protocols ensure interoperability +- Standardized APIs enable consistent node behavior + +### Communication Protocols + +**gRPC Protocol Stack:** +- **Transport**: gRPC over HTTP/2 for efficient bidirectional communication +- **Serialization**: Protocol Buffers for fast, compact message encoding +- **Security**: TLS encryption for all network communications + +**Protocol Services:** +- **Node Contact**: Satellite health checks and node availability verification +- **Piece Storage**: Data upload and download operations +- **Audit Operations**: Data integrity verification requests +- **Repair Coordination**: Missing data regeneration processes + +## Network Identity and Authentication + +### Node Identity System + +**Identity Generation:** +- Cryptographic identity created during node setup +- Ed25519 public/private key pairs for signatures +- Unique Node ID derived from public key + +**Identity Verification:** +- Satellites verify node identity before data allocation +- Cryptographic challenges prove possession of private keys +- Identity theft protection through signature validation + +**Identity Lifecycle:** +- Identity remains constant throughout node lifetime +- Identity migration requires complete node data migration +- Lost identity means loss of all stored data and accumulated reputation + +### Authentication Mechanisms + +**Certificate-Based Authentication:** +- X.509 certificates for TLS connections +- Certificate validation ensures communication security +- Automatic certificate rotation for security maintenance + +**API Token Authentication:** +- Satellites issue temporary tokens for specific operations +- Token-based access control for different operation types +- Automatic token refresh and expiration management + +## Network Discovery and Registration + +### Satellite Discovery + +**Bootstrap Process:** +- New nodes discover available satellites through configuration +- Satellite addresses provided during node setup +- Multiple satellite connections for redundancy and load balancing + +**Satellite Selection:** +- Nodes can connect to multiple satellites simultaneously +- Geographic and performance-based satellite selection +- Automatic failover between satellites for reliability + +### Node Registration + +**Initial Contact Protocol:** +``` +1. Node contacts satellite with identity certificate +2. Satellite verifies node identity and configuration +3. Satellite records node metadata (address, capacity, etc.) +4. Node begins receiving data allocation opportunities +``` + +**Ongoing Registration Maintenance:** +- Regular contact with satellites to maintain active status +- Configuration updates automatically propagated to satellites +- Graceful handling of temporary connectivity issues + +## Data Flow and Operations + +### Data Storage Operations + +**Piece Upload Process:** +1. **Allocation Request**: Satellite requests node to store data piece +2. **Resource Verification**: Node checks available storage capacity +3. **Data Transfer**: Direct peer-to-peer transfer from customer to node +4. **Verification**: Node validates data integrity and sends confirmation +5. 
**Metadata Update**: Satellite records successful piece storage + +**Storage Requirements:** +- Adequate disk space with overhead for temporary operations +- Fast enough I/O to handle concurrent upload requests +- Reliable storage medium to prevent data corruption + +### Data Retrieval Operations + +**Piece Download Process:** +1. **Download Request**: Customer requests piece from node +2. **Authentication**: Node verifies request authenticity +3. **Data Transfer**: Direct transfer from node to customer +4. **Bandwidth Accounting**: Node tracks egress bandwidth for payment +5. **Performance Metrics**: Response time recorded for reputation + +**Performance Factors:** +- Network bandwidth affects customer experience +- Disk I/O speed influences response times +- Concurrent request handling improves node utilization + +### Audit and Verification + +**Audit Protocol:** +1. **Audit Challenge**: Satellite requests proof of piece storage +2. **Data Retrieval**: Node accesses and validates stored piece +3. **Cryptographic Proof**: Node generates proof of possession +4. **Response Submission**: Proof sent to satellite for verification +5. **Reputation Update**: Audit result affects node reputation score + +**Audit Requirements:** +- Must respond to audits within specified timeframes +- Proof generation requires access to complete, uncorrupted data +- Failed audits negatively impact reputation and earnings + +## Network Requirements and Infrastructure + +### Connectivity Requirements + +**Internet Connection Specifications:** +- **Minimum Upload Speed**: 5 Mbps for basic operation +- **Recommended Upload Speed**: 25+ Mbps for optimal performance +- **Latency Requirements**: <500ms to major internet hubs +- **Reliability**: >99% uptime for good reputation maintenance + +**Port Configuration:** +- **Storage Node Port**: 28967 (TCP and UDP) for data operations +- **Dashboard Port**: 14002 (TCP) for web interface access +- **Port Forwarding**: Required for inbound connections from customers + +**Network Address Translation (NAT):** +- Public IP address or proper port forwarding required +- Dynamic DNS services supported for residential connections +- IPv4 required (IPv6 support under development) + +### Firewall and Security Configuration + +**Required Inbound Access:** +- Storage node port must accept inbound connections from internet +- Dashboard port typically restricted to local network access +- QUIC protocol support for improved performance + +**Security Best Practices:** +- Firewall rules limiting access to essential ports only +- Regular security updates for operating system and node software +- Network monitoring for suspicious activity + +### Quality of Service (QoS) + +**Bandwidth Management:** +- Adequate bandwidth allocation for node operations +- QoS policies to prioritize node traffic if sharing connection +- Monitoring tools to track bandwidth utilization patterns + +**Performance Optimization:** +- Low-latency routing to major internet exchanges +- Multiple internet service provider connections for redundancy +- Content delivery network (CDN) considerations for global performance + +## Operational States and Lifecycles + +### Node Operational States + +**Active Operation:** +- Node regularly contacts satellites and responds to requests +- Receives new data allocation opportunities +- Full participation in all network operations + +**Suspended State:** +- Temporary status due to performance issues or downtime +- No new data allocation until performance improves +- Existing data remains 
accessible with potential reputation impact + +**Disqualified State:** +- Permanent removal from network due to serious issues +- All stored data becomes inaccessible to customers +- Loss of all accumulated earnings and reputation + +**Graceful Exit:** +- Voluntary departure with data migration to other nodes +- Maintains reputation and receives final payments +- Planned process for clean network departure + +### Lifecycle Management + +**Node Startup Sequence:** +``` +1. Identity verification and configuration loading +2. Satellite contact establishment and registration +3. Storage and bandwidth capacity reporting +4. Ready state for data operations +``` + +**Shutdown Procedures:** +- Graceful completion of in-progress operations +- Final satellite contact to report offline status +- Proper data persistence for restart capability + +**Maintenance Windows:** +- Planned downtime notification to satellites +- Minimized impact on reputation through proactive communication +- Coordinated maintenance scheduling with network operations + +## Performance Monitoring and Optimization + +### Key Performance Indicators + +**Network Performance Metrics:** +- **Contact Success Rate**: Percentage of successful satellite communications +- **Response Time**: Average time to respond to requests +- **Bandwidth Utilization**: Upload and download capacity utilization +- **Error Rates**: Failed operations and their causes + +**Resource Utilization:** +- **Storage Utilization**: Percentage of allocated space in use +- **CPU Usage**: Processing overhead for network operations +- **Memory Usage**: Buffer and cache utilization patterns +- **Disk I/O**: Read/write performance and queue depths + +### Optimization Strategies + +**Hardware Optimization:** +- SSD storage for better I/O performance +- Adequate RAM for caching and buffering operations +- Network interface cards optimized for throughput +- Redundant power supplies for reliability + +**Software Configuration:** +- Operating system tuning for network and disk performance +- Node software configuration optimization +- Monitoring and logging system setup +- Automated maintenance and update procedures + +**Network Optimization:** +- Bandwidth allocation and QoS configuration +- Route optimization and latency reduction +- Connection pooling and multiplexing +- Protocol tuning for specific network conditions + +## Troubleshooting Network Issues + +### Common Network Problems + +**Connectivity Issues:** +- Port forwarding configuration problems +- Firewall blocking required network traffic +- ISP restrictions on server-type traffic +- Dynamic IP address changes affecting accessibility + +**Performance Problems:** +- Insufficient bandwidth for node operations +- High network latency affecting response times +- Packet loss causing operation failures +- Network congestion during peak usage periods + +**Protocol Issues:** +- Certificate expiration or validation problems +- Protocol version mismatches between node and satellites +- gRPC communication errors and timeout issues +- TLS handshake failures affecting secure connections + +### Diagnostic Tools and Techniques + +**Network Connectivity Testing:** +- Port scanning to verify accessibility +- Bandwidth testing to measure actual throughput +- Latency measurement to identify performance issues +- Packet capture analysis for protocol-level debugging + +**Node-Specific Diagnostics:** +- Dashboard metrics for operational visibility +- Log file analysis for error identification +- Satellite communication testing +- 
Performance benchmarking against network standards + +**Infrastructure Monitoring:** +- Network utilization tracking +- System resource monitoring +- Historical performance analysis +- Automated alerting for operational issues + +Understanding network participation requirements and protocols enables Storage Node Operators to design robust, performant systems that reliably contribute to the Storj network while maximizing earnings potential. + +## Related Concepts + +- [Storage Node Economics](docId:storage-node-economics) - Economic aspects of network participation +- [Reputation System](docId:reputation-system) - How network performance affects reputation +- [Privacy and Security for Operators](docId:privacy-security-operators) - Security considerations for network operations \ No newline at end of file diff --git a/app/(docs)/node/concepts/privacy-security-for-operators.md b/app/(docs)/node/concepts/privacy-security-for-operators.md new file mode 100644 index 000000000..c2d85150b --- /dev/null +++ b/app/(docs)/node/concepts/privacy-security-for-operators.md @@ -0,0 +1,268 @@ +--- +title: Privacy and Security for Storage Node Operators +docId: privacy-security-operators +metadata: + title: Privacy and Security for Storage Node Operators - Protecting Your Operation + description: Comprehensive guide to privacy and security considerations for Storage Node Operators, including data protection, operational security, and risk mitigation. +--- + +Operating a Storage Node involves unique privacy and security considerations that differ from traditional data storage or hosting services. Understanding these considerations helps operators protect their infrastructure, maintain compliance, and operate with confidence. + +## Security Model for Operators + +### Zero-Knowledge Architecture Benefits + +Storage Node Operators benefit from Storj's zero-knowledge architecture, which provides inherent protection against many traditional data hosting risks. 
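+
+You can verify this property directly on your own hardware: the pieces a node stores are encrypted, erasure-coded fragments, so inspecting them reveals no readable customer content. A hedged sketch, assuming the default data directory layout where piece files live under the storage directory's `blobs/` tree; adjust the path for your setup:
+
+```bash
+# Pick an arbitrary stored piece file (path assumes the default layout;
+# STORAGE_DIR is a placeholder for your own data directory)
+STORAGE_DIR=/mnt/storagenode/storage
+piece=$(find "$STORAGE_DIR/blobs" -type f | head -n 1)
+
+# Both checks should show opaque, high-entropy binary data with no readable
+# text and no recoverable structure from the original customer object
+file "$piece"
+xxd "$piece" | head -n 8
+```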
+
+**Data Protection for Operators:**
+- **No Plaintext Access**: Operators cannot access readable customer data
+- **No Liability for Content**: Encrypted pieces provide legal protection from content-related issues
+- **No Surveillance Concerns**: Cannot be compelled to monitor or report on specific data content
+- **Reduced Attack Value**: Nodes don't contain complete files or readable information
+
+**Legal and Compliance Advantages:**
+- **Simplified Compliance**: No access to personal data simplifies privacy regulation compliance
+- **Reduced Legal Risk**: Cannot be held responsible for unknown encrypted content
+- **International Operation**: Can operate across jurisdictions without content concerns
+- **Subpoena Protection**: Cannot provide data that doesn't exist in readable form
+
+### Threat Model Analysis
+
+**External Threats to Operators:**
+- Network attackers attempting to compromise node infrastructure
+- Malicious actors seeking to exploit node vulnerabilities
+- Data thieves targeting accumulated earnings or operational data
+- State-level actors attempting surveillance or data collection
+
+**Internal Operational Risks:**
+- Hardware failures leading to data loss and reputation damage
+- Software vulnerabilities in node software or operating system
+- Configuration errors exposing unnecessary services or data
+- Physical security risks to hardware and network equipment
+
+## Operational Security Best Practices
+
+### System Hardening
+
+**Operating System Security:**
+- **Minimal Installation**: Install only necessary services and software
+- **Regular Updates**: Apply security patches promptly and consistently
+- **User Management**: Use dedicated accounts with minimal privileges for node operation
+- **Service Isolation**: Run node software in restricted environments when possible
+
+**Network Security Configuration:**
+```bash
+# Example firewall rules for a Storage Node
+# Allow loopback and established return traffic first, then only necessary ports
+iptables -A INPUT -i lo -j ACCEPT                # Loopback
+iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT  # Return traffic
+iptables -A INPUT -p tcp --dport 28967 -j ACCEPT # Storage Node (TCP)
+iptables -A INPUT -p udp --dport 28967 -j ACCEPT # Storage Node (QUIC)
+iptables -A INPUT -p tcp --dport 14002 -s 192.168.1.0/24 -j ACCEPT # Dashboard (local only)
+iptables -A INPUT -j DROP                        # Default deny
+```
+
+**Access Control:**
+- **SSH Key Authentication**: Disable password authentication for remote access
+- **Multi-Factor Authentication**: Use 2FA for critical account access
+- **Privilege Escalation Control**: Restrict sudo access and monitor usage
+- **Regular Access Review**: Audit and remove unnecessary user accounts
+
+### Data Protection and Backup
+
+**Node Configuration Backup:**
+- **Identity Files**: The node's cryptographic identity must be backed up securely
+- **Configuration Files**: Node configuration and settings backup
+- **Database Backup**: Regular backup of the node database for disaster recovery
+- **Encrypted Storage**: All backups should be encrypted and stored securely
+
+**Recovery Procedures:**
+- **Identity Recovery**: Process for restoring node identity after hardware failure
+- **Data Migration**: Procedures for moving a node to new hardware
+- **Database Restoration**: Steps to restore a corrupted or lost node database
+- **Network Reconfiguration**: Updating network settings after infrastructure changes
+
+### Physical Security Considerations
+
+**Hardware Protection:**
+- **Secure Location**: Physical access control to prevent unauthorized hardware access
+- **Environmental Controls**: Temperature, humidity, and power protection
+- **Theft Prevention**: Securing valuable hardware
against theft +- **Tamper Detection**: Monitoring for unauthorized physical access attempts + +**Facility Security:** +- **Access Logging**: Record who accesses hardware locations and when +- **Surveillance Systems**: Camera monitoring of critical infrastructure areas +- **Redundant Power**: UPS and backup power to prevent unexpected shutdowns +- **Network Redundancy**: Multiple internet connections for reliability + +## Privacy Considerations + +### Operational Privacy + +**Node Operator Anonymity:** +- **Identity Protection**: Node identity separate from personal identity +- **Payment Privacy**: STORJ token payments provide some transaction privacy +- **Network Metadata**: Satellites see node IP addresses and basic operational data +- **Traffic Analysis**: Network traffic patterns may reveal operational information + +**Information Disclosure Limits:** +- **Metadata Exposure**: Satellites see storage amounts and bandwidth usage +- **Performance Data**: Response times and availability statistics visible +- **Geographic Information**: IP address reveals approximate node location +- **Operational Patterns**: Activity patterns may be analyzable by satellites + +### Regulatory Compliance + +**Data Protection Regulations:** +- **GDPR Applicability**: Storage Node operations typically have minimal GDPR exposure +- **Local Privacy Laws**: Compliance requirements vary by operator jurisdiction +- **Cross-Border Data**: International data flows may have regulatory implications +- **Record Keeping**: Some jurisdictions require operational record retention + +**Content Responsibility:** +- **Safe Harbor Protections**: Zero-knowledge architecture provides content liability protection +- **Notice and Takedown**: Cannot comply with content-specific removal requests +- **Law Enforcement Cooperation**: Limited ability to provide content-related information +- **Jurisdiction Shopping**: Ability to operate in favorable legal jurisdictions + +## Financial Security and Privacy + +### Earnings Protection + +**Wallet Security:** +- **Hardware Wallets**: Use hardware wallets for significant earnings storage +- **Multi-Signature**: Consider multi-sig wallets for large-scale operations +- **Key Management**: Secure backup and recovery of wallet private keys +- **Regular Transfers**: Don't accumulate excessive earnings in operational wallets + +**Transaction Privacy:** +- **STORJ Token Characteristics**: Understand privacy properties of STORJ token transactions +- **Exchange Privacy**: Consider privacy implications of token-to-fiat exchanges +- **Tax Reporting**: Balance privacy with tax compliance requirements +- **Chain Analysis Resistance**: Understand blockchain analysis techniques and limitations + +### Business Structure Considerations + +**Legal Entity Structure:** +- **Personal vs. 
Business**: Consider separate legal entities for larger operations +- **Liability Protection**: Structure operations to limit personal liability +- **Tax Optimization**: Legal structures that optimize tax treatment +- **Privacy Protection**: Entities that protect operator personal information + +**Financial Reporting:** +- **Income Tracking**: Systems for tracking earnings from multiple nodes +- **Expense Documentation**: Record keeping for operational expenses and investments +- **Tax Compliance**: Proper reporting while maintaining operational privacy +- **Audit Preparedness**: Organized records for potential tax or business audits + +## Risk Management Strategies + +### Technical Risk Mitigation + +**Hardware Failure Protection:** +- **RAID Configuration**: Redundant storage to prevent single-drive failures +- **Hot Spares**: Spare hardware available for quick replacement +- **Monitoring Systems**: Early detection of hardware degradation +- **Insurance Coverage**: Consider insurance for valuable hardware investments + +**Network Security Monitoring:** +- **Intrusion Detection**: Systems to detect unauthorized access attempts +- **Log Analysis**: Regular review of system and security logs +- **Vulnerability Scanning**: Periodic security assessments of node infrastructure +- **Incident Response Planning**: Procedures for handling security incidents + +### Operational Risk Management + +**Reputation Risk:** +- **Performance Monitoring**: Proactive monitoring to prevent reputation damage +- **Redundancy Planning**: Backup systems to maintain availability during failures +- **Documentation**: Detailed procedures for consistent operation +- **Training**: Ensure all operators understand critical procedures + +**Economic Risk:** +- **Diversification**: Multiple nodes in different locations and configurations +- **Cost Management**: Understanding and controlling operational costs +- **Market Risk**: STORJ token price volatility impacts earnings +- **Technology Risk**: Network protocol changes may affect operations + +### Legal and Regulatory Risk + +**Compliance Strategy:** +- **Legal Research**: Understanding applicable laws and regulations +- **Professional Advice**: Legal and tax advice for significant operations +- **Documentation**: Maintaining records demonstrating compliance efforts +- **Adaptability**: Ability to adjust operations for changing regulatory environment + +**Jurisdiction Management:** +- **Operating Location**: Consider legal advantages of different jurisdictions +- **Data Routing**: Understanding where data flows and applicable laws +- **Treaty Benefits**: International tax treaties may provide advantages +- **Enforcement Risk**: Realistic assessment of enforcement likelihood and impact + +## Incident Response and Recovery + +### Security Incident Procedures + +**Incident Detection:** +- **Automated Monitoring**: Systems that detect and alert on security issues +- **Manual Review**: Regular manual checks for signs of compromise +- **Network Analysis**: Traffic analysis for suspicious patterns +- **Performance Monitoring**: Unusual performance may indicate security issues + +**Response Procedures:** +1. **Immediate Isolation**: Disconnect compromised systems from network +2. **Impact Assessment**: Determine scope and severity of incident +3. **Evidence Preservation**: Maintain logs and forensic evidence +4. **Recovery Planning**: Develop plan to restore normal operations +5. 
**Lessons Learned**: Post-incident review to prevent recurrence + +### Business Continuity Planning + +**Disaster Recovery:** +- **Data Recovery**: Procedures for restoring node data and configuration +- **Hardware Replacement**: Rapid replacement of failed or damaged equipment +- **Network Reconfiguration**: Alternative connectivity during outages +- **Communication Plans**: Procedures for communicating with stakeholders + +**Operational Continuity:** +- **Backup Operations**: Alternative locations or configurations for critical operations +- **Financial Reserves**: Adequate reserves for equipment replacement and repairs +- **Supply Chain**: Reliable sources for replacement hardware and services +- **Knowledge Management**: Documentation and training to prevent single points of failure + +## Best Practices Summary + +### Daily Operations + +**Security Hygiene:** +- Monitor system logs for suspicious activity +- Keep software and systems updated +- Review performance metrics for anomalies +- Verify backup integrity regularly + +**Privacy Protection:** +- Minimize unnecessary data collection and retention +- Use encrypted communications for all operational activities +- Maintain operational security discipline in communications +- Regular review of access controls and permissions + +### Long-term Strategy + +**Risk Assessment:** +- Regular evaluation of threat landscape changes +- Assessment of regulatory environment evolution +- Technology risk evaluation for protocol and software changes +- Economic risk analysis for business model sustainability + +**Infrastructure Evolution:** +- Plan for hardware lifecycle and replacement +- Evaluate new security technologies and practices +- Assess scaling requirements and security implications +- Maintain adaptability for changing operational requirements + +Understanding privacy and security considerations enables Storage Node Operators to build robust, secure operations that protect both their interests and contribute positively to the broader Storj network security model. + +## Related Concepts + +- [Storage Node Economics](docId:storage-node-economics) - Economic security and risk considerations +- [Reputation System](docId:reputation-system) - How security affects network reputation +- [Network Participation](docId:network-participation) - Technical security requirements for network operations \ No newline at end of file diff --git a/app/(docs)/node/concepts/reputation-system.md b/app/(docs)/node/concepts/reputation-system.md new file mode 100644 index 000000000..4922e1a85 --- /dev/null +++ b/app/(docs)/node/concepts/reputation-system.md @@ -0,0 +1,285 @@ +--- +title: Storage Node Reputation System +docId: reputation-system +metadata: + title: Storage Node Reputation System - How Trust and Performance Work + description: Detailed explanation of how Storj's reputation system works, including metrics, scoring, consequences, and strategies for maintaining good standing. +--- + +The Storj network's reputation system ensures data reliability and network health by tracking Storage Node performance and rewarding consistent, high-quality service. Understanding how reputation works is essential for successful Storage Node operation and maximizing earnings. + +## Reputation System Overview + +The reputation system continuously monitors Storage Node behavior across multiple dimensions and uses these metrics to make decisions about data placement, payment allocation, and network participation eligibility. 
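+
+Before digging into the mechanics, note that you can observe these metrics directly on a running node: the dashboard process exposes per-satellite scores through a local API. A sketch assuming the default dashboard port and an installed `jq`; the endpoint path reflects the current node API and may change between versions:
+
+```bash
+# Query the node's local API for per-satellite reputation scores
+# (run on the node host itself; the dashboard listens on 14002 by default)
+curl -s http://localhost:14002/api/sno/satellites | \
+  jq '.audits[] | {satelliteName, auditScore, suspensionScore, onlineScore}'
+```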
+
+### Core Design Principles
+
+**Objective Measurement:**
+- Performance metrics based on measurable technical criteria
+- Automated scoring eliminates subjective evaluation
+- Transparent calculation methods with published formulas
+
+**Incentive Alignment:**
+- Good performance increases earnings opportunities
+- Poor performance reduces income and network participation
+- Long-term reliability rewarded over short-term availability
+
+**Network Protection:**
+- Identifies and removes unreliable nodes before they cause data loss
+- Prevents gaming through multiple interconnected metrics
+- Maintains network integrity through proactive monitoring
+
+### Trust and Verification Model
+
+**Zero-Trust Architecture:**
+- No node is inherently trusted regardless of operator reputation
+- All nodes continuously prove their reliability through measurable actions
+- Network protocols assume potential node failures and design around them
+
+**Continuous Validation:**
+- Real-time monitoring of all node interactions
+- Historical performance tracking over extended periods
+- Predictive analysis to identify potential future issues
+
+## Key Reputation Metrics
+
+The reputation system evaluates Storage Nodes across several critical performance dimensions, each contributing to overall network health and reliability.
+
+### Uptime and Availability
+
+**Uptime Measurement:**
+- **Contact Success Rate**: Percentage of successful satellite communication attempts
+- **Response Time**: Average time to respond to satellite requests
+- **Availability Windows**: Performance during different time periods and network conditions
+
+**Scoring Methodology:**
+- Rolling average over recent time periods (typically 30-90 days)
+- Weighted by recency (recent performance matters more)
+- Minimum thresholds for continued network participation
+
+**Impact on Operations:**
+- High uptime increases data allocation from satellites
+- Consistent availability builds long-term reputation
+- Extended downtime triggers reputation penalties and reduced earnings
+
+### Data Integrity and Audit Performance
+
+**Audit System:**
+- **Random Data Verification**: Satellites randomly request stored data pieces for verification
+- **Cryptographic Validation**: Verify data hasn't been corrupted or tampered with
+- **Response Accuracy**: Ensure correct data is returned within expected timeframes
+
+**Audit Success Metrics:**
+- **Audit Score**: Percentage of successful audit responses over time
+- **Critical Threshold**: Audit scores that fall below the disqualification threshold (96% on current satellites) lead to node disqualification
+- **Recovery Mechanism**: Nodes can improve scores through consistent good performance
+
+**Data Protection Consequences:**
+- Failed audits indicate potential data loss or corruption
+- Multiple audit failures suggest systematic problems
+- Disqualified nodes lose all stored data and accumulated earnings
+
+### Bandwidth Performance
+
+**Bandwidth Quality Metrics:**
+- **Upload Speed**: Rate at which a node can receive new data
+- **Download Speed**: Rate at which customers can retrieve data
+- **Consistency**: Variation in performance over time and conditions
+
+**Performance Measurement:**
+- **Throughput Testing**: Regular speed tests during actual data operations
+- **Latency Analysis**: Round-trip time for requests and responses
+- **Reliability Tracking**: Success rate for data transfer operations
+
+**Impact on Earnings:**
+- Faster nodes receive more customer download requests (higher egress earnings)
+- Consistent performance increases satellite confidence and data
allocation +- Poor bandwidth performance reduces competitiveness for network operations + +### Suspension and Probationary Status + +**Suspension Triggers:** +- Extended downtime (typically 7+ days offline) +- Repeated audit failures indicating data integrity issues +- Systematic performance problems affecting network operations + +**Suspension Consequences:** +- **No New Data**: Satellites stop allocating new data to suspended nodes +- **Limited Earnings**: Only existing storage generates revenue, no new egress opportunities +- **Reputation Impact**: Suspension period affects long-term reputation scoring + +**Recovery Process:** +- Demonstrate consistent good performance over extended period +- Address underlying issues causing suspension +- Gradual restoration of full network participation privileges + +## Reputation Calculation and Scoring + +### Scoring Algorithms + +**Weighted Average Approach:** +``` +Overall Reputation = ( + Uptime Score × 40% + + Audit Score × 40% + + Bandwidth Score × 20% +) +``` + +**Time-Weighted Calculations:** +- Recent performance weighted more heavily than historical performance +- Exponential decay of older performance data +- Minimum observation periods before reputation stabilizes + +### Reputation Categories + +**New Node (0-6 months):** +- **Vetting Period**: Extended observation before full network participation +- **Limited Data Allocation**: Gradual increase in stored data based on performance +- **Higher Scrutiny**: More frequent audits and monitoring during initial period + +**Established Node (6+ months):** +- **Full Participation**: Eligible for complete range of network operations +- **Historical Context**: Long-term performance patterns influence scoring +- **Reputation Momentum**: Good reputation builds over time, providing resilience to occasional issues + +**High-Reputation Node:** +- **Priority Selection**: Preferred for high-value or critical data storage +- **Increased Allocation**: More data and bandwidth opportunities +- **Network Recognition**: Acknowledged as reliable network participant + +### Reputation Decay and Recovery + +**Performance Decay:** +- Reputation slowly decreases without active positive performance +- Extended inactivity leads to reduced network trust +- Requires ongoing good performance to maintain high scores + +**Recovery Mechanisms:** +- Consistent good performance gradually improves reputation +- Recovery time typically longer than initial reputation building +- Some reputation damage may have permanent components + +## Strategic Reputation Management + +### Building Strong Reputation + +**Initial Setup Best Practices:** +- **Quality Hardware**: Invest in reliable storage and networking equipment +- **Stable Internet**: Ensure consistent, high-speed internet connectivity +- **Redundant Systems**: Plan for power outages, hardware failures, and network issues + +**Operational Excellence:** +- **Proactive Monitoring**: Track performance metrics and address issues quickly +- **Preventive Maintenance**: Regular system updates and hardware maintenance +- **Incident Response**: Quick resolution of problems to minimize reputation impact + +### Maintaining Good Standing + +**Performance Monitoring:** +- **Dashboard Review**: Regular monitoring of reputation metrics in node dashboard +- **Trend Analysis**: Track performance trends to identify potential issues early +- **Comparative Analysis**: Compare performance with network averages and best practices + +**Risk Mitigation:** +- **Redundancy Planning**: Backup power, 
redundant internet connections +- **Hardware Monitoring**: Proactive replacement of aging or failing components +- **Environmental Controls**: Temperature, humidity, and physical security considerations + +### Recovery from Poor Reputation + +**Issue Identification:** +- **Root Cause Analysis**: Understand underlying causes of reputation problems +- **Systematic Diagnosis**: Check hardware, software, network, and environmental factors +- **Performance Baseline**: Establish current performance levels and improvement targets + +**Improvement Strategy:** +- **Incremental Progress**: Focus on consistent small improvements rather than dramatic changes +- **Sustained Performance**: Maintain improved performance over extended periods +- **Patience and Persistence**: Reputation recovery takes time and sustained effort + +## Reputation Impact on Economics + +### Earnings Correlation + +**Direct Economic Impact:** +- Higher reputation nodes receive more data allocation +- Better reputation correlates with increased egress opportunities +- Premium placement for high-value or time-sensitive data + +**Long-term Economic Benefits:** +- Established reputation provides more predictable earnings +- High-reputation nodes weather network changes better +- Reputation becomes competitive advantage in oversupplied markets + +### Network Participation Benefits + +**Priority Access:** +- High-reputation nodes selected first for new data storage +- Priority consideration for repair operations (additional earnings) +- Access to network beta features and optimizations + +**Business Relationships:** +- Trusted nodes may receive direct enterprise customer allocation +- Partnership opportunities with other network participants +- Recognition in network community and documentation + +## Technical Implementation Details + +### Measurement Infrastructure + +**Satellite Monitoring:** +- Each satellite independently tracks reputation metrics +- Cross-satellite reputation data sharing for consistency +- Automated systems with minimal human intervention + +**Data Collection:** +- **Contact Attempts**: Every satellite communication logged and analyzed +- **Audit Scheduling**: Randomized audit selection to prevent gaming +- **Performance Metrics**: Detailed timing and success/failure tracking + +### Anti-Gaming Measures + +**Multiple Metric Dependencies:** +- Cannot improve reputation by optimizing single metric +- Interconnected scoring prevents exploitation of specific weaknesses +- Balanced requirements across all performance dimensions + +**Long-term Observation:** +- Reputation building requires sustained performance over months +- Short-term manipulation cannot significantly affect long-term scores +- Historical performance patterns more important than recent peaks + +## Future Evolution and Considerations + +### Reputation System Enhancements + +**Machine Learning Integration:** +- Predictive analysis of node reliability +- Pattern recognition for early problem identification +- Adaptive scoring based on network conditions and requirements + +**Granular Reputation Categories:** +- Specialized reputation scores for different types of operations +- Geographic or network-specific reputation tracking +- Customer-specific reputation preferences + +### Network Scalability + +**Distributed Reputation Management:** +- Scalable reputation calculation for millions of nodes +- Efficient data sharing between satellites +- Real-time reputation updates without performance impact + +**Integration with Other Systems:** +- Reputation 
data integration with payment systems +- Automated node lifecycle management based on reputation +- Customer-facing reputation transparency and selection tools + +Understanding the reputation system enables Storage Node Operators to make informed decisions about hardware investments, operational procedures, and long-term strategies that align with network requirements and maximize both individual success and network health. + +## Related Concepts + +- [Storage Node Economics](docId:storage-node-economics) - How reputation affects earnings +- [Network Participation](docId:network-participation) - Technical requirements for participation +- [Privacy and Security for Operators](docId:privacy-security-operators) - Security considerations affecting reputation \ No newline at end of file diff --git a/app/(docs)/node/concepts/storage-node-economics.md b/app/(docs)/node/concepts/storage-node-economics.md new file mode 100644 index 000000000..1c99d566d --- /dev/null +++ b/app/(docs)/node/concepts/storage-node-economics.md @@ -0,0 +1,265 @@ +--- +title: Storage Node Economics +docId: storage-node-economics +metadata: + title: Storage Node Economics - Understanding Earnings and Costs + description: Comprehensive explanation of how Storage Node economics work, including payment structures, cost factors, profitability analysis, and economic incentives. +--- + +Understanding Storage Node economics is essential for making informed decisions about hardware investments, operational strategies, and long-term profitability. The Storj network's economic model balances fair compensation for Storage Node Operators with competitive pricing for customers. + +## Economic Model Overview + +Storage Node economics operate on a decentralized marketplace model where Storage Node Operators provide storage capacity and bandwidth in exchange for STORJ token payments. The economic incentives align individual node profitability with network health and reliability. + +### Core Economic Principles + +**Market-Driven Pricing:** +- Competitive rates determined by supply and demand +- Geographic and performance-based pricing differentials +- Transparent pricing with no hidden fees or complex structures + +**Performance-Based Compensation:** +- Higher payments for reliable, high-performance nodes +- Reputation system rewards consistent service +- Quality metrics directly influence earnings potential + +**Predictable Payment Structure:** +- Clear payment rates published in advance +- Monthly payment cycles with transparent calculations +- Historical payment data available for forecasting + +## Revenue Streams + +Storage Node Operators earn STORJ tokens through three primary revenue streams, each with different characteristics and optimization strategies. 
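+
+As a concrete illustration before breaking each stream down, the sketch below estimates one month's gross payout for a hypothetical node using the per-TB rates described in the sections that follow; actual earnings vary with fill level, egress demand, and any held-back amounts applied to newer nodes:
+
+```bash
+# Rough gross-earnings estimate for one month (all inputs hypothetical)
+storage_tb=6       # average TB stored over the month
+egress_tb=0.4      # TB downloaded by customers
+repair_tb=0.1      # TB served for repair and audit traffic
+
+# Rates in USD per TB, matching the payout structure below
+echo "scale=2; $storage_tb * 1.50 + $egress_tb * 2.00 + $repair_tb * 2.00" | bc
+# -> 10.00 (USD value, paid in STORJ tokens)
+```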
+
+### Storage Revenue
+
+**Payment Structure:**
+- **Rate**: $1.50 per TB per month (per the currently published payout rates)
+- **Measurement**: Average storage used throughout the month
+- **Payment**: Monthly based on actual usage
+
+**Storage Growth Patterns:**
+- **Ramp-up Period**: New nodes gradually receive more data over 6-15 months
+- **Geographic Factors**: Nodes in underserved regions may fill faster
+- **Network Demand**: Overall network growth affects individual node fill rates
+
+**Optimization Strategies:**
+- **Reliable Hardware**: Minimize downtime to maximize storage retention
+- **Adequate Space**: Provide sufficient overhead (10-15%) for growth
+- **Geographic Diversity**: Underserved regions may see faster fill rates
+
+### Egress Bandwidth Revenue
+
+**Payment Structure:**
+- **Rate**: $2.00 per TB of egress bandwidth (per the currently published payout rates)
+- **Measurement**: Data downloaded from your node by customers
+- **Variability**: Higher per-TB rate than storage, but less predictable
+
+**Egress Patterns:**
+- **Customer Usage**: Depends on customer download patterns
+- **Data Popularity**: Some stored data is accessed more frequently than other data
+- **Performance Impact**: Faster nodes may receive more egress requests
+
+**Optimization Strategies:**
+- **Network Performance**: Fast, reliable internet improves egress opportunities
+- **Uptime Optimization**: Availability during peak hours increases earnings
+- **Hardware Performance**: Faster disk I/O improves response times
+
+### Repair and Audit Bandwidth
+
+**Repair Bandwidth:**
+- **Rate**: $2.00 per TB
+- **Purpose**: Helping regenerate lost data from failed nodes
+- **Opportunity**: Higher when the network experiences node failures
+
+**Audit Bandwidth:**
+- **Rate**: $2.00 per TB
+- **Purpose**: Providing data samples for network integrity checks
+- **Frequency**: Regular audits ensure data integrity and node reputation
+
+## Cost Factors
+
+Understanding operational costs is crucial for accurate profitability analysis and investment planning.
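+
+To ground the categories below, the largest recurring cost for most home operators, electricity, is easy to estimate from wattage alone. A quick sketch; the wattage and tariff are example values from the ranges discussed in this section:
+
+```bash
+# Annual electricity cost estimate for an always-on node
+watts=100        # typical draw for a small dedicated system
+rate_kwh=0.12    # local electricity price in USD per kWh
+
+echo "scale=2; $watts / 1000 * 24 * 365 * $rate_kwh" | bc
+# -> 105.12 USD per year, roughly $8.76 per month
+```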
+
+### Hardware Costs
+
+**Initial Investment:**
+- **Storage**: $15-30 per TB for consumer drives, $25-50 per TB for enterprise
+- **Computer**: $200-800 for dedicated systems, $0 for existing hardware
+- **Network Equipment**: $50-200 for routers and switches, if needed
+
+**Hardware Lifecycle:**
+- **Drive Replacement**: 3-5 years typical lifespan
+- **System Upgrades**: 5-7 years for compute hardware
+- **Expansion Costs**: Additional drives as the node fills
+
+**Optimization Strategies:**
+- **Used Hardware**: Repurpose existing systems to minimize initial investment
+- **Efficient Hardware**: Balance cost with reliability and performance
+- **Scalable Design**: Plan for expansion to spread fixed costs
+
+### Operational Expenses
+
+**Electricity Costs:**
+- **Typical Usage**: 50-150 watts for a typical storage node
+- **Annual Cost**: $50-150 per year at $0.12/kWh
+- **Optimization**: Energy-efficient hardware reduces ongoing costs
+
+**Internet Costs:**
+- **Bandwidth Requirements**: Upload speed is more important than download speed
+- **Typical Needs**: 10+ Mbps upload for most nodes
+- **Cost Considerations**: Unlimited plans are usually necessary for profitability
+
+**Maintenance Costs:**
+- **Time Investment**: 1-5 hours per month for monitoring and maintenance
+- **Replacement Parts**: Budget for drive failures and hardware issues
+- **Software Updates**: Regular updates ensure optimal performance and earnings
+
+### Tax and Legal Considerations
+
+**Income Reporting:**
+- STORJ token payments are typically considered taxable income
+- Track earnings and expenses for accurate tax reporting
+- Consider professional tax advice for significant operations
+
+**Business Structure:**
+- Sole proprietorship is suitable for small operations
+- An LLC or corporation may provide benefits for larger operations
+- Consider liability insurance for significant investments
+
+## Profitability Analysis
+
+### Break-Even Calculations
+
+**Simple Break-Even Model:**
+```
+Monthly Break-Even Storage = (Amortized Hardware Cost + Monthly Operating Costs) / Monthly Storage Rate
+Example: ($500 hardware / 36 months + $15 monthly costs) / $1.50 per TB ≈ 19TB break-even
+```
+
+**Realistic Timeline:**
+- **Months 1-6**: Typically unprofitable due to low storage fill
+- **Months 6-18**: Approaching break-even as storage fills
+- **Months 18+**: Profitable operation with full or near-full utilization
+
+### Factors Affecting Profitability
+
+**Node Performance Factors:**
+- **Uptime**: 99%+ uptime is essential for maximum earnings
+- **Geographic Location**: Underserved regions may have advantages
+- **Internet Speed**: Faster connections increase egress opportunities
+- **Hardware Reliability**: Downtime directly reduces earnings
+
+**Network-Level Factors:**
+- **Overall Network Growth**: More customers increase demand for storage and bandwidth
+- **Pricing Changes**: Storj periodically adjusts payment rates
+- **Competition**: More nodes in a region may reduce individual fill rates
+
+**Economic Environment:**
+- **STORJ Token Price**: USD value affects actual earnings
+- **Electricity Costs**: Regional energy costs impact profitability
+- **Hardware Prices**: Equipment costs affect ROI calculations
+
+### Example Scenarios
+
+**Small Home Node (2TB):**
+- **Initial Investment**: $300 (drive + basic hardware)
+- **Monthly Costs**: $10 (electricity + internet allocation)
+- **Full Capacity Earnings**: ~$3/month storage + variable egress
+- **Break-even**: 18-24 months typical
+
+**Medium Dedicated Node (8TB):**
+- **Initial Investment**:
$800-1,200 +- **Monthly Costs**: $25-35 +- **Full Capacity Earnings**: ~$12/month storage + variable egress +- **Break-even**: 12-18 months typical + +**Large Scale Operation (50TB+):** +- **Initial Investment**: $5,000-15,000 +- **Monthly Costs**: $100-300 +- **Full Capacity Earnings**: ~$75+/month storage + significant egress +- **Break-even**: 12-24 months depending on scale and efficiency + +## Economic Incentives and Reputation + +### Reputation System Impact + +**Uptime Scoring:** +- Consistent availability increases storage allocation +- Downtime penalties reduce future earnings +- Recovery time needed after extended outages + +**Audit Performance:** +- Passing audits essential for continued participation +- Failed audits lead to disqualification and loss of all earnings +- Data integrity directly affects long-term profitability + +**Bandwidth Performance:** +- Fast response times increase egress opportunities +- Network latency affects competitiveness for customer requests +- Reliable performance builds reputation for future earnings + +### Long-Term Economic Strategy + +**Sustainability Focus:** +- Build sustainable operations rather than chasing short-term profits +- Invest in reliable hardware to minimize long-term operational issues +- Plan for hardware lifecycle and replacement costs + +**Scaling Considerations:** +- Evaluate optimal scale for your situation and resources +- Consider geographic distribution for larger operations +- Balance growth with maintaining service quality + +**Risk Management:** +- Diversify with multiple nodes to spread risk +- Maintain adequate reserves for hardware replacement +- Monitor network changes and adapt strategy accordingly + +## Market Dynamics and Future Considerations + +### Network Growth Impact + +**Customer Growth:** +- More customers increase demand for storage and bandwidth +- Network effects improve economics for all operators +- Expanding use cases drive additional revenue opportunities + +**Operator Competition:** +- More operators may reduce individual fill rates +- Geographic diversity helps mitigate competition effects +- Focus on reliability and performance for competitive advantage + +### Technology Evolution + +**Hardware Improvements:** +- More efficient hardware improves cost structures +- Higher capacity drives reduce per-TB hardware costs +- Better networking equipment improves performance competitiveness + +**Protocol Enhancements:** +- Network optimizations may improve payment rates +- New features could create additional revenue streams +- Protocol improvements enhance overall network efficiency + +### Economic Environment Changes + +**Token Price Volatility:** +- STORJ token price affects USD value of earnings +- Long-term token appreciation benefits early operators +- Price volatility creates both opportunity and risk + +**Regulatory Considerations:** +- Changing regulations may affect operation legality or economics +- Tax treatment may evolve as cryptocurrency regulations mature +- Monitor legal developments in relevant jurisdictions + +Understanding Storage Node economics enables informed decision-making about participation, investment levels, and operational strategies that align with your financial goals and risk tolerance. 
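+
+As a final worked example, the payback periods quoted in the scenarios above reduce to simple arithmetic once a node reaches steady state. A sketch that deliberately ignores the ramp-up period, so it understates real-world break-even times; the monthly margins are hypothetical:
+
+```bash
+# months to payback = initial investment / net monthly margin
+payback_months() {
+  local invest=$1 margin=$2
+  echo "scale=1; $invest / $margin" | bc
+}
+
+payback_months 300 15    # small node, $15/month net margin  -> 20.0 months
+payback_months 1000 60   # medium node, $60/month net margin -> 16.6 months
+```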
+ +## Related Concepts + +- [Reputation System](docId:reputation-system) - How reputation affects earnings +- [Network Participation](docId:network-participation) - Technical aspects of network involvement +- [Privacy and Security for Operators](docId:privacy-security-operators) - Security considerations affecting operations \ No newline at end of file diff --git a/app/(docs)/object-mount/concepts/_meta.json b/app/(docs)/object-mount/concepts/_meta.json new file mode 100644 index 000000000..9eb2ac133 --- /dev/null +++ b/app/(docs)/object-mount/concepts/_meta.json @@ -0,0 +1,21 @@ +{ + "title": "Concepts", + "nav": [ + { + "title": "Object Mount vs Filesystems", + "id": "object-mount-vs-filesystems" + }, + { + "title": "POSIX Compliance Explained", + "id": "posix-compliance-explained" + }, + { + "title": "Performance Characteristics", + "id": "performance-characteristics" + }, + { + "title": "When to Use Fusion", + "id": "when-to-use-fusion" + } + ] +} \ No newline at end of file diff --git a/app/(docs)/object-mount/concepts/performance-characteristics.md b/app/(docs)/object-mount/concepts/performance-characteristics.md new file mode 100644 index 000000000..e3d5371c9 --- /dev/null +++ b/app/(docs)/object-mount/concepts/performance-characteristics.md @@ -0,0 +1,362 @@ +--- +title: Performance Characteristics +docId: performance-characteristics +metadata: + title: Object Mount Performance Characteristics - Technical Analysis + description: Comprehensive analysis of Object Mount performance patterns, optimization strategies, and benchmarking results for different workload types. +--- + +Understanding Object Mount's performance characteristics enables you to optimize configurations, set appropriate expectations, and design workflows that leverage its strengths while mitigating limitations. + +## Performance Architecture Overview + +Object Mount's performance profile reflects the fundamental characteristics of bridging POSIX filesystems with object storage through intelligent caching, batching, and optimization strategies. + +### Core Performance Factors + +**Network Latency:** +- Geographic distance to object storage provider +- Internet connection quality and consistency +- Provider-specific API response times +- Network congestion and routing efficiency + +**Caching Effectiveness:** +- Working set size vs. available cache memory +- Access pattern predictability (sequential vs. random) +- Cache replacement algorithm efficiency +- Write-behind caching configuration + +**Object Storage Characteristics:** +- Provider throughput limits and throttling +- API operation costs and rate limits +- Multipart upload thresholds and performance +- Storage class characteristics (hot, cool, archive) + +**Workload Patterns:** +- File size distribution and access frequency +- Read vs. write operation ratios +- Sequential vs. 
random access patterns +- Concurrency levels and parallelization opportunities + +## Read Performance Analysis + +### Cache Hit Scenarios + +**First Access (Cache Miss):** +- Network latency + download time from object storage +- Typical performance: 50-500ms initial latency + bandwidth-limited throughput +- Large files: Streaming download allows processing during transfer + +**Subsequent Access (Cache Hit):** +- Near-local storage performance (microseconds to low milliseconds) +- Limited by local storage (SSD/RAM) and CPU processing +- Typical performance: 90%+ of local filesystem speed + +**Partial File Access:** +- HTTP range requests for specific file portions +- Excellent for large files where only portions are needed +- Media files: Seeking to specific timestamps without full download + +### Read Optimization Strategies + +**Intelligent Prefetching:** +``` +Sequential access detected → Prefetch next segments in background +Media file opened → Prefetch metadata and initial segments +Directory listed → Prefetch commonly accessed files +``` + +**Cache Management:** +- **LRU eviction**: Keeps most recently used data available +- **Working set optimization**: Adapts to application access patterns +- **Metadata caching**: Directory listings and file attributes cached separately + +### Read Performance by File Size + +**Small Files (< 1MB):** +- First access: Network latency dominated (50-200ms typical) +- Cached access: Excellent performance (< 1ms) +- Optimization: Batch small file operations when possible + +**Medium Files (1-64MB):** +- First access: Balanced latency and throughput +- Streaming: Can begin processing before complete download +- Cache efficiency: Good fit for typical cache sizes + +**Large Files (> 64MB):** +- First access: Throughput-limited (10-100MB/s depending on provider) +- Partial access: Range requests allow efficient random access +- Streaming optimizations: Best performance for sequential processing + +## Write Performance Analysis + +### Write Buffering and Batching + +**Small Writes (< 64KB):** +- Buffered in memory for batching +- Periodic flush to object storage (configurable intervals) +- Typical latency: Immediate return, background upload + +**Large Writes (> 5MB):** +- Direct streaming to object storage using multipart uploads +- Parallel upload of segments for maximum throughput +- Progress visible through standard file operations + +**Write-Behind Caching:** +``` +Application write → Local buffer → Background object upload → Confirmation +``` + +### Write Performance Optimization + +**Multipart Upload Benefits:** +- Parallel segment uploads improve throughput +- Resume capability for large files +- Better error recovery for network issues + +**Batching Strategies:** +- Multiple small writes combined into single object operations +- Metadata updates batched to reduce API calls +- Directory operations optimized through caching + +### Write Performance by Pattern + +**Sequential Writes:** +- Optimal performance with streaming and multipart uploads +- Excellent throughput for large files (often 100+ MB/s) +- Minimal read-modify-write cycles + +**Random Writes:** +- May require read-modify-write cycles for efficiency +- Performance depends on object size and write size +- Consider Fusion mode for write-intensive random access + +**Append Operations:** +- Efficient for log files and streaming data +- Write-behind buffering minimizes object rewriting +- Configurable flush intervals balance performance and durability + +## Metadata Performance + +### Metadata 
Operations + +**File Status (stat) Operations:** +- First access: Network round-trip to object storage +- Cached access: Local memory lookup (< 1ms) +- Batch optimization: Directory listings cache multiple entries + +**Directory Operations:** +- **List directory**: Efficient with pagination and caching +- **Create directory**: Immediate local operation, lazy object storage sync +- **Delete directory**: May require multiple object deletions + +### Metadata Caching Strategy + +**Cache Levels:** +- **L1**: In-memory metadata for immediate access +- **L2**: Local disk metadata for persistence across restarts +- **Refresh policies**: Configurable TTL and validation strategies + +**Consistency Management:** +- **Single client**: Strong consistency with immediate updates +- **Multiple clients**: Eventual consistency with configurable sync intervals +- **Conflict detection**: ETag validation prevents lost updates + +## Concurrency and Parallelization + +### Concurrent Access Patterns + +**Multiple Processes (Same Client):** +- Shared cache improves performance +- POSIX semantics maintained within client +- Lock coordination through local mechanisms + +**Multiple Clients:** +- Independent caches with sync overhead +- Eventual consistency model +- Performance depends on conflict frequency + +### Parallelization Benefits + +**Download Parallelization:** +- Multiple file downloads occur simultaneously +- Segment-level parallelization for large files +- Thread pool optimization balances concurrency and resource usage + +**Upload Parallelization:** +- Concurrent multipart uploads for multiple files +- Parallel segment uploads within single files +- Background processing maintains application responsiveness + +## Performance Benchmarking Results + +### Typical Performance Characteristics + +**Sequential Read Performance:** +- **Small files (1KB-1MB)**: 1,000-10,000 ops/sec (cache), 100-500 ops/sec (network) +- **Large files (100MB+)**: 50-200 MB/s throughput (provider-dependent) +- **Cache hit ratio**: 85-95% for typical workloads + +**Sequential Write Performance:** +- **Small writes**: 500-2,000 ops/sec (buffered), 10-100 ops/sec (immediate) +- **Large files**: 25-150 MB/s throughput (provider and upload parallelization dependent) +- **Latency**: < 1ms buffered, 50-500ms for object storage round-trip + +**Metadata Operations:** +- **Cached operations**: 50,000-100,000 ops/sec +- **Network operations**: 100-1,000 ops/sec (provider API dependent) +- **Directory listings**: 10-500 directories/sec (size and caching dependent) + +### Comparison with Local Storage + +**NVMe SSD Baseline:** +- Sequential read: 2,000-7,000 MB/s +- Sequential write: 1,000-5,000 MB/s +- Random IOPS: 100,000-1,000,000 operations/sec +- Latency: < 0.1ms for most operations + +**Object Mount Performance Relative to Local:** +- **Cached reads**: 80-95% of local performance +- **Uncached reads**: 5-15% of local (network-limited) +- **Buffered writes**: 60-90% of local performance +- **Metadata operations**: 10-80% depending on cache status + +## Optimization Strategies + +### Configuration Optimization + +**Cache Sizing:** +```bash +# Recommended cache sizes by use case +Development environment: 1-4GB +Media editing: 8-32GB +Data analysis: 4-16GB +Production servers: 10-50GB +``` + +**Write Buffer Optimization:** +- Buffer size: Balance memory usage with batching efficiency +- Flush intervals: Trade durability guarantees for performance +- Concurrent uploads: Match parallelization to available bandwidth + +### Application-Level 
Optimization + +**Access Pattern Optimization:** +- **Sequential access**: Leverage streaming and prefetching +- **Batch operations**: Group multiple file operations together +- **Working set management**: Keep frequently accessed files cached + +**File Organization Strategies:** +- **Large file benefits**: Better amortization of network overhead +- **Directory structure**: Balance deep nesting with listing performance +- **File naming**: Consistent patterns improve caching effectiveness + +### Provider-Specific Optimizations + +**Amazon S3:** +- Use Transfer Acceleration for global performance +- Optimize multipart upload thresholds (5MB+) +- Consider S3 storage classes for cost/performance balance + +**Storj:** +- Leverage global distribution for low latency +- Optimal segment size typically 64MB +- Geographic diversity improves fault tolerance + +**Azure Blob Storage:** +- Use hot/cool/archive tiers appropriately +- Optimize block size for throughput +- Consider regional replication for availability + +## Performance Monitoring and Troubleshooting + +### Key Performance Metrics + +**Throughput Metrics:** +- Bytes read/written per second +- Operations per second by type +- Cache hit ratios and miss penalties +- Network bandwidth utilization + +**Latency Metrics:** +- First-byte time for cache misses +- Operation completion times +- Queue depth and processing delays +- Network round-trip measurements + +### Performance Troubleshooting + +**Common Performance Issues:** + +**Cache Thrashing:** +- Symptoms: High cache miss rates, inconsistent performance +- Causes: Working set larger than cache, poor access patterns +- Solutions: Increase cache size, optimize access patterns + +**Network Bottlenecks:** +- Symptoms: Low throughput despite adequate bandwidth +- Causes: High latency, packet loss, provider throttling +- Solutions: Optimize provider selection, check network path + +**Write Performance Issues:** +- Symptoms: Slow write operations, high latency +- Causes: Insufficient buffering, small object sizes, network issues +- Solutions: Tune buffer sizes, batch operations, check provider performance + +### Diagnostic Tools + +**Built-in Monitoring:** +- Object Mount statistics and logging +- Cache performance metrics +- API operation timing and success rates + +**System-Level Tools:** +- `iostat`: Monitor I/O patterns and utilization +- `iftop`/`nethogs`: Network bandwidth monitoring +- `htop`/`top`: CPU and memory usage analysis + +**Application Profiling:** +- File access pattern analysis +- I/O operation timing measurements +- Working set size determination + +## Performance Expectations by Use Case + +### Media and Creative Workflows + +**Video Editing:** +- Initial file load: 30-120 seconds for 4K content +- Scrubbing performance: Near real-time after caching +- Export operations: 80-95% of local performance + +**Photo Processing:** +- RAW file loading: 2-10 seconds initial, < 1 second cached +- Batch processing: Excellent performance with adequate cache +- Export performance: Limited by upload bandwidth + +### Development and Testing + +**Code Compilation:** +- Source file access: Excellent cache performance +- Build artifacts: Good performance with proper caching +- Version control: Works well with appropriate buffer sizing + +**Data Analysis:** +- Dataset loading: Initial latency, then excellent performance +- Iterative analysis: Cache provides significant benefits +- Result export: Limited by upload bandwidth and parallelization + +### Backup and Archival + +**Backup Operations:** 
+- Initial backup: Limited by upload bandwidth
+- Incremental backups: Excellent performance with change detection
+- Restore operations: Good performance with parallel downloads
+
+Understanding these performance characteristics enables you to make informed decisions about Object Mount deployment, configuration, and optimization for your specific use cases.
+
+## Related Concepts
+
+- [Object Mount vs Filesystems](docId:object-mount-vs-filesystems) - Architectural performance implications
+- [POSIX Compliance](docId:posix-compliance-exp) - How POSIX operations affect performance
+- [When to Use Fusion](docId:when-to-use-fusion) - Alternative architectures for performance
\ No newline at end of file
diff --git a/app/(docs)/object-mount/concepts/posix-compliance-explained.md b/app/(docs)/object-mount/concepts/posix-compliance-explained.md
new file mode 100644
index 000000000..71adb2c3b
--- /dev/null
+++ b/app/(docs)/object-mount/concepts/posix-compliance-explained.md
@@ -0,0 +1,258 @@
+---
+title: POSIX Compliance Explained
+docId: posix-compliance-exp
+metadata:
+  title: POSIX Compliance in Object Mount - Technical Explanation
+  description: Detailed explanation of how Object Mount implements POSIX filesystem semantics on top of object storage, including limitations and compatibility considerations.
+---
+
+POSIX (Portable Operating System Interface) compliance is fundamental to Object Mount's ability to seamlessly integrate object storage with traditional applications. Understanding what POSIX compliance means and how Object Mount implements it helps you optimize performance and troubleshoot compatibility issues.
+
+## What is POSIX?
+
+POSIX defines a standard set of operating system interfaces that applications rely on for filesystem operations. Originally designed for Unix-like systems, POSIX has become the de facto standard for cross-platform filesystem compatibility.
+
+### Core POSIX Filesystem Operations
+
+**File Operations:**
+- `open()` - Open files with various modes and flags
+- `read()` / `write()` - Transfer data to and from files
+- `lseek()` - Move the file position for random access
+- `close()` - Properly close file handles
+- `fsync()` / `sync()` - Force data synchronization to storage
+
+**Directory Operations:**
+- `mkdir()` / `rmdir()` - Create and remove directories
+- `opendir()` / `readdir()` - List directory contents
+- `rename()` - Move and rename files and directories
+
+**Metadata Operations:**
+- `stat()` / `lstat()` - Get file and directory information
+- `chmod()` - Change file permissions
+- `chown()` - Change file ownership
+- `utime()` - Update access and modification times
+
+## Object Mount's POSIX Implementation
+
+Object Mount translates these POSIX operations into object storage API calls while maintaining expected semantics as closely as possible.
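+
+To see which of these calls a given application actually depends on, you can trace it against an ordinary path first. A minimal sketch using `strace`, with `cp` as a stand-in workload and placeholder `/tmp` paths:
+
+```bash
+# Summarize the filesystem-related system calls a command issues.
+# `cp` and the /tmp paths stand in for whatever application and data
+# you want to profile against the operation categories listed above.
+strace -f -c -e trace=%file,read,write,lseek,close \
+    cp /tmp/example.dat /tmp/example-copy.dat
+```
+
+The per-call counts in the summary map directly onto the operation categories above, which helps when judging how an application will exercise a mounted path.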
+ +### File System Call Interception + +**LD_PRELOAD Mechanism:** +- Intercepts standard C library calls before they reach the kernel +- Works with dynamically linked applications without modification +- Provides near-native performance for supported operations +- Falls back to system calls for unsupported operations + +**Implementation Details:** +```c +// Intercepted call flow +application_open() → object_mount_open() → s3_get_object() → return_fd +application_read() → object_mount_read() → cache_or_fetch() → return_data +application_write() → object_mount_write() → buffer_or_upload() → return_bytes +``` + +### POSIX Feature Support Matrix + +#### Fully Supported Features + +**File Operations:** +- Sequential and random read access +- Write operations with various modes (O_RDONLY, O_WRONLY, O_RDWR) +- File truncation and extension +- Multiple file descriptors per file +- File position seeking (lseek) + +**Directory Operations:** +- Directory creation and deletion +- Directory listing with standard readdir() interface +- Nested directory structures +- Directory traversal with opendir/readdir/closedir + +**Metadata Support:** +- File size, timestamps (access, modify, create) +- POSIX permissions (read, write, execute for owner, group, other) +- File type identification (regular files, directories, symlinks) +- Extended attributes (limited provider support) + +#### Partially Supported Features + +**Advanced File Operations:** +- **Memory mapping (mmap)**: Supported for read-only access, limited write support +- **File locking (flock, fcntl)**: Advisory locking only, not enforced across all clients +- **Sparse files**: Emulated through object metadata, not space-efficient +- **Hard links**: Simulated through metadata references, not true filesystem hard links + +**Permission Enforcement:** +- Full POSIX permissions within single client sessions +- Cross-client permission sync with configurable intervals +- ACL support depends on underlying object storage provider + +#### Unsupported or Limited Features + +**System-Level Features:** +- **Device files**: Special files (block, character devices) not supported +- **Named pipes (FIFOs)**: Cannot create pipes that persist in object storage +- **Unix domain sockets**: Local-only constructs incompatible with object storage +- **Mandatory file locking**: Only advisory locking available + +**Performance-Limited Features:** +- **Small random writes**: Require read-modify-write cycles for efficiency +- **Atomic operations**: Limited to single-object operations +- **Directory rename**: May require copying all contained objects + +## Consistency and Synchronization + +### Single Client Consistency + +Within a single Object Mount instance, all operations maintain strong consistency: + +- **Read after write**: Immediately visible within the same process +- **File locking**: Fully enforced for concurrent access within one client +- **Metadata updates**: Instantly reflected in subsequent operations + +### Multi-Client Consistency + +When multiple Object Mount instances access the same data: + +**Eventually Consistent Model:** +- Changes propagate based on configured sync intervals +- Metadata refresh policies determine visibility delays +- Last-writer-wins conflict resolution for concurrent modifications + +**Synchronization Mechanisms:** +- **Periodic refresh**: Configurable intervals for metadata updates +- **Change detection**: Object etag and modification time monitoring +- **Manual sync**: Force refresh through configuration or API calls + +### Conflict 
Resolution + +**Write Conflicts:** +- Object storage uses last-writer-wins semantics +- Object Mount detects conflicts through etag validation +- Applications receive appropriate error codes for conflicts + +**Directory Conflicts:** +- Directory operations use eventual consistency +- Concurrent creation/deletion may have race conditions +- Robust error handling prevents filesystem corruption + +## Performance Implications + +### POSIX Operations and Object Storage Mapping + +**Efficient Operations:** +- **Large sequential reads**: Map directly to object downloads with excellent performance +- **Whole file writes**: Optimal for object storage write patterns +- **Directory listings**: Cached and batched for efficiency + +**Less Efficient Operations:** +- **Small random writes**: Require read-modify-write cycles +- **Frequent metadata updates**: Generate many small API calls +- **File append operations**: May require rewriting entire objects + +### Optimization Strategies + +**Application-Level Optimizations:** +- Use large buffer sizes for I/O operations +- Batch metadata operations when possible +- Prefer sequential access patterns over random access + +**Configuration Optimizations:** +- Tune cache sizes for working set requirements +- Adjust sync intervals based on consistency needs +- Configure write-behind caching for write-heavy workloads + +## Compatibility Testing and Validation + +### Application Compatibility + +**Highly Compatible Applications:** +- **Media processing tools**: Adobe Creative Suite, Final Cut Pro, Avid +- **Development tools**: Compilers, interpreters, IDEs +- **Data analysis**: Python, R, MATLAB scientific computing +- **Backup software**: rsync, tar, standard archiving tools + +**Moderately Compatible Applications:** +- **Databases**: SQLite works well, PostgreSQL/MySQL have considerations +- **Version control**: Git, SVN work with appropriate configuration +- **Web servers**: Static file serving works, consider caching strategies + +**Applications Requiring Consideration:** +- **High-frequency write applications**: May need write-behind caching +- **Real-time systems**: Network latency affects predictability +- **Lock-dependent applications**: Understand advisory vs. mandatory locking differences + +### Testing Methodology + +**Compatibility Validation:** +1. **Functional testing**: Verify all required operations work correctly +2. **Performance testing**: Measure impact on application throughput +3. **Error handling**: Test application behavior with network issues +4. 
**Concurrent access**: Validate behavior with multiple clients + +**Benchmark Applications:** +- **IOzone**: Filesystem benchmark suite for performance testing +- **Bonnie++**: Tests various I/O patterns and metadata operations +- **fio**: Flexible I/O tester for specific workload simulation + +## Troubleshooting POSIX Issues + +### Common Compatibility Problems + +**Permission Errors:** +- Verify Object Mount is running with appropriate user permissions +- Check object storage access credentials and bucket permissions +- Review POSIX mode configuration settings + +**Performance Issues:** +- Monitor cache hit rates and adjust cache size +- Check network latency to object storage provider +- Review access patterns for object storage optimization opportunities + +**Consistency Problems:** +- Adjust metadata refresh intervals for multi-client scenarios +- Verify time synchronization across all client systems +- Check for conflicting concurrent operations + +### Diagnostic Tools and Techniques + +**Object Mount Debugging:** +- Enable detailed logging to identify operation patterns +- Monitor cache statistics and hit rates +- Track API call frequency and latency + +**System-Level Diagnostics:** +- Use `strace` to monitor system calls and identify interception issues +- Monitor memory usage patterns for cache efficiency +- Network monitoring to identify connectivity problems + +## Standards Compliance + +### POSIX.1 Core Standards + +Object Mount aims for compatibility with: +- **IEEE Std 1003.1-2017**: Core POSIX specification for system interfaces +- **Single UNIX Specification**: Common Unix interface standards +- **Linux Standard Base**: Linux-specific POSIX extensions + +### Deviations from Standard + +**Documented Limitations:** +- Network latency affects operation timing guarantees +- Some atomic operation semantics differ due to object storage characteristics +- Extended attributes support varies by object storage provider +- File locking is advisory-only across multiple clients + +**Design Trade-offs:** +- Performance optimization may delay some metadata updates +- Eventual consistency model differs from traditional filesystem guarantees +- Error codes may differ in some edge cases due to object storage mapping + +Understanding POSIX compliance in Object Mount helps you make informed decisions about application compatibility, performance optimization, and troubleshooting approaches for your specific use cases. + +## Related Concepts + +- [Object Mount vs Filesystems](docId:object-mount-vs-filesystems) - High-level architectural comparison +- [Performance Characteristics](docId:performance-characteristics) - Detailed performance analysis +- [When to Use Fusion](docId:when-to-use-fusion) - Alternative deployment patterns \ No newline at end of file diff --git a/app/(docs)/object-mount/concepts/when-to-use-fusion.md b/app/(docs)/object-mount/concepts/when-to-use-fusion.md new file mode 100644 index 000000000..ea013fb9b --- /dev/null +++ b/app/(docs)/object-mount/concepts/when-to-use-fusion.md @@ -0,0 +1,347 @@ +--- +title: When to Use Object Mount Fusion +docId: when-to-use-fusion +metadata: + title: When to Use Object Mount Fusion - Decision Guide + description: Comprehensive guide to understanding Object Mount Fusion hybrid storage and when it provides advantages over standard Object Mount deployment. +--- + +Object Mount Fusion represents a hybrid storage architecture that combines local storage with object storage to optimize performance for specific use cases. 
Understanding when and how to deploy Fusion helps you achieve optimal performance for demanding applications. + +## What is Object Mount Fusion + +Object Mount Fusion creates a tiered storage system where frequently accessed data resides on fast local storage while less frequently accessed data automatically migrates to cost-effective object storage. + +### Fusion Architecture Components + +**Local Storage Tier (Hot):** +- High-performance local storage (NVMe SSD, RAM disk) +- Immediate access with no network latency +- Limited capacity, higher cost per GB +- Handles active working set and write operations + +**Object Storage Tier (Cool):** +- Unlimited capacity object storage +- Cost-effective long-term storage +- Network latency for first access +- Handles archived and infrequently accessed data + +**Intelligent Management Layer:** +- Automatic data movement between tiers +- Predictive caching based on access patterns +- Background synchronization with object storage +- Transparent operation for applications + +## Core Benefits of Fusion + +### Performance Advantages + +**Write Performance:** +- All writes go to local storage first +- No network latency for write operations +- Batch uploads to object storage in background +- Sustained write performance matching local storage + +**Read Performance:** +- Hot data accessed at local storage speed +- Predictive fetching minimizes cache misses +- Larger effective cache size through tiering +- Better performance for mixed workloads + +**Consistency Benefits:** +- Strong consistency for local operations +- Reduced dependency on network connectivity +- Better handling of temporary network issues +- Improved application responsiveness + +### Cost Optimization + +**Capacity Economics:** +- Expensive local storage only for active data +- Unlimited cheap object storage for bulk data +- Automatic optimization without manual intervention +- Cost-per-GB scales with usage patterns + +**Operational Efficiency:** +- Reduced bandwidth costs through intelligent tiering +- Fewer object storage API operations +- Optimized upload/download patterns +- Better resource utilization + +## Use Case Analysis + +### Excellent Fusion Candidates + +#### Database Workloads + +**Traditional Databases (PostgreSQL, MySQL):** +- Active tables and indexes on local storage +- Historical data automatically tiered to object storage +- Transaction logs written locally with async backup +- Query performance maintained for active dataset + +**Analytics Databases:** +- Recent data partitions kept local for fast queries +- Historical partitions moved to object storage +- Automatic data lifecycle management +- Cost-effective retention of years of data + +**Key Performance Indicators:** +- 70-90% of queries against recent data (excellent local cache hit rate) +- Clear temporal access patterns (recent data accessed more frequently) +- Large total dataset size (benefits from object storage economics) + +#### Write-Intensive Applications + +**Log Aggregation Systems:** +- All log writes go to local storage (no write latency) +- Background processing and compression +- Automatic archival to object storage +- Fast query access to recent logs + +**Content Creation Workflows:** +- Active projects on local storage +- Completed projects moved to object storage +- Version history and backups in object storage +- Instant access to active work + +**Application Development:** +- Source code and active branches on local storage +- Build artifacts and releases in object storage +- Historical versions 
automatically archived +- Fast build and test cycles + +### Good Fusion Candidates (With Considerations) + +#### Media Processing + +**Video Editing:** +- Current project files on local storage +- Raw footage and proxies intelligently cached +- Rendered outputs uploaded to object storage +- Consider storage requirements vs. project timelines + +**Photo Workflows:** +- Recent shoots on local storage for editing +- Processed images and RAW archives in object storage +- Catalog and preview data optimally distributed +- Balance local storage size with archive needs + +#### Scientific Computing + +**Data Analysis Pipelines:** +- Current datasets and intermediate results local +- Source data and final results in object storage +- Model training data intelligently cached +- Consider data access patterns and processing requirements + +### Poor Fusion Candidates + +#### Streaming Applications + +**Real-time Video Streaming:** +- Consistent network bandwidth requirements +- No benefit from local caching for live streams +- Standard Object Mount often sufficient +- Consider CDN integration instead + +#### Archive-Only Workloads + +**Backup Systems:** +- Data accessed infrequently after initial backup +- No benefit from local storage tier +- Standard Object Mount provides cost-effective solution +- Focus on bandwidth optimization instead + +#### Small Working Sets + +**Simple Web Applications:** +- Total data smaller than cost-effective local storage +- Access patterns don't benefit from tiering +- Complexity not justified by performance gains +- Standard Object Mount simpler and adequate + +## Technical Requirements and Considerations + +### Infrastructure Requirements + +**Local Storage Specifications:** +- **Capacity**: 10-50% of total dataset (application dependent) +- **Performance**: NVMe SSD recommended for database workloads +- **Reliability**: RAID or replication for critical applications +- **Monitoring**: Capacity and performance tracking essential + +**Network Requirements:** +- **Bandwidth**: Sufficient for background sync operations +- **Reliability**: Handle temporary connectivity issues gracefully +- **Latency**: Lower latency improves background sync efficiency + +**System Resources:** +- **Memory**: Additional RAM for cache management and metadata +- **CPU**: Processing overhead for tiering decisions and data movement +- **Monitoring**: Comprehensive logging and metrics collection + +### Configuration Considerations + +**Tiering Policies:** +- **Age-based tiering**: Move data to object storage after time threshold +- **Access-based tiering**: Move data based on access frequency +- **Size-based tiering**: Prioritize smaller files for local storage +- **Manual policies**: Application-specific tiering rules + +**Sync Strategies:** +- **Aggressive sync**: Immediate upload to object storage (higher reliability) +- **Lazy sync**: Batch uploads during low-activity periods (higher performance) +- **Selective sync**: Only sync specific file types or directories +- **Bandwidth limiting**: Throttle sync to preserve application bandwidth + +## Performance Expectations + +### Fusion vs. 
Standard Object Mount
+
+**Write Performance:**
+- **Fusion**: Near-local performance for all writes
+- **Standard**: Network-limited with write-behind caching
+- **Improvement**: 5-50x better write latency, sustained throughput
+
+**Read Performance (Hot Data):**
+- **Fusion**: Local storage speed for cached data
+- **Standard**: Object storage speed for all data
+- **Improvement**: 10-1000x better latency for frequently accessed data
+
+**Read Performance (Cold Data):**
+- **Fusion**: Object storage speed plus tiering overhead
+- **Standard**: Object storage speed
+- **Difference**: Minimal performance difference, slight overhead
+
+### Performance Tuning
+
+**Cache Size Optimization:**
+```bash
+# Working set analysis: total size of files accessed recently
+find /mount/point -type f -atime -7 -print0 | du -ch --files0-from=- | tail -1   # accessed in last week
+find /mount/point -type f -atime -30 -print0 | du -ch --files0-from=- | tail -1  # accessed in last month
+
+# Recommended cache sizes
+# Database workloads: 20-40% of active dataset
+# Development:        50-80% of project files
+# Media editing:      30-60% of current projects
+# Analytics:          15-30% of recent data
+```
+
+**Tiering Policy Tuning:**
+- Monitor access patterns to optimize age-based policies
+- Track cache hit rates to validate sizing decisions
+- Adjust sync frequency based on reliability requirements
+- Balance local storage utilization with performance needs
+
+## Implementation Strategies
+
+### Migration Planning
+
+**Gradual Migration:**
+1. Deploy Fusion alongside existing storage
+2. Migrate non-critical applications first
+3. Monitor performance and adjust policies
+4. Gradually migrate production workloads
+
+**Performance Baseline:**
+- Measure current application performance
+- Identify performance bottlenecks and requirements
+- Establish monitoring for key performance metrics
+- Plan rollback procedures for issues
+
+### Operational Considerations
+
+**Monitoring and Alerting:**
+- Local storage capacity and utilization
+- Object storage sync status and bandwidth usage
+- Cache hit rates and tiering effectiveness
+- Application performance impacts
+
+**Backup and Recovery:**
+- Local storage backup strategies
+- Object storage provides inherent backup
+- Recovery procedures for local storage failures
+- Testing and validation of recovery processes
+
+**Capacity Management:**
+- Growth projections for both local and object storage
+- Cost optimization through tiering policy adjustments
+- Capacity alerting and planning processes
+- Hardware lifecycle management
+
+## Cost-Benefit Analysis
+
+### Cost Factors
+
+**Local Storage Costs:**
+- Hardware acquisition and depreciation
+- Power, cooling, and data center space
+- Maintenance and replacement costs
+- Administration and monitoring overhead
+
+**Object Storage Costs:**
+- Storage costs (typically $0.01-0.05/GB/month)
+- API operation costs
+- Bandwidth costs for uploads/downloads
+- No hardware or maintenance overhead
+
+### Break-Even Analysis
+
+**Typical Break-Even Points:**
+- **Database workloads**: 500GB-2TB total data with 20-40% active
+- **Development environments**: 100GB-1TB with 50-80% active
+- **Media workflows**: 1TB-10TB with 30-60% active
+- **Analytics**: 2TB-20TB with 15-30% active
+
+**ROI Calculation Factors:**
+- Performance improvement value (productivity, user experience)
+- Reduced hardware acquisition and maintenance costs
+- Operational efficiency improvements
+- Scalability and flexibility benefits
+
+## Decision Framework
+
+### Fusion Suitability Checklist
+
+**Strong Fusion Candidates:**
+- [ ] Write-intensive workloads with local
storage performance requirements +- [ ] Clear data access patterns (hot/cold data distinction) +- [ ] Large total dataset size (>500GB) with smaller active working set +- [ ] Performance-sensitive applications with cost constraints +- [ ] Existing local storage infrastructure that can be leveraged + +**Marginal Fusion Candidates:** +- [ ] Mixed workloads with unclear access patterns +- [ ] Medium-sized datasets (100-500GB) with moderate performance requirements +- [ ] Applications with flexible performance requirements +- [ ] Limited local storage infrastructure or budget + +**Poor Fusion Candidates:** +- [ ] Small datasets that fit entirely on cost-effective local storage +- [ ] Archive-only workloads with infrequent access +- [ ] Applications with minimal performance requirements +- [ ] Streaming or real-time applications without caching benefits + +### Implementation Decision Tree + +``` +Is write performance critical? +├─ Yes → Does data have clear hot/cold patterns? +│ ├─ Yes → Is total dataset > 500GB? +│ │ ├─ Yes → Strong Fusion candidate +│ │ └─ No → Consider standard local storage +│ └─ No → Consider standard Object Mount with write-behind caching +└─ No → Are read performance and cost both important? + ├─ Yes → Evaluate based on dataset size and access patterns + └─ No → Standard Object Mount likely sufficient +``` + +Understanding when to use Object Mount Fusion enables you to make informed architectural decisions that optimize both performance and cost for your specific use cases. + +## Related Concepts + +- [Object Mount vs Filesystems](docId:object-mount-vs-filesystems) - Fundamental architecture comparison +- [Performance Characteristics](docId:performance-characteristics) - Detailed performance analysis +- [POSIX Compliance](docId:posix-compliance-exp) - How POSIX semantics work in Fusion \ No newline at end of file From ce9b392dc4501ba26ad47277d1223499587f6f38 Mon Sep 17 00:00:00 2001 From: onionjake <113088+onionjake@users.noreply.github.com> Date: Fri, 29 Aug 2025 14:56:32 -0600 Subject: [PATCH 4/8] Next phase adding Diataxis --- app/(docs)/_meta.json | 25 +++++ app/(docs)/dcs/_meta.json | 53 +++++++++ app/(docs)/dcs/how-to/configure-cors.md | 29 ++++- app/(docs)/dcs/reference/cli-commands.md | 4 + .../tutorials/your-first-week-with-storj.md | 24 ++++- app/(docs)/learn/_meta.json | 17 +++ app/(docs)/learn/concepts/_meta.json | 101 ++++++++++++++++++ app/(docs)/node/_meta.json | 37 +++++++ .../node/concepts/storage-node-economics.md | 4 + app/(docs)/object-mount/_meta.json | 45 ++++++++ app/(docs)/page.md | 31 +++++- 11 files changed, 362 insertions(+), 8 deletions(-) create mode 100644 app/(docs)/_meta.json create mode 100644 app/(docs)/dcs/_meta.json create mode 100644 app/(docs)/learn/_meta.json create mode 100644 app/(docs)/learn/concepts/_meta.json create mode 100644 app/(docs)/node/_meta.json create mode 100644 app/(docs)/object-mount/_meta.json diff --git a/app/(docs)/_meta.json b/app/(docs)/_meta.json new file mode 100644 index 000000000..245e2a0ed --- /dev/null +++ b/app/(docs)/_meta.json @@ -0,0 +1,25 @@ +{ + "title": "Storj Documentation", + "nav": [ + { + "title": "Learn Concepts", + "id": "learn" + }, + { + "title": "Decentralized Cloud Storage", + "id": "dcs" + }, + { + "title": "Object Mount", + "id": "object-mount" + }, + { + "title": "Storage Node", + "id": "node" + }, + { + "title": "Support", + "id": "support" + } + ] +} \ No newline at end of file diff --git a/app/(docs)/dcs/_meta.json b/app/(docs)/dcs/_meta.json new file mode 100644 index 
000000000..d671893e1 --- /dev/null +++ b/app/(docs)/dcs/_meta.json @@ -0,0 +1,53 @@ +{ + "title": "Decentralized Cloud Storage (DCS)", + "nav": [ + { + "title": "Getting Started", + "id": "getting-started" + }, + { + "title": "Tutorials", + "id": "tutorials" + }, + { + "title": "How-to Guides", + "id": "how-to" + }, + { + "title": "Reference", + "id": "reference" + }, + { + "title": "API", + "id": "api" + }, + { + "title": "Buckets", + "id": "buckets" + }, + { + "title": "Objects", + "id": "objects" + }, + { + "title": "Access", + "id": "access" + }, + { + "title": "Code Samples", + "id": "code" + }, + { + "title": "Third-party Tools", + "id": "third-party-tools" + }, + { + "title": "Migration Guides", + "id": "migrate" + }, + { + "title": "Pricing", + "id": "pricing" + } + ] +} \ No newline at end of file diff --git a/app/(docs)/dcs/how-to/configure-cors.md b/app/(docs)/dcs/how-to/configure-cors.md index 788d6efe8..1e745f032 100644 --- a/app/(docs)/dcs/how-to/configure-cors.md +++ b/app/(docs)/dcs/how-to/configure-cors.md @@ -6,6 +6,10 @@ metadata: description: Step-by-step guide to understand and work with Storj's CORS policy for secure web application development. --- +{% callout type="info" %} +**How-to Guide** - Problem-solving guide for specific tasks +{% /callout %} + This guide explains how to work with Cross-Origin Resource Sharing (CORS) when building web applications that access Storj storage. ## Prerequisites @@ -177,7 +181,24 @@ async uploadFile(file) { Once CORS is working correctly: -- [Implement presigned URLs for secure uploads](#) -- [Set up client-side file validation](#) -- [Configure bucket policies for web hosting](#) -- [Optimize web application performance](#) \ No newline at end of file +- [Use Presigned URLs](docId:use-presigned-urls) for secure uploads +- [Set up Object Versioning](docId:setup-object-versioning) for data protection +- [Optimize Upload Performance](docId:optimize-upload-performance) for better UX + +## Related Content + +**Start Learning:** +- [Your First Week with Storj](docId:first-week-storj-tutorial) - Complete beginner tutorial +- [Build Your First App](docId:build-your-first-app-tutorial) - Web app development guide + +**Related How-to Guides:** +- [Use Presigned URLs](docId:use-presigned-urls) - Secure browser uploads +- [Migrate from AWS S3](docId:migrate-from-s3) - Switch to Storj storage + +**Technical Details:** +- [S3 API Reference](docId:s3-api-reference) - CORS specification details +- [CLI Commands Reference](docId:cli-reference-001) - Command-line tools + +**Background Concepts:** +- [Security and Encryption](docId:security-and-encryption) - How Storj secures data +- [Storj Architecture Overview](docId:storj-architecture-overview) - System design \ No newline at end of file diff --git a/app/(docs)/dcs/reference/cli-commands.md b/app/(docs)/dcs/reference/cli-commands.md index 686ffb289..ff9daf11e 100644 --- a/app/(docs)/dcs/reference/cli-commands.md +++ b/app/(docs)/dcs/reference/cli-commands.md @@ -6,6 +6,10 @@ metadata: description: "Complete reference for all Uplink CLI commands, flags, and usage patterns for managing Storj DCS storage." --- +{% callout type="warning" %} +**Reference** - Authoritative specification and lookup information +{% /callout %} + Complete reference for the Uplink CLI tool commands and options. 
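+
+As a quick orientation before the full listing, a few representative invocations show the general shape of Uplink commands; the bucket and object names below are placeholders:
+
+```bash
+# Representative Uplink invocations (names are placeholders)
+uplink mb sj://demo-bucket                  # create a bucket
+uplink cp ./report.pdf sj://demo-bucket/    # upload an object
+uplink ls sj://demo-bucket                  # list objects in a bucket
+uplink rm sj://demo-bucket/report.pdf       # delete an object
+```
+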
{% callout type="info" %} diff --git a/app/(docs)/dcs/tutorials/your-first-week-with-storj.md b/app/(docs)/dcs/tutorials/your-first-week-with-storj.md index 86b65a188..2e915700f 100644 --- a/app/(docs)/dcs/tutorials/your-first-week-with-storj.md +++ b/app/(docs)/dcs/tutorials/your-first-week-with-storj.md @@ -6,6 +6,10 @@ metadata: description: Comprehensive 7-day tutorial to master Storj DCS fundamentals, from account setup to advanced features --- +{% callout type="note" %} +**Tutorial** - Learning-oriented guide for hands-on skill development +{% /callout %} + Master Storj DCS in your first week with this comprehensive tutorial that takes you from complete beginner to confident user. ## What you'll build @@ -574,4 +578,22 @@ Now that you've mastered the basics: - [Discord Community](https://discord.gg/storj) - [Documentation](https://docs.storj.io) -Congratulations on completing your first week with Storj! You're now ready to build amazing applications with decentralized cloud storage. \ No newline at end of file +Congratulations on completing your first week with Storj! You're now ready to build amazing applications with decentralized cloud storage. + +## Related Content + +**More Tutorials:** +- [Build Your First App](docId:build-your-first-app-tutorial) - Create a web application with Storj integration + +**Next Steps (How-to Guides):** +- [Optimize Upload Performance](docId:optimize-upload-performance) - Speed up your data uploads +- [Configure CORS](docId:configure-cors-how-to) - Set up web application security +- [Migrate from AWS S3](docId:migrate-from-s3) - Switch from S3 to Storj + +**Technical Reference:** +- [CLI Commands Reference](docId:cli-reference-001) - Complete command documentation +- [S3 API Reference](docId:s3-api-reference) - API compatibility details + +**Understanding Concepts:** +- [Understanding Decentralized Storage](docId:understand-decent-stor) - Learn the fundamentals +- [Storj Architecture Overview](docId:storj-architecture-overview) - How the network works \ No newline at end of file diff --git a/app/(docs)/learn/_meta.json b/app/(docs)/learn/_meta.json new file mode 100644 index 000000000..208ed95ab --- /dev/null +++ b/app/(docs)/learn/_meta.json @@ -0,0 +1,17 @@ +{ + "title": "Learn", + "nav": [ + { + "title": "Concepts", + "id": "concepts" + }, + { + "title": "Tutorials", + "id": "tutorials" + }, + { + "title": "Self-host", + "id": "self-host" + } + ] +} \ No newline at end of file diff --git a/app/(docs)/learn/concepts/_meta.json b/app/(docs)/learn/concepts/_meta.json new file mode 100644 index 000000000..62ef52487 --- /dev/null +++ b/app/(docs)/learn/concepts/_meta.json @@ -0,0 +1,101 @@ +{ + "title": "Core Concepts", + "nav": [ + { + "title": "Understanding Decentralized Storage", + "id": "understanding-decentralized-storage" + }, + { + "title": "Storj Architecture Overview", + "id": "storj-architecture-overview" + }, + { + "title": "Security and Encryption", + "id": "security-and-encryption" + }, + { + "title": "Definitions", + "id": "definitions" + }, + { + "title": "Access Control", + "id": "access" + }, + { + "title": "Connectors", + "id": "connectors" + }, + { + "title": "Consistency", + "id": "consistency" + }, + { + "title": "Data Structure", + "id": "data-structure" + }, + { + "title": "Decentralization", + "id": "decentralization" + }, + { + "title": "Edge Services", + "id": "edge-services" + }, + { + "title": "Encryption Keys", + "id": "encryption-key" + }, + { + "title": "File Redundancy", + "id": "file-redundancy" + }, + { + "title": 
"File Repair", + "id": "file-repair" + }, + { + "title": "Immutability", + "id": "immutability" + }, + { + "title": "Key Architecture Constructs", + "id": "key-architecture-constructs" + }, + { + "title": "Limits", + "id": "limits" + }, + { + "title": "Linksharing Service", + "id": "linksharing-service" + }, + { + "title": "Multi-tenant Data", + "id": "multi-tenant-data" + }, + { + "title": "Multiregion Availability", + "id": "multiregion-availability" + }, + { + "title": "S3 Compatibility", + "id": "s3-compatibility" + }, + { + "title": "Satellite", + "id": "satellite" + }, + { + "title": "Security Models", + "id": "security-models" + }, + { + "title": "Solution Architectures", + "id": "solution-architectures" + }, + { + "title": "WORM", + "id": "worm" + } + ] +} \ No newline at end of file diff --git a/app/(docs)/node/_meta.json b/app/(docs)/node/_meta.json new file mode 100644 index 000000000..899e32780 --- /dev/null +++ b/app/(docs)/node/_meta.json @@ -0,0 +1,37 @@ +{ + "title": "Storage Node", + "nav": [ + { + "title": "Get Started", + "id": "get-started" + }, + { + "title": "Tutorials", + "id": "tutorials" + }, + { + "title": "How-to Guides", + "id": "how-to" + }, + { + "title": "Concepts", + "id": "concepts" + }, + { + "title": "Reference", + "id": "reference" + }, + { + "title": "Commercial Node", + "id": "commercial-node" + }, + { + "title": "Payouts", + "id": "payouts" + }, + { + "title": "FAQ", + "id": "faq" + } + ] +} \ No newline at end of file diff --git a/app/(docs)/node/concepts/storage-node-economics.md b/app/(docs)/node/concepts/storage-node-economics.md index 1c99d566d..8cb3ced1a 100644 --- a/app/(docs)/node/concepts/storage-node-economics.md +++ b/app/(docs)/node/concepts/storage-node-economics.md @@ -6,6 +6,10 @@ metadata: description: Comprehensive explanation of how Storage Node economics work, including payment structures, cost factors, profitability analysis, and economic incentives. --- +{% callout type="success" %} +**Explanation** - Concept guide providing context and understanding +{% /callout %} + Understanding Storage Node economics is essential for making informed decisions about hardware investments, operational strategies, and long-term profitability. The Storj network's economic model balances fair compensation for Storage Node Operators with competitive pricing for customers. ## Economic Model Overview diff --git a/app/(docs)/object-mount/_meta.json b/app/(docs)/object-mount/_meta.json new file mode 100644 index 000000000..71926feb2 --- /dev/null +++ b/app/(docs)/object-mount/_meta.json @@ -0,0 +1,45 @@ +{ + "title": "Object Mount", + "nav": [ + { + "title": "Tutorials", + "id": "tutorials" + }, + { + "title": "How-to Guides", + "id": "how-to" + }, + { + "title": "Concepts", + "id": "concepts" + }, + { + "title": "Reference", + "id": "reference" + }, + { + "title": "Linux", + "id": "linux" + }, + { + "title": "macOS", + "id": "macos" + }, + { + "title": "Windows", + "id": "windows" + }, + { + "title": "Media Workflows", + "id": "media-workflows" + }, + { + "title": "Release Notes", + "id": "release-notes" + }, + { + "title": "FAQ", + "id": "faq" + } + ] +} \ No newline at end of file diff --git a/app/(docs)/page.md b/app/(docs)/page.md index 5b25cb2d4..84c0c5744 100644 --- a/app/(docs)/page.md +++ b/app/(docs)/page.md @@ -35,9 +35,34 @@ Some of the main Storj features include: ## How to Use These Docs -On the left side of the screen, you'll find the docs navbar. 
The pages are organized sequentially that you can follow step-by-step or if you're already familiar with object storage you can jump to the section that applies most to your use case. - -On the right side of the screen, there's a table of contents to help you move between parts of a page. To find a page fast, use the search bar at the top or press Ctrl+K or Cmd+K on your keyboard. +This documentation is organized using the [Diataxis framework](https://diataxis.fr/) to help you find exactly what you need: + +### Choose Your Path + +**New to Storj?** Start with our tutorials: +- [**DCS**: Your First Week with Storj](docId:first-week-storj-tutorial) - Complete beginner guide +- [**Object Mount**: Your First Mount](docId:object-mount-first-tutorial) - Get started with filesystem access +- [**Storage Node**: Set Up Your First Node](docId:setup-first-node-tutorial) - Start earning by providing storage + +**Need to solve a specific problem?** Check our how-to guides: +- [**DCS How-to Guides**](docId:dcs-how-to) - Task-focused solutions +- [**Object Mount How-to Guides**](docId:object-mount-how-to) - Specific configuration tasks +- [**Storage Node How-to Guides**](docId:node-how-to) - Operational procedures + +**Looking for technical details?** Browse our reference sections: +- [**DCS Reference**](docId:dcs-reference) - API specs, CLI commands, limits +- [**Object Mount Reference**](docId:object-mount-reference) - Configuration options, compatibility +- [**Storage Node Reference**](docId:node-reference) - System requirements, metrics + +**Want to understand how things work?** Read our concept explanations: +- [**Core Concepts**](docId:learn-concepts) - Fundamental Storj concepts +- [**Object Mount Concepts**](docId:object-mount-concepts) - Filesystem bridging explained +- [**Storage Node Concepts**](docId:node-concepts) - Economics, reputation, participation + +**Quick Navigation Tips:** +- Use the search bar (Ctrl+K or Cmd+K) to find specific topics +- Follow the left sidebar for sequential learning +- Use the right sidebar to jump within pages ## Join the community From ad9228301af7229832474ad9812439a22fc14a45 Mon Sep 17 00:00:00 2001 From: onionjake <113088+onionjake@users.noreply.github.com> Date: Fri, 29 Aug 2025 15:07:20 -0600 Subject: [PATCH 5/8] Next phase adding Diataxis --- app/(docs)/dcs/how-to/migrate-from-s3.md | 4 ++++ .../understanding-decentralized-storage/page.md | 4 ++++ app/(docs)/node/how-to/change-payout-address.md | 4 ++++ app/(docs)/node/reference/system-requirements.md | 4 ++++ app/(docs)/page.md | 16 ++++++++-------- 5 files changed, 24 insertions(+), 8 deletions(-) diff --git a/app/(docs)/dcs/how-to/migrate-from-s3.md b/app/(docs)/dcs/how-to/migrate-from-s3.md index 618f8121b..29c7bfc05 100644 --- a/app/(docs)/dcs/how-to/migrate-from-s3.md +++ b/app/(docs)/dcs/how-to/migrate-from-s3.md @@ -6,6 +6,10 @@ metadata: description: Complete guide to migrate your data and applications from Amazon S3 to Storj DCS --- +{% callout type="info" %} +**How-to Guide** - Problem-solving guide for specific tasks +{% /callout %} + Migrate your data and applications from Amazon S3 to Storj DCS with minimal disruption to your workflows. 
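+
+At its core, the migration is a bulk one-way copy followed by a verification pass. A minimal sketch with Rclone, assuming remotes named `aws` and `storj` are already configured and using placeholder bucket names:
+
+```bash
+# One-way copy from the existing S3 bucket to the Storj bucket.
+# "aws" and "storj" are assumed rclone remotes; bucket names are examples.
+rclone sync aws:source-bucket storj:dest-bucket --progress --transfers 16
+
+# Verify that every source object exists at the destination.
+rclone check aws:source-bucket storj:dest-bucket --one-way
+```
+
+Adjust `--transfers` to match your available bandwidth; the rest of this guide walks through preparation, transfer, and verification in detail.
+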
## Prerequisites diff --git a/app/(docs)/learn/concepts/understanding-decentralized-storage/page.md b/app/(docs)/learn/concepts/understanding-decentralized-storage/page.md index 6d4a688a9..9257f6f06 100644 --- a/app/(docs)/learn/concepts/understanding-decentralized-storage/page.md +++ b/app/(docs)/learn/concepts/understanding-decentralized-storage/page.md @@ -6,6 +6,10 @@ metadata: description: Learn the fundamental principles behind decentralized storage systems, how they differ from traditional cloud storage, and why they provide better security and privacy. --- +{% callout type="success" %} +**Explanation** - Concept guide providing context and understanding +{% /callout %} + Decentralized storage represents a fundamental shift from traditional centralized cloud storage models. Understanding how it works and why it matters is essential for making informed decisions about data storage and protection. ## What is Decentralized Storage diff --git a/app/(docs)/node/how-to/change-payout-address.md b/app/(docs)/node/how-to/change-payout-address.md index 80868e32e..d6f12ba66 100644 --- a/app/(docs)/node/how-to/change-payout-address.md +++ b/app/(docs)/node/how-to/change-payout-address.md @@ -6,6 +6,10 @@ metadata: description: Step-by-step guide to update the wallet address where you receive payments for your storage node operations. --- +{% callout type="info" %} +**How-to Guide** - Problem-solving guide for specific tasks +{% /callout %} + This guide shows you how to change the wallet address where you receive payments for operating your storage node. ## Prerequisites diff --git a/app/(docs)/node/reference/system-requirements.md b/app/(docs)/node/reference/system-requirements.md index 401efb2bd..d4beaf14e 100644 --- a/app/(docs)/node/reference/system-requirements.md +++ b/app/(docs)/node/reference/system-requirements.md @@ -6,6 +6,10 @@ metadata: description: "Complete reference for Storage Node hardware, software, and network requirements for optimal performance." --- +{% callout type="warning" %} +**Reference** - Authoritative specification and lookup information +{% /callout %} + Complete reference for Storage Node system requirements and specifications. ## Hardware Requirements diff --git a/app/(docs)/page.md b/app/(docs)/page.md index 84c0c5744..f02bafbf4 100644 --- a/app/(docs)/page.md +++ b/app/(docs)/page.md @@ -41,18 +41,18 @@ This documentation is organized using the [Diataxis framework](https://diataxis. 
**New to Storj?** Start with our tutorials: - [**DCS**: Your First Week with Storj](docId:first-week-storj-tutorial) - Complete beginner guide -- [**Object Mount**: Your First Mount](docId:object-mount-first-tutorial) - Get started with filesystem access -- [**Storage Node**: Set Up Your First Node](docId:setup-first-node-tutorial) - Start earning by providing storage +- [**Object Mount**: Your First Mount](docId:your-first-object-mount) - Get started with filesystem access +- [**Storage Node**: Set Up Your First Node](docId:setup-first-storage-node) - Start earning by providing storage **Need to solve a specific problem?** Check our how-to guides: -- [**DCS How-to Guides**](docId:dcs-how-to) - Task-focused solutions -- [**Object Mount How-to Guides**](docId:object-mount-how-to) - Specific configuration tasks -- [**Storage Node How-to Guides**](docId:node-how-to) - Operational procedures +- [**DCS How-to Guides**](docId:REPde_t8MJMDaE2BU8RfQ) - Task-focused solutions +- [**Object Mount How-to Guides**](docId:okai0aiJei9No1Sh) - Specific configuration tasks +- [**Storage Node How-to Guides**](docId:change-payout-address-how-to) - Operational procedures **Looking for technical details?** Browse our reference sections: -- [**DCS Reference**](docId:dcs-reference) - API specs, CLI commands, limits -- [**Object Mount Reference**](docId:object-mount-reference) - Configuration options, compatibility -- [**Storage Node Reference**](docId:node-reference) - System requirements, metrics +- [**DCS Reference**](docId:cli-reference-001) - CLI commands and API specifications +- [**Object Mount Reference**](docId:okai0aiJei9No1Sh) - Configuration options, compatibility +- [**Storage Node Reference**](docId:node-system-req-ref-001) - System requirements, metrics **Want to understand how things work?** Read our concept explanations: - [**Core Concepts**](docId:learn-concepts) - Fundamental Storj concepts From 7cb8d1f29eec329486de3950ef180753f9ba0cae Mon Sep 17 00:00:00 2001 From: onionjake <113088+onionjake@users.noreply.github.com> Date: Fri, 10 Oct 2025 19:57:26 -0600 Subject: [PATCH 6/8] WIP Attempt to adopt diaxis --- app/(docs)/_meta.json | 26 +---- app/(docs)/dcs/_meta.json | 54 +--------- app/(docs)/dcs/how-to/_meta.json | 38 +------ .../page.md} | 8 +- .../page.md} | 2 +- .../page.md} | 2 +- .../page.md} | 2 +- .../page.md} | 0 .../page.md} | 2 +- .../page.md} | 2 +- .../{use-rclone.md => use-rclone/page.md} | 2 +- .../{cli-commands.md => cli-commands/page.md} | 2 +- .../{error-codes.md => error-codes/page.md} | 2 +- .../reference/{limits.md => limits/page.md} | 2 +- .../reference/{s3-api.md => s3-api/page.md} | 2 +- app/(docs)/dcs/tutorials/_meta.json | 14 +-- .../page.md} | 6 +- app/(docs)/dcs/tutorials/page.md | 16 +++ .../page.md} | 4 +- app/(docs)/learn/_meta.json | 18 +--- app/(docs)/learn/concepts/_meta.json | 102 +----------------- app/(docs)/node/_meta.json | 38 +------ app/(docs)/node/concepts/_meta.json | 22 +--- .../page.md} | 0 app/(docs)/node/concepts/page.md | 2 +- .../page.md} | 2 +- .../page.md} | 0 .../page.md} | 0 app/(docs)/node/how-to/_meta.json | 30 +----- .../page.md} | 2 +- .../page.md} | 0 .../{migrate-node.md => migrate-node/page.md} | 2 +- .../page.md} | 0 .../page.md} | 2 +- .../page.md} | 2 +- .../page.md} | 2 +- .../page.md} | 2 +- .../page.md} | 2 +- .../page.md} | 2 +- app/(docs)/object-mount/_meta.json | 46 +------- app/(docs)/object-mount/concepts/_meta.json | 22 +--- .../page.md} | 0 .../page.md} | 0 .../page.md} | 2 +- .../page.md} | 0 
app/(docs)/object-mount/how-to/_meta.json | 26 +---- .../page.md} | 0 .../page.md} | 0 .../page.md} | 0 .../page.md} | 0 .../page.md} | 0 .../page.md} | 2 +- .../page.md} | 2 +- .../page.md} | 2 +- .../page.md} | 2 +- app/(docs)/page.md | 24 ++--- src/components/Navigation.jsx | 18 +++- 57 files changed, 91 insertions(+), 471 deletions(-) rename app/(docs)/dcs/how-to/{configure-cors.md => configure-cors/page.md} (95%) rename app/(docs)/dcs/how-to/{create-buckets.md => create-buckets/page.md} (98%) rename app/(docs)/dcs/how-to/{migrate-from-s3.md => migrate-from-s3/page.md} (99%) rename app/(docs)/dcs/how-to/{optimize-upload-performance.md => optimize-upload-performance/page.md} (99%) rename app/(docs)/dcs/how-to/{setup-bucket-logging.md => setup-bucket-logging/page.md} (100%) rename app/(docs)/dcs/how-to/{setup-object-versioning.md => setup-object-versioning/page.md} (99%) rename app/(docs)/dcs/how-to/{use-presigned-urls.md => use-presigned-urls/page.md} (99%) rename app/(docs)/dcs/how-to/{use-rclone.md => use-rclone/page.md} (99%) rename app/(docs)/dcs/reference/{cli-commands.md => cli-commands/page.md} (99%) rename app/(docs)/dcs/reference/{error-codes.md => error-codes/page.md} (99%) rename app/(docs)/dcs/reference/{limits.md => limits/page.md} (99%) rename app/(docs)/dcs/reference/{s3-api.md => s3-api/page.md} (99%) rename app/(docs)/dcs/tutorials/{build-your-first-app.md => build-your-first-app/page.md} (99%) create mode 100644 app/(docs)/dcs/tutorials/page.md rename app/(docs)/dcs/tutorials/{your-first-week-with-storj.md => your-first-week-with-storj/page.md} (99%) rename app/(docs)/node/concepts/{network-participation.md => network-participation/page.md} (100%) rename app/(docs)/node/concepts/{privacy-security-for-operators.md => privacy-security-for-operators/page.md} (99%) rename app/(docs)/node/concepts/{reputation-system.md => reputation-system/page.md} (100%) rename app/(docs)/node/concepts/{storage-node-economics.md => storage-node-economics/page.md} (100%) rename app/(docs)/node/how-to/{change-payout-address.md => change-payout-address/page.md} (99%) rename app/(docs)/node/how-to/{fix-database-corruption.md => fix-database-corruption/page.md} (100%) rename app/(docs)/node/how-to/{migrate-node.md => migrate-node/page.md} (99%) rename app/(docs)/node/how-to/{monitor-node-performance.md => monitor-node-performance/page.md} (100%) rename app/(docs)/node/how-to/{setup-remote-access.md => setup-remote-access/page.md} (99%) rename app/(docs)/node/how-to/{troubleshoot-offline-node.md => troubleshoot-offline-node/page.md} (99%) rename app/(docs)/node/reference/{configuration.md => configuration/page.md} (99%) rename app/(docs)/node/reference/{dashboard-metrics.md => dashboard-metrics/page.md} (99%) rename app/(docs)/node/reference/{system-requirements.md => system-requirements/page.md} (99%) rename app/(docs)/node/tutorials/{setup-first-node.md => setup-first-node/page.md} (99%) rename app/(docs)/object-mount/concepts/{object-mount-vs-filesystems.md => object-mount-vs-filesystems/page.md} (100%) rename app/(docs)/object-mount/concepts/{performance-characteristics.md => performance-characteristics/page.md} (100%) rename app/(docs)/object-mount/concepts/{posix-compliance-explained.md => posix-compliance-explained/page.md} (99%) rename app/(docs)/object-mount/concepts/{when-to-use-fusion.md => when-to-use-fusion/page.md} (100%) rename app/(docs)/object-mount/how-to/{configure-posix-permissions.md => configure-posix-permissions/page.md} (100%) rename 
app/(docs)/object-mount/how-to/{install-debian-ubuntu.md => install-debian-ubuntu/page.md} (100%) rename app/(docs)/object-mount/how-to/{install-rhel-centos.md => install-rhel-centos/page.md} (100%) rename app/(docs)/object-mount/how-to/{optimize-large-files.md => optimize-large-files/page.md} (100%) rename app/(docs)/object-mount/how-to/{troubleshoot-mount-issues.md => troubleshoot-mount-issues/page.md} (100%) rename app/(docs)/object-mount/reference/{cli-reference.md => cli-reference/page.md} (99%) rename app/(docs)/object-mount/reference/{compatibility.md => compatibility/page.md} (99%) rename app/(docs)/object-mount/reference/{configuration.md => configuration/page.md} (99%) rename app/(docs)/object-mount/tutorials/{your-first-mount.md => your-first-mount/page.md} (99%) diff --git a/app/(docs)/_meta.json b/app/(docs)/_meta.json index 245e2a0ed..b99a0de94 100644 --- a/app/(docs)/_meta.json +++ b/app/(docs)/_meta.json @@ -1,25 +1 @@ -{ - "title": "Storj Documentation", - "nav": [ - { - "title": "Learn Concepts", - "id": "learn" - }, - { - "title": "Decentralized Cloud Storage", - "id": "dcs" - }, - { - "title": "Object Mount", - "id": "object-mount" - }, - { - "title": "Storage Node", - "id": "node" - }, - { - "title": "Support", - "id": "support" - } - ] -} \ No newline at end of file +{"title": "Storj Documentation"} \ No newline at end of file diff --git a/app/(docs)/dcs/_meta.json b/app/(docs)/dcs/_meta.json index d671893e1..7dc0839f3 100644 --- a/app/(docs)/dcs/_meta.json +++ b/app/(docs)/dcs/_meta.json @@ -1,53 +1 @@ -{ - "title": "Decentralized Cloud Storage (DCS)", - "nav": [ - { - "title": "Getting Started", - "id": "getting-started" - }, - { - "title": "Tutorials", - "id": "tutorials" - }, - { - "title": "How-to Guides", - "id": "how-to" - }, - { - "title": "Reference", - "id": "reference" - }, - { - "title": "API", - "id": "api" - }, - { - "title": "Buckets", - "id": "buckets" - }, - { - "title": "Objects", - "id": "objects" - }, - { - "title": "Access", - "id": "access" - }, - { - "title": "Code Samples", - "id": "code" - }, - { - "title": "Third-party Tools", - "id": "third-party-tools" - }, - { - "title": "Migration Guides", - "id": "migrate" - }, - { - "title": "Pricing", - "id": "pricing" - } - ] -} \ No newline at end of file +{"title": "Decentralized Cloud Storage (DCS)"} \ No newline at end of file diff --git a/app/(docs)/dcs/how-to/_meta.json b/app/(docs)/dcs/how-to/_meta.json index b87daf07d..836831afa 100644 --- a/app/(docs)/dcs/how-to/_meta.json +++ b/app/(docs)/dcs/how-to/_meta.json @@ -1,37 +1 @@ -{ - "title": "How-to Guides", - "nav": [ - { - "title": "Create buckets", - "id": "create-buckets" - }, - { - "title": "Use Rclone", - "id": "use-rclone" - }, - { - "title": "Configure CORS", - "id": "configure-cors" - }, - { - "title": "Set up object versioning", - "id": "setup-object-versioning" - }, - { - "title": "Use presigned URLs", - "id": "use-presigned-urls" - }, - { - "title": "Optimize upload performance", - "id": "optimize-upload-performance" - }, - { - "title": "Set up bucket logging", - "id": "setup-bucket-logging" - }, - { - "title": "Migrate from AWS S3", - "id": "migrate-from-s3" - } - ] -} \ No newline at end of file +{"title": "How-to Guides"} \ No newline at end of file diff --git a/app/(docs)/dcs/how-to/configure-cors.md b/app/(docs)/dcs/how-to/configure-cors/page.md similarity index 95% rename from app/(docs)/dcs/how-to/configure-cors.md rename to app/(docs)/dcs/how-to/configure-cors/page.md index 1e745f032..6d2c83c04 100644 --- 
a/app/(docs)/dcs/how-to/configure-cors.md +++ b/app/(docs)/dcs/how-to/configure-cors/page.md @@ -1,6 +1,6 @@ --- title: How to configure CORS for web applications -docId: configure-cors-how-to +docId: configure-cors metadata: title: How to Configure CORS for Storj Web Applications description: Step-by-step guide to understand and work with Storj's CORS policy for secure web application development. @@ -188,15 +188,15 @@ Once CORS is working correctly: ## Related Content **Start Learning:** -- [Your First Week with Storj](docId:first-week-storj-tutorial) - Complete beginner tutorial -- [Build Your First App](docId:build-your-first-app-tutorial) - Web app development guide +- [Your First Week with Storj](docId:your-first-week-with-storj) - Complete beginner tutorial +- [Build Your First App](docId:build-your-first-app) - Web app development guide **Related How-to Guides:** - [Use Presigned URLs](docId:use-presigned-urls) - Secure browser uploads - [Migrate from AWS S3](docId:migrate-from-s3) - Switch to Storj storage **Technical Details:** -- [S3 API Reference](docId:s3-api-reference) - CORS specification details +- [S3 API Reference](docId:s3-api) - CORS specification details - [CLI Commands Reference](docId:cli-reference-001) - Command-line tools **Background Concepts:** diff --git a/app/(docs)/dcs/how-to/create-buckets.md b/app/(docs)/dcs/how-to/create-buckets/page.md similarity index 98% rename from app/(docs)/dcs/how-to/create-buckets.md rename to app/(docs)/dcs/how-to/create-buckets/page.md index 0b6aa12df..afec381b0 100644 --- a/app/(docs)/dcs/how-to/create-buckets.md +++ b/app/(docs)/dcs/how-to/create-buckets/page.md @@ -1,6 +1,6 @@ --- title: How to create buckets -docId: create-buckets-how-to +docId: create-buckets metadata: title: How to Create Storj Buckets description: Step-by-step guide to create Storj buckets using command-line tools or the Storj Console. 
diff --git a/app/(docs)/dcs/how-to/migrate-from-s3.md b/app/(docs)/dcs/how-to/migrate-from-s3/page.md
similarity index 99%
rename from app/(docs)/dcs/how-to/migrate-from-s3.md
rename to app/(docs)/dcs/how-to/migrate-from-s3/page.md
index 29c7bfc05..0d5f96730 100644
--- a/app/(docs)/dcs/how-to/migrate-from-s3.md
+++ b/app/(docs)/dcs/how-to/migrate-from-s3/page.md
@@ -1,6 +1,6 @@
 ---
 title: Migrate from AWS S3
-docId: migrate-from-s3-guide
+docId: migrate-from-s3
 metadata:
   title: How to Migrate from AWS S3 to Storj DCS
   description: Complete guide to migrate your data and applications from Amazon S3 to Storj DCS
diff --git a/app/(docs)/dcs/how-to/optimize-upload-performance.md b/app/(docs)/dcs/how-to/optimize-upload-performance/page.md
similarity index 99%
rename from app/(docs)/dcs/how-to/optimize-upload-performance.md
rename to app/(docs)/dcs/how-to/optimize-upload-performance/page.md
index 82b504e5f..71bf74979 100644
--- a/app/(docs)/dcs/how-to/optimize-upload-performance.md
+++ b/app/(docs)/dcs/how-to/optimize-upload-performance/page.md
@@ -1,6 +1,6 @@
 ---
 title: Optimize upload performance
-docId: optimize-upload-perf
+docId: optimize-upload-performance
 metadata:
   title: How to Optimize Upload Performance - Storj DCS
   description: Improve file upload speeds using parallel transfers with Rclone and Uplink CLI
diff --git a/app/(docs)/dcs/how-to/setup-bucket-logging.md b/app/(docs)/dcs/how-to/setup-bucket-logging/page.md
similarity index 100%
rename from app/(docs)/dcs/how-to/setup-bucket-logging.md
rename to app/(docs)/dcs/how-to/setup-bucket-logging/page.md
diff --git a/app/(docs)/dcs/how-to/setup-object-versioning.md b/app/(docs)/dcs/how-to/setup-object-versioning/page.md
similarity index 99%
rename from app/(docs)/dcs/how-to/setup-object-versioning.md
rename to app/(docs)/dcs/how-to/setup-object-versioning/page.md
index edf15cc83..dd66f1cb8 100644
--- a/app/(docs)/dcs/how-to/setup-object-versioning.md
+++ b/app/(docs)/dcs/how-to/setup-object-versioning/page.md
@@ -1,6 +1,6 @@
 ---
 title: Set up object versioning
-docId: setup-object-vers1
+docId: setup-object-versioning
 metadata:
   title: How to Set Up Object Versioning - Storj DCS
   description: Step-by-step guide to enable object versioning on your Storj DCS buckets for data protection and recovery
diff --git a/app/(docs)/dcs/how-to/use-presigned-urls.md b/app/(docs)/dcs/how-to/use-presigned-urls/page.md
similarity index 99%
rename from app/(docs)/dcs/how-to/use-presigned-urls.md
rename to app/(docs)/dcs/how-to/use-presigned-urls/page.md
index f6fa25852..cec25e9f5 100644
--- a/app/(docs)/dcs/how-to/use-presigned-urls.md
+++ b/app/(docs)/dcs/how-to/use-presigned-urls/page.md
@@ -1,6 +1,6 @@
 ---
 title: Use presigned URLs
-docId: use-presigned-urls1
+docId: use-presigned-urls
 metadata:
   title: How to Use Presigned URLs - Storj DCS
   description: Create presigned URLs to allow unauthenticated access to your Storj objects for uploads and downloads
diff --git a/app/(docs)/dcs/how-to/use-rclone.md b/app/(docs)/dcs/how-to/use-rclone/page.md
similarity index 99%
rename from app/(docs)/dcs/how-to/use-rclone.md
rename to app/(docs)/dcs/how-to/use-rclone/page.md
index c649548c1..3ebb5e390 100644
--- a/app/(docs)/dcs/how-to/use-rclone.md
+++ b/app/(docs)/dcs/how-to/use-rclone/page.md
@@ -1,6 +1,6 @@
 ---
 title: How to use Rclone with Storj
-docId: use-rclone-how-to
+docId: use-rclone
 metadata:
   title: How to Use Rclone with Storj DCS
   description: Step-by-step guide to configure and use Rclone with Storj, including choosing between S3-compatible and native integration.
diff --git a/app/(docs)/dcs/reference/cli-commands.md b/app/(docs)/dcs/reference/cli-commands/page.md
similarity index 99%
rename from app/(docs)/dcs/reference/cli-commands.md
rename to app/(docs)/dcs/reference/cli-commands/page.md
index ff9daf11e..9ee5fb194 100644
--- a/app/(docs)/dcs/reference/cli-commands.md
+++ b/app/(docs)/dcs/reference/cli-commands/page.md
@@ -1,6 +1,6 @@
 ---
 title: "CLI Commands Reference"
-docId: "cli-reference-001"
+docId: cli-commands
 metadata:
   title: "Uplink CLI Commands Reference"
   description: "Complete reference for all Uplink CLI commands, flags, and usage patterns for managing Storj DCS storage."
diff --git a/app/(docs)/dcs/reference/error-codes.md b/app/(docs)/dcs/reference/error-codes/page.md
similarity index 99%
rename from app/(docs)/dcs/reference/error-codes.md
rename to app/(docs)/dcs/reference/error-codes/page.md
index a8f3b65bc..d49e15a72 100644
--- a/app/(docs)/dcs/reference/error-codes.md
+++ b/app/(docs)/dcs/reference/error-codes/page.md
@@ -1,6 +1,6 @@
 ---
 title: "Error Codes Reference"
-docId: "error-codes-ref-001"
+docId: error-codes
 metadata:
   title: "Error Codes and Troubleshooting Reference"
   description: "Reference for common error codes, HTTP status codes, and troubleshooting information for Storj DCS."
diff --git a/app/(docs)/dcs/reference/limits.md b/app/(docs)/dcs/reference/limits/page.md
similarity index 99%
rename from app/(docs)/dcs/reference/limits.md
rename to app/(docs)/dcs/reference/limits/page.md
index 27467bd61..eb1de619e 100644
--- a/app/(docs)/dcs/reference/limits.md
+++ b/app/(docs)/dcs/reference/limits/page.md
@@ -1,6 +1,6 @@
 ---
 title: "Service Limits Reference"
-docId: "service-limits-ref-001"
+docId: limits
 metadata:
   title: "Storj DCS Service Limits and Specifications"
   description: "Complete reference for service limits, quotas, and technical specifications for Storj DCS object storage."
diff --git a/app/(docs)/dcs/reference/s3-api.md b/app/(docs)/dcs/reference/s3-api/page.md
similarity index 99%
rename from app/(docs)/dcs/reference/s3-api.md
rename to app/(docs)/dcs/reference/s3-api/page.md
index ff9c5c473..b82eeff44 100644
--- a/app/(docs)/dcs/reference/s3-api.md
+++ b/app/(docs)/dcs/reference/s3-api/page.md
@@ -1,6 +1,6 @@
 ---
 title: "S3 API Reference"
-docId: "s3-api-reference-001"
+docId: s3-api
 metadata:
   title: "S3 API Compatibility Reference"
   description: "Complete reference for S3 API compatibility with Storj DCS, including supported operations, limits, and Storj-specific extensions."
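The use-rclone page above contrasts S3-compatible and native integration. A hedged sketch of how each remote might be created non-interactively with `rclone config create`; the remote names and credentials are placeholders, and the endpoint is Storj's hosted S3 gateway:

```bash
# Native integration: client-side encryption, talks to the network directly.
rclone config create storj-native storj access_grant YOUR_ACCESS_GRANT

# S3-compatible integration: routes through the hosted gateway.
rclone config create storj-s3 s3 \
  provider Other \
  endpoint https://gateway.storjshare.io \
  access_key_id YOUR_KEY \
  secret_access_key YOUR_SECRET

# Either remote lists buckets the same way:
rclone lsd storj-native:
```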
diff --git a/app/(docs)/dcs/tutorials/_meta.json b/app/(docs)/dcs/tutorials/_meta.json
index 173bb3e2d..31a818dcd 100644
--- a/app/(docs)/dcs/tutorials/_meta.json
+++ b/app/(docs)/dcs/tutorials/_meta.json
@@ -1,13 +1 @@
-{
-  "title": "Tutorials",
-  "nav": [
-    {
-      "title": "Your first week with Storj",
-      "id": "your-first-week-with-storj"
-    },
-    {
-      "title": "Build your first app",
-      "id": "build-your-first-app"
-    }
-  ]
-}
\ No newline at end of file
+{"title": "Tutorials"}
\ No newline at end of file
diff --git a/app/(docs)/dcs/tutorials/build-your-first-app.md b/app/(docs)/dcs/tutorials/build-your-first-app/page.md
similarity index 99%
rename from app/(docs)/dcs/tutorials/build-your-first-app.md
rename to app/(docs)/dcs/tutorials/build-your-first-app/page.md
index a52833db0..2fc568d13 100644
--- a/app/(docs)/dcs/tutorials/build-your-first-app.md
+++ b/app/(docs)/dcs/tutorials/build-your-first-app/page.md
@@ -1,6 +1,6 @@
 ---
 title: Build your first app
-docId: build-first-app-storj
+docId: build-your-first-app
 metadata:
   title: Build Your First App with Storj DCS
   description: 30-minute tutorial to build a simple file sharing web application using Storj DCS
@@ -86,7 +86,7 @@ touch public/index.html public/style.css public/app.js
 ### Set up environment variables

 Create `.env` file:
-```env
+```bash
 STORJ_ACCESS_KEY=your_access_key_here
 STORJ_SECRET_KEY=your_secret_key_here
 STORJ_BUCKET=file-share-demo
@@ -815,7 +815,7 @@ npm run dev
 - [Object versioning](docId:setup-object-versioning) for file history
 - [CORS configuration](docId:configure-cors) for web applications
 - [Performance optimization](docId:optimize-upload-performance) for large files
-- [Multi-region deployment](docId:setup-multi-region-storage) for global apps
+- [Multi-region deployment](/dcs/buckets/preferred-storage-region) for global apps

 ### Deploy your application
diff --git a/app/(docs)/dcs/tutorials/page.md b/app/(docs)/dcs/tutorials/page.md
new file mode 100644
index 000000000..73e4cc3fe
--- /dev/null
+++ b/app/(docs)/dcs/tutorials/page.md
@@ -0,0 +1,16 @@
+---
+title: Tutorials
+docId: tutorials
+metadata:
+  title: Storj DCS Tutorials
+  description: Learn Storj DCS through hands-on tutorials
+---
+
+# Storj DCS Tutorials
+
+Learn Storj DCS fundamentals through these comprehensive tutorials.
+
+## Available Tutorials
+
+- [Your First Week with Storj](./your-first-week-with-storj) - Complete beginner guide
+- [Build Your First App](./build-your-first-app) - Build an application with Storj DCS
\ No newline at end of file
diff --git a/app/(docs)/dcs/tutorials/your-first-week-with-storj.md b/app/(docs)/dcs/tutorials/your-first-week-with-storj/page.md
similarity index 99%
rename from app/(docs)/dcs/tutorials/your-first-week-with-storj.md
rename to app/(docs)/dcs/tutorials/your-first-week-with-storj/page.md
index 2e915700f..daf8c8ecd 100644
--- a/app/(docs)/dcs/tutorials/your-first-week-with-storj.md
+++ b/app/(docs)/dcs/tutorials/your-first-week-with-storj/page.md
@@ -1,6 +1,6 @@
 ---
 title: Your first week with Storj
-docId: first-week-storj-tutorial
+docId: your-first-week-with-storj
 metadata:
   title: Your First Week with Storj DCS - Complete Beginner Tutorial
   description: Comprehensive 7-day tutorial to master Storj DCS fundamentals, from account setup to advanced features
@@ -565,7 +565,7 @@ Now that you've mastered the basics:
 1. **Explore specific use cases**:
    - [Build your first app](docId:build-your-first-app) with Storj integration
-   - [Set up multi-region storage](docId:setup-multi-region-storage) for global applications
+   - [Set up multi-region storage](/dcs/buckets/preferred-storage-region) for global applications
    - [Migrate from AWS S3](docId:migrate-from-s3) to Storj

 2. **Dive deeper into features**:
diff --git a/app/(docs)/learn/_meta.json b/app/(docs)/learn/_meta.json
index 208ed95ab..81cca82c2 100644
--- a/app/(docs)/learn/_meta.json
+++ b/app/(docs)/learn/_meta.json
@@ -1,17 +1 @@
-{
-  "title": "Learn",
-  "nav": [
-    {
-      "title": "Concepts",
-      "id": "concepts"
-    },
-    {
-      "title": "Tutorials",
-      "id": "tutorials"
-    },
-    {
-      "title": "Self-host",
-      "id": "self-host"
-    }
-  ]
-}
\ No newline at end of file
+{"title": "Learn"}
\ No newline at end of file
diff --git a/app/(docs)/learn/concepts/_meta.json b/app/(docs)/learn/concepts/_meta.json
index 62ef52487..dfd99e700 100644
--- a/app/(docs)/learn/concepts/_meta.json
+++ b/app/(docs)/learn/concepts/_meta.json
@@ -1,101 +1 @@
-{
-  "title": "Core Concepts",
-  "nav": [
-    {
-      "title": "Understanding Decentralized Storage",
-      "id": "understanding-decentralized-storage"
-    },
-    {
-      "title": "Storj Architecture Overview",
-      "id": "storj-architecture-overview"
-    },
-    {
-      "title": "Security and Encryption",
-      "id": "security-and-encryption"
-    },
-    {
-      "title": "Definitions",
-      "id": "definitions"
-    },
-    {
-      "title": "Access Control",
-      "id": "access"
-    },
-    {
-      "title": "Connectors",
-      "id": "connectors"
-    },
-    {
-      "title": "Consistency",
-      "id": "consistency"
-    },
-    {
-      "title": "Data Structure",
-      "id": "data-structure"
-    },
-    {
-      "title": "Decentralization",
-      "id": "decentralization"
-    },
-    {
-      "title": "Edge Services",
-      "id": "edge-services"
-    },
-    {
-      "title": "Encryption Keys",
-      "id": "encryption-key"
-    },
-    {
-      "title": "File Redundancy",
-      "id": "file-redundancy"
-    },
-    {
-      "title": "File Repair",
-      "id": "file-repair"
-    },
-    {
-      "title": "Immutability",
-      "id": "immutability"
-    },
-    {
-      "title": "Key Architecture Constructs",
-      "id": "key-architecture-constructs"
-    },
-    {
-      "title": "Limits",
-      "id": "limits"
-    },
-    {
-      "title": "Linksharing Service",
-      "id": "linksharing-service"
-    },
-    {
-      "title": "Multi-tenant Data",
-      "id": "multi-tenant-data"
-    },
-    {
-      "title": "Multiregion Availability",
-      "id": "multiregion-availability"
-    },
-    {
-      "title": "S3 Compatibility",
-      "id": "s3-compatibility"
-    },
-    {
-      "title": "Satellite",
-      "id": "satellite"
-    },
-    {
-      "title": "Security Models",
-      "id": "security-models"
-    },
-    {
-      "title": "Solution Architectures",
-      "id": "solution-architectures"
-    },
-    {
-      "title": "WORM",
-      "id": "worm"
-    }
-  ]
-}
\ No newline at end of file
+{"title": "Core Concepts"}
\ No newline at end of file
diff --git a/app/(docs)/node/_meta.json b/app/(docs)/node/_meta.json
index 899e32780..3350e5372 100644
--- a/app/(docs)/node/_meta.json
+++ b/app/(docs)/node/_meta.json
@@ -1,37 +1 @@
-{
-  "title": "Storage Node",
-  "nav": [
-    {
-      "title": "Get Started",
-      "id": "get-started"
-    },
-    {
-      "title": "Tutorials",
-      "id": "tutorials"
-    },
-    {
-      "title": "How-to Guides",
-      "id": "how-to"
-    },
-    {
-      "title": "Concepts",
-      "id": "concepts"
-    },
-    {
-      "title": "Reference",
-      "id": "reference"
-    },
-    {
-      "title": "Commercial Node",
-      "id": "commercial-node"
-    },
-    {
-      "title": "Payouts",
-      "id": "payouts"
-    },
-    {
-      "title": "FAQ",
-      "id": "faq"
-    }
-  ]
-}
\ No newline at end of file
+{"title": "Storage Node"}
\ No newline at end of file
diff --git a/app/(docs)/node/concepts/_meta.json b/app/(docs)/node/concepts/_meta.json
index 75d3b95fc..ccf572136 100644
--- a/app/(docs)/node/concepts/_meta.json
+++ b/app/(docs)/node/concepts/_meta.json
@@ -1,21 +1 @@
-{
-  "title": "Concepts",
-  "nav": [
-    {
-      "title": "Storage Node Economics",
-      "id": "storage-node-economics"
-    },
-    {
-      "title": "Reputation System",
-      "id": "reputation-system"
-    },
-    {
-      "title": "Network Participation",
-      "id": "network-participation"
-    },
-    {
-      "title": "Privacy and Security for Operators",
-      "id": "privacy-security-for-operators"
-    }
-  ]
-}
\ No newline at end of file
+{"title": "Concepts"}
\ No newline at end of file
diff --git a/app/(docs)/node/concepts/network-participation.md b/app/(docs)/node/concepts/network-participation/page.md
similarity index 100%
rename from app/(docs)/node/concepts/network-participation.md
rename to app/(docs)/node/concepts/network-participation/page.md
diff --git a/app/(docs)/node/concepts/page.md b/app/(docs)/node/concepts/page.md
index 9c48a3a2d..20eaaf41e 100644
--- a/app/(docs)/node/concepts/page.md
+++ b/app/(docs)/node/concepts/page.md
@@ -1,6 +1,6 @@
 ---
 title: Concepts
-docId: KJzDdewgBVcK6rnp0Qho2
+docId: concepts
 redirects:
   - /node/KJzD-concepts
 weight: 5
diff --git a/app/(docs)/node/concepts/privacy-security-for-operators.md b/app/(docs)/node/concepts/privacy-security-for-operators/page.md
similarity index 99%
rename from app/(docs)/node/concepts/privacy-security-for-operators.md
rename to app/(docs)/node/concepts/privacy-security-for-operators/page.md
index c2d85150b..46b552689 100644
--- a/app/(docs)/node/concepts/privacy-security-for-operators.md
+++ b/app/(docs)/node/concepts/privacy-security-for-operators/page.md
@@ -1,6 +1,6 @@
 ---
 title: Privacy and Security for Storage Node Operators
-docId: privacy-security-operators
+docId: privacy-security-for-operators
 metadata:
   title: Privacy and Security for Storage Node Operators - Protecting Your Operation
   description: Comprehensive guide to privacy and security considerations for Storage Node Operators, including data protection, operational security, and risk mitigation.
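One detail worth noting in the build-your-first-app hunk above: the `.env` fence language changes from `env` to `bash`, which renders the plain KEY=value pairs correctly. Those same variables can be smoke-tested from a shell before any application code exists. A hedged sketch, assuming the AWS CLI is installed; the gateway URL is Storj's hosted S3 endpoint and the variable names come from the tutorial's `.env`:

```bash
# Export the tutorial's .env into the environment, then list the bucket
# through the S3 gateway to verify the credentials before wiring up the app.
set -a; source .env; set +a
AWS_ACCESS_KEY_ID="$STORJ_ACCESS_KEY" \
AWS_SECRET_ACCESS_KEY="$STORJ_SECRET_KEY" \
aws s3 ls "s3://$STORJ_BUCKET" --endpoint-url https://gateway.storjshare.io
```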
diff --git a/app/(docs)/node/concepts/reputation-system.md b/app/(docs)/node/concepts/reputation-system/page.md
similarity index 100%
rename from app/(docs)/node/concepts/reputation-system.md
rename to app/(docs)/node/concepts/reputation-system/page.md
diff --git a/app/(docs)/node/concepts/storage-node-economics.md b/app/(docs)/node/concepts/storage-node-economics/page.md
similarity index 100%
rename from app/(docs)/node/concepts/storage-node-economics.md
rename to app/(docs)/node/concepts/storage-node-economics/page.md
diff --git a/app/(docs)/node/how-to/_meta.json b/app/(docs)/node/how-to/_meta.json
index d256ff53a..836831afa 100644
--- a/app/(docs)/node/how-to/_meta.json
+++ b/app/(docs)/node/how-to/_meta.json
@@ -1,29 +1 @@
-{
-  "title": "How-to Guides",
-  "nav": [
-    {
-      "title": "Change payout address",
-      "id": "change-payout-address"
-    },
-    {
-      "title": "Migrate node",
-      "id": "migrate-node"
-    },
-    {
-      "title": "Troubleshoot offline node",
-      "id": "troubleshoot-offline-node"
-    },
-    {
-      "title": "Fix database corruption",
-      "id": "fix-database-corruption"
-    },
-    {
-      "title": "Set up remote access",
-      "id": "setup-remote-access"
-    },
-    {
-      "title": "Monitor node performance",
-      "id": "monitor-node-performance"
-    }
-  ]
-}
\ No newline at end of file
+{"title": "How-to Guides"}
\ No newline at end of file
diff --git a/app/(docs)/node/how-to/change-payout-address.md b/app/(docs)/node/how-to/change-payout-address/page.md
similarity index 99%
rename from app/(docs)/node/how-to/change-payout-address.md
rename to app/(docs)/node/how-to/change-payout-address/page.md
index d6f12ba66..96afbf750 100644
--- a/app/(docs)/node/how-to/change-payout-address.md
+++ b/app/(docs)/node/how-to/change-payout-address/page.md
@@ -1,6 +1,6 @@
 ---
 title: How to change your payout address
-docId: change-payout-address-how-to
+docId: change-payout-address
 metadata:
   title: How to Change Your Storage Node Payout Address
   description: Step-by-step guide to update the wallet address where you receive payments for your storage node operations.
diff --git a/app/(docs)/node/how-to/fix-database-corruption.md b/app/(docs)/node/how-to/fix-database-corruption/page.md
similarity index 100%
rename from app/(docs)/node/how-to/fix-database-corruption.md
rename to app/(docs)/node/how-to/fix-database-corruption/page.md
diff --git a/app/(docs)/node/how-to/migrate-node.md b/app/(docs)/node/how-to/migrate-node/page.md
similarity index 99%
rename from app/(docs)/node/how-to/migrate-node.md
rename to app/(docs)/node/how-to/migrate-node/page.md
index 153a7e9ce..625e4b82e 100644
--- a/app/(docs)/node/how-to/migrate-node.md
+++ b/app/(docs)/node/how-to/migrate-node/page.md
@@ -1,6 +1,6 @@
 ---
 title: How to migrate your node to a new device
-docId: migrate-node-how-to
+docId: migrate-node
 metadata:
   title: How to Migrate Your Storage Node to a New Device
   description: Complete step-by-step guide to safely migrate your Storj storage node to new hardware or storage location while preserving data and reputation.
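The migrate-node guide above appears here only as frontmatter. For orientation, a sketch of a common pattern for moves like this (not necessarily the guide's exact procedure; paths and container name are placeholders): copy while the node runs, then stop it for a short final sync so downtime stays minimal.

```bash
# First pass while the node is still running; this can take a long time.
rsync -aP /mnt/old-disk/storagenode/ /mnt/new-disk/storagenode/

# Stop the node gracefully, then a final pass to catch recent changes.
docker stop -t 300 storagenode
rsync -aP --delete /mnt/old-disk/storagenode/ /mnt/new-disk/storagenode/
```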
diff --git a/app/(docs)/node/how-to/monitor-node-performance.md b/app/(docs)/node/how-to/monitor-node-performance/page.md
similarity index 100%
rename from app/(docs)/node/how-to/monitor-node-performance.md
rename to app/(docs)/node/how-to/monitor-node-performance/page.md
diff --git a/app/(docs)/node/how-to/setup-remote-access.md b/app/(docs)/node/how-to/setup-remote-access/page.md
similarity index 99%
rename from app/(docs)/node/how-to/setup-remote-access.md
rename to app/(docs)/node/how-to/setup-remote-access/page.md
index dbe9c55e8..088cf32c8 100644
--- a/app/(docs)/node/how-to/setup-remote-access.md
+++ b/app/(docs)/node/how-to/setup-remote-access/page.md
@@ -1,6 +1,6 @@
 ---
 title: Set up remote access
-docId: setup-remote-access
+docId: setup-remote-access
 metadata:
   title: How to Set Up Remote Access - Storage Node Dashboard
   description: Configure secure remote access to your storage node dashboard using SSH tunneling
diff --git a/app/(docs)/node/how-to/troubleshoot-offline-node.md b/app/(docs)/node/how-to/troubleshoot-offline-node/page.md
similarity index 99%
rename from app/(docs)/node/how-to/troubleshoot-offline-node.md
rename to app/(docs)/node/how-to/troubleshoot-offline-node/page.md
index 6614384e9..488c29ed8 100644
--- a/app/(docs)/node/how-to/troubleshoot-offline-node.md
+++ b/app/(docs)/node/how-to/troubleshoot-offline-node/page.md
@@ -1,6 +1,6 @@
 ---
 title: How to troubleshoot an offline node
-docId: troubleshoot-offline-node-how-to
+docId: troubleshoot-offline-node
 metadata:
   title: How to Troubleshoot Storage Node Offline Issues
   description: Step-by-step guide to diagnose and fix storage node connectivity issues when your node appears offline or unreachable.
diff --git a/app/(docs)/node/reference/configuration.md b/app/(docs)/node/reference/configuration/page.md
similarity index 99%
rename from app/(docs)/node/reference/configuration.md
rename to app/(docs)/node/reference/configuration/page.md
index e04206f2d..d8c64851b 100644
--- a/app/(docs)/node/reference/configuration.md
+++ b/app/(docs)/node/reference/configuration/page.md
@@ -1,6 +1,6 @@
 ---
 title: "Storage Node Configuration Reference"
-docId: "node-config-ref-001"
+docId: configuration
 metadata:
   title: "Storage Node Configuration Reference"
   description: "Complete reference for Storage Node configuration parameters, config.yaml options, and environment variables."
diff --git a/app/(docs)/node/reference/dashboard-metrics.md b/app/(docs)/node/reference/dashboard-metrics/page.md
similarity index 99%
rename from app/(docs)/node/reference/dashboard-metrics.md
rename to app/(docs)/node/reference/dashboard-metrics/page.md
index 3b9899df7..436218195 100644
--- a/app/(docs)/node/reference/dashboard-metrics.md
+++ b/app/(docs)/node/reference/dashboard-metrics/page.md
@@ -1,6 +1,6 @@
 ---
 title: "Dashboard Metrics Reference"
-docId: "node-dashboard-ref-001"
+docId: dashboard-metrics
 metadata:
   title: "Storage Node Dashboard Metrics Reference"
   description: "Complete reference for all Storage Node dashboard metrics, monitoring data, and performance indicators."
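The setup-remote-access page above describes SSH tunneling to the dashboard. A minimal sketch, assuming the node's web dashboard listens on its default port 14002; the user and host are placeholders:

```bash
# Forward the node's dashboard port to this machine over SSH,
# then browse http://localhost:14002 locally.
ssh -L 14002:localhost:14002 operator@your-node-host
```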
diff --git a/app/(docs)/node/reference/system-requirements.md b/app/(docs)/node/reference/system-requirements/page.md
similarity index 99%
rename from app/(docs)/node/reference/system-requirements.md
rename to app/(docs)/node/reference/system-requirements/page.md
index d4beaf14e..02d0e48f1 100644
--- a/app/(docs)/node/reference/system-requirements.md
+++ b/app/(docs)/node/reference/system-requirements/page.md
@@ -1,6 +1,6 @@
 ---
 title: "System Requirements Reference"
-docId: "node-system-req-ref-001"
+docId: system-requirements
 metadata:
   title: "Storage Node System Requirements Reference"
   description: "Complete reference for Storage Node hardware, software, and network requirements for optimal performance."
diff --git a/app/(docs)/node/tutorials/setup-first-node.md b/app/(docs)/node/tutorials/setup-first-node/page.md
similarity index 99%
rename from app/(docs)/node/tutorials/setup-first-node.md
rename to app/(docs)/node/tutorials/setup-first-node/page.md
index 6b77c6865..7dbcb1fee 100644
--- a/app/(docs)/node/tutorials/setup-first-node.md
+++ b/app/(docs)/node/tutorials/setup-first-node/page.md
@@ -1,6 +1,6 @@
 ---
 title: Setup your first node
-docId: setup-first-storage-node
+docId: setup-first-node
 metadata:
   title: Setup Your First Storage Node Tutorial
   description: Complete 60-minute tutorial to set up your first Storj storage node from start to finish with step-by-step instructions.
diff --git a/app/(docs)/object-mount/_meta.json b/app/(docs)/object-mount/_meta.json
index 71926feb2..51c15ae26 100644
--- a/app/(docs)/object-mount/_meta.json
+++ b/app/(docs)/object-mount/_meta.json
@@ -1,45 +1 @@
-{
-  "title": "Object Mount",
-  "nav": [
-    {
-      "title": "Tutorials",
-      "id": "tutorials"
-    },
-    {
-      "title": "How-to Guides",
-      "id": "how-to"
-    },
-    {
-      "title": "Concepts",
-      "id": "concepts"
-    },
-    {
-      "title": "Reference",
-      "id": "reference"
-    },
-    {
-      "title": "Linux",
-      "id": "linux"
-    },
-    {
-      "title": "macOS",
-      "id": "macos"
-    },
-    {
-      "title": "Windows",
-      "id": "windows"
-    },
-    {
-      "title": "Media Workflows",
-      "id": "media-workflows"
-    },
-    {
-      "title": "Release Notes",
-      "id": "release-notes"
-    },
-    {
-      "title": "FAQ",
-      "id": "faq"
-    }
-  ]
-}
\ No newline at end of file
+{"title": "Object Mount"}
\ No newline at end of file
diff --git a/app/(docs)/object-mount/concepts/_meta.json b/app/(docs)/object-mount/concepts/_meta.json
index 9eb2ac133..ccf572136 100644
--- a/app/(docs)/object-mount/concepts/_meta.json
+++ b/app/(docs)/object-mount/concepts/_meta.json
@@ -1,21 +1 @@
-{
-  "title": "Concepts",
-  "nav": [
-    {
-      "title": "Object Mount vs Filesystems",
-      "id": "object-mount-vs-filesystems"
-    },
-    {
-      "title": "POSIX Compliance Explained",
-      "id": "posix-compliance-explained"
-    },
-    {
-      "title": "Performance Characteristics",
-      "id": "performance-characteristics"
-    },
-    {
-      "title": "When to Use Fusion",
-      "id": "when-to-use-fusion"
-    }
-  ]
-}
\ No newline at end of file
+{"title": "Concepts"}
\ No newline at end of file
diff --git a/app/(docs)/object-mount/concepts/object-mount-vs-filesystems.md b/app/(docs)/object-mount/concepts/object-mount-vs-filesystems/page.md
similarity index 100%
rename from app/(docs)/object-mount/concepts/object-mount-vs-filesystems.md
rename to app/(docs)/object-mount/concepts/object-mount-vs-filesystems/page.md
diff --git a/app/(docs)/object-mount/concepts/performance-characteristics.md b/app/(docs)/object-mount/concepts/performance-characteristics/page.md
similarity index 100%
rename from app/(docs)/object-mount/concepts/performance-characteristics.md
rename to app/(docs)/object-mount/concepts/performance-characteristics/page.md
diff --git a/app/(docs)/object-mount/concepts/posix-compliance-explained.md b/app/(docs)/object-mount/concepts/posix-compliance-explained/page.md
similarity index 99%
rename from app/(docs)/object-mount/concepts/posix-compliance-explained.md
rename to app/(docs)/object-mount/concepts/posix-compliance-explained/page.md
index 71adb2c3b..7e1e10723 100644
--- a/app/(docs)/object-mount/concepts/posix-compliance-explained.md
+++ b/app/(docs)/object-mount/concepts/posix-compliance-explained/page.md
@@ -1,6 +1,6 @@
 ---
 title: POSIX Compliance Explained
-docId: posix-compliance-exp
+docId: posix-compliance-explained
 metadata:
   title: POSIX Compliance in Object Mount - Technical Explanation
   description: Detailed explanation of how Object Mount implements POSIX filesystem semantics on top of object storage, including limitations and compatibility considerations.
diff --git a/app/(docs)/object-mount/concepts/when-to-use-fusion.md b/app/(docs)/object-mount/concepts/when-to-use-fusion/page.md
similarity index 100%
rename from app/(docs)/object-mount/concepts/when-to-use-fusion.md
rename to app/(docs)/object-mount/concepts/when-to-use-fusion/page.md
diff --git a/app/(docs)/object-mount/how-to/_meta.json b/app/(docs)/object-mount/how-to/_meta.json
index 330aa0148..836831afa 100644
--- a/app/(docs)/object-mount/how-to/_meta.json
+++ b/app/(docs)/object-mount/how-to/_meta.json
@@ -1,25 +1 @@
-{
-  "title": "How-to Guides",
-  "nav": [
-    {
-      "title": "Install on Debian/Ubuntu",
-      "id": "install-debian-ubuntu"
-    },
-    {
-      "title": "Install on RHEL/CentOS",
-      "id": "install-rhel-centos"
-    },
-    {
-      "title": "Configure POSIX permissions",
-      "id": "configure-posix-permissions"
-    },
-    {
-      "title": "Optimize for large files",
-      "id": "optimize-large-files"
-    },
-    {
-      "title": "Troubleshoot mount issues",
-      "id": "troubleshoot-mount-issues"
-    }
-  ]
-}
\ No newline at end of file
+{"title": "How-to Guides"}
\ No newline at end of file
diff --git a/app/(docs)/object-mount/how-to/configure-posix-permissions.md b/app/(docs)/object-mount/how-to/configure-posix-permissions/page.md
similarity index 100%
rename from app/(docs)/object-mount/how-to/configure-posix-permissions.md
rename to app/(docs)/object-mount/how-to/configure-posix-permissions/page.md
diff --git a/app/(docs)/object-mount/how-to/install-debian-ubuntu.md b/app/(docs)/object-mount/how-to/install-debian-ubuntu/page.md
similarity index 100%
rename from app/(docs)/object-mount/how-to/install-debian-ubuntu.md
rename to app/(docs)/object-mount/how-to/install-debian-ubuntu/page.md
diff --git a/app/(docs)/object-mount/how-to/install-rhel-centos.md b/app/(docs)/object-mount/how-to/install-rhel-centos/page.md
similarity index 100%
rename from app/(docs)/object-mount/how-to/install-rhel-centos.md
rename to app/(docs)/object-mount/how-to/install-rhel-centos/page.md
diff --git a/app/(docs)/object-mount/how-to/optimize-large-files.md b/app/(docs)/object-mount/how-to/optimize-large-files/page.md
similarity index 100%
rename from app/(docs)/object-mount/how-to/optimize-large-files.md
rename to app/(docs)/object-mount/how-to/optimize-large-files/page.md
diff --git a/app/(docs)/object-mount/how-to/troubleshoot-mount-issues.md b/app/(docs)/object-mount/how-to/troubleshoot-mount-issues/page.md
similarity index 100%
rename from app/(docs)/object-mount/how-to/troubleshoot-mount-issues.md
rename to app/(docs)/object-mount/how-to/troubleshoot-mount-issues/page.md
diff --git a/app/(docs)/object-mount/reference/cli-reference.md b/app/(docs)/object-mount/reference/cli-reference/page.md
similarity index 99%
rename from app/(docs)/object-mount/reference/cli-reference.md
rename to app/(docs)/object-mount/reference/cli-reference/page.md
index 12370a923..b5b1f07d3 100644
--- a/app/(docs)/object-mount/reference/cli-reference.md
+++ b/app/(docs)/object-mount/reference/cli-reference/page.md
@@ -1,6 +1,6 @@
 ---
 title: "Object Mount CLI Reference"
-docId: "object-mount-cli-ref-001"
+docId: cli-reference
 metadata:
   title: "Object Mount CLI Commands Reference"
   description: "Complete reference for Object Mount CLI commands, options, and usage patterns."
diff --git a/app/(docs)/object-mount/reference/compatibility.md b/app/(docs)/object-mount/reference/compatibility/page.md
similarity index 99%
rename from app/(docs)/object-mount/reference/compatibility.md
rename to app/(docs)/object-mount/reference/compatibility/page.md
index 436b01c7f..1aff7b21e 100644
--- a/app/(docs)/object-mount/reference/compatibility.md
+++ b/app/(docs)/object-mount/reference/compatibility/page.md
@@ -1,6 +1,6 @@
 ---
 title: "Compatibility Reference"
-docId: "object-mount-compat-ref-001"
+docId: compatibility
 metadata:
   title: "Object Mount Compatibility Reference"
   description: "Complete reference for Object Mount compatibility with operating systems, applications, and cloud storage providers."
diff --git a/app/(docs)/object-mount/reference/configuration.md b/app/(docs)/object-mount/reference/configuration/page.md
similarity index 99%
rename from app/(docs)/object-mount/reference/configuration.md
rename to app/(docs)/object-mount/reference/configuration/page.md
index 524c24c05..2abf33887 100644
--- a/app/(docs)/object-mount/reference/configuration.md
+++ b/app/(docs)/object-mount/reference/configuration/page.md
@@ -1,6 +1,6 @@
 ---
 title: "Configuration Reference"
-docId: "object-mount-config-ref-001"
+docId: configuration
 metadata:
   title: "Object Mount Configuration Reference"
   description: "Complete reference for Object Mount configuration parameters, environment variables, and advanced settings."
diff --git a/app/(docs)/object-mount/tutorials/your-first-mount.md b/app/(docs)/object-mount/tutorials/your-first-mount/page.md
similarity index 99%
rename from app/(docs)/object-mount/tutorials/your-first-mount.md
rename to app/(docs)/object-mount/tutorials/your-first-mount/page.md
index 281403f2e..8a46e9938 100644
--- a/app/(docs)/object-mount/tutorials/your-first-mount.md
+++ b/app/(docs)/object-mount/tutorials/your-first-mount/page.md
@@ -1,6 +1,6 @@
 ---
 title: Your first mount
-docId: your-first-object-mount
+docId: your-first-mount
 metadata:
   title: Your First Object Mount Tutorial
   description: Complete 15-minute hands-on tutorial to mount and access Storj files using Object Mount with step-by-step instructions.
diff --git a/app/(docs)/page.md b/app/(docs)/page.md
index f02bafbf4..0b940bc6e 100644
--- a/app/(docs)/page.md
+++ b/app/(docs)/page.md
@@ -40,24 +40,24 @@ This documentation is organized using the [Diataxis framework](https://diataxis.
 ### Choose Your Path

 **New to Storj?** Start with our tutorials:
-- [**DCS**: Your First Week with Storj](docId:first-week-storj-tutorial) - Complete beginner guide
-- [**Object Mount**: Your First Mount](docId:your-first-object-mount) - Get started with filesystem access
-- [**Storage Node**: Set Up Your First Node](docId:setup-first-storage-node) - Start earning by providing storage
+- [**DCS**: Your First Week with Storj](/dcs/tutorials/your-first-week-with-storj) - Complete beginner guide
+- [**Object Mount**: Your First Mount](/object-mount) - Get started with filesystem access
+- [**Storage Node**: Set Up Your First Node](/node) - Start earning by providing storage

 **Need to solve a specific problem?** Check our how-to guides:
-- [**DCS How-to Guides**](docId:REPde_t8MJMDaE2BU8RfQ) - Task-focused solutions
-- [**Object Mount How-to Guides**](docId:okai0aiJei9No1Sh) - Specific configuration tasks
-- [**Storage Node How-to Guides**](docId:change-payout-address-how-to) - Operational procedures
+- [**DCS How-to Guides**](/dcs/how-to) - Task-focused solutions
+- [**Object Mount How-to Guides**](/object-mount/how-to) - Specific configuration tasks
+- [**Storage Node How-to Guides**](/node/how-to) - Operational procedures

 **Looking for technical details?** Browse our reference sections:
-- [**DCS Reference**](docId:cli-reference-001) - CLI commands and API specifications
-- [**Object Mount Reference**](docId:okai0aiJei9No1Sh) - Configuration options, compatibility
-- [**Storage Node Reference**](docId:node-system-req-ref-001) - System requirements, metrics
+- [**DCS Reference**](/dcs/reference) - CLI commands and API specifications
+- [**Object Mount Reference**](/object-mount) - Configuration options, compatibility
+- [**Storage Node Reference**](/node) - System requirements, metrics

 **Want to understand how things work?** Read our concept explanations:
-- [**Core Concepts**](docId:learn-concepts) - Fundamental Storj concepts
-- [**Object Mount Concepts**](docId:object-mount-concepts) - Filesystem bridging explained
-- [**Storage Node Concepts**](docId:node-concepts) - Economics, reputation, participation
+- [**Core Concepts**](/learn/concepts) - Fundamental Storj concepts
+- [**Object Mount Concepts**](/object-mount/concepts) - Filesystem bridging explained
+- [**Storage Node Concepts**](/node/concepts) - Economics, reputation, participation

 **Quick Navigation Tips:**
 - Use the search bar (Ctrl+K or Cmd+K) to find specific topics
diff --git a/src/components/Navigation.jsx b/src/components/Navigation.jsx
index 9b0de1745..175c6bd81 100644
--- a/src/components/Navigation.jsx
+++ b/src/components/Navigation.jsx
@@ -23,6 +23,22 @@ function NavLink({ title, href, current, root, disclosure, className }) {
     padding = 'pl-0'
   }

+  if (!href) {
+    return (
+
+        {title}
+
+    )
+  }
+
   return (

From 2ab6d4a0da071475f1c7617dadf521c06494355e Mon Sep 17 00:00:00 2001
From: "Alexey A. Leonov"
Date: Sat, 11 Oct 2025 11:06:13 +0700
Subject: [PATCH 7/8] fixed titles

---
 app/(docs)/node/how-to/change-payout-address/page.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/app/(docs)/node/how-to/change-payout-address/page.md b/app/(docs)/node/how-to/change-payout-address/page.md
index 96afbf750..e0960d8ba 100644
--- a/app/(docs)/node/how-to/change-payout-address/page.md
+++ b/app/(docs)/node/how-to/change-payout-address/page.md
@@ -29,12 +29,11 @@ Before changing your payout address, ensure you have:

 **Verification**: Double-check your new wallet address is correct - incorrect addresses may result in lost payments.

-## Change payout address
 Choose the method that matches your storage node installation:

 {% tabs %}
-
+## Change payout address
 {% tab label="CLI Install (Docker)" %}

 ### Step 1: Stop the storage node

From 8cc6b7a949b5cd6619e9c923ee72adef8330369f Mon Sep 17 00:00:00 2001
From: "Alexey A. Leonov"
Date: Sat, 11 Oct 2025 11:53:43 +0700
Subject: [PATCH 8/8] trying to fix the title leveling problem

Error: Cannot add 'h3' to table of contents without a preceding 'h2'
---
 .../node/how-to/change-payout-address/page.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/app/(docs)/node/how-to/change-payout-address/page.md b/app/(docs)/node/how-to/change-payout-address/page.md
index e0960d8ba..24426ffa2 100644
--- a/app/(docs)/node/how-to/change-payout-address/page.md
+++ b/app/(docs)/node/how-to/change-payout-address/page.md
@@ -29,14 +29,14 @@ Before changing your payout address, ensure you have:

 **Verification**: Double-check your new wallet address is correct - incorrect addresses may result in lost payments.

+## Change payout address
 Choose the method that matches your storage node installation:

 {% tabs %}
-## Change payout address
 {% tab label="CLI Install (Docker)" %}

-### Step 1: Stop the storage node
+## Step 1: Stop the storage node

 Stop your running storage node container safely:

@@ -47,7 +47,7 @@ docker rm storagenode

 The `-t 300` flag allows the node 5 minutes to gracefully shut down and complete any ongoing operations.

-### Step 2: Update configuration
+## Step 2: Update configuration

 Edit your configuration file to add or update the wallet address. The location depends on how you set up your node:

@@ -85,7 +85,7 @@ docker run -d --restart unless-stopped \
   storjlabs/storagenode:latest
 ```

-### Step 3: Restart the storage node
+## Step 3: Restart the storage node

 Start your storage node with the updated configuration:

@@ -98,7 +98,7 @@ Start your storage node with the updated configuration:

 {% tab label="Windows GUI Install" %}

-### Step 1: Stop the storage node service
+## Step 1: Stop the storage node service

 Open an elevated PowerShell window (Run as Administrator) and stop the service:

@@ -111,7 +111,7 @@ Alternatively, you can use the Windows Services applet:
 2. Find "Storj V3 Storage Node" in the list
 3. Right-click and select "Stop"

-### Step 2: Edit configuration file
+## Step 2: Edit configuration file

 Open the configuration file with a text editor. **Important**: Use Notepad++ or another advanced text editor - the regular Windows Notepad may not work properly with the file format.

@@ -129,7 +129,7 @@ operator:

 Save the file.

-### Step 3: Restart the storage node service
+## Step 3: Restart the storage node service

 Restart the service to apply the changes: