diff --git a/docs/blog/posts/amd-mi300x-inference-benchmark.md b/docs/blog/posts/amd-mi300x-inference-benchmark.md
index bc747ee78..18b8d343c 100644
--- a/docs/blog/posts/amd-mi300x-inference-benchmark.md
+++ b/docs/blog/posts/amd-mi300x-inference-benchmark.md
@@ -217,8 +217,8 @@ is the primary sponsor of this benchmark, and we are sincerely grateful for thei
If you'd like to use top-tier bare metal compute with AMD GPUs, we recommend going
-with Hot Aisle. Once you gain access to a cluster, it can be easily accessed via `dstack`'s [SSH fleet](../../docs/concepts/fleets.md#ssh-fleets) easily.
+with Hot Aisle. Once you gain access to a cluster, it can be easily accessed via `dstack`'s [SSH fleet](../../docs/concepts/fleets.md#ssh-fleets).
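+
+Below is a minimal sketch of such an SSH fleet configuration. The fleet name, user, key path, and
+host addresses are placeholders; replace them with the values for your Hot Aisle cluster:
+
+```yaml
+type: fleet
+name: hot-aisle-fleet
+
+# Hosts dstack should log in to over SSH
+ssh_config:
+  user: ubuntu
+  identity_file: ~/.ssh/id_rsa
+  hosts:
+    - 192.0.2.10
+    - 192.0.2.11
+```
+
+Once applied with `dstack apply -f fleet.dstack.yml`, the hosts become available for dev environments, tasks, and services.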
-### RunPod
+### Runpod
If you’d like to use on-demand compute with AMD GPUs at affordable prices, you can configure `dstack` to
-use [RunPod](https://runpod.io/). In
+use [Runpod](https://runpod.io/). In
this case, `dstack` will be able to provision fleets automatically when you run dev environments, tasks, and
services.
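+
+As a sketch, enabling Runpod comes down to listing the `runpod` backend in the server's
+`~/.dstack/server/config.yml`; the project name and API key below are placeholders:
+
+```yaml
+projects:
+  - name: main
+    backends:
+      # Authenticates dstack against your Runpod account
+      - type: runpod
+        creds:
+          type: api_key
+          api_key: <your Runpod API key>
+```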
diff --git a/docs/blog/posts/amd-on-runpod.md b/docs/blog/posts/amd-on-runpod.md
index c1ff25015..0d5c60b4e 100644
--- a/docs/blog/posts/amd-on-runpod.md
+++ b/docs/blog/posts/amd-on-runpod.md
@@ -1,25 +1,25 @@
---
-title: Supporting AMD accelerators on RunPod
+title: Supporting AMD accelerators on Runpod
date: 2024-08-21
-description: "dstack, the open-source AI container orchestration platform, adds support for AMD accelerators, with RunPod as the first supported cloud provider."
+description: "dstack, the open-source AI container orchestration platform, adds support for AMD accelerators, with Runpod as the first supported cloud provider."
slug: amd-on-runpod
categories:
- Changelog
---
-# Supporting AMD accelerators on RunPod
+# Supporting AMD accelerators on Runpod
While `dstack` helps streamline the orchestration of containers for AI, its primary goal is to offer vendor independence
and portability, ensuring compatibility across different hardware and cloud providers.
-Inspired by the recent `MI300X` benchmarks, we are pleased to announce that RunPod is the first cloud provider to offer
+Inspired by the recent `MI300X` benchmarks, we are pleased to announce that Runpod is the first cloud provider to offer
AMD GPUs through `dstack`, with support for other cloud providers and on-prem servers to follow.
## Specification
-For the reference, below is a comparison of the `MI300X` and `H100 SXM` specs, incl. the prices offered by RunPod.
+For reference, below is a comparison of the `MI300X` and `H100 SXM` specs, including the prices offered by Runpod.
-| | MI300X | H100X SXM |
+| | MI300X | H100 SXM |
|---------------------------------|-------------------------------------------|--------------|
@@ -113,8 +113,8 @@ cloud resources and run the configuration.
1. The examples above demonstrate the use of
[TGI](https://huggingface.co/docs/text-generation-inference/en/installation_amd).
-AMD accelerators can also be used with other frameworks like vLLM, Ollama, etc., and we'll be adding more examples soon.
+AMD accelerators can also be used with other frameworks such as vLLM and Ollama, and we'll be adding more examples soon (see the TGI sketch after this list).
-2. RunPod is the first cloud provider where dstack supports AMD. More cloud providers will be supported soon as well.
-3. Want to give RunPod and `dstack` a try? Make sure you've signed up for [RunPod](https://www.runpod.io/),
+2. Runpod is the first cloud provider where `dstack` supports AMD; more cloud providers will follow soon.
+3. Want to give Runpod and `dstack` a try? Make sure you've signed up for [Runpod](https://www.runpod.io/),
then [set up](../../docs/reference/server/config.yml.md#runpod) the `dstack server`.
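+
+For orientation, here is a minimal sketch of such a TGI service on an `MI300X`. The image tag, model,
+and service name are illustrative assumptions; check the TGI docs for the current ROCm image:
+
+```yaml
+type: service
+name: llama31-tgi
+
+# ROCm build of TGI; pin an exact tag in practice
+image: ghcr.io/huggingface/text-generation-inference:latest-rocm
+env:
+  - HF_TOKEN  # passed through from your local environment
+  - MODEL_ID=meta-llama/Meta-Llama-3.1-8B-Instruct
+commands:
+  - text-generation-launcher --port 8000
+port: 8000
+
+resources:
+  gpu: MI300X
+```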
-> Have questioned or feedback? Join our [Discord](https://discord.gg/u8SmfwPpMd)
+> Have questions or feedback? Join our [Discord](https://discord.gg/u8SmfwPpMd)
diff --git a/docs/blog/posts/beyond-kubernetes-2024-recap-and-whats-ahead.md b/docs/blog/posts/beyond-kubernetes-2024-recap-and-whats-ahead.md
index 4c6b43f9b..9d32f336b 100644
--- a/docs/blog/posts/beyond-kubernetes-2024-recap-and-whats-ahead.md
+++ b/docs/blog/posts/beyond-kubernetes-2024-recap-and-whats-ahead.md
@@ -22,7 +22,7 @@ While `dstack` integrates with leading cloud GPU providers, we aim to expand par
sharing our vision of simplifying AI infrastructure orchestration with a lightweight, efficient alternative to Kubernetes.
This year, we’re excited to welcome our first partners: [Lambda](https://lambdalabs.com/),
-[RunPod](https://www.runpod.io/),
+[Runpod](https://www.runpod.io/),
[CUDO Compute](https://www.cudocompute.com/),
and [Hot Aisle](https://hotaisle.xyz/).
@@ -114,7 +114,7 @@ This year, we’re particularly proud of our newly added integration with AMD.
-`dstack` works seamlessly with any on-prem AMD clusters. For example, you can rent such servers through our partner
+`dstack` works seamlessly with any on-prem AMD cluster. For example, you can rent such servers through our partner
[Hot Aisle](https://hotaisle.xyz/).
-> Among cloud providers, [AMD](https://www.amd.com/en/products/accelerators/instinct.html) is supported only through RunPod. In Q1 2025, we plan to extend it to
+> Among cloud providers, [AMD](https://www.amd.com/en/products/accelerators/instinct.html) is supported only through Runpod. In Q1 2025, we plan to extend it to
-[Nscale](https://www.nscale.com/),
+> [Nscale](https://www.nscale.com/),
> [Hot Aisle](https://hotaisle.xyz/), and potentially other providers open to collaboration.
diff --git a/docs/blog/posts/dstack-sky-own-cloud-accounts.md b/docs/blog/posts/dstack-sky-own-cloud-accounts.md
index 13c927a31..8fe8c9c4e 100644
--- a/docs/blog/posts/dstack-sky-own-cloud-accounts.md
+++ b/docs/blog/posts/dstack-sky-own-cloud-accounts.md
@@ -25,7 +25,7 @@ To use your own cloud account, open the project settings and edit the correspond
{ width=650 }
You can configure your cloud accounts for any of the supported providers, including AWS, GCP, Azure, TensorDock, Lambda,
-CUDO, RunPod, and Vast.ai.
+CUDO, Runpod, and Vast.ai.
Additionally, you can disable certain backends if you do not plan to use them.
diff --git a/docs/blog/posts/state-of-cloud-gpu-2025.md b/docs/blog/posts/state-of-cloud-gpu-2025.md
index 238926ebf..b9add7915 100644
--- a/docs/blog/posts/state-of-cloud-gpu-2025.md
+++ b/docs/blog/posts/state-of-cloud-gpu-2025.md
@@ -28,7 +28,7 @@ These axes split providers into distinct archetypes—each with different econom
| :---- | :---- | :---- |
| **Classical hyperscalers** | General-purpose clouds with GPU SKUs bolted on | AWS, Google Cloud, Azure, OCI |
| **Massive neoclouds** | GPU-first operators built around dense HGX or MI-series clusters | CoreWeave, Lambda, Nebius, Crusoe |
-| **Rapidly-catching neoclouds** | Smaller GPU-first players building out aggressively | RunPod, DataCrunch, Voltage Park, TensorWave, Hot Aisle |
+| **Rapidly-catching neoclouds** | Smaller GPU-first players building out aggressively | Runpod, DataCrunch, Voltage Park, TensorWave, Hot Aisle |
| **Cloud marketplaces** | Don’t own capacity; sell orchestration + unified API over multiple backends | NVIDIA DGX Cloud (Lepton), Modal, Lightning AI, dstack Sky |
| **DC aggregators** | Aggregate idle capacity from third-party datacenters, pricing via market dynamics | Vast.ai |
@@ -89,7 +89,7 @@ For comparison, below is the price range for H100×GPU clusters across providers
-> Most hyperscalers and neoclouds need short- or long-term contracts, though providers like RunPod, DataCrunch, and Nebius offer on-demand clusters. Larger capacity and longer commitments bring bigger discounts — Nebius offers up to 35% off for longer terms.
+> Most hyperscalers and neoclouds require short- or long-term contracts, though providers like Runpod, DataCrunch, and Nebius offer on-demand clusters. Larger capacity and longer commitments bring bigger discounts; Nebius, for example, offers up to 35% off for longer terms.
## New GPU generations – why they matter
diff --git a/docs/blog/posts/toffee.md b/docs/blog/posts/toffee.md
index 3854937e5..190ecf8c2 100644
--- a/docs/blog/posts/toffee.md
+++ b/docs/blog/posts/toffee.md
@@ -20,7 +20,7 @@ In a recent engineering [blog post](https://research.toffee.ai/blog/how-we-use-d
[Toffee](https://toffee.ai) builds AI-powered experiences backed by LLMs and image-generation models. To serve these workloads efficiently, they combine:
-- **GPU neoclouds** such as [RunPod](https://www.runpod.io/) and [Vast.ai](https://vast.ai/) for flexible, cost-efficient GPU capacity
+- **GPU neoclouds** such as [Runpod](https://www.runpod.io/) and [Vast.ai](https://vast.ai/) for flexible, cost-efficient GPU capacity
- **AWS** for core, non-AI services and backend infrastructure
- **dstack** as the orchestration layer that provisions GPU resources and exposes AI models via `dstack` [services](../../docs/concepts/services.md) and [gateways](../../docs/concepts/gateways.md)
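+
+For orientation, a `dstack` gateway is itself defined by a short configuration. The name, backend,
+region, and domain below are illustrative placeholders, not Toffee's actual setup:
+
+```yaml
+type: gateway
+name: example-gateway
+
+# Cloud account and region where the gateway instance runs
+backend: aws
+region: eu-west-1
+
+# Services are published under subdomains of this wildcard domain
+domain: example.com
+```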
@@ -68,7 +68,7 @@ Beyond orchestration, Toffee relies on `dstack`'s UI as a central observabilit
-> *Thanks to dstack’s seamless integration with GPU neoclouds like RunPod and Vast.ai, we’ve been able to shift most workloads off hyperscalers — reducing our effective GPU spend by roughly 2–3× without changing a single line of model code.*
+> *Thanks to dstack’s seamless integration with GPU neoclouds like Runpod and Vast.ai, we’ve been able to shift most workloads off hyperscalers — reducing our effective GPU spend by roughly 2–3× without changing a single line of model code.*
>
> *— [Nikita Shupeyko](https://www.linkedin.com/in/nikita-shupeyko/), AI/ML & Cloud Infrastructure Architect at Toffee*
diff --git a/docs/blog/posts/volumes-on-runpod.md b/docs/blog/posts/volumes-on-runpod.md
index de0c8d6d0..c17faf7b1 100644
--- a/docs/blog/posts/volumes-on-runpod.md
+++ b/docs/blog/posts/volumes-on-runpod.md
@@ -1,24 +1,24 @@
---
-title: Using volumes to optimize cold starts on RunPod
+title: Using volumes to optimize cold starts on Runpod
date: 2024-08-13
-description: "Learn how to use volumes with dstack to optimize model inference cold start times on RunPod."
+description: "Learn how to use volumes with dstack to optimize model inference cold start times on Runpod."
slug: volumes-on-runpod
categories:
- Changelog
---
-# Using volumes to optimize cold starts on RunPod
+# Using volumes to optimize cold starts on Runpod
-Deploying custom models in the cloud often faces the challenge of cold start times, including the time to provision a
-new instance and download the model. This is especially relevant for services with autoscaling when new model replicas
-need to be provisioned quickly.
+Deploying custom models in the cloud often comes with the challenge of long cold starts, including the time to provision a
+new instance and download the model. This is especially relevant for autoscaled services, where new model replicas
+must be provisioned quickly.
Let's explore how `dstack` optimizes this process using volumes, with an example of
-deploying a model on RunPod.
+deploying a model on Runpod.
-Suppose you want to deploy Llama 3.1 on RunPod as a [service](../../docs/concepts/services.md):
+Suppose you want to deploy Llama 3.1 on Runpod as a [service](../../docs/concepts/services.md):
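+
+As a rough sketch, such a service might pair vLLM with a Runpod volume mounted at `/data`, so model
+weights are downloaded once and reused by later replicas. The model, image, and volume name here are
+illustrative assumptions:
+
+```yaml
+type: service
+name: llama31-service
+
+image: vllm/vllm-openai:latest
+env:
+  - HF_TOKEN  # passed through from your local environment
+  - MODEL_ID=meta-llama/Meta-Llama-3.1-8B-Instruct
+commands:
+  # Cache weights on the volume so new replicas skip the download
+  - vllm serve $MODEL_ID --port 8000 --download-dir /data
+port: 8000
+
+resources:
+  gpu: 24GB
+
+# Attach a pre-created Runpod volume
+volumes:
+  - name: llama31-volume
+    path: /data
+```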