Commit fd5c1d9

chore(docs): add documentation on backend detection override (#5915)
Signed-off-by: Ettore Di Giacinto <[email protected]>
1 parent 5ce982b

File tree

3 files changed: +15 -0 lines changed

README.md

Lines changed: 2 additions & 0 deletions
@@ -189,6 +189,8 @@ local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
 local-ai run oci://localai/phi-2:latest
 ```
 
+> **Automatic Backend Detection**: When you install models from the gallery or YAML files, LocalAI automatically detects your system's GPU capabilities (NVIDIA, AMD, Intel) and downloads the appropriate backend. For advanced configuration options, see [GPU Acceleration](https://localai.io/features/gpu-acceleration/#automatic-backend-detection).
+
 For more information, see [💻 Getting started](https://localai.io/basics/getting_started/index.html)
 
 ## 📰 Latest project news

docs/content/docs/features/GPU-acceleration.md

Lines changed: 10 additions & 0 deletions
@@ -15,6 +15,16 @@ This section contains instructions on how to use LocalAI with GPU acceleration.
 Acceleration for AMD or Metal HW is still in development; for additional details see the [build]({{%relref "docs/getting-started/build#Acceleration" %}})
 {{% /alert %}}
 
+## Automatic Backend Detection
+
+When you install a model from the gallery (or a YAML file), LocalAI intelligently detects the required backend and your system's capabilities, then downloads the correct version for you. Whether you're running on a standard CPU, an NVIDIA GPU, an AMD GPU, or an Intel GPU, LocalAI handles it automatically.
+
+For advanced use cases or to override auto-detection, you can use the `LOCALAI_FORCE_META_BACKEND_CAPABILITY` environment variable. Here are the available options:
+
+- `default`: Forces CPU-only backends. This is the fallback if no specific hardware is detected.
+- `nvidia`: Forces backends compiled with CUDA support for NVIDIA GPUs.
+- `amd`: Forces backends compiled with ROCm support for AMD GPUs.
+- `intel`: Forces backends compiled with SYCL/oneAPI support for Intel GPUs.
 
 ## Model configuration
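Outside the diff, the override documented above can be exercised directly from a shell. The sketch below assumes the `local-ai` binary and reuses the `oci://localai/phi-2:latest` model reference that appears elsewhere in this commit; the exact invocation is illustrative, not a prescribed workflow.

```bash
# Force CUDA-enabled backends for an NVIDIA GPU, overriding auto-detection.
LOCALAI_FORCE_META_BACKEND_CAPABILITY=nvidia local-ai run oci://localai/phi-2:latest

# Force the CPU-only fallback even if a GPU is present.
LOCALAI_FORCE_META_BACKEND_CAPABILITY=default local-ai run oci://localai/phi-2:latest
```

Leaving the variable unset keeps the automatic detection described in the hunk above.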

docs/content/docs/getting-started/quickstart.md

Lines changed: 3 additions & 0 deletions
@@ -106,6 +106,9 @@ local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
 local-ai run oci://localai/phi-2:latest
 ```
 
+{{% alert icon="⚡" %}}
+**Automatic Backend Detection**: When you install models from the gallery or YAML files, LocalAI automatically detects your system's GPU capabilities (NVIDIA, AMD, Intel) and downloads the appropriate backend. For advanced configuration options, see [GPU Acceleration]({{% relref "docs/features/gpu-acceleration#automatic-backend-detection" %}}).
+{{% /alert %}}
 
 For a full list of options, refer to the [Installer Options]({{% relref "docs/advanced/installer" %}}) documentation.
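The same override applies when LocalAI runs in a container rather than as a standalone binary. A minimal sketch, assuming the `localai/localai` image on its default port 8080 (both are assumptions of this example, not part of the commit):

```bash
# Force ROCm-enabled backends for an AMD GPU inside the container.
# Image tag and port mapping are illustrative; adjust them to your setup.
docker run -p 8080:8080 \
  -e LOCALAI_FORCE_META_BACKEND_CAPABILITY=amd \
  localai/localai:latest
```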
