diff --git a/0_app/0_root/index.md b/0_app/0_root/index.mdx
similarity index 67%
rename from 0_app/0_root/index.md
rename to 0_app/0_root/index.mdx
index 8162d21..d408a96 100644
--- a/0_app/0_root/index.md
+++ b/0_app/0_root/index.mdx
@@ -5,6 +5,50 @@ description: Learn how to run Llama, DeepSeek, Qwen, Phi, and other LLMs locally
 index: 1
 ---
 
+import { Card, Cards } from "fumadocs-ui/components/card";
+import { getDocsSectionIcon } from "@/lib/docsSectionIcon";
+
+## Explore the docs
+
+
 To get LM Studio, head over to the [Downloads page](/download) and download an installer for your operating system. LM Studio is available for macOS, Windows, and Linux.
diff --git a/0_app/0_root/meta.json b/0_app/0_root/meta.json
new file mode 100644
index 0000000..bc9c64f
--- /dev/null
+++ b/0_app/0_root/meta.json
@@ -0,0 +1,7 @@
+{
+  "title": "Introduction",
+  "pages": [
+    "offline",
+    "system-requirements"
+  ]
+}
diff --git a/0_app/0_root/offline.md b/0_app/0_root/offline.mdx
similarity index 98%
rename from 0_app/0_root/offline.md
rename to 0_app/0_root/offline.mdx
index aebcd97..646de64 100644
--- a/0_app/0_root/offline.md
+++ b/0_app/0_root/offline.mdx
@@ -4,9 +4,9 @@ description: LM Studio can operate entirely offline, just make sure to get some
 index: 4
 ---
 
-```lms_notice
+<Callout>
 In general, LM Studio does not require the internet in order to work. This includes core functions like chatting with models, chatting with documents, or running a local server, none of which require the internet.
-```
+</Callout>
 
 ### Operations that do NOT require connectivity
 
diff --git a/0_app/1_basics/_connect-apps.md b/0_app/1_basics/_connect-apps.mdx
similarity index 81%
rename from 0_app/1_basics/_connect-apps.md
rename to 0_app/1_basics/_connect-apps.mdx
index 5f1cc09..dbd56ea 100644
--- a/0_app/1_basics/_connect-apps.md
+++ b/0_app/1_basics/_connect-apps.mdx
@@ -5,7 +5,7 @@ description: Getting started with connecting applications to LM Studio
 
 LM Studio comes with a few built-in themes for app-wide color palettes.
 
-<hr>
+<hr />
 
 ### Selecting a Theme
 
@@ -13,13 +13,13 @@ You can choose a theme in the Settings tab.
 
 Choosing the "Auto" option will automatically switch between Light and Dark themes based on your system settings.
 
-```lms_protip
+<Callout>
 You can jump to Settings from anywhere in the app by pressing `cmd` + `,` on macOS or `ctrl` + `,` on Windows/Linux.
-```
+</Callout>
 
-###### To get to the Settings page, you need to be on [Power User mode](/docs/modes) or higher.
+To get to the Settings page, you need to be on Power User mode or higher.
 
-<hr>
+<hr />
 
 ### Community
 
diff --git a/0_app/1_basics/index.md b/0_app/1_basics/index.md
deleted file mode 100644
index 51c7513..0000000
--- a/0_app/1_basics/index.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-title: Get started with LM Studio
-sidebar_title: Overview
-description: Download and run Large Language Models like Qwen, Mistral, Gemma, or gpt-oss in LM Studio.
-index: 1
----
-
-Double check computer meets the minimum [system requirements](/docs/system-requirements).
-
-```lms_info
-You might sometimes see terms such as `open-source models` or `open-weights models`. Different models might be released under different licenses and varying degrees of 'openness'. In order to run a model locally, you need to be able to get access to its "weights", often distributed as one or more files that end with `.gguf`, `.safetensors` etc.
-```
-
-<hr>
-
-## Getting up and running
-
-First, **install the latest version of LM Studio**. You can get it from [here](/download).
-
-Once you're all set up, you need to **download your first LLM**.
-
-### 1. Download an LLM to your computer
-
-Head over to the Discover tab to download models. Pick one of the curated options or search for models by search query (e.g. `"Llama"`). See more in-depth information about downloading models [here](/docs/basics/download-models).
-
-### 2. Load a model to memory
-
-Head over to the **Chat** tab, and
-
-1. Open the model loader
-2. Select one of the models you downloaded (or [sideloaded](/docs/advanced/sideload)).
-3. Optionally, choose load configuration parameters.
-
-##### What does loading a model mean?
-
-Loading a model typically means allocating memory to be able to accommodate the model's weights and other parameters in your computer's RAM.
-
-### 3. Chat!
-
-Once the model is loaded, you can start a back-and-forth conversation with the model in the Chat tab.
-
-<hr>
-
-### Community
-
-Chat with other LM Studio users, discuss LLMs, hardware, and more on the [LM Studio Discord server](https://discord.gg/aPQfnNkxGC).
diff --git a/0_app/1_basics/index.mdx b/0_app/1_basics/index.mdx
new file mode 100644
index 0000000..6b950b1
--- /dev/null
+++ b/0_app/1_basics/index.mdx
@@ -0,0 +1,62 @@
+---
+title: Get started with LM Studio
+sidebar_title: Overview
+description: Download and run Large Language Models like Qwen, Mistral, Gemma, or gpt-oss in LM Studio.
+index: 1
+---
+
+import { Step, Steps } from "fumadocs-ui/components/steps";
+
+Double check that your computer meets the minimum [system requirements](/docs/system-requirements).
+
+<Callout>
+You might sometimes see terms such as `open-source models` or `open-weights models`. Different models might be released under different licenses and varying degrees of 'openness'. In order to run a model locally, you need to be able to get access to its "weights", often distributed as one or more files that end with `.gguf`, `.safetensors` etc.
+</Callout>
+
+<hr />
+
+## Getting up and running
+
+First, **install the latest version of LM Studio**. You can get it from [here](/download).
+
+Once you're all set up, you need to **download your first LLM**.
+
+<Steps>
+<Step>
+
+Download an LLM to your computer
+
+Head over to the Discover tab to download models. Pick one of the curated options or search for models by search query (e.g. `"Llama"`). See more in-depth information about downloading models [here](/docs/basics/download-models).
+
+</Step>
+<Step>
+
+Load a model to memory
+
+Head over to the **Chat** tab, and
+
+1. Open the model loader
+2. Select one of the models you downloaded (or [sideloaded](/docs/advanced/sideload)).
+3. Optionally, choose load configuration parameters.
+
+What does loading a model mean?
+
+Loading a model typically means allocating memory to be able to accommodate the model's weights and other parameters in your computer's RAM.
+
+</Step>
+<Step>
+
+Chat!
+
+Once the model is loaded, you can start a back-and-forth conversation with the model in the Chat tab.
+
+</Step>
+</Steps>
+
+<hr />
+
+### Community
+
+Chat with other LM Studio users, discuss LLMs, hardware, and more on the [LM Studio Discord server](https://discord.gg/aPQfnNkxGC).
diff --git a/0_app/1_basics/meta.json b/0_app/1_basics/meta.json
new file mode 100644
index 0000000..a793b2b
--- /dev/null
+++ b/0_app/1_basics/meta.json
@@ -0,0 +1,12 @@
+{
+  "title": "Getting Started",
+  "pages": [
+    "chat",
+    "_connect-apps",
+    "download-model",
+    "_keychords",
+    "lmstudio-vs-llmster-vs-lms",
+    "rag",
+    "_troubleshooting"
+  ]
+}
diff --git a/0_app/2_mcp/deeplink.md b/0_app/2_mcp/deeplink.mdx
similarity index 94%
rename from 0_app/2_mcp/deeplink.md
rename to 0_app/2_mcp/deeplink.mdx
index b4efd64..45e4d01 100644
--- a/0_app/2_mcp/deeplink.md
+++ b/0_app/2_mcp/deeplink.mdx
@@ -1,5 +1,5 @@
 ---
-title: "`Add to LM Studio` Button"
+title: "Add to LM Studio Button"
 description: Add MCP servers to LM Studio using a deeplink
 index: 2
 ---
@@ -14,9 +14,7 @@ Starting with version 0.3.17 (10), LM Studio can act as an MCP host. Learn more
 
 Enter your MCP JSON entry to generate a deeplink for the `Add to LM Studio` button.
 
-```lms_mcp_deep_link_generator
-
-```
+
 ## Try an example
 
diff --git a/0_app/2_mcp/index.md b/0_app/2_mcp/index.mdx
similarity index 65%
rename from 0_app/2_mcp/index.md
rename to 0_app/2_mcp/index.mdx
index f769557..c754862 100644
--- a/0_app/2_mcp/index.md
+++ b/0_app/2_mcp/index.mdx
@@ -10,9 +10,9 @@ Starting LM Studio 0.3.17, LM Studio acts as an **Model Context Protocol (MCP) H
 
 Never install MCPs from untrusted sources.
 
-```lms_warning
+<Callout>
 Some MCP servers can run arbitrary code, access your local files, and use your network connection. Always be cautious when installing and using MCP servers. If you don't trust the source, don't install it.
-```
+</Callout>
 
 # Use MCP servers in LM Studio
 
@@ -22,24 +22,28 @@ Starting 0.3.17 (b10), LM Studio supports both local and remote MCP servers. You
 
 Switch to the "Program" tab in the right hand sidebar. Click `Install > Edit mcp.json`.
 
 This will open the `mcp.json` file in the in-app editor. You can add MCP servers by editing this file.
 
 ### Example MCP to try: Hugging Face MCP Server
 
 This MCP server provides access to functions like model and dataset search.
 
-
-Add MCP Server hf-mcp-server to LM Studio
-
-Add MCP Server hf-mcp-server to LM Studio
-
+
+Add MCP Server hf-mcp-server to LM Studio
+Add MCP Server hf-mcp-server to LM Studio
 
@@ -56,7 +60,7 @@ This MCP server provides access to functions like model and dataset search.
 }
 ```
 
-###### You will need to replace `` with your actual Hugging Face token. Learn more [here](https://huggingface.co/docs/hub/en/security-tokens).
+You will need to replace `` with your actual Hugging Face token. Learn more [here](https://huggingface.co/docs/hub/en/security-tokens).
 
 Use the [deeplink button](mcp/deeplink), or copy the JSON snippet above and paste it into your `mcp.json` file.
 
diff --git a/0_app/2_mcp/meta.json b/0_app/2_mcp/meta.json
new file mode 100644
index 0000000..a9726e1
--- /dev/null
+++ b/0_app/2_mcp/meta.json
@@ -0,0 +1,6 @@
+{
+  "title": "MCP",
+  "pages": [
+    "deeplink"
+  ]
+}
diff --git a/0_app/3_modelyaml/index.md b/0_app/3_modelyaml/index.md
index b66c23c..3b5b069 100644
--- a/0_app/3_modelyaml/index.md
+++ b/0_app/3_modelyaml/index.md
@@ -1,6 +1,6 @@
 ---
-title: "Introduction to `model.yaml`"
-description: Describe models with the cross-platform `model.yaml` specification.
+title: "Introduction to model.yaml"
+description: Describe models with the cross-platform model.yaml specification.
 index: 5
 socialCard:
   url: https://files.lmstudio.ai/modelyaml-card.jpg
diff --git a/0_app/3_modelyaml/meta.json b/0_app/3_modelyaml/meta.json
new file mode 100644
index 0000000..37a1889
--- /dev/null
+++ b/0_app/3_modelyaml/meta.json
@@ -0,0 +1,6 @@
+{
+  "title": "model.yaml",
+  "pages": [
+    "publish"
+  ]
+}
diff --git a/0_app/3_modelyaml/publish.md b/0_app/3_modelyaml/publish.md
index eaea1ba..3d7f8e0 100644
--- a/0_app/3_modelyaml/publish.md
+++ b/0_app/3_modelyaml/publish.md
@@ -1,5 +1,5 @@
 ---
-title: Publish a `model.yaml`
+title: Publish a model.yaml
 description: Upload your model definition to the LM Studio Hub.
 index: 7
 ---
@@ -22,7 +22,7 @@ lms clone qwen/qwen3-8b
 
 This will result in a local copy `model.yaml`, `README` and other metadata files. Importantly, this does NOT download the model weights.
 
-```lms_terminal
+```bash title="Terminal"
 $ ls
 README.md manifest.json model.yaml thumbnail.png
 ```
diff --git a/0_app/3_presets/meta.json b/0_app/3_presets/meta.json
new file mode 100644
index 0000000..6c054da
--- /dev/null
+++ b/0_app/3_presets/meta.json
@@ -0,0 +1,9 @@
+{
+  "title": "Presets",
+  "pages": [
+    "import",
+    "publish",
+    "pull",
+    "push"
+  ]
+}
diff --git a/0_app/3_presets/push.md b/0_app/3_presets/push.mdx
similarity index 50%
rename from 0_app/3_presets/push.md
rename to 0_app/3_presets/push.mdx
index 3cf073b..85513ef 100644
--- a/0_app/3_presets/push.md
+++ b/0_app/3_presets/push.mdx
@@ -4,6 +4,8 @@ description: Publish new revisions of your Presets to the LM Studio Hub.
 index: 5
 ---
 
+import { Step, Steps } from "fumadocs-ui/components/steps";
+
 `Feature In Preview`
 
 Starting LM Studio 0.3.15, you can publish your Presets to the LM Studio community. This allows you to share your Presets with others and import Presets from other users.
 
@@ -18,14 +20,20 @@ Presets you share on the LM Studio Hub can be updated.
 
-## Step 1: Make Changes and Commit
+<Steps>
+<Step>
+
+Make Changes and Commit
 
-Make any changes to your Preset, both in parameters that are already included in the Preset, or by adding new parameters.
+Make any changes to your Preset, both in parameters that are already included in the Preset, or by adding new parameters.
 
-## Step 2: Click the Push Button
+</Step>
+<Step>
+
+Click the Push Button
 
-Once changes are committed, you will see a `Push` button. Click it to push your changes to the Hub.
+Once changes are committed, you will see a `Push` button. Click it to push your changes to the Hub.
 
-Pushing changes will result in a new revision of your Preset on the Hub.
+Pushing changes will result in a new revision of your Preset on the Hub.
+
+</Step>
+</Steps>
diff --git a/0_app/5_advanced/meta.json b/0_app/5_advanced/meta.json
new file mode 100644
index 0000000..7c2b36c
--- /dev/null
+++ b/0_app/5_advanced/meta.json
@@ -0,0 +1,14 @@
+{
+  "title": "Advanced",
+  "pages": [
+    "_branching",
+    "_context",
+    "_errors",
+    "import-model",
+    "parallel-requests",
+    "per-model",
+    "prompt-template",
+    "speculative-decoding",
+    "_vision"
+  ]
+}
diff --git a/0_app/5_advanced/per-model.md b/0_app/5_advanced/per-model.mdx
similarity index 68%
rename from 0_app/5_advanced/per-model.md
rename to 0_app/5_advanced/per-model.mdx
index 91ba99f..e542b6c 100644
--- a/0_app/5_advanced/per-model.md
+++ b/0_app/5_advanced/per-model.mdx
@@ -9,31 +9,30 @@ You can set default load settings for each model in LM Studio.
 
 When the model is loaded anywhere in the app (including through [`lms load`](/docs/cli#load-a-model-with-options)) these settings will be used.
 
-<hr>
+<hr />
 
 ### Setting default parameters for a model
 
 Head to the My Models tab and click on the gear ⚙️ icon to edit the model's default parameters.
 
 This will open a dialog where you can set the default parameters for the model.
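The per-model defaults above apply wherever the model is loaded, including through `lms load`. One way to exercise a loaded model from code is LM Studio's OpenAI-compatible local server. A minimal sketch, assuming the server is running on its default port `1234` and that the model identifier `qwen/qwen3-8b` (the model cloned in the `model.yaml` section) is used purely as an example:

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str, base_url: str = "http://localhost:1234/v1"):
    """Build an OpenAI-compatible chat completion request for a local LM Studio server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,  # request-level override; omit to rely on the model's saved defaults
    }
    request = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return request, payload

request, payload = build_chat_request("qwen/qwen3-8b", "What does loading a model mean?")
# With the server running, send it like so:
#   with urllib.request.urlopen(request) as response:
#       reply = json.load(response)["choices"][0]["message"]["content"]
print(request.full_url)  # http://localhost:1234/v1/chat/completions
```

Parameters set per model in the dialog above act as the defaults for such requests; values sent in the request body take precedence for that call.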