Using LM Studio with OpenClaw
This guide walks you through configuring OpenClaw to use LM Studio as a local model provider. You will install and start the LM Studio server, wire it into OpenClaw via onboarding, and optionally pin a specific LM Studio model as your default.
By the end, your OpenClaw agents will talk to models running on your own hardware through LM Studio.
Prerequisites
- LM Studio desktop app or `llmster` headless installed on the same machine where you expose the LM Studio HTTP server.
- OpenClaw CLI installed and available as the `openclaw` command.
- Network access from OpenClaw to the LM Studio server URL `http://localhost:1234` (or your chosen host/port).
- An LM Studio API token if authentication is enabled in your LM Studio server configuration.
Steps
1. Install LM Studio and start the local server
Install LM Studio (desktop) or the `llmster` headless daemon so you can expose a local HTTP API that OpenClaw talks to. This script comes from LM Studio and sets up the tooling on your machine.
```bash
curl -fsSL https://lmstudio.ai/install.sh | bash
```

2. Run the LM Studio daemon and HTTP server
Start the LM Studio background daemon and HTTP server so OpenClaw can reach it on a stable port. If you use the desktop app you can start it there; for headless setups, run these commands and keep them running.
```bash
lms daemon up
lms server start --port 1234
```

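Before wiring OpenClaw in, it's worth confirming the server actually answers on the chosen port (the same check appears under Troubleshooting below):

```bash
# Should return the list of models the LM Studio server knows about.
curl http://localhost:1234/api/v1/models
```
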
3. Export your LM Studio API token for OpenClaw
OpenClaw reads the LM Studio token from the `LM_API_TOKEN` environment variable when you onboard the provider. If your LM Studio server has authentication enabled, set the real token; if auth is disabled, you still need to export any non-empty value.
```bash
export LM_API_TOKEN="your-lm-studio-api-token"
```

4. Use a placeholder token for unauthenticated LM Studio servers
When LM Studio authentication is disabled, OpenClaw still expects `LM_API_TOKEN` to be set. Export a placeholder value so onboarding and later requests succeed without 401 errors.
```bash
export LM_API_TOKEN="placeholder-key"
```

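If the same script runs against both authenticated and unauthenticated servers, a common shell pattern is to fall back to the placeholder only when no real token is present (standard POSIX parameter expansion, not an OpenClaw feature):

```bash
# Keep a real token if one is already exported; otherwise use the placeholder.
export LM_API_TOKEN="${LM_API_TOKEN:-placeholder-key}"
```
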
5. Run OpenClaw onboarding and select LM Studio
Run the OpenClaw onboarding flow to register LM Studio as a model provider. During the prompts, choose `LM Studio` so OpenClaw writes the correct provider config and auth profile.
```bash
openclaw onboard
```

6. Set your default LM Studio model in OpenClaw
After onboarding, point OpenClaw at the specific LM Studio model you want as the default. Use the `openclaw models set` command with the `lmstudio/` prefix plus the LM Studio model key.
```bash
openclaw models set lmstudio/qwen/qwen3.5-9b
```

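The part after the `lmstudio/` prefix must match a model key the LM Studio server reports. One way to list the available keys is to query the server directly; the `.data[].id` path below is an assumption based on the usual OpenAI-compatible response shape, so adjust it if your server returns a different structure:

```bash
# List model keys known to the LM Studio server.
curl -s http://localhost:1234/api/v1/models | jq -r '.data[].id'
```
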
7. Script non-interactive onboarding for CI or provisioning
For automated setups, use non-interactive onboarding so you can configure LM Studio without prompts. This variant assumes `LM_API_TOKEN` is already exported and writes the LM Studio provider and auth profile into your config.
```bash
openclaw onboard \
  --non-interactive \
  --accept-risk \
  --auth-choice lmstudio
```

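As a sketch, a provisioning script might combine the token check, onboarding, and default-model selection from the earlier steps (the model key is illustrative):

```bash
#!/usr/bin/env bash
# Provision OpenClaw against a local LM Studio server.
set -euo pipefail

# Fail fast if CI did not inject a token (or placeholder) into the environment.
: "${LM_API_TOKEN:?LM_API_TOKEN must be exported before onboarding}"

openclaw onboard \
  --non-interactive \
  --accept-risk \
  --auth-choice lmstudio

# Pin the default model; the key must exist on the LM Studio server.
openclaw models set lmstudio/qwen/qwen3.5-9b
```
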
8. Pin base URL, API key, and model in non-interactive onboarding
If you need to override the default LM Studio URL or specify the model and API key explicitly, pass them as flags. This is useful when LM Studio runs on a non-default host/port or when you want a specific model key baked into your config.
```bash
openclaw onboard \
  --non-interactive \
  --accept-risk \
  --auth-choice lmstudio \
  --custom-base-url http://localhost:1234/v1 \
  --lmstudio-api-key "$LM_API_TOKEN" \
  --custom-model-id qwen/qwen3.5-9b
```

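The same flags also cover LM Studio running on another machine; only the base URL changes. The host below is a stand-in for your server's actual address:

```bash
openclaw onboard \
  --non-interactive \
  --accept-risk \
  --auth-choice lmstudio \
  --custom-base-url http://192.168.1.50:1234/v1 \
  --lmstudio-api-key "$LM_API_TOKEN" \
  --custom-model-id qwen/qwen3.5-9b
```
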
9. Configure LM Studio explicitly in your OpenClaw models config
If you manage OpenClaw config by hand, add an explicit `lmstudio` provider block. This pins the base URL, API key, and one or more LM Studio models with their capabilities and token limits.
```json
{
  models: {
    providers: {
      lmstudio: {
        baseUrl: "http://localhost:1234/v1",
        apiKey: "${LM_API_TOKEN}",
        api: "openai-completions",
        models: [
          {
            id: "qwen/qwen3-coder-next",
            name: "Qwen 3 Coder Next",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 128000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```

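To sanity-check the block above, you can call the OpenAI-compatible endpoint that `api: "openai-completions"` targets without going through OpenClaw at all. This sketch assumes the standard `/chat/completions` route behind the configured base URL:

```bash
# Minimal smoke test against the configured baseUrl.
curl -s http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LM_API_TOKEN" \
  -d '{
    "model": "qwen/qwen3-coder-next",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```
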
Configuration
| Option | Description | Example |
|---|---|---|
| LM_API_TOKEN | LM Studio token value that OpenClaw uses to authenticate to the LM Studio server; for unauthenticated servers any non-empty value works. | sk-lm-1234567890 |
| models.providers.lmstudio.baseUrl | The HTTP base URL for your LM Studio server API that OpenClaw calls. | http://localhost:1234/v1 |
| models.providers.lmstudio.apiKey | The API key value OpenClaw sends to LM Studio, typically wired from LM_API_TOKEN. | ${LM_API_TOKEN} |
| models.providers.lmstudio.api | The LM Studio API surface OpenClaw targets; LM Studio exposes an OpenAI-compatible completions API. | openai-completions |
| models.providers.lmstudio.models[0].id | The LM Studio model key as returned by the LM Studio API, without the lmstudio/ provider prefix. | qwen/qwen3-coder-next |
| models.providers.lmstudio.models[0].name | Human-readable name for the LM Studio model in your OpenClaw config. | Qwen 3 Coder Next |
| models.providers.lmstudio.models[0].contextWindow | Maximum context window size in tokens that OpenClaw should assume for this LM Studio model. | 128000 |
| models.providers.lmstudio.models[0].maxTokens | Maximum number of output tokens OpenClaw should request from this LM Studio model. | 8192 |
Troubleshooting
LM Studio not detected during OpenClaw setup or model calls fail
LM Studio must be running and listening on the expected port, and `LM_API_TOKEN` must be set. Start the server, then verify the API responds before retrying onboarding or model calls.
```bash
# Start via desktop app, or headless:
lms server start --port 1234
curl http://localhost:1234/api/v1/models
```

Authentication errors (HTTP 401) when OpenClaw talks to LM Studio
HTTP 401 means the token OpenClaw sends does not match the LM Studio server configuration. Confirm that `LM_API_TOKEN` matches the key configured in LM Studio, or if your server does not require authentication, set any non-empty value for `LM_API_TOKEN` and rerun onboarding.
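To isolate whether the token itself is at fault, replay a request outside OpenClaw; this assumes the server accepts the usual Bearer scheme:

```bash
# A 401 here means the token is rejected by LM Studio itself,
# so the problem is not on the OpenClaw side.
curl -i http://localhost:1234/api/v1/models \
  -H "Authorization: Bearer $LM_API_TOKEN"
```
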
Model not loaded errors from LM Studio when OpenClaw sends a request
LM Studio supports just-in-time (JIT) model loading, but it must be enabled for models to load on first request. Turn on JIT in LM Studio so models load automatically instead of requests failing with 'Model not loaded' errors.
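Alternatively, you can pre-load the model so the first request never hits an unloaded state. Whether a `load` subcommand is available depends on your `lms` CLI version, so treat this as an assumption to verify locally:

```bash
# Pre-load the model into memory before OpenClaw sends traffic.
lms load qwen/qwen3-coder-next
```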