Using SGLang with OpenClaw

This guide shows you how to connect SGLang to OpenClaw using SGLang's OpenAI-compatible `/v1` HTTP API. You will configure OpenClaw to either auto-discover SGLang models via `SGLANG_API_KEY` or define them explicitly with full model metadata. You can then use `sglang/...` models through OpenClaw for chat completions.
Prerequisites

- A running SGLang server exposing an OpenAI-compatible `/v1` HTTP API (for example, it should serve `/v1/models` and `/v1/chat/completions`).
- Access to a shell where you can set environment variables and run the `openclaw` CLI.
- An OpenClaw project where you can edit the models and agents configuration.
Steps
1. Expose SGLang on an OpenAI-compatible /v1 endpoint

Start SGLang with an OpenAI-compatible server so OpenClaw can talk to it as if it were an OpenAI backend. Make sure your base URL exposes `/v1` endpoints such as `/v1/models` and `/v1/chat/completions` on a reachable host and port.

```text
/v1/models
/v1/chat/completions
```
2. Set the SGLang API key for OpenClaw

Export `SGLANG_API_KEY` so OpenClaw knows to talk to SGLang and can opt into model discovery. If your SGLang server does not enforce auth, any non-empty value works; if it does, this must match your server's configuration.

```bash
export SGLANG_API_KEY="sglang-local"
```
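OpenAI-compatible servers generally expect this key as a Bearer token in the `Authorization` header. As a sketch of what OpenClaw (or any client) sends to SGLang, assuming standard Bearer-token auth:

```python
import os

# Read the key exported in the step above; fall back to the example
# placeholder so this sketch runs even when the variable is unset.
api_key = os.environ.get("SGLANG_API_KEY", "sglang-local")

# OpenAI-compatible servers typically expect a Bearer token header.
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
print(headers["Authorization"])
```

If your SGLang server was started without auth, this header is accepted regardless of the value.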
3. Run OpenClaw onboarding to configure models

Run OpenClaw onboarding with `SGLANG_API_KEY` set so it can discover the provider `sglang`. This lets OpenClaw query SGLang's `/v1/models` endpoint and convert the returned IDs into model entries automatically.

```bash
openclaw onboard
```
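Conceptually, discovery maps each ID returned by `/v1/models` to an `sglang/<id>` model reference. A minimal sketch of that conversion, using a sample payload in the OpenAI-compatible list format (the model IDs here are placeholders, not real SGLang model names):

```python
import json

# Sample /v1/models response in the OpenAI-compatible list format.
sample = json.loads("""
{"object": "list", "data": [
  {"id": "meta-llama/Llama-3.1-8B-Instruct", "object": "model"},
  {"id": "your-model-id", "object": "model"}
]}
""")

# Discovery boils down to prefixing each returned ID with the
# provider name, yielding references agents can use.
refs = [f"sglang/{m['id']}" for m in sample["data"]]
print(refs)
# → ['sglang/meta-llama/Llama-3.1-8B-Instruct', 'sglang/your-model-id']
```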
4. Point your default agent at an SGLang model

If you already know the SGLang model ID, configure your default agent to use it so all default agent calls go through SGLang. This is the quickest way to start using SGLang once discovery or manual configuration is in place.

```json
{
  agents: {
    defaults: {
      model: { primary: "sglang/your-model-id" },
    },
  },
}
```
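Model IDs can themselves contain slashes (for example Hugging Face-style names like `meta-llama/...`), so a reference is best read as "everything before the first slash is the provider". A sketch of that split, assuming OpenClaw resolves the prefix this way:

```python
# An agent's primary model reference is "<provider>/<model-id>".
# The model ID may itself contain slashes, so split only once.
primary = "sglang/meta-llama/Llama-3.1-8B-Instruct"
provider, model_id = primary.split("/", 1)
print(provider)   # → sglang
print(model_id)   # → meta-llama/Llama-3.1-8B-Instruct
```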
5. Configure SGLang as an explicit provider with manual models

Define `models.providers.sglang` explicitly. This disables auto-discovery and gives you full control over the model list and metadata OpenClaw uses.

```json
{
  models: {
    providers: {
      sglang: {
        baseUrl: "http://127.0.0.1:30000/v1",
        apiKey: "${SGLANG_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "your-model-id",
            name: "Local SGLang Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 128000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```
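Two misconfigurations account for most failures with an explicit block: a `baseUrl` that does not point at the `/v1` root, and model entries without an `id`. A quick hedged sanity check you can run over the values you intend to use (the dictionary mirrors the example block above; it is not OpenClaw's actual schema validator):

```python
# The same example values as the JSON block above.
provider = {
    "baseUrl": "http://127.0.0.1:30000/v1",
    "apiKey": "${SGLANG_API_KEY}",
    "api": "openai-completions",
    "models": [{"id": "your-model-id", "contextWindow": 128000, "maxTokens": 8192}],
}

# Catch the common misconfigurations before OpenClaw does.
base_ok = provider["baseUrl"].rstrip("/").endswith("/v1")
ids_ok = all("id" in m for m in provider["models"])
print(base_ok, ids_ok)  # → True True
```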
6. Verify SGLang is reachable from the OpenClaw host

Before debugging OpenClaw, confirm that the SGLang `/v1` API responds from the same machine where OpenClaw runs. This catches issues like SGLang not running, the wrong port, or a firewall blocking access.

```bash
curl http://127.0.0.1:30000/v1/models
```
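If you prefer a scriptable check, the same probe can be written with the Python standard library. This is a sketch, not part of OpenClaw; the URL is the example address used throughout this guide:

```python
import urllib.error
import urllib.request

def sglang_reachable(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if the /v1/models endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/models", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Adjust the host/port to wherever your SGLang server listens.
print(sglang_reachable("http://127.0.0.1:30000/v1"))
```

A `False` here means the next thing to fix is the server address, the server itself, or the network path, not OpenClaw's configuration.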
Configuration
| Option | Description | Example |
|---|---|---|
| SGLANG_API_KEY | API key value OpenClaw uses to talk to SGLang and to opt into model discovery when no explicit `models.providers.sglang` is defined. | sglang-local |
| models.providers.sglang.baseUrl | The base URL of your SGLang OpenAI-compatible `/v1` server. | http://127.0.0.1:30000/v1 |
| models.providers.sglang.apiKey | The API key OpenClaw sends to SGLang when using explicit provider configuration. | ${SGLANG_API_KEY} |
| models.providers.sglang.api | The API protocol OpenClaw uses for SGLang; SGLang uses the OpenAI-compatible completions API. | openai-completions |
| models.providers.sglang.models[].id | The SGLang model ID that OpenClaw calls on your SGLang server. | your-model-id |
| models.providers.sglang.models[].name | A human-readable name for the SGLang model inside OpenClaw. | Local SGLang Model |
| models.providers.sglang.models[].contextWindow | The maximum context window size you want OpenClaw to assume for this SGLang model. | 128000 |
| models.providers.sglang.models[].maxTokens | The maximum number of output tokens OpenClaw should request from this SGLang model. | 8192 |
| agents.defaults.model.primary | The primary model reference your default agent uses; for SGLang this is an `sglang/...` model ID. | sglang/your-model-id |
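The `${SGLANG_API_KEY}` value in the table is a placeholder expanded from the environment at load time. OpenClaw's exact interpolation rules are not documented here; the sketch below assumes simple `${VAR}` substitution as shown in the examples:

```python
import os
import re

# Ensure the variable exists so the sketch runs standalone.
os.environ.setdefault("SGLANG_API_KEY", "sglang-local")

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with environment variable values."""
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)

print(expand_env("${SGLANG_API_KEY}"))
```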
Troubleshooting

Server not reachable when OpenClaw tries to talk to SGLang.

SGLang may not be running or the host/port is wrong. From the OpenClaw machine, call the SGLang `/v1/models` endpoint directly; if this fails, fix the SGLang server address or start the server.

```bash
curl http://127.0.0.1:30000/v1/models
```

Auth errors when OpenClaw sends requests to SGLang.

Your SGLang server likely enforces authentication and the `SGLANG_API_KEY` value does not match. Export the key your server expects, or configure `models.providers.sglang` with the correct `apiKey`.

```bash
export SGLANG_API_KEY="sglang-local"
```

SGLang models are not auto-discovered and `sglang/...` models do not appear in OpenClaw.

Auto-discovery only runs when `SGLANG_API_KEY` is set and there is no explicit provider block for `sglang`. Remove the explicit `models.providers.sglang` block if you want discovery, or keep it and define your models manually in its `models` array.
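Once the checks above pass, a single chat completion is the simplest end-to-end smoke test. This standard-library sketch is not OpenClaw's client; the URL, key, and model ID are the example values used throughout this guide, so substitute your own:

```python
import json
import urllib.error
import urllib.request

def chat_once(base_url, api_key, model, prompt, timeout=30.0):
    """POST one chat completion; return the reply text, or None on any failure."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            data = json.load(resp)
        return data["choices"][0]["message"]["content"]
    except (urllib.error.URLError, OSError, KeyError, ValueError):
        return None

print(chat_once("http://127.0.0.1:30000/v1", "sglang-local", "your-model-id", "Say hello."))
```

A non-`None` reply confirms the server, auth, and model ID all line up; `None` points back at one of the troubleshooting items above.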