Using LiteLLM with OpenClaw
This guide shows you how to run OpenClaw through a LiteLLM proxy so you get centralized cost tracking, logging, and model routing across multiple providers. You will start a LiteLLM gateway, wire OpenClaw to it with a dedicated API key, and configure specific models like `claude-opus-4-6` and `gpt-4o`.
By the end, your OpenClaw agents will call LiteLLM instead of individual model APIs while keeping your existing agent configs mostly unchanged.
Prerequisites
- An existing OpenClaw setup where you can run the `openclaw` CLI.
- Python and pip available so you can install and run the LiteLLM proxy.
- Network access from your OpenClaw environment to `http://localhost:4000` or wherever LiteLLM runs.
- API keys for the underlying providers you plan to route through LiteLLM (for example `ANTHROPIC_API_KEY` and `OPENAI_API_KEY`).
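A quick way to sanity-check these before starting. This is a rough sketch; `openclaw --version` is assumed here and the exact flag may differ in your install:

```bash
# Rough sanity check for the prerequisites above (exact output varies)
openclaw --version
python3 -m pip --version
test -n "$ANTHROPIC_API_KEY" && echo "ANTHROPIC_API_KEY is set"
test -n "$OPENAI_API_KEY" && echo "OPENAI_API_KEY is set"
```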
Steps
1. Run OpenClaw onboarding with LiteLLM auth

If you are starting from a fresh OpenClaw project, use the onboarding flow so OpenClaw knows it should authenticate via LiteLLM. This wires up the provider choice without you editing config by hand.

```bash
openclaw onboard --auth-choice litellm-api-key
```
2. Install and start the LiteLLM proxy
Run the LiteLLM proxy so OpenClaw has a single HTTP endpoint to send all model traffic to. This example starts LiteLLM with `claude-opus-4-6` exposed; you can expand this later with more models and routing.
```bash
pip install 'litellm[proxy]'
litellm --model claude-opus-4-6
```
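To confirm the proxy is actually listening before wiring up OpenClaw, you can hit its OpenAI-compatible model list endpoint. A minimal sketch, assuming the default port and that no master key has been configured yet (add an `Authorization` header if yours requires one):

```bash
# List the models the proxy currently exposes (default port 4000)
curl -s "http://localhost:4000/v1/models"
```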
3. Export your LiteLLM API key for OpenClaw
OpenClaw reads the LiteLLM API key from the `LITELLM_API_KEY` environment variable. Set this in the same shell or process environment where you run `openclaw` so every request is authenticated against LiteLLM.
```bash
export LITELLM_API_KEY="sk-litellm-key"
```
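With the key exported, you can smoke-test it with a one-off request through the proxy's OpenAI-compatible chat completions endpoint before involving OpenClaw at all. A sketch, assuming the proxy from step 2 is still running:

```bash
# A single round-trip through LiteLLM; a completion in the response body
# confirms both the key and the upstream provider credentials work
curl -s "http://localhost:4000/v1/chat/completions" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "claude-opus-4-6", "messages": [{"role": "user", "content": "ping"}]}'
```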
4. Start OpenClaw with LiteLLM as the model provider
Once LiteLLM is running and the API key is exported, start OpenClaw normally. With the LiteLLM auth choice and env var in place, OpenClaw routes its model calls through the LiteLLM proxy.
```bash
export LITELLM_API_KEY="your-litellm-key"
openclaw
```
5. Configure LiteLLM as a provider in your OpenClaw config file
Define LiteLLM as a model provider in your OpenClaw config so you can reference specific LiteLLM models like `litellm/claude-opus-4-6` and `litellm/gpt-4o`. This also sets the default agent model to the LiteLLM-backed Claude model.
```json
{
  models: {
    providers: {
      litellm: {
        baseUrl: "http://localhost:4000",
        apiKey: "${LITELLM_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "claude-opus-4-6",
            name: "Claude Opus 4.6",
            reasoning: true,
            input: ["text", "image"],
            contextWindow: 200000,
            maxTokens: 64000,
          },
          {
            id: "gpt-4o",
            name: "GPT-4o",
            reasoning: false,
            input: ["text", "image"],
            contextWindow: 128000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
  agents: {
    defaults: {
      model: { primary: "litellm/claude-opus-4-6" },
    },
  },
}
```
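Note that `apiKey` references `${LITELLM_API_KEY}` rather than embedding the secret. Assuming OpenClaw expands `${VAR}` placeholders from the process environment (as the config above implies), confirm the variable is visible in the shell that launches `openclaw`:

```bash
# The config only holds a placeholder; the real secret must live in the environment
env | grep -q '^LITELLM_API_KEY=' && echo "LITELLM_API_KEY is set" || echo "LITELLM_API_KEY is missing"
```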
6. Create a LiteLLM virtual key dedicated to OpenClaw
Use LiteLLM virtual keys to give OpenClaw its own API key with a monthly budget cap. This lets you control and monitor OpenClaw’s spend separately from other clients hitting the same LiteLLM proxy.
```bash
curl -X POST "http://localhost:4000/key/generate" \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "key_alias": "openclaw",
    "max_budget": 50.00,
    "budget_duration": "monthly"
  }'
```
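The `/key/generate` response includes the new virtual key in its `key` field. A sketch for capturing it straight into the environment variable OpenClaw reads, assuming `jq` is installed:

```bash
# Generate the key and export it in one go; OpenClaw then authenticates
# with the budget-capped virtual key instead of the master key
export LITELLM_API_KEY=$(curl -s -X POST "http://localhost:4000/key/generate" \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{"key_alias": "openclaw", "max_budget": 50.00, "budget_duration": "monthly"}' \
  | jq -r '.key')
```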
7. Set up LiteLLM model routing for multiple backends
Create a LiteLLM `config.yaml` so model names like `claude-opus-4-6` and `gpt-4o` route to the right upstream providers. OpenClaw keeps requesting the same model IDs while LiteLLM swaps or fans out to Anthropic and OpenAI under the hood.
```yaml
model_list:
  - model_name: claude-opus-4-6
    litellm_params:
      model: claude-opus-4-6
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: gpt-4o
    litellm_params:
      model: gpt-4o
      api_key: os.environ/OPENAI_API_KEY
```
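To apply the routing table, restart the proxy with the config file instead of a single `--model` flag (the filename `config.yaml` is just the conventional choice):

```bash
# Serve every model in model_list from one endpoint
litellm --config config.yaml
```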
8. Inspect LiteLLM usage and spend for your OpenClaw traffic
Once OpenClaw is sending traffic through LiteLLM, use the LiteLLM dashboard APIs to inspect key info and spend logs. This helps you confirm that the right key is in use and that your budget limits behave as expected.
```bash
# Key info
curl "http://localhost:4000/key/info" \
  -H "Authorization: Bearer sk-litellm-key"

# Spend logs
curl "http://localhost:4000/spend/logs" \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY"
```
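The spend logs endpoint returns one entry per request, so the raw output grows quickly. A small convenience sketch, assuming `jq` is available and that entries come back as a JSON array:

```bash
# Peek at the most recent spend entry instead of scrolling the full log
curl -s "http://localhost:4000/spend/logs" \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY" | jq '.[0]'
```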
Configuration
| Option | Description | Example |
|---|---|---|
| `LITELLM_API_KEY` | API key that OpenClaw uses to authenticate against the LiteLLM proxy. | `sk-litellm-key` |
| `models.providers.litellm.baseUrl` | The LiteLLM proxy URL that OpenClaw sends model requests to. | `http://localhost:4000` |
| `models.providers.litellm.apiKey` | Reference to the LiteLLM API key environment variable inside the OpenClaw config. | `${LITELLM_API_KEY}` |
| `models.providers.litellm.api` | The LiteLLM API style OpenClaw uses when talking to the proxy. | `openai-completions` |
| `models.providers.litellm.models[0].id` | The LiteLLM model identifier for the Claude Opus model exposed through the proxy. | `claude-opus-4-6` |
| `models.providers.litellm.models[1].id` | The LiteLLM model identifier for the GPT-4o model exposed through the proxy. | `gpt-4o` |
| `agents.defaults.model.primary` | The default model OpenClaw agents use, here pointing at the LiteLLM-backed Claude model. | `litellm/claude-opus-4-6` |
Troubleshooting
OpenClaw starts but no requests reach LiteLLM on http://localhost:4000.
LiteLLM runs on `http://localhost:4000` by default, so if you changed the port or host you must update `baseUrl` in your OpenClaw config to match. Also confirm the `litellm` proxy process is running with the `litellm --model claude-opus-4-6` command.
```bash
pip install 'litellm[proxy]'
litellm --model claude-opus-4-6
```
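If the proxy process is running but OpenClaw still can't reach it, check that something is actually answering on the expected port. A quick probe (any HTTP status code means the port is reachable; a connection error means it is not):

```bash
# Prints an HTTP status if the port is open, or a curl error if nothing is listening
curl -sS -o /dev/null -w '%{http_code}\n' "http://localhost:4000"
```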
LiteLLM rejects requests from OpenClaw with an authentication error.

OpenClaw connects through LiteLLM's proxy-style OpenAI-compatible `/v1` endpoint and expects a valid `LITELLM_API_KEY`. Make sure you exported `LITELLM_API_KEY` in the same environment where you run `openclaw`, and if you use a virtual key, generate it via the LiteLLM key API and use that value.
```bash
export LITELLM_API_KEY="sk-litellm-key"
```

OpenClaw model calls through LiteLLM ignore OpenAI-only features like `service_tier` or prompt caching.
Native OpenAI-only request shaping does not apply through LiteLLM: no `service_tier`, no Responses `store`, no prompt-cache hints, and no OpenAI reasoning-compat payload shaping. Remove those assumptions from your prompts or configs when routing via LiteLLM.
You expect OpenClaw attribution headers to appear upstream but they are missing.
Hidden OpenClaw attribution headers (`originator`, `version`, `User-Agent`) are not injected on custom LiteLLM base URLs. If you rely on those headers for analytics, you need to collect logs at the LiteLLM layer instead.
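To attribute OpenClaw traffic at the LiteLLM layer instead, one option is to filter spend logs down to the virtual key you created for OpenClaw in step 6. A sketch, assuming log entries carry the key alias (the exact field name may differ across LiteLLM versions):

```bash
# Keep only entries attributed to the "openclaw" virtual key (field name assumed)
curl -s "http://localhost:4000/spend/logs" \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
  | jq '[.[] | select(.key_alias == "openclaw")]'
```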