Using DeepSeek with OpenClaw
This guide walks you through configuring DeepSeek as a model provider in OpenClaw using DeepSeek's OpenAI-compatible API. You will wire up your DeepSeek API key, onboard the provider via the OpenClaw CLI, and learn which built-in DeepSeek models you can target.
By the end, your OpenClaw Gateway will talk to DeepSeek using `deepseek/deepseek-chat` or `deepseek/deepseek-reasoner` as the model.
Prerequisites
- An active DeepSeek account with an API key from https://platform.deepseek.com/api_keys.
- An OpenClaw installation with the `openclaw` CLI available in your shell.
- Access to the machine where your OpenClaw Gateway runs so you can set environment variables or edit `~/.openclaw/.env`.
Steps
1. Get your DeepSeek API key
Before you touch OpenClaw, you need a DeepSeek API key that the gateway can use for authentication. The DeepSeek OpenAI-compatible API lives at `https://api.deepseek.com`, and OpenClaw talks to it using that key.
Log into DeepSeek and create an API key at the URL from the docs so you have a token ready for the next steps.
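Once you have the key, you can export it into your current shell session as a quick sanity check before onboarding. A minimal sketch; the value below is a placeholder, not a real key:

```shell
# Make the key available to your current shell session.
# Replace the placeholder with the key you created (hypothetical value).
export DEEPSEEK_API_KEY="sk-deepseek-1234567890abcdef"

# Confirm the variable is set without printing the secret.
[ -n "$DEEPSEEK_API_KEY" ] && echo "DEEPSEEK_API_KEY is set"
```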
```text
Get your API key at https://platform.deepseek.com/api_keys.
```
2. Onboard DeepSeek interactively with the OpenClaw CLI
Use the OpenClaw CLI to onboard DeepSeek so the gateway knows about the provider and can store your key. This interactive flow prompts you for the API key and configures `deepseek/deepseek-chat` as the default model, which is usually what you want for general chat-style agents.
```bash
openclaw onboard --auth-choice deepseek-api-key
```
3. Onboard DeepSeek non-interactively for automation
If you script your infrastructure or CI, run the non-interactive onboarding command instead. It lets you pass `DEEPSEEK_API_KEY` from your environment, set the mode to `local`, skip health checks, and accept the risk acknowledgment without any prompts.
It’s ideal for repeatable setups on servers.
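In CI it also helps to fail fast when the secret was never injected, so the onboarding command below never runs with an empty key. A hedged sketch; the guard function is hypothetical and nothing in it is OpenClaw-specific:

```shell
#!/usr/bin/env sh
# Pre-flight guard: refuse to proceed without the secret.
require_deepseek_key() {
  if [ -z "${DEEPSEEK_API_KEY:-}" ]; then
    echo "error: DEEPSEEK_API_KEY is not set" >&2
    return 1
  fi
  echo "DEEPSEEK_API_KEY present"
}

# Demonstration with a hypothetical placeholder value:
DEEPSEEK_API_KEY="sk-deepseek-1234567890abcdef"
require_deepseek_key
# In CI you would follow this with the onboarding command from this step.
```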
```bash
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice deepseek-api-key \
  --deepseek-api-key "$DEEPSEEK_API_KEY" \
  --skip-health \
  --accept-risk
```
4. Expose DEEPSEEK_API_KEY to the OpenClaw Gateway process
When you run the OpenClaw Gateway as a daemon (launchd or systemd), the environment from your shell usually does not propagate automatically. Make the variable visible to the daemon process, for example via `~/.openclaw/.env` or your `env.shellEnv` configuration.
If the daemon cannot see this variable, DeepSeek calls will fail even if onboarding worked.
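One way to make the variable visible to the daemon is to write it into `~/.openclaw/.env` (the path named in this guide). A minimal sketch; the `OPENCLAW_ENV_FILE` override and the fallback placeholder value are purely for illustration:

```shell
# Write DEEPSEEK_API_KEY into OpenClaw's env file so the daemon can read it.
# OPENCLAW_ENV_FILE is a hypothetical override used here for illustration.
ENV_FILE="${OPENCLAW_ENV_FILE:-$HOME/.openclaw/.env}"
mkdir -p "$(dirname "$ENV_FILE")"
touch "$ENV_FILE"

# Drop any stale entry, then append the current value.
grep -v '^DEEPSEEK_API_KEY=' "$ENV_FILE" > "$ENV_FILE.tmp" || true
printf 'DEEPSEEK_API_KEY=%s\n' "${DEEPSEEK_API_KEY:-sk-deepseek-1234567890abcdef}" >> "$ENV_FILE.tmp"
mv "$ENV_FILE.tmp" "$ENV_FILE"

# Keep the secret readable only by your user.
chmod 600 "$ENV_FILE"
echo "wrote DEEPSEEK_API_KEY to $ENV_FILE"
```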
```text
If the Gateway runs as a daemon (launchd/systemd), make sure `DEEPSEEK_API_KEY` is available to that process (for example, in `~/.openclaw/.env` or via `env.shellEnv`).
```
5. Choose the right DeepSeek model from the built-in catalog
OpenClaw ships with a built-in catalog for DeepSeek so you can reference models by stable IDs. `deepseek/deepseek-chat` is the default non-thinking chat surface, while `deepseek/deepseek-reasoner` exposes the reasoning-enabled surface with a larger max output.
Pick the one that matches your use case and configure your agents or calls accordingly.
| Model ref | Name | Context | Max output | Notes |
|---|---|---|---|---|
| `deepseek/deepseek-chat` | DeepSeek Chat | 131,072 | 8,192 | Default model; DeepSeek V3.2 non-thinking surface |
| `deepseek/deepseek-reasoner` | DeepSeek Reasoner | 131,072 | 65,536 | Reasoning-enabled V3.2 surface |
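If you select the model ref programmatically, for example in a deployment script, the choice reduces to the two catalog IDs above. A minimal sketch; the helper function is hypothetical:

```shell
# Map a task type to a catalog model ref (IDs from the built-in catalog).
pick_deepseek_model() {
  case "$1" in
    reasoning) echo "deepseek/deepseek-reasoner" ;;  # thinking surface, larger max output
    *)         echo "deepseek/deepseek-chat" ;;      # default non-thinking chat surface
  esac
}

pick_deepseek_model reasoning   # prints deepseek/deepseek-reasoner
pick_deepseek_model chat        # prints deepseek/deepseek-chat
```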
Configuration
| Option | Description | Example |
|---|---|---|
| `DEEPSEEK_API_KEY` | DeepSeek API key used by OpenClaw to authenticate against the DeepSeek OpenAI-compatible API at https://api.deepseek.com. | `sk-deepseek-1234567890abcdef` |
Troubleshooting
OpenClaw Gateway cannot call DeepSeek when running under systemd, even though `openclaw onboard` succeeded.
The daemon process does not see `DEEPSEEK_API_KEY`, so authentication fails at runtime. Add the key to `~/.openclaw/.env` or expose it via `env.shellEnv`, then restart the service so it picks up the new environment.
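Under systemd, one common fix is a drop-in override that injects the variable into the service environment. A sketch only; the unit name `openclaw.service` and the key value are assumptions, so adjust both to your setup:

```ini
# /etc/systemd/system/openclaw.service.d/deepseek.conf  (hypothetical unit name)
[Service]
Environment=DEEPSEEK_API_KEY=sk-deepseek-1234567890abcdef
```

After adding the drop-in, run `systemctl daemon-reload` and restart the service so it picks up the new environment.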
If the Gateway runs as a daemon (launchd/systemd), make sure `DEEPSEEK_API_KEY` is available to that process (for example, in `~/.openclaw/.env` or via `env.shellEnv`).

The default model in OpenClaw is not the DeepSeek model you expect after onboarding.
The interactive DeepSeek onboarding sets `deepseek/deepseek-chat` as the default model. If you want the reasoning surface instead, explicitly configure `deepseek/deepseek-reasoner` in your agent or model configuration after onboarding so OpenClaw routes calls to the correct model.
Model ref: `deepseek/deepseek-chat` (default)
Model ref: `deepseek/deepseek-reasoner`