Model providers
Using Z.AI with OpenClaw
Set up Z.AI as a model provider inside OpenClaw. Authenticate with a Z.AI API key, pick the right regional or Coding Plan endpoint, and set a default GLM model for your agents.
OpenClaw talks to Z.AI's GLM models, such as `glm-5.1`, through the `zai` provider.
Prerequisites
- ✓A Z.AI account with access to GLM models and an API key created in the Z.AI console.
- ✓An existing OpenClaw installation with the `openclaw` CLI available in your shell.
- ✓Network access from your OpenClaw environment to the Z.AI Chat Completions API.
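Before onboarding, a quick pre-flight check can confirm the first two prerequisites. This is only a sketch: `check_prereqs` is a hypothetical helper, and `sh` stands in for the `openclaw` CLI so the example runs anywhere.

```shell
#!/bin/sh
# Pre-flight sketch: confirm a CLI is on PATH and an API key value is non-empty.
check_prereqs() {
  cli="$1"; key="$2"
  if ! command -v "$cli" >/dev/null 2>&1; then
    echo "missing-cli"
  elif [ -z "$key" ]; then
    echo "missing-key"
  else
    echo "ok"
  fi
}

# Example: substitute "openclaw" and "$ZAI_API_KEY" in a real environment.
check_prereqs sh "sk-example"   # → ok
```

In practice you would call `check_prereqs openclaw "$ZAI_API_KEY"` and only proceed to onboarding on `ok`.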
Steps
1. Decide between auto-detect and explicit regional onboarding
Z.AI supports both an auto-detected endpoint based on your key and explicit regional or Coding Plan endpoints. Decide up front whether you want OpenClaw to infer the correct base URL from the key prefix or to force a specific Coding Plan or region.
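To make the auto-detect idea concrete, here is a purely hypothetical sketch of prefix-based endpoint selection. The prefixes and URLs below are placeholders, not Z.AI's actual key formats or base URLs:

```shell
#!/bin/sh
# Hypothetical illustration of routing on a key prefix.
# The "cn-" prefix and both URLs are made-up placeholder values.
pick_endpoint() {
  case "$1" in
    cn-*) echo "https://cn.example.invalid/api" ;;      # placeholder China endpoint
    *)    echo "https://global.example.invalid/api" ;;  # placeholder global endpoint
  esac
}

pick_endpoint "cn-abc123"   # → https://cn.example.invalid/api
```

Auto-detect onboarding does something analogous internally, which is why it is the simplest choice when your key unambiguously identifies your region and plan.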
2. Run onboarding with the auto-detected Z.AI endpoint
The `zai-api-key` auth choice lets OpenClaw detect the correct Z.AI endpoint from your API key. This is best for most setups because it configures the correct base URL automatically from the key prefix.
Run this in the environment where your OpenClaw agents execute.
```bash
openclaw onboard --auth-choice zai-api-key
```
3. Run onboarding with an explicit Z.AI regional or Coding Plan endpoint
If you need to force a specific Coding Plan or general API surface, use one of the explicit onboarding choices instead of auto-detect. These commands lock OpenClaw to the global or China region and to either the Coding Plan or general API.
Choose the endpoint that matches the region and plan where your Z.AI account is provisioned.
```bash
# Coding Plan Global (recommended for Coding Plan users)
openclaw onboard --auth-choice zai-coding-global

# Coding Plan CN (China region)
openclaw onboard --auth-choice zai-coding-cn

# General API
openclaw onboard --auth-choice zai-global

# General API CN (China region)
openclaw onboard --auth-choice zai-cn
```
4. Set a default Z.AI GLM model for your agents
Set a default model so your agents use Z.AI's GLM models without specifying a model on every call. The config below sets `zai/glm-5.1` as the primary model and wires in your `ZAI_API_KEY`.
Add this to your OpenClaw config under `agents.defaults`.
```json
{
  env: { ZAI_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: { primary: "zai/glm-5.1" },
    },
  },
}
```
5. Verify Z.AI models are registered in OpenClaw
After onboarding, confirm that OpenClaw has registered the Z.AI catalog. Listing models for the `zai` provider helps you catch auth or region issues early.
Run this from the same environment where you configured the provider.
```bash
openclaw models list --provider zai
```
6. Optionally disable Z.AI tool-call streaming for specific models
Z.AI enables `tool_stream` by default for tool-call streaming, which may not fit every integration. You can override this behavior per model by setting `tool_stream: false` under the `zai/<model>` entry.
Use this when your tools or downstream consumers expect non-streaming tool calls.
```json
{
  agents: {
    defaults: {
      models: {
        "zai/<model>": {
          params: { tool_stream: false },
        },
      },
    },
  },
}
```
Configuration
| Option | Description | Example |
|---|---|---|
| ZAI_API_KEY | Z.AI API key used by the `zai` provider for Bearer authentication against the Z.AI Chat Completions API. | sk-... |
| agents.defaults.model.primary | Sets the default primary model for all agents, here pointing to a Z.AI GLM model reference. | zai/glm-5.1 |
| agents.defaults.models."zai/<model>".params.tool_stream | Controls whether tool-call streaming is enabled for a specific Z.AI model; `tool_stream` is enabled by default. | false |
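Putting the three options above together, a combined config might look like the sketch below. The model name `zai/glm-5.1` and the key value are placeholders; substitute your own model reference and API key.

```json
{
  env: { ZAI_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: { primary: "zai/glm-5.1" },
      models: {
        "zai/glm-5.1": {
          params: { tool_stream: false },
        },
      },
    },
  },
}
```

The `tool_stream: false` override is only needed if your integration expects non-streaming tool calls; omit it to keep Z.AI's default streaming behavior.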
Troubleshooting
Z.AI models do not appear when running `openclaw models list --provider zai`
This usually means onboarding did not run with the correct auth choice, or the `ZAI_API_KEY` is missing from your environment. Re-run onboarding and confirm that your config's `env` block contains a valid key, for example `env: { ZAI_API_KEY: "sk-..." }`.
```bash
openclaw onboard --auth-choice zai-api-key
```
Tool calls from Z.AI models stream when your integration expects non-streaming behavior
Z.AI enables `tool_stream` by default for tool-call streaming, which can break consumers that assume a single tool result payload. Override it in `agents.defaults.models` for `"zai/<model>"` and set `params: { tool_stream: false }`.
```json
{
  agents: {
    defaults: {
      models: {
        "zai/<model>": {
          params: { tool_stream: false },
        },
      },
    },
  },
}
```