Using Together AI with OpenClaw
This guide shows you how to configure Together AI as a model provider in OpenClaw. You will wire up your Together API key, set a default Together model for your agents, and optionally make Together the default video generation backend.
By the end, your OpenClaw setup will route both text and video generation through Together’s OpenAI-compatible API.
Prerequisites
- ✓ An active Together AI account with a valid `TOGETHER_API_KEY`.
- ✓ An existing OpenClaw installation with the CLI available so you can run `openclaw onboard`.
- ✓ Access to the machine or environment where your OpenClaw Gateway runs, including its home directory (for example `~/.openclaw/.env`).
Steps
1. Understand how Together AI plugs into OpenClaw
Before you wire anything up, you need to know how OpenClaw talks to Together. OpenClaw treats Together as a `together` provider, authenticates with `TOGETHER_API_KEY`, and sends requests to an OpenAI-compatible API at the Together base URL.
Keep these details in mind when debugging or when you proxy traffic.
```text
Provider: `together`
Auth: `TOGETHER_API_KEY`
API: OpenAI-compatible
Base URL: `https://api.together.xyz/v1`
```
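As a quick sanity check, you can hit that base URL directly before involving OpenClaw at all. A minimal sketch, assuming `GET /v1/models` is available (it is a standard route on OpenAI-compatible APIs); the check is skipped when no key is exported:

```shell
# Smoke-test the endpoint OpenClaw will call, using the base URL above.
BASE_URL="https://api.together.xyz/v1"
if [ -n "${TOGETHER_API_KEY:-}" ]; then
  curl -sf -H "Authorization: Bearer ${TOGETHER_API_KEY}" "${BASE_URL}/models" >/dev/null \
    && echo "Together reachable at ${BASE_URL}" \
    || echo "Request failed; check the key and your network"
else
  echo "TOGETHER_API_KEY is not set; skipping check"
fi
```

If this fails while your shell has the key, suspect a proxy or a daemon environment issue (see the Gateway step below is not needed; the key simply is not exported in the current shell).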
2. Onboard Together AI and store the API key
Use the OpenClaw onboarding command to register Together AI and store your API key in the Gateway config. This keeps credentials out of your app code and lets OpenClaw reuse them across agents.
Run this interactively when you first add Together to a project.
```bash
openclaw onboard --auth-choice together-api-key
```
3. Set a Together model as the default agent model
Configure your agents so OpenClaw uses a Together model by default. This guide uses `together/moonshotai/Kimi-K2.5`, which is also the default model in the bundled catalog and has reasoning enabled.
Add this block to your OpenClaw config so any agent without an explicit model uses Together.
```json
{
  agents: {
    defaults: {
      model: { primary: "together/moonshotai/Kimi-K2.5" },
    },
  },
}
```
4. Automate onboarding with a non-interactive Together setup
For CI, scripts, or containerized deployments, run the non-interactive onboarding flow. It stores your API key and sets `together/moonshotai/Kimi-K2.5` as the default model without prompting.
Use this in your provisioning scripts so new environments come up ready to call Together.
```bash
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice together-api-key \
  --together-api-key "$TOGETHER_API_KEY"
```
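In provisioning scripts it can help to guard that same command so a missing key or missing CLI fails soft with a readable message instead of a cryptic error. A sketch using only the flags shown above:

```shell
# Guarded onboarding for provisioning scripts: skip with a clear message
# when prerequisites are missing, otherwise run the real command.
ONBOARD_STATUS=skipped
if [ -z "${TOGETHER_API_KEY:-}" ]; then
  echo "TOGETHER_API_KEY must be exported before onboarding" >&2
elif ! command -v openclaw >/dev/null 2>&1; then
  echo "openclaw CLI not found on PATH" >&2
else
  openclaw onboard --non-interactive \
    --mode local \
    --auth-choice together-api-key \
    --together-api-key "$TOGETHER_API_KEY" && ONBOARD_STATUS=done
fi
echo "onboarding: $ONBOARD_STATUS"
```

This keeps the script idempotent enough to rerun on every provisioning pass.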
5. Expose TOGETHER_API_KEY to the Gateway daemon
If your OpenClaw Gateway runs under launchd or systemd, its environment may not see your shell’s `TOGETHER_API_KEY`. Make the key available to that process, for example in `~/.openclaw/.env` or via `env.shellEnv`.
Without this, the Gateway will fail when it tries to call Together.
```text
If the Gateway runs as a daemon (launchd/systemd), make sure `TOGETHER_API_KEY` is available to that process (for example, in `~/.openclaw/.env` or via `env.shellEnv`).
```
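For a systemd-managed Gateway, one way to wire this up is the `~/.openclaw/.env` file plus a drop-in override. This is a sketch under assumptions: the user unit name `openclaw-gateway` is hypothetical (adjust it to however your Gateway service is actually named), and the env-file path comes from the note above:

```shell
# 1) Persist the key in the env file the note above mentions.
ENV_FILE="$HOME/.openclaw/.env"
mkdir -p "$(dirname "$ENV_FILE")"
grep -q '^TOGETHER_API_KEY=' "$ENV_FILE" 2>/dev/null \
  || printf 'TOGETHER_API_KEY=%s\n' "${TOGETHER_API_KEY:-replace-with-your-key}" >> "$ENV_FILE"

# 2) Point the (hypothetical) user unit at that file via a drop-in override.
DROPIN_DIR="$HOME/.config/systemd/user/openclaw-gateway.service.d"
mkdir -p "$DROPIN_DIR"
cat > "$DROPIN_DIR/override.conf" <<'EOF'
[Service]
EnvironmentFile=%h/.openclaw/.env
EOF

# Reload if systemd is present; harmless to skip elsewhere (e.g. macOS/launchd).
command -v systemctl >/dev/null 2>&1 && systemctl --user daemon-reload || true
```

`%h` is systemd’s specifier for the unit owner’s home directory, so the override works without hard-coding a username.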
6. Use Together’s built-in model catalog in OpenClaw
OpenClaw ships with a bundled Together catalog so you can reference models by their full Together IDs. Pick from Kimi, GLM, Llama, and DeepSeek variants depending on your use case.
Use these exact model refs in your agent configs or when overriding models per request.
| Model ref | Name | Input | Context | Notes |
| --- | --- | --- | --- | --- |
| `together/moonshotai/Kimi-K2.5` | Kimi K2.5 | text, image | 262,144 | Default model; reasoning enabled |
| `together/zai-org/GLM-4.7` | GLM 4.7 FP8 | text | 202,752 | General-purpose text model |
| `together/meta-llama/Llama-3.3-70B-Instruct-Turbo` | Llama 3.3 70B Instruct Turbo | text | 131,072 | Fast instruction model |
| `together/meta-llama/Llama-4-Scout-17B-16E-Instruct` | Llama 4 Scout 17B 16E Instruct | text, image | 10,000,000 | Multimodal |
| `together/meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8` | Llama 4 Maverick 17B 128E Instruct FP8 | text, image | 20,000,000 | Multimodal |
| `together/deepseek-ai/DeepSeek-V3.1` | DeepSeek V3.1 | text | 131,072 | General text model |
| `together/deepseek-ai/DeepSeek-R1` | DeepSeek R1 | text | 131,072 | Reasoning model |
| `together/moonshotai/Kimi-K2-Instruct-0905` | Kimi K2 Instruct 0905 | text | 262,144 | Secondary Kimi text model |
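If several scripts or agent configs reference these refs, centralizing the choice avoids typos in the long model IDs. A sketch with a helper of our own naming (`pick_model` is not an OpenClaw command); the refs are taken verbatim from the table above:

```shell
# Map a use case to a catalog model ref (refs copied from the table above).
pick_model() {
  case "$1" in
    reasoning)  echo "together/deepseek-ai/DeepSeek-R1" ;;
    fast)       echo "together/meta-llama/Llama-3.3-70B-Instruct-Turbo" ;;
    multimodal) echo "together/meta-llama/Llama-4-Scout-17B-16E-Instruct" ;;
    *)          echo "together/moonshotai/Kimi-K2.5" ;;  # catalog default
  esac
}

pick_model reasoning   # prints together/deepseek-ai/DeepSeek-R1
```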
7. Configure Together as the default video generation provider
The bundled Together plugin also wires into OpenClaw’s shared `video_generate` tool, defaulting to the Wan 2.2 model (`together/Wan-AI/Wan2.2-T2V-A14B`).
This enables text-to-video and single-image reference flows with support for `aspectRatio` and `resolution`.
```json
{
  agents: {
    defaults: {
      videoGenerationModel: {
        primary: "together/Wan-AI/Wan2.2-T2V-A14B",
      },
    },
  },
}
```
Configuration
| Option | Description | Example |
|---|---|---|
| `TOGETHER_API_KEY` | API key OpenClaw uses to authenticate with Together AI’s OpenAI-compatible API. | `sk-together-1234567890abcdef` |
| `agents.defaults.model.primary` | Sets the default Together text model for all agents that do not override the model. | `together/moonshotai/Kimi-K2.5` |
| `agents.defaults.videoGenerationModel.primary` | Sets the default Together video generation model used by the shared `video_generate` tool. | `together/Wan-AI/Wan2.2-T2V-A14B` |
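Putting the options above together, a combined config that sets both the text and video defaults (same JSON5-style syntax as the snippets earlier in this guide) might look like:

```json
{
  agents: {
    defaults: {
      model: { primary: "together/moonshotai/Kimi-K2.5" },
      videoGenerationModel: { primary: "together/Wan-AI/Wan2.2-T2V-A14B" },
    },
  },
}
```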
Troubleshooting
Gateway calls to Together fail when running under systemd or launchd, but work from your shell.
The daemon process does not see `TOGETHER_API_KEY`. Make the key available to that process, for example in `~/.openclaw/.env` or via `env.shellEnv`, so the service inherits it.

OpenClaw uses the wrong default model instead of Together’s Kimi K2.5.
The default model is not set to the Together preset. Set `agents.defaults.model.primary` to `"together/moonshotai/Kimi-K2.5"`, or rerun the onboarding preset, which sets this as the default model.
```json
{
  agents: {
    defaults: {
      model: { primary: "together/moonshotai/Kimi-K2.5" },
    },
  },
}
```