Using NVIDIA NIM with OpenClaw
This guide shows you how to wire up NVIDIA NIM’s OpenAI-compatible API to OpenClaw. You’ll export the right API key, point OpenClaw at NVIDIA’s models, and set a default agent model.
By the end, your OpenClaw agents will call NVIDIA-hosted models like `nvidia/nvidia/nemotron-3-super-120b-a12b` through the NVIDIA `/v1` endpoint.
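Under the hood, OpenClaw talks to the same OpenAI-compatible `/v1` endpoint you could call directly. The sketch below builds (but does not send) such a request using only the standard library; the `/chat/completions` route follows the OpenAI convention, and stripping OpenClaw's leading `nvidia/` provider prefix to get the API-side model id is an assumption, not documented OpenClaw behavior.

```python
import json
import os
import urllib.request

NVIDIA_BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request.

    `model` is the API-side id, i.e. the OpenClaw ref without its leading
    `nvidia/` provider prefix (an assumption for illustration).
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{NVIDIA_BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('NVIDIA_API_KEY', 'nvapi-...')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("nvidia/nemotron-3-super-120b-a12b", "Hello")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` requires a valid `NVIDIA_API_KEY` and network access to the endpoint.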
Prerequisites
- An NVIDIA account with access to API keys at https://build.nvidia.com/settings/api-keys.
- An existing OpenClaw installation with the `openclaw` CLI available in your shell.
- Network access from your OpenClaw environment to `https://integrate.api.nvidia.com/v1`.
Steps
1. Create an NVIDIA API key
Start by creating an API key in your NVIDIA account so OpenClaw can authenticate against the NIM endpoint. You do this in the NVIDIA web UI, and you will use the resulting key as an environment variable in later steps.
Create an API key at https://build.nvidia.com/settings/api-keys.
2. Export your NVIDIA_API_KEY and run OpenClaw onboarding
Set the `NVIDIA_API_KEY` environment variable in your shell so OpenClaw can auto-enable the NVIDIA provider. Then run the onboarding command; using `--auth-choice skip` keeps onboarding focused on local config without extra auth prompts.
```bash
export NVIDIA_API_KEY="nvapi-..."
openclaw onboard --auth-choice skip
```
3. Set an NVIDIA model as the active model
Tell OpenClaw which NVIDIA model to use by default. This command registers `nvidia/nvidia/nemotron-3-super-120b-a12b` as the current model for your workflows, so subsequent agent runs will target this model unless you override it.
```bash
openclaw models set nvidia/nvidia/nemotron-3-super-120b-a12b
```
4. Configure NVIDIA as a provider in your OpenClaw config
If you maintain an explicit config file, add the NVIDIA provider block so your setup is self-documenting and easy to version-control. This config points OpenClaw at NVIDIA’s OpenAI-compatible `/v1` endpoint and sets the default agent model to the Nemotron 3 Super 120B reference.
```json
{
  "env": { "NVIDIA_API_KEY": "nvapi-..." },
  "models": {
    "providers": {
      "nvidia": {
        "baseUrl": "https://integrate.api.nvidia.com/v1",
        "api": "openai-completions"
      }
    }
  },
  "agents": {
    "defaults": {
      "model": { "primary": "nvidia/nvidia/nemotron-3-super-120b-a12b" }
    }
  }
}
```
5. Choose a model from the built-in NVIDIA catalog
OpenClaw ships with a static catalog of NVIDIA-backed models you can target by reference. Pick the model ref that matches your use case and swap it into your `openclaw models set` command or config file.
| Model ref | Name | Context (tokens) | Max output (tokens) |
| --- | --- | --- | --- |
| `nvidia/nvidia/nemotron-3-super-120b-a12b` | NVIDIA Nemotron 3 Super 120B | 262,144 | 8,192 |
| `nvidia/moonshotai/kimi-k2.5` | Kimi K2.5 | 262,144 | 8,192 |
| `nvidia/minimaxai/minimax-m2.5` | Minimax M2.5 | 196,608 | 8,192 |
| `nvidia/z-ai/glm5` | GLM 5 | 202,752 | 8,192 |
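For scripting model selection, the catalog can be mirrored as a small lookup table. The dict below simply transcribes the table; the helper function is illustrative, not part of OpenClaw.

```python
# Transcribed from the OpenClaw NVIDIA catalog table above.
NVIDIA_CATALOG = {
    "nvidia/nvidia/nemotron-3-super-120b-a12b": {
        "name": "NVIDIA Nemotron 3 Super 120B", "context": 262_144, "max_output": 8_192},
    "nvidia/moonshotai/kimi-k2.5": {
        "name": "Kimi K2.5", "context": 262_144, "max_output": 8_192},
    "nvidia/minimaxai/minimax-m2.5": {
        "name": "Minimax M2.5", "context": 196_608, "max_output": 8_192},
    "nvidia/z-ai/glm5": {
        "name": "GLM 5", "context": 202_752, "max_output": 8_192},
}

def models_with_context(min_tokens: int) -> list[str]:
    """Return catalog model refs whose context window is at least min_tokens."""
    return [ref for ref, m in NVIDIA_CATALOG.items() if m["context"] >= min_tokens]

# Every catalog model except Minimax M2.5 offers at least 200k tokens of context.
print(models_with_context(200_000))
```

Swap any returned ref into `openclaw models set <ref>` or the `agents.defaults.model.primary` config key.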
Configuration
| Option | Description | Example |
|---|---|---|
| NVIDIA_API_KEY | API key used by OpenClaw to authenticate against NVIDIA’s OpenAI-compatible API. | nvapi-abc1234567890 |
| env.NVIDIA_API_KEY | Config entry that injects the NVIDIA API key into the OpenClaw runtime environment. | nvapi-abc1234567890 |
| models.providers.nvidia.baseUrl | Base URL for the NVIDIA OpenAI-compatible completions endpoint. | https://integrate.api.nvidia.com/v1 |
| models.providers.nvidia.api | API protocol that OpenClaw uses when talking to NVIDIA; set to the OpenAI-style completions API. | openai-completions |
| agents.defaults.model.primary | Default primary model ref for agents when using the NVIDIA provider. | nvidia/nvidia/nemotron-3-super-120b-a12b |
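The options in this table map one-to-one onto the JSON from step 4, so a small generator can keep them consistent when you script your setup. This is a sketch: the schema is taken from this guide, not from an official OpenClaw API.

```python
import json

def nvidia_provider_config(api_key: str, model_ref: str) -> dict:
    """Assemble the OpenClaw config fragment described in the table above."""
    return {
        "env": {"NVIDIA_API_KEY": api_key},
        "models": {
            "providers": {
                "nvidia": {
                    "baseUrl": "https://integrate.api.nvidia.com/v1",
                    "api": "openai-completions",
                }
            }
        },
        "agents": {
            "defaults": {"model": {"primary": model_ref}},
        },
    }

cfg = nvidia_provider_config("nvapi-...", "nvidia/nvidia/nemotron-3-super-120b-a12b")
print(json.dumps(cfg, indent=2))
```

Writing the result to your config file with `json.dump` guarantees the output is valid JSON.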
Troubleshooting
NVIDIA models do not appear or calls fail when using the NVIDIA provider
The NVIDIA provider only auto-enables when the `NVIDIA_API_KEY` environment variable is set. Export `NVIDIA_API_KEY` in the same shell where you run `openclaw` so the provider activates without extra config.
```bash
export NVIDIA_API_KEY="nvapi-..."
```
API key shows up in shell history or `ps` output after running OpenClaw commands
Passing the NVIDIA key with `--token` exposes it in shell history and process listings. Use the `NVIDIA_API_KEY` environment variable instead so the key stays out of command-line arguments.
```bash
export NVIDIA_API_KEY="nvapi-..."
```
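A quick scripted sanity check can catch both problems above before you run `openclaw`: a missing variable, or a value that does not look like an NVIDIA key. The `nvapi-` prefix check simply matches the key format used throughout this guide.

```python
import os

def check_nvidia_key(env: dict) -> str:
    """Return a short diagnosis of the NVIDIA_API_KEY setting in `env`."""
    key = env.get("NVIDIA_API_KEY", "")
    if not key:
        return "missing: export NVIDIA_API_KEY in the shell that runs openclaw"
    if not key.startswith("nvapi-"):
        return "suspicious: NVIDIA keys in this guide start with 'nvapi-'"
    return "ok"

print(check_nvidia_key(dict(os.environ)))
```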