Model providers
Connect any LLM to OpenClaw
API keys, model selection, and provider-specific options — for Anthropic, OpenAI, Bedrock, Ollama, and more.
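Most of the guides below follow the same basic pattern: supply the provider's API key, then select a model in your OpenClaw configuration. As a quick orientation, the snippet below sketches the first step using the environment variable names these providers document; how OpenClaw picks up each key and which model identifiers it accepts vary per provider, so treat this as a sketch and follow the matching guide for authoritative settings.

```shell
# Sketch: the common first step across provider guides is exporting
# the provider's API key. Variable names below are the ones the
# providers themselves document; the values are placeholders.
export ANTHROPIC_API_KEY="your-anthropic-key"   # Anthropic Claude
export OPENAI_API_KEY="your-openai-key"         # OpenAI

# Local backends (Ollama, LM Studio, vLLM, SGLang) usually need no
# key at all -- just a running server; see the individual guides.
```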
Using Alibaba Model Studio with OpenClaw
Alibaba Cloud Model Studio API integration for OpenClaw agents.
Using BytePlus with OpenClaw
BytePlus (ByteDance) model API configuration for OpenClaw.
Using fal with OpenClaw
Fast serverless AI inference via fal for OpenClaw image and media tasks.
Using GLM (Zhipu AI) with OpenClaw
Zhipu AI GLM model API configuration for OpenClaw.
Using Hugging Face with OpenClaw
Connect Hugging Face Inference API and hosted models to OpenClaw.
Using Inferrs with OpenClaw
Inferrs serverless model inference configuration for OpenClaw.
Using Kilocode with OpenClaw
Kilocode model provider setup for OpenClaw agents.
Using LiteLLM with OpenClaw
Route OpenClaw model calls through LiteLLM proxy for unified provider access.
Using LM Studio with OpenClaw
Run local models via LM Studio's OpenAI-compatible server with OpenClaw.
Using MiniMax with OpenClaw
MiniMax (Hailuo) multimodal model integration for OpenClaw.
Using Moonshot AI with OpenClaw
Moonshot AI (Kimi) long-context model configuration for OpenClaw.
Using NVIDIA NIM with OpenClaw
Deploy optimized NVIDIA NIM microservices as model backends for OpenClaw.
Using Qianfan (Baidu) with OpenClaw
Baidu Qianfan ERNIE model API setup for OpenClaw.
Using Qwen with OpenClaw
Alibaba Qwen cloud model API setup for OpenClaw.
Using Runway with OpenClaw
Runway's Gen-series video and image generation API for OpenClaw.
Using SGLang with OpenClaw
SGLang serving runtime for structured generation with OpenClaw.
Using StepFun with OpenClaw
StepFun (Step-1) model API configuration for OpenClaw agents.
Using Synthetic with OpenClaw
Synthetic model provider setup and configuration for OpenClaw.
Using Venice AI with OpenClaw
Privacy-preserving inference via Venice AI for OpenClaw agents.
Using vLLM with OpenClaw
High-throughput local or self-hosted inference via vLLM for OpenClaw.
Using Volcengine (Doubao) with OpenClaw
ByteDance Volcengine Doubao model API integration for OpenClaw.
Using Vydra with OpenClaw
Vydra model API configuration for OpenClaw agents.
Using Xiaomi AI with OpenClaw
Xiaomi AI model API integration for OpenClaw.
Using Z.AI with OpenClaw
Z.AI model provider configuration for OpenClaw agents.
Using xAI Grok with OpenClaw
Connect xAI's Grok models to OpenClaw agents.
Using Together AI with OpenClaw
Open-source model hosting via Together AI in OpenClaw.
Using Mistral with OpenClaw
Mistral AI model configuration for OpenClaw.
Using Groq with OpenClaw
Ultra-fast Groq inference for OpenClaw agents.
Using Google Gemini with OpenClaw
Google Gemini model provider setup for OpenClaw.
Using GitHub Copilot Models with OpenClaw
Route model calls through GitHub Copilot's model API in OpenClaw.
Using Fireworks AI with OpenClaw
Fast inference via Fireworks AI for OpenClaw agents.
Using DeepSeek with OpenClaw
Configure DeepSeek models in OpenClaw for reasoning-focused tasks.
Using OpenRouter with OpenClaw
Route model requests through OpenRouter.
Using OpenAI with OpenClaw
OpenAI API configuration for OpenClaw agents.
Using Ollama (Local LLMs) with OpenClaw
Run local models via Ollama with OpenClaw.
Using Deepgram with OpenClaw
Speech and audio APIs via Deepgram in OpenClaw.
Using ComfyUI with OpenClaw
Integrate ComfyUI workflows with OpenClaw where supported.
Using Cloudflare AI Gateway with OpenClaw
Put Cloudflare AI Gateway in front of model calls.
Using Claude Max API Proxy with OpenClaw
Claude Max proxy setup notes for OpenClaw.
Using Chutes with OpenClaw
Chutes provider configuration for OpenClaw.
Using Arcee AI with OpenClaw
Configure Arcee AI models in OpenClaw.
Using Anthropic Claude with OpenClaw
API keys, model selection, and Claude-specific options for OpenClaw.
Using Amazon Bedrock with OpenClaw
Connect OpenClaw to models on Amazon Bedrock.