Using Mistral with OpenClaw

This guide walks you through configuring Mistral as a model provider in OpenClaw, including text/image models, audio transcription with Voxtral, and memory embeddings. You will wire up your Mistral API key, pick the right bundled model IDs, and enable media and memory features that route through Mistral.

By the end, your OpenClaw agents will call Mistral models like `mistral/mistral-large-latest` and `voxtral-mini-latest` without extra glue code.

Prerequisites

  • A Mistral account with an API key that starts with `sk-...` so you can authenticate against `https://api.mistral.ai/v1`.
  • An existing OpenClaw project where you can run `openclaw` CLI commands and edit the agent configuration.

Steps

  1. Onboard Mistral with the OpenClaw CLI

    Start by teaching OpenClaw about your Mistral credentials so the gateway can authenticate requests. The interactive flow is handy the first time; the non-interactive variant is better for CI or server provisioning where `MISTRAL_API_KEY` is already exported.

    bash
    openclaw onboard --auth-choice mistral-api-key
    # or non-interactive
    openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"
  2. Set the default Mistral LLM for your agents

    Configure your agents to use a Mistral model as their primary LLM so every new session routes through Mistral by default. Requests authenticate with your `MISTRAL_API_KEY` against `https://api.mistral.ai/v1`.

    json
    {
      env: { MISTRAL_API_KEY: "sk-..." },
      agents: { defaults: { model: { primary: "mistral/mistral-large-latest" } } },
    }
  3. Choose the right Mistral model from the built-in catalog

    OpenClaw ships a catalog of Mistral models so you can swap behavior by changing a single model ref. Pick models from this list based on your needs: large context, coding, reasoning, or adjustable reasoning via `mistral/mistral-small-latest`.

    text
    mistral/mistral-large-latest
    mistral/mistral-medium-2508
    mistral/mistral-small-latest
    mistral/pixtral-large-latest
    mistral/codestral-latest
    mistral/devstral-medium-latest
    mistral/magistral-small
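A catalog ref packs both the provider and the model ID into one string, which is why swapping behavior only takes a one-line change. The helper below is a hypothetical sketch of that resolution step, not OpenClaw's actual parser:

```python
def split_model_ref(ref: str) -> tuple[str, str]:
    """Split a catalog ref such as 'mistral/codestral-latest' into a
    (provider, model_id) pair. Hypothetical helper for illustration;
    OpenClaw's internal resolution logic may differ."""
    provider, _, model_id = ref.partition("/")
    return provider, model_id

# Changing which model handles a session means changing a single ref:
provider, model_id = split_model_ref("mistral/codestral-latest")
```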
  4. Enable audio transcription with Voxtral over Mistral

    If your agent needs media understanding, enable the media audio tool and point it at Mistral’s Voxtral model. This config turns on audio transcription and tells OpenClaw to call the `voxtral-mini-latest` model via Mistral’s `/v1/audio/transcriptions` path.

    json
    {
      tools: {
        media: {
          audio: {
            enabled: true,
            models: [{ provider: "mistral", model: "voxtral-mini-latest" }],
          },
        },
      },
    }
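To sanity-check credentials outside OpenClaw, you can assemble the same transcription call yourself. The sketch below only builds the request pieces for Mistral's `/v1/audio/transcriptions` path and leaves the actual send to any HTTP client; the multipart field names assume an OpenAI-compatible layout:

```python
import os

def build_transcription_request(audio_path: str,
                                model: str = "voxtral-mini-latest") -> dict:
    """Assemble (but do not send) a transcription request for Mistral's
    /v1/audio/transcriptions endpoint. Field names assume an
    OpenAI-compatible multipart layout."""
    return {
        "url": "https://api.mistral.ai/v1/audio/transcriptions",
        "headers": {"Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}"},
        "data": {"model": model},       # model name goes in the form body
        "files": {"file": audio_path},  # path to the audio file to upload
    }
```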
  5. Tune reasoning behavior for mistral-small-latest

    When you use `mistral/mistral-small-latest`, OpenClaw maps its session thinking level to Mistral’s `reasoning_effort` parameter. This lets you control how much intermediate thinking the model surfaces, from minimal traces (`none`) to full reasoning (`high`) on the Chat Completions API.

    text
    mistral/mistral-small-latest
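The mapping can be pictured as a small lookup table. The thinking-level names below are assumptions about OpenClaw's session levels; the `reasoning_effort` values (`none` through `high`) come from the Chat Completions parameter described above:

```python
# Hypothetical sketch of how a session thinking level could map onto
# Mistral's reasoning_effort parameter; OpenClaw's real table may differ.
THINKING_TO_EFFORT = {
    "off": "none",
    "low": "low",
    "medium": "medium",
    "high": "high",
}

def reasoning_effort(thinking_level: str, default: str = "medium") -> str:
    """Resolve a thinking level to a reasoning_effort value, falling back
    to a middle-of-the-road default for unknown levels."""
    return THINKING_TO_EFFORT.get(thinking_level, default)
```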

Configuration

| Option | Description | Example |
| --- | --- | --- |
| `MISTRAL_API_KEY` | API key OpenClaw uses to authenticate against the Mistral API for text, image, audio, and embeddings. | `sk-abc123example` |
| `env` | Top-level config section where you expose `MISTRAL_API_KEY` to the OpenClaw runtime. | `{ MISTRAL_API_KEY: "sk-abc123example" }` |
| `agents.defaults.model.primary` | Sets the default primary model for all agents, such as a Mistral catalog ref like `mistral/mistral-large-latest`. | `mistral/mistral-large-latest` |
| `tools.media.audio.enabled` | Turns the media audio tool on so OpenClaw can send audio to Mistral’s Voxtral transcription endpoint. | `true` |
| `tools.media.audio.models[0].provider` | Selects Mistral as the provider for audio transcription in the media tool. | `mistral` |
| `tools.media.audio.models[0].model` | Specifies which Mistral audio model to use for transcription. | `voxtral-mini-latest` |
| `memorySearch.provider` | Chooses Mistral as the provider for memory embeddings when agents perform memory search. | `mistral` |
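
Pulling the options above together, a full configuration might look like the sketch below. The nesting of `memorySearch` at the top level is inferred from the option name in the table and should be checked against your OpenClaw version:

```json
{
  env: { MISTRAL_API_KEY: "sk-abc123example" },
  agents: { defaults: { model: { primary: "mistral/mistral-large-latest" } } },
  tools: {
    media: {
      audio: {
        enabled: true,
        models: [{ provider: "mistral", model: "voxtral-mini-latest" }],
      },
    },
  },
  memorySearch: { provider: "mistral" },
}
```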

Troubleshooting

Calls to Mistral models fail and logs mention missing authentication or invalid API key.

Verify that OpenClaw was onboarded with a valid `sk-...` key. If you onboarded non-interactively, confirm the same key is exported in the shell where you ran `openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"`.

bash
openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"

Audio transcription requests through the media tool do not return results or hit a 404 on the Mistral side.

The media transcription path uses `/v1/audio/transcriptions`, so you must enable the media audio tool and set the model to `voxtral-mini-latest`. Check that `enabled: true` appears in your `tools.media.audio` block and that the provider is set to `mistral`.

json
{
  tools: {
    media: {
      audio: {
        enabled: true,
        models: [{ provider: "mistral", model: "voxtral-mini-latest" }],
      },
    },
  },
}

Memory search features in your agent do not return any embeddings-backed results.

The memory embeddings path uses `/v1/embeddings` with the default model `mistral-embed`. Set `memorySearch.provider = "mistral"` so OpenClaw routes embedding calls to Mistral instead of another provider.

text
memorySearch.provider = "mistral"
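To confirm the embeddings path works at all, you can hit `/v1/embeddings` directly. The sketch below builds (but does not send) such a request; `mistral-embed` is the default model named above, and the body layout assumes the OpenAI-compatible shape:

```python
import os

def build_embeddings_request(texts: list[str],
                             model: str = "mistral-embed") -> dict:
    """Assemble (but do not send) an embeddings request for Mistral's
    /v1/embeddings endpoint. Body shape assumes the OpenAI-compatible
    convention of a model name plus an input list."""
    return {
        "url": "https://api.mistral.ai/v1/embeddings",
        "headers": {
            "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        "json": {"model": model, "input": texts},
    }
```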
