Using OpenAI with OpenClaw


This guide shows you how to wire OpenAI into OpenClaw, whether you use a direct OpenAI API key or a ChatGPT/Codex subscription. You will set `gpt-5.4` as your default model, tune the interaction style, and enable OpenAI image and video generation.

By the end, your OpenClaw agents will talk to OpenAI over native Responses routes with sensible defaults for transport, warm-up, and priority processing.

Setup flow

Prerequisites

  • An OpenAI Platform account with an API key from the OpenAI dashboard if you want usage-based access to `openai/gpt-5.4`.
  • A ChatGPT/Codex subscription if you want to use the `openai-codex/*` OAuth route instead of an API key.
  • An existing OpenClaw installation with the CLI available so you can run `openclaw onboard` and `openclaw config` commands.

Steps

  1. Decide between OpenAI API key and Codex subscription

    Pick whether your OpenClaw agents should talk to OpenAI via a direct API key (`openai/*`) or via ChatGPT/Codex OAuth (`openai-codex/*`). Use the API key path when you want usage-based billing and direct OpenAI Platform API access; use Codex when you rely on your ChatGPT subscription and Codex cloud.

    text
    Route summary:
    *   `openai/gpt-5.4` = direct OpenAI Platform API route
    *   Requires `OPENAI_API_KEY` (or equivalent OpenAI provider config)
    *   In OpenClaw, ChatGPT/Codex sign-in is routed through `openai-codex/*`, not `openai/*`
    
    Route summary:
    *   `openai-codex/gpt-5.4` = ChatGPT/Codex OAuth route
    *   Uses ChatGPT/Codex sign-in, not a direct OpenAI Platform API key
    *   Provider-side limits for `openai-codex/*` can differ from the ChatGPT web/app experience
  2. Onboard OpenClaw with an OpenAI API key

    If you choose the direct OpenAI Platform route, run the OpenClaw onboarding wizard with the OpenAI API key option. You can run it interactively or wire it into automation by passing `OPENAI_API_KEY` non-interactively.

    bash
    openclaw onboard --auth-choice openai-api-key
    # or non-interactive
    openclaw onboard --openai-api-key "$OPENAI_API_KEY"
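    In automation (CI, dotfiles), the key usually comes from the environment rather than an interactive prompt. A minimal sketch using the non-interactive flag shown above:

    bash
    # Key created in the OpenAI dashboard
    export OPENAI_API_KEY="sk-..."
    openclaw onboard --openai-api-key "$OPENAI_API_KEY"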
  3. Set GPT‑5.4 as the default model using the OpenAI API key

    Update your OpenClaw config so new agents use `openai/gpt-5.4` by default. This snippet also wires your `OPENAI_API_KEY` into the environment OpenClaw reads.

    json
    {
      env: { OPENAI_API_KEY: "sk-..." },
      agents: { defaults: { model: { primary: "openai/gpt-5.4" } } },
    }
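    If you would rather not edit the config file by hand, the same default can likely be set with `openclaw config set` (the syntax used for the personality toggle in step 6); the key path here simply mirrors the JSON above:

    bash
    openclaw config set agents.defaults.model.primary "openai/gpt-5.4"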
  4. Onboard OpenClaw with a Codex (ChatGPT) subscription

    If you prefer to use your ChatGPT/Codex subscription instead of an API key, run the onboarding flow with the Codex auth choice. You can either let the wizard drive OAuth or call the Codex login directly when you script this.

    bash
    # Run Codex OAuth in the wizard
    openclaw onboard --auth-choice openai-codex
    
    # Or run OAuth directly
    openclaw models auth login --provider openai-codex
  5. Set GPT‑5.4 as the default model using Codex OAuth

    Switch the primary model to the `openai-codex/gpt-5.4` route so OpenClaw uses ChatGPT/Codex OAuth instead of the API key path. This keeps the model ID consistent while swapping the underlying transport and auth.

    json
    {
      agents: { defaults: { model: { primary: "openai-codex/gpt-5.4" } } },
    }
  6. Tune the OpenAI interaction style

    OpenClaw ships an OpenAI-specific prompt overlay that makes responses warmer and more expressive. Keep it on for a friendly tone, or turn it off if you want the raw base OpenClaw system prompt.

    json
    {
      plugins: {
        entries: {
          openai: {
            config: {
              personality: "friendly",
            },
          },
        },
      },
    }
    
    {
      plugins: {
        entries: {
          openai: {
            config: {
              personality: "off",
            },
          },
        },
      },
    }
    
    bash
    openclaw config set plugins.entries.openai.config.personality off
  7. Configure OpenAI image and video generation defaults

    If you want OpenClaw’s shared tools to use OpenAI for images and video, set the default image and video models. This lets your agents call `image_generate` and `video_generate` without specifying provider details each time.

    json
    {
      agents: {
        defaults: {
          imageGenerationModel: {
            primary: "openai/gpt-image-1",
          },
        },
      },
    }
    
    {
      agents: {
        defaults: {
          videoGenerationModel: {
            primary: "openai/sora-2",
          },
        },
      },
    }
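    Both defaults can also be set from the CLI, assuming the same `openclaw config set` syntax used for the personality toggle:

    bash
    openclaw config set agents.defaults.imageGenerationModel.primary "openai/gpt-image-1"
    openclaw config set agents.defaults.videoGenerationModel.primary "openai/sora-2"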
  8. Control transport, warm-up, and fast mode for OpenAI routes

    Fine-tune how OpenClaw talks to OpenAI by setting transport, WebSocket warm-up, priority processing, and fast mode. These options help you balance latency, reliability, and cost for both `openai/*` and `openai-codex/*` models.

    json
    {
      agents: {
        defaults: {
          model: { primary: "openai-codex/gpt-5.4" },
          models: {
            "openai-codex/gpt-5.4": {
              params: {
                transport: "auto",
              },
            },
          },
        },
      },
    }
    
    {
      agents: {
        defaults: {
          models: {
            "openai/gpt-5.4": {
              params: {
                openaiWsWarmup: false,
              },
            },
          },
        },
      },
    }
    
    {
      agents: {
        defaults: {
          models: {
            "openai/gpt-5.4": {
              params: {
                openaiWsWarmup: true,
              },
            },
          },
        },
      },
    }
    
    {
      agents: {
        defaults: {
          models: {
            "openai/gpt-5.4": {
              params: {
                serviceTier: "priority",
              },
            },
            "openai-codex/gpt-5.4": {
              params: {
                serviceTier: "priority",
              },
            },
          },
        },
      },
    }
    
    {
      agents: {
        defaults: {
          models: {
            "openai/gpt-5.4": {
              params: {
                fastMode: true,
              },
            },
            "openai-codex/gpt-5.4": {
              params: {
                fastMode: true,
              },
            },
          },
        },
      },
    }
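    The snippets above set one param at a time, but they can be merged into a single per-model entry. A sketch for the direct API route (since `fastMode` maps to the same priority processing as `serviceTier: "priority"`, you would typically set one or the other, not both):

    json
    {
      agents: {
        defaults: {
          models: {
            "openai/gpt-5.4": {
              params: {
                transport: "auto",
                openaiWsWarmup: true,
                serviceTier: "priority",
              },
            },
          },
        },
      },
    }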
  9. Enable or disable OpenAI Responses server-side compaction

    On native Responses routes for `gpt-5.4`, you can control OpenAI’s server-side compaction hints from OpenClaw. Use these settings when you want to tweak how aggressively OpenAI compacts context or when you need to turn it off.

    json
    {
      agents: {
        defaults: {
          models: {
            "azure-openai-responses/gpt-5.4": {
              params: {
                responsesServerCompaction: true,
              },
            },
          },
        },
      },
    }
    
    {
      agents: {
        defaults: {
          models: {
            "openai/gpt-5.4": {
              params: {
                responsesServerCompaction: true,
                responsesCompactThreshold: 120000,
              },
            },
          },
        },
      },
    }
    
    {
      agents: {
        defaults: {
          models: {
            "openai/gpt-5.4": {
              params: {
                responsesServerCompaction: false,
              },
            },
          },
        },
      },
    }
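    If you prefer the CLI, the compaction flag can presumably be toggled the same way as other config keys shown in this guide; note that the quoting around the dotted model ID is an assumption:

    bash
    openclaw config set 'agents.defaults.models."openai/gpt-5.4".params.responsesServerCompaction' false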

Configuration

| Option | Description | Example |
| --- | --- | --- |
| `env.OPENAI_API_KEY` | The OpenAI Platform API key used for direct `openai/*` routes like `openai/gpt-5.4`. | `sk-...` |
| `agents.defaults.model.primary` | Sets the default primary model for new agents and sessions. | `openai/gpt-5.4` |
| `agents.defaults.imageGenerationModel.primary` | Sets the default model used by the shared `image_generate` tool. | `openai/gpt-image-1` |
| `agents.defaults.videoGenerationModel.primary` | Sets the default model used by the shared `video_generate` tool. | `openai/sora-2` |
| `plugins.entries.openai.config.personality` | Controls whether the OpenAI-specific prompt overlay is enabled or disabled. | `friendly` |
| `models.providers."openai-codex".models[].contextTokens` | Overrides the runtime context token cap for a specific Codex model. | `160000` |
| `agents.defaults.models."openai/gpt-5.4".params.transport` | Chooses the transport for OpenAI streaming: SSE, WebSocket, or auto. | `auto` |
| `agents.defaults.models."openai/gpt-5.4".params.openaiWsWarmup` | Enables or disables WebSocket warm-up for OpenAI Responses models. | `true` |
| `agents.defaults.models."openai/gpt-5.4".params.serviceTier` | Sets the OpenAI `service_tier` for priority processing on native routes. | `priority` |
| `agents.defaults.models."openai/gpt-5.4".params.fastMode` | Toggles OpenClaw fast mode, which maps to OpenAI priority processing. | `true` |
| `agents.defaults.models."openai/gpt-5.4".params.responsesServerCompaction` | Controls whether OpenClaw injects OpenAI Responses server-side compaction hints. | `true` |
| `agents.defaults.models."openai/gpt-5.4".params.responsesCompactThreshold` | Sets a custom token threshold for OpenAI Responses compaction. | `120000` |

Troubleshooting

OpenAI rejects `openai/gpt-5.3-codex-spark` when called via the direct API path.

OpenClaw blocks `openai/gpt-5.3-codex-spark` on the direct OpenAI API path because live OpenAI API requests reject it. Use `openai-codex/gpt-5.3-codex-spark` instead if your Codex account is entitled to Spark.

Codex sessions hit context limits earlier than the native `contextWindow` suggests.

For `openai-codex/gpt-5.4`, OpenClaw keeps a large native `contextWindow` but enforces a smaller runtime `contextTokens` cap for better latency and quality. You can raise or lower that cap by overriding `contextTokens` for `gpt-5.4`.

json
{
  models: {
    providers: {
      "openai-codex": {
        models: [
          {
            id: "gpt-5.4",
            contextTokens: 160000,
          },
        ],
      },
    },
  },
}
