Overview

This guide walks you through deploying an AI assistant powered by OpenAI Codex using Cloud Deployment. You'll configure the provider, set up the deployment environment, and get your assistant running.

Why OpenAI Codex?

  • 128K context window
  • Image generation and understanding
  • Tool/function calling

Why Cloud Deployment?

  • Managed infrastructure
  • Global edge locations
  • Automatic TLS

Requirements

  • Fly.io Account: An account on Fly.io (or a similar PaaS)
  • flyctl CLI: Fly.io command-line tool
  • Persistent Volume: For state persistence
  • OpenAI Codex credentials: API key or authentication

Step 1: Configure OpenAI Codex

Standard API key authentication

  1. Create an account at platform.openai.com
  2. Navigate to API Keys section
  3. Generate a new API key
  4. Set OPENAI_API_KEY environment variable

Environment Variable: OPENAI_API_KEY
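
The key from step 3 can be exported in your shell so that local tooling (and the deploy commands below) can read it. The value shown is a placeholder; substitute your real key:

```shell
# Export the key for the current shell session (placeholder value shown).
export OPENAI_API_KEY="sk-your-key-here"

# Sanity check: confirm the variable is visible to child processes.
sh -c 'test -n "$OPENAI_API_KEY" && echo "key is set"'
```

Add the export to your shell profile if you want it to persist across sessions.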

Step 2: Prepare Cloud Deployment Environment

  1. Install flyctl: Install the Fly.io CLI
    curl -L https://fly.io/install.sh | sh
  2. Login to Fly: Authenticate with Fly.io
    fly auth login
  3. Create App: Create a new Fly.io app
    fly apps create openclaw-gateway
  4. Create Volume: Create persistent volume for state
    fly volumes create openclaw_data --size 1
  5. Configure fly.toml: Set up the Fly.io configuration file
  6. Set Secrets: Configure environment variables
    fly secrets set OPENAI_API_KEY=sk-...
  7. Deploy: Deploy to Fly.io
    fly deploy
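
The steps above can be sketched as a single script. This is a dry run by default (it prints each command); set RUN=1 to actually execute. The app name `openclaw-gateway` and volume `openclaw_data` match the commands above, and `OPENAI_API_KEY` is assumed to be exported in your shell:

```shell
#!/bin/sh
# Dry-run sketch of the Fly.io setup steps. Prints each command unless
# RUN=1 is set in the environment.
run() {
  if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi
}

run fly auth login
run fly apps create openclaw-gateway
run fly volumes create openclaw_data --size 1
run fly secrets set OPENAI_API_KEY="$OPENAI_API_KEY"
run fly deploy
```

Running it without RUN=1 first is a cheap way to review exactly what will be executed.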

Step 3: Configuration

Create your openclaw.json configuration:

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "openai-codex/gpt-5.2"
      }
    }
  },
  "models": {
    "providers": {
      "openai-codex": {
        "apiKey": "${OPENAI_API_KEY}"
      }
    }
  }
}
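
Because the `${OPENAI_API_KEY}` placeholder makes it easy to drop a closing quote or brace, it is worth confirming the file parses before deploying. A minimal check using Python's standard json module (assuming `python3` is on your PATH):

```shell
# Validate openclaw.json syntax; fails loudly on a missing quote or brace.
python3 -m json.tool openclaw.json > /dev/null \
  && echo "openclaw.json is valid" \
  || echo "openclaw.json is missing or invalid"
```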

Step 4: Deploy

# fly.toml
app = "openclaw-gateway"
primary_region = "iad"

[build]
  image = "node:22-slim"

[env]
  NODE_ENV = "production"
  OPENCLAW_STATE_DIR = "/data"

[mounts]
  source = "openclaw_data"
  destination = "/data"

[[services]]
  internal_port = 18789
  protocol = "tcp"

  [[services.ports]]
    port = 443
    handlers = ["tls", "http"]

Step 5: Verify

# Check deployment status
openclaw status

# View logs
openclaw logs --follow

# Test with a message
openclaw test "Hello, are you working?"

Connect to Channels

Now connect your deployed OpenAI Codex assistant to messaging channels: