OpenClawCode

OpenAI-compatible proxy for the OpenCode API.

Why This Exists

Want to use OpenCode models with OpenClaw? This is the bridge.

OpenCode provides powerful, affordable models (Qwen, MiniMax, GLM, …), but its API is not natively supported by OpenClaw. This proxy accepts OpenAI-compatible requests and translates them into OpenCode API calls, so OpenClaw can use any OpenCode model out of the box.

OpenClaw agent → OpenClawCode proxy (:8080) → OpenCode API

In 3 steps:

  1. npm install -g openclawcode
  2. Run the proxy (see Quick Start)
  3. Add the opencode provider to your openclaw.json — see Integration with OpenClaw

Architecture

Client (OpenAI SDK / OpenClaw / etc.) → Proxy (:8080) → OpenCode API

Quick Start

Prerequisites

  • Node.js 18+ (with native fetch support)
  • An OpenCode API key

Installation

npm install -g openclawcode

Configuration

Create a .env file in your working directory (e.g., ~/.openclaw/.env):

OPENCODE_GO_API_KEY=your_api_key_here
PROXY_PORT=8080
OPENCODE_BASE_URL=https://opencode.ai/zen/go/v1
SESSION_TTL_MS=1800000

Running

# From the directory containing .env
node $(npm root -g)/openclawcode/src/index.js

# Or with custom env vars
PROXY_PORT=3000 OPENCODE_GO_API_KEY=xxx node $(npm root -g)/openclawcode/src/index.js

Verify

curl http://127.0.0.1:8080/health

Usage with OpenAI SDK

import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://127.0.0.1:8080/v1',
  apiKey: 'not-needed',
});

const response = await client.chat.completions.create({
  model: 'qwen3.6-plus',
  messages: [{ role: 'user', content: 'Hello!' }],
});

API Endpoints

Method  Endpoint              Description
POST    /v1/chat/completions  Chat completion (OpenAI-compatible)
GET     /health               Health check
POST    /admin/clear-cache    Clear session cache

Environment Variables

Variable                      Default                        Description
PROXY_PORT                    8080                           Port the proxy listens on
OPENCODE_BASE_URL             https://opencode.ai/zen/go/v1  OpenCode API URL
OPENCODE_GO_API_KEY           (required)                     Your OpenCode API key
SESSION_TTL_MS                1800000                        Session cache TTL (30 min)
OPENCODE_PROXY_JSON_LIMIT_MB  200                            Max JSON body size (MB)
OPENCODE_BACKEND_TIMEOUT_MS   90000                          Backend timeout (90 s)

Session Cache

The proxy caches session IDs per conversation to maintain context between API calls.

  • TTL: 30 minutes (configurable via SESSION_TTL_MS)
  • Key: Based on the first user message content
  • Auto-cleanup: Expired entries are automatically removed

Features

  • OpenAI-compatible API (/v1/chat/completions)
  • Streaming support (SSE)
  • Tool calling passthrough
  • Session management with auto-cleanup
  • Configurable timeouts and limits
  • Loads .env from current working directory

Limitations

  • Token usage is not reported (always returns 0)
  • System messages are forwarded as-is

Integration with OpenClaw

Once the proxy is running on port 8080, add the opencode provider to your openclaw.json under models.providers:

{
  "models": {
    "providers": {
      "opencode": {
        "api": "openai-completions",
        "baseUrl": "http://127.0.0.1:8080/v1",
        "models": [
          {
            "id": "qwen3.6-plus",
            "name": "OpenCode Qwen3.6 Plus",
            "api": "openai-completions",
            "reasoning": true,
            "input": ["text"],
            "contextWindow": 262144,
            "maxTokens": 131072
          }
        ]
      }
    }
  }
}

Then assign the model to any agent:

{
  "agents": {
    "list": [
      {
        "id": "my-agent",
        "model": "qwen3.6-plus"
      }
    ]
  }
}

Note: contextWindow and maxTokens should match actual model limits. qwen3.6-plus is confirmed working. For other OpenCode models, test via the proxy first or check the OpenCode docs at https://opencode.ai. No apiKey is needed for the opencode provider — the proxy handles authentication via the OPENCODE_GO_API_KEY env var.

License

MIT
