OpenAI-compatible proxy for the OpenCode API.
Want to use OpenCode models with OpenClaw? This is the bridge.
OpenCode provides powerful, affordable models (Qwen, MiniMax, GLM, …) but its API is not natively supported by OpenClaw. This proxy converts OpenCode API calls into OpenAI-compatible format, so OpenClaw can use any OpenCode model out of the box.
OpenClaw agent → OpenClawCode proxy (:8080) → OpenCode API
In 3 steps:

1. Install: `npm install -g openclawcode`
2. Run the proxy (see Quick Start)
3. Add the `opencode` provider to your `openclaw.json` (see Integration with OpenClaw)
Client (OpenAI SDK / OpenClaw / etc.) → Proxy (:8080) → OpenCode API
## Quick Start

Requirements:

- Node.js 18+ (with native `fetch` support)
- An OpenCode API key
Install:

```bash
npm install -g openclawcode
```

Create a `.env` file in your working directory (e.g., `~/.openclaw/.env`):

```env
OPENCODE_GO_API_KEY=your_api_key_here
PROXY_PORT=8080
OPENCODE_BASE_URL=https://opencode.ai/zen/go/v1
SESSION_TTL_MS=1800000
```

Run the proxy:

```bash
# From the directory containing .env
node $(npm root -g)/openclawcode/src/index.js

# Or with custom env vars
PROXY_PORT=3000 OPENCODE_GO_API_KEY=xxx node $(npm root -g)/openclawcode/src/index.js
```

Check that it is running:

```bash
curl http://127.0.0.1:8080/health
```

Use it with the OpenAI SDK:

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://127.0.0.1:8080/v1',
  apiKey: 'not-needed',
});

const response = await client.chat.completions.create({
  model: 'qwen3.6-plus',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

| Method | Endpoint | Description |
|---|---|---|
| POST | `/v1/chat/completions` | Chat completion (OpenAI compatible) |
| GET | `/health` | Health check |
| POST | `/admin/clear-cache` | Clear session cache |
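The non-chat endpoints can be driven with plain `fetch`. A minimal sketch, with paths taken from the table above (`checkHealth`/`clearSessionCache` are hypothetical helper names, not part of this project; the `fetchImpl` parameter is injectable so the helpers can be exercised without a live proxy):

```javascript
// Helpers for the proxy's non-chat endpoints (paths from the table above).
// `fetchImpl` is injectable so the helpers can run without a live proxy.
const PROXY_BASE = 'http://127.0.0.1:8080';

async function checkHealth(fetchImpl = fetch) {
  const res = await fetchImpl(`${PROXY_BASE}/health`);
  return res.ok;
}

async function clearSessionCache(fetchImpl = fetch) {
  const res = await fetchImpl(`${PROXY_BASE}/admin/clear-cache`, { method: 'POST' });
  return res.ok;
}
```

With the proxy running locally, `await checkHealth()` should return `true`.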
| Variable | Default | Description |
|---|---|---|
| `PROXY_PORT` | `8080` | Port the proxy listens on |
| `OPENCODE_BASE_URL` | `https://opencode.ai/zen/go/v1` | OpenCode API URL |
| `OPENCODE_GO_API_KEY` | (required) | Your OpenCode API key |
| `SESSION_TTL_MS` | `1800000` | Session cache TTL (30 min) |
| `OPENCODE_PROXY_JSON_LIMIT_MB` | `200` | Max JSON body size (MB) |
| `OPENCODE_BACKEND_TIMEOUT_MS` | `90000` | Backend timeout (90 s) |
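As an illustrative sketch, the table above maps to a small config loader. `loadConfig` and its field names are hypothetical, not the proxy's actual code; the variable names and defaults come from the table:

```javascript
// Read the proxy's configuration from the environment, falling back to
// the defaults documented in the table above. Sketch only: the real
// proxy may parse these variables differently.
function loadConfig(env = process.env) {
  const apiKey = env.OPENCODE_GO_API_KEY;
  if (!apiKey) {
    throw new Error('OPENCODE_GO_API_KEY is required');
  }
  return {
    port: Number(env.PROXY_PORT ?? 8080),
    baseUrl: env.OPENCODE_BASE_URL ?? 'https://opencode.ai/zen/go/v1',
    apiKey,
    sessionTtlMs: Number(env.SESSION_TTL_MS ?? 1_800_000),        // 30 min
    jsonLimitMb: Number(env.OPENCODE_PROXY_JSON_LIMIT_MB ?? 200), // MB
    backendTimeoutMs: Number(env.OPENCODE_BACKEND_TIMEOUT_MS ?? 90_000), // 90 s
  };
}
```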
The proxy caches session IDs per conversation to maintain context between API calls.
- TTL: 30 minutes (configurable via `SESSION_TTL_MS`)
- Key: based on the first user message content
- Auto-cleanup: expired entries are automatically removed
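The caching behavior above can be sketched as a small TTL map. Illustrative only: `getSession`/`putSession` are hypothetical names and the real proxy's data structures may differ, but the keying and expiry rules follow the bullets above:

```javascript
// Illustrative TTL cache for session IDs, keyed by the content of the
// first user message in a conversation, as described above.
const TTL_MS = 1_800_000; // matches the SESSION_TTL_MS default (30 min)
const sessions = new Map(); // key -> { sessionId, expiresAt }

function sessionKey(messages) {
  // Key is based on the first user message content.
  const first = messages.find((m) => m.role === 'user');
  return first ? first.content : '';
}

function getSession(messages, now = Date.now()) {
  const entry = sessions.get(sessionKey(messages));
  if (entry && entry.expiresAt > now) return entry.sessionId;
  sessions.delete(sessionKey(messages)); // auto-cleanup of expired entries
  return null;
}

function putSession(messages, sessionId, now = Date.now()) {
  sessions.set(sessionKey(messages), { sessionId, expiresAt: now + TTL_MS });
}
```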
Features:

- OpenAI-compatible API (`/v1/chat/completions`)
- Streaming support (SSE)
- Tool calling passthrough
- Session management with auto-cleanup
- Configurable timeouts and limits
- Loads `.env` from the current working directory

Limitations:

- Token usage is not reported (always returns 0)
- System messages are forwarded as-is
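Since streaming uses SSE, a client not using the OpenAI SDK must parse `data:` frames itself. A minimal sketch, assuming the proxy emits standard OpenAI-style streaming chunks (`parseSseLine` is a hypothetical helper, not part of this project):

```javascript
// Parse one SSE line from an OpenAI-style streaming response.
// Returns the delta text for a data frame, '' for an empty delta,
// and null for non-data lines or the terminating "[DONE]" frame.
// Assumes the standard OpenAI chunk shape; the proxy's exact
// payloads are not documented here.
function parseSseLine(line) {
  if (!line.startsWith('data:')) return null;
  const payload = line.slice('data:'.length).trim();
  if (payload === '[DONE]') return null;
  const chunk = JSON.parse(payload);
  return chunk.choices?.[0]?.delta?.content ?? '';
}
```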
## Integration with OpenClaw

Once the proxy is running on port 8080, add the `opencode` provider to your `openclaw.json` under `models.providers`:

```json
{
  "models": {
    "providers": {
      "opencode": {
        "api": "openai-completions",
        "baseUrl": "http://127.0.0.1:8080/v1",
        "models": [
          {
            "id": "qwen3.6-plus",
            "name": "OpenCode Qwen3.6 Plus",
            "api": "openai-completions",
            "reasoning": true,
            "input": ["text"],
            "contextWindow": 262144,
            "maxTokens": 131072
          }
        ]
      }
    }
  }
}
```

Then assign the model to any agent:
```json
{
  "agents": {
    "list": [
      {
        "id": "my-agent",
        "model": "qwen3.6-plus"
      }
    ]
  }
}
```

Note: `contextWindow` and `maxTokens` should match actual model limits. `qwen3.6-plus` is confirmed working. For other OpenCode models, test via the proxy first or check the OpenCode docs at https://opencode.ai. No `apiKey` is needed for the `opencode` provider; the proxy handles authentication via the `OPENCODE_GO_API_KEY` env var.
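Per the note above, an untested model id can be probed through the proxy before it goes into `openclaw.json`. A hedged sketch (`probeModel` is a hypothetical helper, not part of this project; the port and endpoint are the ones used elsewhere in this README, and `fetchImpl` is injectable so the helper can be exercised without a live proxy):

```javascript
// Send a one-message chat completion to check whether the proxy
// accepts a given model id. Returns true on an HTTP 2xx response.
async function probeModel(modelId, fetchImpl = fetch, baseUrl = 'http://127.0.0.1:8080/v1') {
  const res = await fetchImpl(`${baseUrl}/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: modelId,
      messages: [{ role: 'user', content: 'ping' }],
    }),
  });
  return res.ok;
}

// Usage with the proxy running:
// console.log(await probeModel('qwen3.6-plus'));
```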
## License

MIT