
OpenClaw 🦞 (formerly Clawdbot, Moltbot) is a personal AI assistant that runs locally and connects to multiple messaging platforms via a WebSocket-based Gateway architecture.
⚠️ This template uses ghcr.io/openclaw/openclaw:2026.2.26. OpenClaw 🦞 is in rapid development, so there may be undiscovered bugs. Changing versions may also cause stability issues.
⚠️ If you encounter any issues, feel free to check the GitHub issues for solutions or to report new ones. For Zeabur platform-related issues, please contact Zeabur support.
⚠️ macOS-specific software and packages (e.g. Homebrew) are not supported in this container environment. Please look for alternative solutions.
⚠️ This template requires a Dedicated Server on Zeabur. It cannot run on shared clusters.
⚠️ This template is pre-configured and ready to use - no need to run openclaw onboard. If you want to reconfigure, open Command in Zeabur dashboard and run:
openclaw onboard --gateway-bind lan
See the Wizard Reference for all available flags.
- Zeabur AI Hub (default model: glm-4.7-flash): If you entered the API Key during deployment, go directly to step 3. You can also add ZEABUR_AI_HUB_API_KEY later via the Variables tab in the Zeabur dashboard (restart the service after adding).
- External providers (e.g. anthropic/claude-opus-4-6): Go to Web UI Settings or add an API key via environment variables. See: https://docs.openclaw.ai/providers/anthropic

For AI model configuration, see the official documentation.
This template includes failover models that automatically switch when the primary model is unavailable. Default chain: glm-4.7-flash → grok-4-fast-non-reasoning → minimax-m2.5 → kimi-k2.5 → qwen-3-32 → gpt-5-mini.
You can manage models from Web UI Chat or Command in the Zeabur dashboard.
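The failover behavior amounts to trying each model in order until one answers. The sketch below is purely illustrative: `call_model` is a stand-in function, not a real OpenClaw command, and it pretends only one model in the chain is currently reachable.

```shell
# Illustrative sketch of failover: try each model in the chain until one
# answers. `call_model` is a stand-in, NOT a real OpenClaw command.
call_model() {
  # Pretend every model except minimax-m2.5 is currently unavailable.
  [ "$1" = "minimax-m2.5" ]
}

answered=""
for model in glm-4.7-flash grok-4-fast-non-reasoning minimax-m2.5 kimi-k2.5; do
  if call_model "$model"; then
    answered="$model"
    break
  fi
done
echo "answered by $answered"
```

The real gateway applies the same idea per request, which is why a flaky primary model degrades latency (each failed attempt costs time) but not availability.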
Via Web UI Chat — type slash commands directly in the chat box:
- /model zeabur-ai/glm-4.7-flash — change primary model
- /model — view current model
- /models — list model providers
- /models <provider> — list models for a specific provider (e.g. /models zeabur-ai)

Via Command (Zeabur dashboard) — same commands in the terminal:
openclaw models set zeabur-ai/glm-4.7-flash
openclaw models status
openclaw models list --all
openclaw models fallbacks list
openclaw models fallbacks add zeabur-ai/gpt-5-mini
openclaw models fallbacks remove zeabur-ai/gpt-5-mini
openclaw models fallbacks clear
Or edit the config file directly (~/.openclaw/config.json5):
"agents": {
"defaults": {
"model": {
"primary": "zeabur-ai/glm-4.7-flash",
"fallbacks": ["zeabur-ai/grok-4-fast-non-reasoning", "zeabur-ai/minimax-m2.5"]
}
}
}
After editing the config file, restart the service.
Besides Zeabur AI Hub, you can add external providers like Anthropic, OpenAI, Google, etc.
Method 1: Environment variables — add API keys via Variables tab in Zeabur dashboard:
- ANTHROPIC_API_KEY — for Claude models
- OPENAI_API_KEY — for GPT models
- GOOGLE_API_KEY — for Gemini models

After adding, restart the service, then switch model:
- Via chat: /model anthropic/claude-opus-4-6
- Via terminal: openclaw models set anthropic/claude-opus-4-6

Method 2: Auth token — open Command in the Zeabur dashboard:
# Paste an API key for a provider
openclaw models auth paste-token --provider anthropic
# Or use interactive auth helper
openclaw models auth add
Method 3: Config file — edit ~/.openclaw/config.json5:
"models": {
"providers": {
"anthropic": { "apiKey": "sk-ant-..." },
"openai": { "apiKey": "sk-..." }
}
}
For all supported providers, see the official documentation.
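Before restarting after Method 1, it can help to confirm from the Command terminal that the provider keys are actually present in the environment (variable names as listed above). This check is illustrative and never prints the key values:

```shell
# Report which provider API keys are present in the environment.
# Only set/missing status is printed, never the values themselves.
missing=0
for var in ANTHROPIC_API_KEY OPENAI_API_KEY GOOGLE_API_KEY; do
  if [ -n "$(printenv "$var")" ]; then
    echo "$var: set"
  else
    echo "$var: missing"
    missing=$((missing + 1))
  fi
done
echo "$missing key(s) missing"
```

A key that shows as missing here will also be invisible to the gateway process, so the corresponding /model switch would fail after restart.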
Getting your bot token from BotFather:
- Message @BotFather in Telegram and send /newbot to create a new bot
- Copy the bot token it returns (format: 123456789:ABCdefGHIjklMNOpqrsTUVwxyz)

Adding the token to Zeabur:
- Add an environment variable TELEGRAM_BOT_TOKEN with your bot token
- The Telegram plugin is enabled via "plugins": { "entries": { "telegram": { "enabled": true } } }. To disable, set enabled to false.

Pairing your Telegram account:
- Send /start to your bot in Telegram
- The bot replies with a pairing code (e.g. JN4MSY23)
- In Command on the Zeabur dashboard, run: openclaw pairing approve telegram <code>
- On success you'll see: Approved telegram sender <user-id>.

Step 1: Configure WhatsApp channel
Add the following configuration via OpenClaw Web UI (Settings → Config) or paste it to chat:
"channels": {
"whatsapp": {
"selfChatMode": true,
"dmPolicy": "allowlist",
"allowFrom": ["+15551234567"]
}
}
Replace +15551234567 with your WhatsApp phone number (with country code). Restart the service after saving.
Step 2: Link WhatsApp
openclaw channels login

Creating a LINE Messaging API channel:
Adding credentials to Zeabur:
- LINE_CHANNEL_ACCESS_TOKEN with your channel access token
- LINE_CHANNEL_SECRET with your channel secret
- The LINE plugin is enabled via "plugins": { "entries": { "line": { "enabled": true } } }

Setting up the webhook:
Set the webhook URL in the LINE Developers Console to https://<your-domain>/line/webhook

Pairing your LINE account:
- Message your LINE bot; it replies with a pairing code (e.g. JN4MSY23)
- In Command, run: openclaw pairing approve line <code>
- On success you'll see: Approved line sender <user-id>.

For other messaging platforms (Discord, Slack, etc.), see the Channels documentation.
Verify your setup:
- zeabur-ai/glm-4.7-flash
- anthropic/claude-opus-4-6 (requires API key)

Switch model for current conversation (via chat commands):
- /models — view available models
- /model <model-id> — switch model for this conversation only (does not affect other conversations)

Change default model for all new conversations (via Web UI Settings):
(e.g. gemini-2.5-flash-lite, gpt-5-nano). If a model becomes unavailable, use /new to start a new conversation with the default model.

Add AI providers (via Web UI Settings or environment variables):
Modify Zeabur AI Hub models (via Zeabur dashboard):
- Built-in model definitions: /opt/openclaw/providers/zeabur-ai-hub.json5
- Edit /home/node/.openclaw/openclaw.json via the Files tab or Web UI Settings, and add entries like the following to the models.providers.zeabur-ai.models array:

{ "id": "gpt-5.2", "name": "GPT-5.2", "reasoning": false, "input": ["text", "image"], "cost": { "input": 1.5, "output": 12, "cacheRead": 0.15, "cacheWrite": 0 }, "contextWindow": 400000, "maxTokens": 8192 },
{ "id": "gpt-5.1", "name": "GPT-5.1", "reasoning": false, "input": ["text", "image"], "cost": { "input": 1.35, "output": 11, "cacheRead": 0.14, "cacheWrite": 0 }, "contextWindow": 400000, "maxTokens": 8192 },
{ "id": "gpt-5-nano", "name": "GPT-5 Nano", "reasoning": false, "input": ["text", "image"], "cost": { "input": 0.1, "output": 0.8, "cacheRead": 0.01, "cacheWrite": 0 }, "contextWindow": 400000, "maxTokens": 8192 },
{ "id": "glm-4.7", "name": "GLM-4.7", "reasoning": false, "input": ["text", "image"], "cost": { "input": 0.5, "output": 2, "cacheRead": 0.12, "cacheWrite": 0 }, "contextWindow": 204800, "maxTokens": 8192 },
{ "id": "glm-4.7-flash", "name": "GLM-4.7 Flash", "reasoning": false, "input": ["text", "image"], "cost": { "input": 0.25, "output": 1, "cacheRead": 0.06, "cacheWrite": 0 }, "contextWindow": 204800, "maxTokens": 8192 },
{ "id": "kimi-k2.5", "name": "Kimi K2.5", "reasoning": false, "input": ["text"], "cost": { "input": 0.45, "output": 2, "cacheRead": 0, "cacheWrite": 0 }, "contextWindow": 131072, "maxTokens": 8192 }
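The cost fields appear to be per-million-token rates; that is an assumption, not something this template's docs confirm. Under that assumption, a request's cost can be estimated from its token counts, here using the glm-4.7-flash rates from the table above and hypothetical usage numbers:

```shell
# Estimate request cost, ASSUMING `cost.input`/`cost.output` are USD per
# million tokens. Token counts below are hypothetical.
estimate="$(awk 'BEGIN {
  input_rate = 0.25; output_rate = 1.0   # glm-4.7-flash rates from the table
  in_tok = 12000;    out_tok = 800       # hypothetical usage
  printf "%.6f", in_tok/1e6*input_rate + out_tok/1e6*output_rate
}')"
echo "estimated cost: \$${estimate}"     # -> estimated cost: $0.003800
```

Note that cacheRead/cacheWrite would add further terms for cached-prompt traffic; they are omitted here for brevity.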
All data is stored under /home/node:
- /home/node/.openclaw — configuration, sessions, devices, and credentials
- /home/node/.openclaw/workspace — workspace and memory files

💡 Tip: We recommend creating a backup after completing your initial setup or making significant configuration changes.
Backup:
- Run backup in Command, then download the archive from /home/node in the Files tab (e.g. backup-1430.tar.gz)
- Or manually: cd /home/node && tar -czvf backup.tar.gz .openclaw

Restore:
- Upload the backup file to the /home/node folder in the Files tab
- For Zeabur backup-service archives (.zip): restore <backup-file> --strip 2 (e.g. restore data-2026-02-27.zip --strip 2)
- For archives created with backup: restore <backup-file> (e.g. restore backup-1430.tar.gz)
- Or manually: cd /home/node && tar -xzvf <backup-file>
- Afterwards, delete the uploaded archive from /home/node in the Files tab to free up disk space

⚠️ Restore will overwrite existing configuration and data on the new service. Remember to also restore related environment variables (e.g. TELEGRAM_BOT_TOKEN). Channel backup/restore has only been tested with Telegram and WhatsApp.
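The manual backup command can be wrapped in a small script that timestamps and verifies the archive, in the spirit of the commands above. DATA_DIR defaults to a throwaway demo directory here so the sketch runs anywhere; on the actual service it would be /home/node.

```shell
# Timestamped backup sketch. On the real service DATA_DIR is /home/node;
# the demo default lets this run on any machine.
set -eu
DATA_DIR="${DATA_DIR:-/tmp/openclaw-backup-demo}"
mkdir -p "$DATA_DIR/.openclaw"                # stands in for the real data dir
STAMP="$(date +%Y-%m-%d-%H%M)"
ARCHIVE="backup-${STAMP}.tar.gz"

cd "$DATA_DIR"
tar -czf "$ARCHIVE" .openclaw                 # config, sessions, credentials
tar -tzf "$ARCHIVE" > /dev/null               # verify the archive is readable
echo "wrote $DATA_DIR/$ARCHIVE"
```

The verify step (`tar -tzf`) catches a truncated or corrupted archive at backup time rather than at restore time, when it is too late.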
Default startup command:
/opt/openclaw/startup.sh && /opt/openclaw/start_gateway.sh
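The two scripts are chained with &&, so the gateway only starts if the startup script exits successfully. The sketch below demonstrates that short-circuit behavior with stand-in scripts, not the real /opt/openclaw ones:

```shell
# Demonstrate `&&` short-circuiting with stand-in scripts in a temp dir.
set -u
dir="$(mktemp -d)"
printf '#!/bin/sh\nexit 0\n' > "$dir/startup.sh"
printf '#!/bin/sh\necho gateway-started\n' > "$dir/start_gateway.sh"
chmod +x "$dir/startup.sh" "$dir/start_gateway.sh"

# Startup succeeds -> gateway runs:
ok_out="$("$dir/startup.sh" && "$dir/start_gateway.sh")"
echo "$ok_out"

# Startup fails -> gateway never runs:
printf '#!/bin/sh\nexit 1\n' > "$dir/startup.sh"
fail_out="$("$dir/startup.sh" && "$dir/start_gateway.sh" || true)"
echo "${fail_out:-gateway not started}"
```

This is why a failed config migration in the startup script leaves the service URL showing the helper page instead of the Web UI.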
When the gateway stops, a helper page appears at the service URL with error details and steps to fix it:
- Read the error details, open the config file (/home/node/.openclaw/openclaw.json), and correct the issue

If your deployment does not have the helper page, follow these steps:
- Change the startup command to sleep 3600, then click Restart — this keeps the container running so you can edit files
- Open the config file (/home/node/.openclaw/openclaw.json), and correct the issue
- Change the startup command back to /opt/openclaw/startup.sh && /opt/openclaw/start_gateway.sh and click Restart

💡 To enable the helper page, redeploy from this template.
- Image: ghcr.io/openclaw/openclaw
- Tag: change from the current version (e.g. 2026.2.19) to the new version (e.g. 2026.2.26)

⚠️ Avoid using latest as the tag — it always pulls the newest release, which may introduce breaking changes or unexpected errors. Pin to a specific version for stability.
💡 The startup script automatically migrates your config on each boot — new settings (like trustedProxies, dangerouslyDisableDeviceAuth) are added if missing. Your existing settings are preserved.
This means the Web UI has not been paired with a Gateway Token yet. Fix:
You can find the Gateway Token in the Zeabur Dashboard Instructions tab or Environment Variables (OPENCLAW_GATEWAY_TOKEN).
This error occurs after upgrading to image 2026.2.23 or later without updating the config. Fix: edit /home/node/.openclaw/openclaw.json and add "dangerouslyAllowHostHeaderOriginFallback": true under gateway.controlUi:
{
"gateway": {
"controlUi": {
"dangerouslyAllowHostHeaderOriginFallback": true
}
}
}
Then restart the service. New deployments from this template already include this setting.
⚠️ This feature requires a fresh deployment from this template. Existing deployments do not have the Tailscale startup scripts — please redeploy to use this feature.
Instead of a public domain, you can use Tailscale to make OpenClaw accessible only within your private network (tailnet), without exposing it to the public internet.
Prerequisites:
Step 1: Set environment variables In the Zeabur dashboard Environment Variables tab, add:
- TS_AUTHKEY (required): Your Tailscale Auth Key (tskey-auth-xxx). Get one at Tailscale Admin Console → Keys. Without this, Tailscale setup is skipped entirely.
- TS_HOSTNAME (optional): The machine name on your tailnet, which determines your access URL (https://<TS_HOSTNAME>.<tailnet>.ts.net). Defaults to openclaw if not set.

Step 2: Switch startup command
Go to Settings → Command, change to:
/opt/openclaw/startup.sh && /opt/openclaw/start_gateway_tailscale.sh
Restart the service.
Step 3: Install Tailscale on your device Install Tailscale on the device you want to access OpenClaw from (macOS, Windows, iOS, Android, Linux), and log in with the same Tailscale account used to create the Auth Key.
Step 4: First login to Web UI Once started, open in your browser (must be on the same tailnet):
https://<TS_HOSTNAME>.<your-tailnet>.ts.net
You can find your tailnet DNS name at Tailscale Admin Console → DNS, or check the full URL in the service Logs on Zeabur dashboard.
Log in using either method:
https://<TS_HOSTNAME>.<your-tailnet>.ts.net?token=<GATEWAY_TOKEN>

You can find the Gateway Token in the Zeabur dashboard Instructions tab or Environment Variables (OPENCLAW_GATEWAY_TOKEN).
Step 5: Connect OpenClaw app (Optional, macOS example)
wss://<TS_HOSTNAME>.<your-tailnet>.ts.net

For iOS and Android setup, see the official documentation.
Switch back to public domain mode:
Change the startup command back to /opt/openclaw/startup.sh && /opt/openclaw/start_gateway.sh and restart.
This template pre-configures the following settings for Zeabur's cloud environment:
- gateway.trustedProxies: Set to ["10.0.0.0/8", "172.16.0.0/12"] so the gateway correctly identifies client IPs behind Zeabur's reverse proxy. Without this, the Web UI may show "device identity required" errors.
- dangerouslyDisableDeviceAuth: Disables Web UI device pairing (device pairing is designed for local networks; cloud deployments use Gateway Token authentication instead).
- /usr/local/bin symlinks: The openclaw, backup, and restore commands are symlinked to /usr/local/bin so they work in Zeabur's Command terminal.
- OPENCLAW_DISABLE_BONJOUR=1: Disables mDNS/Bonjour because Zeabur container hostnames can exceed the 63-byte DNS label limit. mDNS is only used for local network discovery and is not needed in cloud environments.
- OPENCLAW_TELEGRAM_DISABLE_AUTO_SELECT_FAMILY=true: Fixes Telegram connection issues in containerized environments (required for image versions 2026.2.17 and later).

2026/2/27
- Add dangerouslyDisableDeviceAuth — replace device pairing with Gateway Token authentication for cloud deployment
- Set default model to zeabur-ai/glm-4.7-flash with failover chain (grok-4-fast-non-reasoning → minimax-m2.5 → kimi-k2.5 → qwen-3-32 → gpt-5-mini)
- Upgrade image to 2026.2.26 — Telegram DM allowlist inheritance fix, temp dir permissions fix for containers, CLI gateway --force in non-root Docker, Gemini model ID normalization, and additional security hardening

2026/2/26
- Upgrade image to 2026.2.25 — 100+ security fixes across 2026.2.23→2026.2.25, new providers (Kilo Gateway, Mistral, Volcano Engine), heartbeat directPolicy config, gateway WebSocket auth hardening, cross-channel routing isolation, Discord voice DAVE reliability, Telegram webhook hang fix, and numerous stability improvements

2026/2/24
- Upgrade image to 2026.2.23 — includes 30+ security fixes, new providers (Kilo Gateway, Mistral, Volcano Engine), unified channel streaming config, multilingual stop phrases, reasoning/thinking suppression across all channels, and numerous stability improvements
- Add dangerouslyAllowHostHeaderOriginFallback to Control UI config — required for non-loopback deployments since 2026.2.23

2026/2/22
- Remove the rescue script (rescue.sh) — replaced by the helper page

2026/2/20
- Upgrade image to 2026.2.19 — add OPENCLAW_TELEGRAM_DISABLE_AUTO_SELECT_FAMILY=true env var to fix Telegram connection issues (required for image versions 2026.2.17 and later)

2026/2/16
- Switch image tag to latest to keep up with rapid security fixes

2026/2/15
- Add start_gateway_tailscale.sh for private HTTPS access via tailnet without exposing to the public internet
- Upgrade image to 2026.2.14

2026/2/10
- Switch image generation from /v1/images/generations to /v1/chat/completions API, default model to gemini-2.5-flash-image
- Fix gpt-oss-120b, llama-3.3-70b, qwen-3-32 HTTP 500: add supportsStore: false compat flag
- Fix gpt-oss-120b reasoning flag (set to true)
- Support .zip format (Zeabur backup service)
- Sync channel tokens (TELEGRAM_BOT_TOKEN, DISCORD_BOT_TOKEN, SLACK_BOT_TOKEN, SLACK_APP_TOKEN, LINE_CHANNEL_ACCESS_TOKEN, LINE_CHANNEL_SECRET) into config at startup

2026/2/7
2026/2/4
- Add backup and restore global commands

2026/2/2