Temporary OpenAI-compatible proxy that helps TypingMind interoperate with a native OpenClaw gateway.
- TypingMind test/save flows can call `GET /v1` and `POST /v1`.
- OpenClaw compatibility is strongest on `POST /v1/chat/completions`.
- This proxy serves compatibility endpoints and forwards valid chat requests to OpenClaw.
The matrix below tracks known behavior by TypingMind client variant.
| TypingMind client/version | Status | Behavior summary | Required proxy mode |
|---|---|---|---|
| TypingMind Mac desktop app (observed UA on 2026-02-19, macOS 26.3 build 25D125) | Supported | Save/test flow works via `POST /v1`; chat works via `POST /v1/chat/completions`; streaming works with incremental chunks and clean `[DONE]`; image payloads are accepted but not processed as true vision in the current ChatGPT Plus OAuth path. Use HTTPS endpoint in practice. | Default static guard (`STATIC_KEY_GUARD=1`) recommended. |
| TypingMind hosted web app (setapp.typingcloud.com) | Supported | Requires HTTPS endpoint; same chat/stream behavior as desktop path when routed through tunnel URL. | Default static guard (`STATIC_KEY_GUARD=1`) recommended. |
| Other TypingMind versions/builds | Untested | Treat as unknown until `make smoke` and an interactive save/test/chat check pass. | Start with default static guard, then adjust only if required. |
Validation baseline for each new TypingMind build:

```shell
make smoke BASE_URL=http://127.0.0.1:17890
make smoke BASE_URL=https://<TUNNEL_HOSTNAME>
```

Prerequisites:

- macOS
- OpenClaw gateway running locally on `127.0.0.1:18789`
- `python3`
- `openclaw` CLI in `PATH`
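Before running the smoke targets, a quick sanity check that the required commands are present can help. This loop is our own sketch, not a repo target:

```shell
# Check that each required command is on PATH (sketch; not part of the Makefile).
for cmd in python3 openclaw; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "found: $cmd"
  else
    echo "missing: $cmd"
  fi
done
```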
Apply the same native gateway baseline used in openclaw-config so TypingMind can reach OpenClaw through this proxy.
```shell
# enable OpenAI-compatible chat-completions endpoint
openclaw config set gateway.http.endpoints.chatCompletions.enabled true

# enforce token auth on gateway
openclaw config set gateway.auth.mode token

# if token is empty, generate one
openclaw doctor --generate-gateway-token

# restart gateway
openclaw gateway restart
```

Then install the proxy:

```shell
cd "$(git rev-parse --show-toplevel)"
make install
make status
```

Default listen port is 17890.
Run these checks only after `make install`.

Verify the default proxy-auth flow (recommended):

```shell
cd "$(git rev-parse --show-toplevel)"
PROXY_KEY=$(make -s print-static-api-key | tail -n 1)
curl -sS -X POST http://127.0.0.1:17890/v1/chat/completions \
  -H "Authorization: Bearer $PROXY_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"openclaw:main","messages":[{"role":"user","content":"Reply OK"}],"stream":false}' \
  | jq -r '.object, .choices[0].message.content'
```

Expected:

```
chat.completion
OK
```
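If you script this check, a minimal pass/fail wrapper can assert the response shape. The sample JSON below is inlined as an assumption of the shape shown above; in practice, feed it the curl output:

```shell
# Assert the response looks like a chat completion (sample response inlined for illustration).
RESP='{"object":"chat.completion","choices":[{"message":{"content":"OK"}}]}'
case "$RESP" in
  *'"object":"chat.completion"'*) echo "smoke: pass" ;;
  *)                              echo "smoke: fail" ;;
esac
```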
Optional legacy verification (only if the static guard is disabled):

```shell
make disable-static-api-key-guard
GATEWAY_TOKEN=$(openclaw config get gateway.auth.token | tr -d '"')
curl -sS -X POST http://127.0.0.1:17890/v1/chat/completions \
  -H "Authorization: Bearer $GATEWAY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model":"openclaw:main","messages":[{"role":"user","content":"Reply OK"}],"stream":false}' \
  | jq -r '.object, .choices[0].message.content'
```

Native gateway verification (diagnostic):
```shell
GATEWAY_TOKEN=$(openclaw config get gateway.auth.token | tr -d '"')
curl -sS -X POST http://127.0.0.1:18789/v1/chat/completions \
  -H "Authorization: Bearer $GATEWAY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model":"openclaw:main","messages":[{"role":"user","content":"Reply OK"}],"stream":false}' \
  | jq -r '.model, .choices[0].message.content'
```

Use this when you want a temporary external HTTPS URL without installing the named tunnel service:

```shell
cloudflared tunnel --url http://127.0.0.1:17890
```

Use the printed URL in TypingMind as:
- Endpoint URL: `https://<TRYCLOUDFLARE_URL>/v1`
- Model ID: `openclaw:main`
This URL changes every time you restart the command.
```shell
make help                          # show all available commands with usage notes
make install                       # render templates, install LaunchAgent, start proxy
make uninstall                     # remove proxy LaunchAgent + launcher script (and tunnel agent if present)
make start                         # start proxy LaunchAgent
make stop                          # stop proxy LaunchAgent
make restart                       # stop + reinstall proxy LaunchAgent
make status                        # show launchctl status and listening port
make logs                          # tail proxy log file
make health                        # GET /health on local proxy
make smoke                         # run /v1, /v1/models, /v1/chat/completions smoke checks
make smoke-keywords                # validate keyword alias routing to expected upstream models
make list-keywords                 # print supported escalation keywords and mapped models
make launchctl-diagnostics         # print LaunchAgent diagnostics for proxy + tunnel
make print-static-api-key          # print proxy static API key for TypingMind bearer value
make rotate-static-api-key         # rotate proxy static API key and restart proxy LaunchAgent
make enable-static-api-key-guard   # enforce static bearer at proxy edge
make disable-static-api-key-guard  # legacy mode without proxy edge bearer check
make lint                          # markdownlint --fix then markdownlint
```

Cloudflare tunnel targets (user LaunchAgent):

```shell
make install-cloudflare-tunnel \
  TUNNEL_TOKEN_FILE=~/.cloudflared/typingmind-openclaw-proxy.token \
  TUNNEL_HOSTNAME=<TUNNEL_HOSTNAME>
make uninstall-cloudflare-tunnel
make start-cloudflare-tunnel
make stop-cloudflare-tunnel
make restart-cloudflare-tunnel \
  TUNNEL_TOKEN_FILE=~/.cloudflared/typingmind-openclaw-proxy.token \
  TUNNEL_HOSTNAME=<TUNNEL_HOSTNAME>
make status-cloudflare-tunnel
make logs-cloudflare-tunnel
```

System-wide tunnel service targets:

```shell
make install-cloudflare-tunnel-system \
  TUNNEL_TOKEN_FILE=~/.cloudflared/typingmind-openclaw-proxy.token
make uninstall-cloudflare-tunnel-system
```

Common install/smoke variants:

```shell
make install PORT=18080
make install STATIC_KEY_GUARD=1
make install STATIC_KEY_GUARD=0
make smoke BASE_URL=http://127.0.0.1:17890
make smoke BASE_URL=https://<TUNNEL_HOSTNAME>
make smoke SMOKE_MODEL=openclaw:main
```

This project now supports (and `make install` auto-initializes) an optional static API key guard at the proxy layer.
- Key file (local only): `~/.openclaw/secrets/typingmind-openclaw-proxy-static-api-key`
- Generated automatically on `make install` if missing.
- Used as client bearer token for `POST /v1` and `POST /v1/chat/completions`.
- Proxy still uses `gateway.auth.token` separately for upstream OpenClaw calls.
- `make install` defaults to `STATIC_KEY_GUARD=1`.
- TypingMind sends the static proxy bearer key.
- Proxy injects `OPENCLAW_GATEWAY_TOKEN` (read locally from `openclaw config get gateway.auth.token`) for upstream calls to OpenClaw.
- Result: TypingMind never needs the gateway token directly.
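The swap described above can be sketched in a few lines; the function names are illustrative, not the proxy's actual code:

```shell
# Sketch of the edge auth swap: validate the client's static bearer,
# then attach the gateway token for the upstream call. Names are ours.
check_client_bearer() {
  # $1 = client Authorization header value, $2 = expected static proxy key
  [ "$1" = "Bearer $2" ]
}
upstream_auth_header() {
  # $1 = gateway token read from local OpenClaw config
  printf 'Authorization: Bearer %s\n' "$1"
}

# Demo with placeholder secrets: only a valid client bearer is swapped upstream.
if check_client_bearer "Bearer static-key-123" "static-key-123"; then
  upstream_auth_header "gateway-token-456"
fi
```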
Show the key to paste into TypingMind:

```shell
make print-static-api-key
```

Rotate the key and restart the proxy:

```shell
make rotate-static-api-key
```

When the static key guard is enabled, TypingMind should use:
- Header key: `Authorization`
- Header value: `Bearer <STATIC_PROXY_API_KEY>`
If you want to avoid the extra proxy edge key and keep legacy behavior:

```shell
make disable-static-api-key-guard
```

or at install time:

```shell
make install STATIC_KEY_GUARD=0
```

In this mode, the proxy does not enforce the static bearer key. You can still set the TypingMind bearer to the gateway token, but this is not recommended for tunnel exposure because it exposes a higher-privilege secret to the client side.
This project supports a token-managed persistent tunnel install as the only persistent mode. No `cloudflared tunnel login` is required on the local machine.
In the Cloudflare Zero Trust dashboard:

- Go to `Networks` -> `Connectors` (Cloudflare Tunnels).
- Create a tunnel (type `Cloudflared`) named `typingmind-openclaw-proxy`.
- In `Route tunnel` -> `Published applications`, fill the form:
  - Subdomain: `oc`
  - Domain: your zone (for example `example.com`)
  - Path: leave empty
  - Type: `HTTP`
  - URL: `127.0.0.1:17890` (or `http://127.0.0.1:17890`)
- Resulting hostname: `oc.<YOUR_DOMAIN>`
- In `Install and run connectors`, copy the generated Tunnel Token.
Notes:

- The `Install and run connectors` page (token page) does not show hostname mapping fields.
- Hostname mapping is configured in `Route tunnel` for remotely managed tunnels.
Store the token locally:

```shell
mkdir -p ~/.cloudflared
chmod 700 ~/.cloudflared
cat > ~/.cloudflared/typingmind-openclaw-proxy.token <<'EOF'
<CLOUDFLARE_TUNNEL_TOKEN>
EOF
chmod 600 ~/.cloudflared/typingmind-openclaw-proxy.token
```

Run it in the foreground for a quick check:

```shell
cloudflared tunnel run --token "$(cat ~/.cloudflared/typingmind-openclaw-proxy.token)"
```

Install it as a user LaunchAgent:

```shell
make install-cloudflare-tunnel \
  TUNNEL_TOKEN_FILE=~/.cloudflared/typingmind-openclaw-proxy.token \
  TUNNEL_HOSTNAME=<TUNNEL_HOSTNAME>
```

Alternative (system-wide service via sudo, as shown in Cloudflare docs):

```shell
make install-cloudflare-tunnel-system \
  TUNNEL_TOKEN_FILE=~/.cloudflared/typingmind-openclaw-proxy.token
```

Check service:
```shell
make status-cloudflare-tunnel
make logs-cloudflare-tunnel
```

Stop/remove (safe if absent):

```shell
make uninstall-cloudflare-tunnel
```

If you used the system-wide service option:

```shell
make uninstall-cloudflare-tunnel-system
```

Smoke-check the public endpoint:

```shell
curl -i https://<TUNNEL_HOSTNAME>/v1
curl -i -X POST https://<TUNNEL_HOSTNAME>/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"x":1}'
```

Expected: HTTP 200 from proxy compatibility endpoints.
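For scripted monitoring, a small status-code helper can wrap the same check. `check_status` is our own name, and the hostname stays a placeholder:

```shell
# Our own helper for scripting the check above: turn an HTTP status into pass/fail.
check_status() {
  if [ "$1" = "200" ]; then echo "tunnel ok"; else echo "tunnel check failed: $1"; fi
}

# Hostname is a placeholder; this line needs network access when run for real.
code=$(curl -s -o /dev/null -w '%{http_code}' "https://<TUNNEL_HOSTNAME>/v1")
check_status "$code"
```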
Proxy logs:

```shell
tail -f ~/.openclaw/logs/typingmind-proxy.log
```

Tunnel logs (LaunchAgent-managed):

```shell
tail -f ~/.openclaw/logs/typingmind-proxy-tunnel.log
tail -f ~/.openclaw/logs/typingmind-proxy-tunnel.err.log
```

Or via make target:

```shell
make logs-cloudflare-tunnel
```

Note: cloudflared commonly writes operational logs to `typingmind-proxy-tunnel.err.log`.
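Routing diagnostics can be filtered out of the log stream with a simple grep. The sample log lines below are inlined as an assumption; real log formats may differ:

```shell
# Filter model-routing lines from a log stream (sample lines are illustrative).
printf '%s\n' \
  'INFO requested model=ocm:openai-codex__gpt-5.1' \
  'INFO health check ok' \
  'INFO forwarded model=openai-codex/gpt-5.1' \
  | grep 'model='
```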
Use this only after you have a working HTTPS endpoint from one of the tunnel options above.
- Endpoint URL:
  - Quick tunnel: `https://<TRYCLOUDFLARE_URL>/v1`
  - Persistent token tunnel: `https://<TUNNEL_HOSTNAME>/v1`
- Model ID: `openclaw:main`
Model catalog behavior:
- `GET /v1/models` returns TypingMind-safe model IDs.
- Known OpenClaw models are exposed with friendly IDs (for example `openclaw:gpt-5-1`).
- Unknown upstream model IDs still fall back to `ocm:<upstream_id_with_/replaced_by__>` (for example `ocm:openai-codex__gpt-5.1`).
- `POST /v1/chat/completions` accepts these aliased IDs, and the proxy maps them back to canonical OpenClaw model IDs before forwarding upstream.
- To disable aliasing and expose raw upstream IDs, set `OPENCLAW_PROXY_MODEL_ID_ALIASING_ENABLED=0`.
- For routing diagnostics, keep `OPENCLAW_PROXY_DEBUG_MODEL_ROUTING=1` (default) to log requested/decoded/forwarded model IDs in the proxy logs.
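The alias rule above can be sketched as a pair of helpers (the function names are ours, not the proxy's):

```shell
# Encode: prefix "ocm:" and replace "/" with "__", per the documented alias rule.
encode_model_alias() { printf 'ocm:%s\n' "$(printf '%s' "$1" | sed 's#/#__#g')"; }
# Decode: strip the "ocm:" prefix and restore "/" from "__".
decode_model_alias() { printf '%s\n' "$1" | sed -e 's/^ocm://' -e 's#__#/#g'; }

encode_model_alias "openai-codex/gpt-5.1"       # -> ocm:openai-codex__gpt-5.1
decode_model_alias "ocm:openai-codex__gpt-5.1"  # -> openai-codex/gpt-5.1
```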
Important:
- Use an HTTPS endpoint for TypingMind (all client variants).
- Do not use `http://127.0.0.1:17890/v1` in TypingMind.
- The local HTTP endpoint is for local diagnostics only (for example `curl` tests).
- `Support Image Input` in TypingMind works only when OpenClaw is backed by an API provider path that actually supports vision/image payloads. In this project's current ChatGPT Plus OAuth path, image payloads are accepted but not processed as real vision input.
If `OPENCLAW_PROXY_ESCALATION_KEYWORDS_ENABLED=1` (default), you can prefix a user message with a keyword to override the model for that request:
- `!fast` -> `openai-codex/gpt-5.1` (fast default on ChatGPT-account Codex auth)
- `!mini` -> `openai-codex/gpt-5.1-codex-mini`
- `!std` / `!gp` -> `openai-codex/gpt-5.1`
- `!deep` / `!max` -> `openai-codex/gpt-5.1-codex-max`
- `!codex` / `!heavy` -> `openai-codex/gpt-5.3-codex`
Built-in helper command (intercepted by the proxy, not forwarded upstream):

- `!keywords` (also `!keyword`, `!switches`, `!models`) -> returns the current keyword-to-model map directly in chat.

Example:

```
!deep Plan a safe rollback strategy for these changes.
```
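The keyword routing above can be sketched as a simple prefix switch; `route_model` is illustrative, and the no-keyword fallback model is an assumption:

```shell
# Map a leading escalation keyword to an upstream model (sketch of the table above).
route_model() {
  case "$1" in
    '!fast '*|'!std '*|'!gp '*) echo "openai-codex/gpt-5.1" ;;
    '!mini '*)                  echo "openai-codex/gpt-5.1-codex-mini" ;;
    '!deep '*|'!max '*)         echo "openai-codex/gpt-5.1-codex-max" ;;
    '!codex '*|'!heavy '*)      echo "openai-codex/gpt-5.3-codex" ;;
    *)                          echo "openclaw:main" ;;  # no keyword: assumed fallback
  esac
}

route_model '!deep Plan a safe rollback strategy for these changes.'
# -> openai-codex/gpt-5.1-codex-max
```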
Template sources:

- `templates/start_typingmind_proxy.sh.tpl`
- `templates/ai.openclaw.typingmind-proxy.plist.tpl`
- `templates/start_typingmind_cloudflared_token.sh.tpl`
- `templates/ai.openclaw.typingmind-proxy-tunnel-token.plist.tpl`

`make install` renders these templates into:

- `~/.openclaw/bin/start_typingmind_proxy.sh`
- `~/Library/LaunchAgents/ai.openclaw.typingmind-proxy.plist`
- Do not commit tokens.
- The gateway token is fetched at runtime from local OpenClaw config (`gateway.auth.token`).
- Keep the gateway bound to loopback unless you intentionally expose it via a secure tunnel/reverse proxy.