TypeScript port of a Codex/ChatGPT account load balancer with a dashboard, SQLite storage, and OAuth-backed account onboarding.
This repo currently provides:
- Fastify backend on port `2455`
- React dashboard on port `5173`
- OAuth callback listener on port `1455`
- SQLite persistence for accounts, settings, and request logs
- OAuth account storage with refresh-token based renewal
- Start backend and frontend together with one command
- Add accounts from the dashboard using real OpenAI OAuth
- Configure Codex CLI to point at this proxy
- Persist and refresh OAuth account tokens
OAuth account onboarding is implemented, but full codex-cli request-shape compatibility is still incomplete.
The upstream ChatGPT Codex backend is stricter than the standard OpenAI Responses API. Until the request adapter is completed, direct proxy requests may still require upstream-native fields such as:
- `instructions`
- `input` as a list
- `store: false`
- `stream: true`
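The adapter's job can be sketched as a small normalization step. The helper below is illustrative only (the type and function names are not from this repo); it assumes a request whose `input` may arrive as a plain string and fills in the four upstream-native fields listed above:

```typescript
// Hypothetical sketch of the request adapter; names are illustrative.
type CodexRequest = {
  model: string;
  instructions: string;
  input: Array<{ role: string; content: Array<{ type: string; text: string }> }>;
  store: false;
  stream: true;
};

function toUpstreamShape(req: {
  model: string;
  instructions?: string;
  input: string | CodexRequest["input"];
}): CodexRequest {
  return {
    model: req.model,
    instructions: req.instructions ?? "",
    // Upstream expects `input` as a list of structured messages,
    // so wrap a bare string into a single user message.
    input:
      typeof req.input === "string"
        ? [{ role: "user", content: [{ type: "input_text", text: req.input }] }]
        : req.input,
    store: false, // upstream-native requirement
    stream: true, // upstream-native requirement
  };
}
```

A full adapter would also have to translate Chat Completions-style `messages`, but the normalization idea is the same.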
Install dependencies:

```
npm install
cd frontend && npm install --legacy-peer-deps && cd ..
```

Run backend and frontend together:

```
npm run dev
```

Services:
- backend API: `http://127.0.0.1:2455`
- frontend dashboard: `http://127.0.0.1:5173`
- OAuth callback: `http://127.0.0.1:1455/auth/callback`
Backend only:

```
npm run dev:backend
```

Frontend only:

```
npm run dev:frontend
```

Run the full project in Docker:

```
docker compose up --build
```

Services:
- backend API: `http://127.0.0.1:2455`
- frontend dashboard: `http://127.0.0.1:5173`
- OAuth callback: `http://127.0.0.1:1455/auth/callback`
The compose setup exposes port 1455 so browser OAuth can finish outside the container.
Defaults are already set in code, but these are the main ones:

```
HOST=0.0.0.0
PORT=2455
DB_PATH=./data/codex-lm-ts.db
AUTH_BASE_URL=https://auth.openai.com
OAUTH_CLIENT_ID=app_EMoamEEZ73f0CkXaXp7hrann
OAUTH_SCOPE="openid profile email offline_access"
OAUTH_REDIRECT_URI=http://localhost:1455/auth/callback
OAUTH_CALLBACK_HOST=127.0.0.1
OAUTH_CALLBACK_PORT=1455
```

For Docker, the compose file already overrides `OAUTH_CALLBACK_HOST=0.0.0.0`.
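Since the defaults live in code, the pattern is environment-with-fallback. A minimal sketch (the `envOr` helper is illustrative, not this repo's actual config module):

```typescript
// Illustrative helper: read an env var, falling back to the in-code default.
function envOr(name: string, fallback: string): string {
  const value = process.env[name];
  return value !== undefined && value !== "" ? value : fallback;
}

// Variable names match the list above; defaults are the documented ones.
const PORT = Number(envOr("PORT", "2455"));
const OAUTH_REDIRECT_URI = envOr(
  "OAUTH_REDIRECT_URI",
  "http://localhost:1455/auth/callback"
);
```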
- Start the project.
- Open `http://127.0.0.1:5173`.
- Go to `Accounts`.
- Click `Add with OAuth`.
- Choose:
  - `Browser (PKCE)` for normal local login
  - `Device code` for headless or remote setups
- Complete the OpenAI sign-in flow.
After success, the account is stored in SQLite and will be refreshed automatically before proxy use.
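The refresh decision itself is simple: renew whenever the stored token is at, or close to, its expiry. A sketch of that check only (the helper name and skew value are illustrative; the actual renewal exchanges the stored refresh token against `AUTH_BASE_URL`):

```typescript
// Renew slightly early so a token never expires mid-request.
const REFRESH_SKEW_MS = 60_000; // illustrative one-minute safety margin

function needsRefresh(expiresAtMs: number, nowMs: number = Date.now()): boolean {
  // True once we are within the skew window of (or past) expiry.
  return nowMs >= expiresAtMs - REFRESH_SKEW_MS;
}
```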
- The callback listener uses `http://localhost:1455/auth/callback`
- Port `1455` must be reachable from your browser
- If the callback cannot reach the app directly, paste the callback URL into the dialog’s manual callback input
The dashboard still supports manual upstream accounts, but if your goal is Codex/ChatGPT OAuth-backed usage, use the OAuth dialog instead.
Edit `~/.codex/config.toml`:

```
model = "gpt-5.3-codex"
model_provider = "codex-lm-ts"

[model_providers.codex-lm-ts]
name = "OpenAI"
base_url = "http://127.0.0.1:2455/backend-api/codex"
wire_api = "responses"
```

If you enabled proxy API key enforcement in dashboard settings, add:

```
[model_providers.codex-lm-ts]
name = "OpenAI"
base_url = "http://127.0.0.1:2455/backend-api/codex"
wire_api = "responses"
env_key = "CODEX_LM_TS_API_KEY"
```

And export it:

```
export CODEX_LM_TS_API_KEY="your-proxy-key"
```

Backend health:

```
curl http://127.0.0.1:2455/health
```

List models through the proxy:

```
curl http://127.0.0.1:2455/backend-api/codex/models
```

Test a manual upstream-native request:
```
curl -N -X POST http://127.0.0.1:2455/backend-api/codex/responses \
  -H 'content-type: application/json' \
  -d '{
    "model": "gpt-5.3-codex",
    "instructions": "You are a helpful coding assistant.",
    "store": false,
    "stream": true,
    "input": [
      {
        "role": "user",
        "content": [
          {
            "type": "input_text",
            "text": "Write a hello world program in TypeScript."
          }
        ]
      }
    ]
  }'
```

The proxy supports Cherry Studio via both the OpenAI Responses API and Chat Completions.
- In Cherry Studio, go to Settings → Providers and add a new provider.
- Choose OpenAI (Responses API), not "OpenAI-Compatible".
- Set API Base URL to `http://localhost:2455` (or your backend URL). Do not add `/v1`; Cherry Studio appends it.
- Set API Key to your proxy API key (from dashboard Settings) if you enabled proxy auth.
- Add the Codex models (e.g. `gpt-5.3-codex`) and enable the provider.
Cherry Studio will send requests to POST /v1/responses, which the proxy forwards to Codex. The Responses API format is compatible with Codex.
If you prefer OpenAI-Compatible (Chat Completions), use the same base URL. The proxy adapts requests to Codex and transcodes the streamed response.
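The transcoding direction can be sketched for one event type. The Responses stream carries `response.output_text.delta` events; mapping one of them to a Chat Completions streaming chunk looks roughly like this (details are illustrative, not the proxy's exact implementation):

```typescript
// One Responses streaming event carrying a text fragment.
type ResponsesDelta = { type: "response.output_text.delta"; delta: string };

// Map it to the Chat Completions chunk shape clients like Cherry Studio expect.
function toChatChunk(event: ResponsesDelta, model: string) {
  return {
    object: "chat.completion.chunk",
    model,
    choices: [
      { index: 0, delta: { content: event.delta }, finish_reason: null },
    ],
  };
}
```

A real transcoder also handles role/priming chunks, tool calls, and the terminal `finish_reason`, but each event maps independently like this.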
- `GET /health`
- `GET /api/accounts`
- `POST /api/accounts`
- `PATCH /api/accounts/:id`
- `POST /api/oauth/start`
- `GET /api/oauth/status`
- `POST /api/oauth/complete`
- `POST /api/oauth/manual-callback`
- `GET /api/settings`
- `PUT /api/settings`
- `GET /api/dashboard/summary`
- `GET /api/request-logs`
- `GET /v1/models`
- `POST /v1/chat/completions` (Cherry Studio / OpenAI Chat Completions)
- `POST /v1/responses`
- `GET /backend-api/codex/models`
- `POST /backend-api/codex/responses`
Backend:

```
npm run typecheck
npm test
```

Frontend:

```
cd frontend
npm run typecheck
npm run build
```