Coding Assistant Config Generator

Generate configs for Claude Code, Codex, OpenCode, and Qwen Code

About

This tool generates ready-to-use configuration files and start commands for AI coding assistants. Works with any OpenAI-compatible endpoint — go-llm-proxy, vLLM, llama-server, Ollama, LiteLLM, or cloud APIs. Prefer not to edit config files? Use the "Start command" output to get a shell script you can run directly.
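As a sketch of what a "Start command" output looks like, here is a minimal example for Claude Code pointed at a local endpoint. The URL is illustrative; ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN are the environment variables Claude Code reads for a custom endpoint and key:

```shell
# Minimal start-command sketch for Claude Code (URL and key are placeholders).
export ANTHROPIC_BASE_URL="http://localhost:8080"
export ANTHROPIC_AUTH_TOKEN="<YOUR-API-KEY>"
# then launch the assistant:
# claude
```

The generated scripts for Codex, OpenCode, and Qwen Code follow the same pattern with their respective variables.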

Everything runs client-side. No data is sent to, retained, or collected by any server.

Endpoint Setup

Your OpenAI-compatible endpoint. Include /v1 if required by your server.
go-llm-proxy provides protocol translation between coding assistants and local backends, plus transparent vision/OCR image processing, PDF text extraction, and web search interception for Claude Code, Codex, OpenCode, and Qwen Code. Learn more →
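As a reference for the endpoint field, the default base URLs of common local backends look like this (ports are the upstream defaults; adjust them if you changed your server's configuration):

```shell
# Default OpenAI-compatible base URLs for common local backends (illustrative).
LLAMA_SERVER_URL="http://localhost:8080/v1"   # llama-server default port
VLLM_URL="http://localhost:8000/v1"           # vLLM default port
OLLAMA_URL="http://localhost:11434/v1"        # Ollama's OpenAI-compatible endpoint
# Sanity check: list the models the endpoint serves
# curl "$LLAMA_SERVER_URL/models"
```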

Models

No models added yet. Use the form above or click a preset to add models.

Generate Configuration

Configuration File

Replace <YOUR-API-KEY> placeholders with your actual credentials after copying.
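One quick way to do the substitution from a shell, assuming the generated file was saved as config.json (the filename and key value here are illustrative; the `sed -i` form shown is GNU sed, macOS needs `sed -i ''`):

```shell
# Swap the <YOUR-API-KEY> placeholder in a saved config file.
# The filename and key value are illustrative, not the tool's actual output.
printf '{ "apiKey": "<YOUR-API-KEY>" }\n' > config.json
sed -i 's/<YOUR-API-KEY>/sk-local-example/' config.json
cat config.json
```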

Installation Instructions

Web Search

Local LLM backends don't support the web search tools that coding assistants request. go-llm-proxy solves this by intercepting search tool calls, executing them via Tavily or Brave Search, and injecting results back into the conversation — transparently, with no client-side setup.

With go-llm-proxy configured, search tool calls are handled by the proxy and no client-side search settings are needed.

Without a proxy, you can configure client-side search directly:

"webSearch": {
  "provider": [
    { "type": "tavily", "apiKey": "tvly-..." },
    { "type": "google", "apiKey": "...", "searchEngineId": "..." },
    { "type": "dashscope" }
  ],
  "default": "tavily"
}

DashScope is available automatically for Qwen OAuth users. Google requires a Custom Search API key and engine ID.

Images, PDFs & Vision

If you point a coding assistant directly at vLLM or llama-server, image and PDF features silently fail on text-only models: either you restrict yourself to models with native vision support, or you lose screenshots, pasted images, and document reading entirely.

go-llm-proxy is a free, open-source single binary you run alongside your backend. It sits between your coding assistant and your inference server and handles this transparently, adding vision/OCR image processing and PDF text extraction for models that lack native support.

Everything runs on your machines. The proxy is a ~15MB binary with no dependencies, no cloud services, no accounts.

Built for the local LLM community. go-llm-proxy on GitHub