A universal LLM proxy that connects Claude Code to any language model provider
Production-ready LLM proxy server that converts requests from various LLM providers to Anthropic's Claude API format. Built with Go for high performance and reliability.
Getting started is as simple as `CCO_API_KEY="<openrouter key>" cco code`, then selecting `openrouter,qwen/qwen3-coder` as the model, and voilà.
Inspired by Claude Code Router but rebuilt from the ground up to actually work reliably.
💡 **Note:** When installed with `go install`, the binary is named `claude-code-open`. Throughout this documentation, you can substitute `cco` with `claude-code-open` or create an alias as shown in the installation section.
📥 Option 1: Install with Go (Recommended)
The easiest way to install is using Go's built-in installer:
# Install directly from GitHub
go install github.com/Davincible/claude-code-open@latest
# The binary will be installed as 'claude-code-open' in $(go env GOBIN) or $(go env GOPATH)/bin
# Create an alias for shorter command (optional)
echo 'alias cco="claude-code-open"' >> ~/.bashrc # or ~/.zshrc
source ~/.bashrc # or ~/.zshrc
# Or create a symlink (Linux/macOS) - handles both GOBIN and GOPATH
GOBIN_DIR=$(go env GOBIN)
if [ -z "$GOBIN_DIR" ]; then
GOBIN_DIR="$(go env GOPATH)/bin"
fi
sudo ln -s "$GOBIN_DIR/claude-code-open" /usr/local/bin/cco
# One-liner version:
# sudo ln -s "$([ -n "$(go env GOBIN)" ] && go env GOBIN || echo "$(go env GOPATH)/bin")/claude-code-open" /usr/local/bin/cco
🔨 Option 2: Build from Source
# Clone the repository
git clone https://github.com/Davincible/claude-code-open
cd claude-code-open
# Build with Make (creates 'cco' binary)
make build
sudo make install # Install to /usr/local/bin
# Or build manually
go build -o cco .
sudo mv cco /usr/local/bin/
⚙️ Option 3: Install with Custom Binary Name
# Install with go install and create symlink using Go environment
go install github.com/Davincible/claude-code-open@latest
GOBIN_DIR=$(go env GOBIN); [ -z "$GOBIN_DIR" ] && GOBIN_DIR="$(go env GOPATH)/bin"
sudo ln -sf "$GOBIN_DIR/claude-code-open" /usr/local/bin/cco
# Or use go install with custom GOBIN (if you have write permissions)
GOBIN=/usr/local/bin go install github.com/Davincible/claude-code-open@latest
sudo mv /usr/local/bin/claude-code-open /usr/local/bin/cco
# Or install to a custom directory you own
mkdir -p ~/.local/bin
GOBIN=~/.local/bin go install github.com/Davincible/claude-code-open@latest
ln -sf ~/.local/bin/claude-code-open ~/.local/bin/cco
# Add ~/.local/bin to PATH if not already there
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
For the fastest setup, you can run without any configuration file using just the `CCO_API_KEY` environment variable:
# Set your API key (works with any provider)
# This is the API key of the provider you want to use, can be any one of the supported providers
# Then in Claude Code you set the model with <provider>,<model name> e.g. openrouter,moonshotai/kimi-k2
export CCO_API_KEY="your-api-key-here"
# Start the router immediately - no config file needed!
# Although you can create a config if you want to store your API keys for all providers
cco start # or claude-code-open start
# The API key will be used for whichever provider your model requests
# e.g., if you use "openrouter,anthropic/claude-sonnet-4" -> key goes to OpenRouter
# e.g., if you use "openai,gpt-4o" -> key goes to OpenAI
**🔑 How CCO_API_KEY Works**
- ✅ Single API Key - Use one environment variable for all providers
For advanced setups with multiple API keys, generate a complete YAML configuration:
cco config generate # or claude-code-open config generate
This creates `config.yaml` with all 5 supported providers and sensible defaults. Then edit the file to add your API keys:
# config.yaml
host: 127.0.0.1
port: 6970
api_key: your-proxy-key # Optional: protect the proxy
providers:
  - name: openrouter
    api_key: your-openrouter-api-key
    model_whitelist: ["claude", "gpt-4"] # Optional: filter models
  - name: openai
    api_key: your-openai-api-key
  # ... etc
Alternatively, use the interactive setup:
cco config init # or claude-code-open config init
**🚀 Start the Service**

    cco start
    # or
    claude-code-open start

**📊 Check Status**

    cco status
    # or
    claude-code-open status

**💬 Use with Claude Code**

    cco code [arguments]
    # or
    claude-code-open code [...]
    # Auto-starts if not running

**⏹️ Stop the Service**

    cco stop
    # or
    claude-code-open stop
The router supports explicit provider and model selection using comma notation, which overrides all automatic routing logic:
**🤖 Automatic Routing (Fallback)**
When no comma is present in the model name, the router applies these rules in order:
The router uses a modular provider system where each provider implements the `Provider` interface:
type Provider interface {
Name() string
SupportsStreaming() bool
TransformRequest(request []byte) ([]byte, error)
TransformResponse(response []byte) ([]byte, error)
TransformStream(chunk []byte, state *StreamState) ([]byte, error)
IsStreaming(headers map[string][]string) bool
GetEndpoint() string
SetAPIKey(key string)
}
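For illustration, here is a rough sketch of what part of this interface could look like implemented for an OpenAI-compatible backend. `StubProvider` and its field mappings are hypothetical, not the project's actual code; real responses carry many more fields (usage, stop reasons, tool calls), and the remaining interface methods are omitted.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// StubProvider is a hypothetical, partial Provider implementation
// used only to illustrate the transformation contract.
type StubProvider struct {
	name     string
	endpoint string
	apiKey   string
}

func (p *StubProvider) Name() string            { return p.name }
func (p *StubProvider) SupportsStreaming() bool { return true }
func (p *StubProvider) GetEndpoint() string     { return p.endpoint }
func (p *StubProvider) SetAPIKey(key string)    { p.apiKey = key }

// TransformResponse converts a minimal OpenAI-style chat completion
// into a minimal Anthropic-style message body.
func (p *StubProvider) TransformResponse(response []byte) ([]byte, error) {
	var in struct {
		Choices []struct {
			Message struct {
				Content string `json:"content"`
			} `json:"message"`
		} `json:"choices"`
	}
	if err := json.Unmarshal(response, &in); err != nil {
		return nil, err
	}
	if len(in.Choices) == 0 {
		return nil, fmt.Errorf("no choices in response")
	}
	out := map[string]any{
		"type": "message",
		"role": "assistant",
		"content": []map[string]string{
			{"type": "text", "text": in.Choices[0].Message.Content},
		},
	}
	return json.Marshal(out)
}

func main() {
	p := &StubProvider{name: "stub"}
	resp, err := p.TransformResponse([]byte(`{"choices":[{"message":{"content":"hi"}}]}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(resp))
}
```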
- 🐧 Linux/macOS
- 🪟 Windows
🔄 **Backward Compatibility:** The router will also check `~/.claude-code-router/` for existing configurations and use them automatically, with a migration notice.
The router now supports modern YAML configuration with automatic defaults:
# Server settings
host: 127.0.0.1
port: 6970
api_key: your-proxy-key-here # Optional: protect proxy with authentication

# Provider configurations
providers:
  # OpenRouter - Access to multiple models
  - name: openrouter
    api_key: your-openrouter-api-key
    # url: auto-populated from defaults
    # default_models: auto-populated with curated list
    model_whitelist: ["claude", "gpt-4"] # Optional: filter models by pattern

  # OpenAI - Direct GPT access
  - name: openai
    api_key: your-openai-api-key
    # Automatically configured with GPT-4, GPT-4-turbo, GPT-3.5-turbo

  # Anthropic - Direct Claude access
  - name: anthropic
    api_key: your-anthropic-api-key
    # Automatically configured with Claude models

  # Nvidia - Nemotron models
  - name: nvidia
    api_key: your-nvidia-api-key

  # Google Gemini
  - name: gemini
    api_key: your-gemini-api-key

# Router configuration for different use cases
router:
  default: openrouter,anthropic/claude-sonnet-4
  think: openai,o1-preview
  long_context: anthropic,claude-sonnet-4
  background: anthropic,claude-3-haiku-20240307
  web_search: openrouter,perplexity/llama-3.1-sonar-huge-128k-online
Map custom domains (like localhost) to existing providers for local model support:
# config.yaml
domain_mappings:
  localhost: openai      # Use OpenAI transformations for localhost requests
  127.0.0.1: gemini      # Use Gemini transformations for 127.0.0.1 requests
  custom.api: openrouter # Use OpenRouter transformations for custom APIs

providers:
  - name: local-lmstudio
    url: "http://localhost:1234/v1/chat/completions"
    api_key: "not-needed"
Benefits:
- ✅ Local Model Support - Route localhost to existing providers
- ✅ Reuse Transformations - Leverage proven request/response logic
- ✅ No Custom Provider Needed - Use existing provider implementations
- ✅ Flexible Mapping - Any domain can map to any provider
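Conceptually, the lookup behind `domain_mappings` is a hostname-to-provider table. The sketch below is illustrative (the function `providerForHost` is a hypothetical name, not the router's actual code), using the same mappings as the config example above:

```go
package main

import (
	"fmt"
	"net/url"
)

// providerForHost resolves which provider's transformations to apply
// for a request URL, using a domain_mappings-style table.
// Illustrative sketch only, not the router's actual implementation.
func providerForHost(rawURL string, mappings map[string]string) (string, bool) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", false
	}
	p, ok := mappings[u.Hostname()]
	return p, ok
}

func main() {
	mappings := map[string]string{
		"localhost": "openai",
		"127.0.0.1": "gemini",
	}
	p, _ := providerForHost("http://localhost:1234/v1/chat/completions", mappings)
	fmt.Println(p) // prints "openai"
}
```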
📋 JSON Configuration (Click to expand)
The router still supports JSON configuration for backward compatibility:
{
  "HOST": "127.0.0.1",
  "PORT": 6970,
  "APIKEY": "your-router-api-key-optional",
  "Providers": [
    {
      "name": "openrouter",
      "api_base_url": "https://openrouter.ai/api/v1/chat/completions",
      "api_key": "your-provider-api-key",
      "models": ["anthropic/claude-sonnet-4"],
      "model_whitelist": ["claude", "gpt-4"],
      "default_models": ["anthropic/claude-sonnet-4"]
    }
  ],
  "Router": {
    "default": "openrouter,anthropic/claude-sonnet-4",
    "think": "openrouter,anthropic/claude-sonnet-4",
    "longContext": "openrouter,anthropic/claude-sonnet-4",
    "background": "openrouter,anthropic/claude-3-5-haiku",
    "webSearch": "openrouter,perplexity/llama-3.1-sonar-large-128k-online"
  }
}
- ✅ Auto-Defaults - URLs and model lists auto-populated
- ✅ Smart Model Management - Auto-filtered by whitelists
**Format:** `provider_name,model_name` (e.g., `openai,gpt-4o`, `anthropic,claude-sonnet-4`)
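The comma notation can be parsed with a single split; the following is an illustrative sketch (the function name `splitModel` is hypothetical, not the router's actual parser). When no comma is present, the provider is empty and automatic routing applies:

```go
package main

import (
	"fmt"
	"strings"
)

// splitModel parses the "provider,model" override notation.
// An empty provider means no override: automatic routing applies.
func splitModel(spec string) (provider, model string) {
	if i := strings.Index(spec, ","); i >= 0 {
		return spec[:i], spec[i+1:]
	}
	return "", spec
}

func main() {
	p, m := splitModel("openai,gpt-4o")
	fmt.Println(p, m) // prints "openai gpt-4o"

	p, m = splitModel("claude-sonnet-4")
	fmt.Println(p, m) // provider empty: automatic routing
}
```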
- 🚀 Start Service: `cco start [--verbose] [--log-file]`
- 📊 Check Status: `cco status`
- ⏹️ Stop Service: `cco stop`
- 📁 Generate Config: `cco config generate [--force]`
- 🔧 Interactive Setup: `cco config init`
- 👁️ Show Config: `cco config show`
- ✅ Validate Config: `cco config validate`
# Run Claude Code through the router
cco code [args...]
# Examples:
cco code --help
cco code "Write a Python script to sort a list"
cco code --resume session-name
To add support for a new LLM provider:
- **Create Provider Implementation** (`internal/providers/newprovider.go`):

      type NewProvider struct {
          name     string
          endpoint string
          apiKey   string
      }

      func (p *NewProvider) TransformRequest(request []byte) ([]byte, error) {
          // Implement Claude → Provider format transformation
      }

      func (p *NewProvider) TransformResponse(response []byte) ([]byte, error) {
          // Implement Provider → Claude format transformation
      }

      func (p *NewProvider) TransformStream(chunk []byte, state *StreamState) ([]byte, error) {
          // Implement streaming response transformation (Provider → Claude format)
      }
- **Register Provider** (`internal/providers/registry.go`):

      func (r *Registry) Initialize() {
          r.Register(NewOpenRouterProvider())
          r.Register(NewOpenAIProvider())
          r.Register(NewAnthropicProvider())
          r.Register(NewNvidiaProvider())
          r.Register(NewGeminiProvider())
          r.Register(NewYourProvider()) // Add here
      }
- **Update Domain Mapping** (`internal/providers/registry.go`):

      domainProviderMap := map[string]string{
          "your-provider.com": "yourprovider",
          // ... existing mappings
      }
- 🐹 Go 1.24.4 or later
- 💻 Development Tools (optional)
# Development with hot reload (automatically installs Air if needed)
make dev
# This will:
# - Install Air if not present
# - Start the server with `cco start --verbose`
# - Watch for Go file changes
# - Automatically rebuild and restart on changes
**🔨 Single Platform**

    go build -o cco .
    # or
    make build
    # or
    task build

**🌍 Cross-Platform**

    make build-all
    # or
    task build-all

**🎯 Manual Cross-Compilation**

    GOOS=linux GOARCH=amd64 go build -o cco-linux-amd64 .
    GOOS=darwin GOARCH=amd64 go build -o cco-darwin-amd64 .
    GOOS=windows GOARCH=amd64 go build -o cco-windows-amd64.exe .

**🔍 Basic Tests**

    go test ./...
    # or
    make test
    # or
    task test

**📊 Coverage**

    go test -cover ./...
    # or
    make coverage
    # or
    task test-coverage

**🛡️ Security & Checks**

    task security
    task benchmark
    task check
The project includes both a traditional `Makefile` and a modern `Taskfile.yml` for task automation. Task provides more powerful features and better cross-platform support.
📋 Available Tasks (Click to expand)
# Core development tasks
task build # Build the binary
task test # Run tests
task fmt # Format code
task lint # Run linter
task clean # Clean build artifacts
# Advanced tasks
task dev # Development mode with hot reload
task build-all # Cross-platform builds
task test-coverage # Tests with coverage report
task benchmark # Run benchmarks
task security # Security audit
task check # All checks (fmt, lint, test, security)
# Service management
task start # Start the service (builds first)
task stop # Stop the service
task status # Check service status
# Configuration
task config-generate # Generate example config
task config-validate # Validate current config
# Utilities
task deps # Download dependencies
task mod-update # Update all dependencies
task docs # Start documentation server
task install # Install to system
task release # Create release build
Create `/etc/systemd/system/claude-code-open.service`:
[Unit]
Description=Claude Code Open
After=network.target
[Service]
Type=simple
User=your-user
ExecStart=/usr/local/bin/cco start
# Or if using go install without symlink:
# ExecStart=%h/go/bin/claude-code-open start
# Or with dynamic Go path:
# ExecStartPre=/usr/bin/env bash -c 'echo "GOPATH: $(go env GOPATH)"'
# ExecStart=/usr/bin/env bash -c '"$(go env GOPATH)/bin/claude-code-open" start'
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
Enable and start:
sudo systemctl enable claude-code-open
sudo systemctl start claude-code-open
The router respects environment variables such as 🔑 `CCO_API_KEY`, described below.
**🔑 How CCO_API_KEY Works**
- 1️⃣ No Config File - Creates minimal config with all providers
# Use your OpenAI API key directly with OpenAI
export CCO_API_KEY="sk-your-openai-key"
cco start
# This request will use your OpenAI key:
# - "openai,gpt-4o"
curl http://localhost:6970/health
- 📋 Log Information
- 📈 Operational Metrics
- 🚫 Service Won't Start
- 🔑 Authentication Errors
- ⚙️ Transformation Errors
- 🐌 Performance Issues
cco start --verbose
**🔍 Debug Information**
- ✅ Request/response transformations
This project is licensed under the MIT License - see the LICENSE file for details.
🎯 v0.3.0 - Latest Release
✨ New Providers - Added Nvidia and Google Gemini support (5 total providers)
📄 YAML Configuration - Modern YAML config with automatic defaults
🔍 Model Whitelisting - Filter available models per provider using patterns
🔐 API Key Protection - Optional proxy-level authentication
💻 Enhanced CLI - New `cco config generate` command
🧪 Comprehensive Testing - 100% test coverage for all providers
📋 Default Model Management - Auto-populated curated model lists
🔄 Streaming Tool Calls - Fixed complex streaming parameter issues
⚡ v0.2.0 - Architecture Overhaul
🏗️ Complete Refactor - Modular architecture
🔌 Multi-Provider Support - OpenRouter, OpenAI, Anthropic
💻 Improved CLI Interface - Better user experience
🛡️ Production-Ready - Error handling and logging
⚙️ Configuration Management - Robust config system
🔄 Process Lifecycle - Proper service management
🌱 v0.1.0 - Initial Release
🎯 Proof-of-Concept - Initial implementation
🔌 Basic OpenRouter - Single provider support
🌐 Simple Proxy - Basic functionality
Made with ❤️ for the Claude Code community