A personal LLM gateway that adds fault tolerance to calls against any provider exposing an OpenAI-compatible API. Advanced features include retries, model sequencing, and request body parameter injection. It is especially useful with AI coding assistants such as Cline and RooCode and providers such as OpenRouter.
A configurable, interactive proxy server for LLM hackers. It features API key rotation, protocol conversion, and piping API traffic through locally installed CLI apps such as gemini-cli. Route any app to any remote LLM model or backend, and override hardcoded models.
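
Because the gateway speaks the OpenAI-compatible protocol, any client that lets you override its base URL can be pointed at it. The minimal sketch below uses the official `openai` Python SDK; the base URL (`http://localhost:8000/v1`), the placeholder API key, and the model name are illustrative assumptions, not values from this project's documentation.

```python
# Minimal sketch: point an OpenAI-compatible client at the local gateway.
# The endpoint, key, and model name below are placeholders -- substitute
# whatever your own gateway instance is configured with.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local gateway endpoint
    api_key="dummy-key",                  # the gateway manages real provider keys
)

# The gateway forwards this request to the configured provider/model,
# applying retries, model sequencing, and body parameter injection transparently.
response = client.chat.completions.create(
    model="my-routed-model",  # placeholder name; the gateway decides the real backend model
    messages=[{"role": "user", "content": "Hello from behind the gateway!"}],
)
print(response.choices[0].message.content)
```

The same pattern applies to tools like Cline or RooCode: set their OpenAI-compatible endpoint to the gateway's address instead of the provider's, and the gateway handles routing, key rotation, and fault tolerance on their behalf.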