A FastAPI-based server plugin for the `llm` CLI that exposes LLM models through API interfaces compatible with popular LLM clients. This allows you to use local or remote LLM models with any client that expects standard LLM API formats.
Install this plugin into the `llm` CLI:

```bash
# Install from PyPI
llm install llm-api-server

# Or install from GitHub
llm install https://github.com/danielcorin/llm-api.git

# Or install from a local development directory
cd /path/to/llm-api
llm install -e .
```
Verify the installation:

```bash
# Check that the plugin is installed
llm plugins

# The 'api' command should be available
llm api --help
```
For development, use `uv`:

```bash
# Clone the repository
git clone https://github.com/danielcorin/llm-api.git
cd llm-api

# Create a virtual environment and install dependencies
uv venv
source .venv/bin/activate
uv sync --dev

# Install as an editable llm plugin
llm install -e .
```
Start the API server:

```bash
llm api --port 8000
```
The server provides OpenAI Chat Completions API endpoints:
- `GET /v1/models` - List available models
- `POST /v1/chat/completions` - Create chat completions with:
  - Streaming support
  - Tool/function calling (for models with `supports_tools=True`)
  - Structured output via `response_format` (for models with `supports_schema=True`)
  - Conversation history with tool results
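For example, you can confirm the server is up by listing the models it exposes with the OpenAI SDK (a minimal sketch; the model IDs returned depend on your `llm` configuration):

```python
from openai import OpenAI

# Point the client at the local server; no API key is required
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# GET /v1/models - print the ID of each model the server exposes
for model in client.models.list():
    print(model.id)
```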
Chat completions work with the standard OpenAI client:

```python
from openai import OpenAI

# Point the client to your local llm-api server
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed"  # API key is not required for the local server
)

# Use any model available in your llm CLI
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ]
)

print(response.choices[0].message.content)
```
Streaming is also supported:

```python
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
Models that support tools (indicated by `supports_tools=True`) can use OpenAI-compatible function calling:
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed"  # API key is not required for the local server
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in San Francisco?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"}
                },
                "required": ["location"]
            }
        }
    }]
)
```
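When the model decides to call the function, the response carries `tool_calls` instead of text. Here is a sketch of the full round trip, assuming a hypothetical local `get_weather` result; the message shapes follow the standard OpenAI format, which the server accepts as conversation history:

```python
import json

message = response.choices[0].message

if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)

    # Hypothetical local implementation of the tool
    result = {"location": args["location"], "temperature_f": 62, "conditions": "foggy"}

    # Send the tool result back as conversation history
    followup = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "user", "content": "What's the weather in San Francisco?"},
            message,  # the assistant message containing the tool call
            {
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps(result),
            },
        ],
    )
    print(followup.choices[0].message.content)
```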
Models that support schemas (indicated by `supports_schema=True`) can generate structured JSON output:
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Generate a person's profile"}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "person",
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"},
                    "email": {"type": "string"}
                },
                "required": ["name", "age", "email"]
            }
        }
    }
)

# The response will contain valid JSON matching the schema
print(response.choices[0].message.content)
```
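Since the content matches the schema, it can be parsed directly (a small usage sketch):

```python
import json

# Parse the structured output into a Python dict
profile = json.loads(response.choices[0].message.content)
print(profile["name"], profile["age"], profile["email"])
```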
Run the test suite to verify the OpenAI-compatible API:

```bash
python -m pytest tests/test_openai_api.py
```
Requirements:

- Python 3.9+
- The `llm` CLI tool installed
- One or more LLM models configured in `llm`
Format code:

```bash
ruff format .
```

Lint code:

```bash
ruff check --fix .
```

Run all tests:

```bash
pytest
```
The server integrates with the `llm` CLI tool's configuration. Before starting the server, make sure you have:

- Installed and configured `llm` with your preferred models
- Set up any necessary API keys for cloud-based models
- Verified models are available with `llm models`
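If you prefer checking from Python, the `llm` package exposes the same model registry the server reads from (a sketch using the `llm` Python API's `get_models()` helper):

```python
import llm

# List the model IDs the llm CLI (and therefore this server) can see
for model in llm.get_models():
    print(model.model_id)
```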
- OpenAI Chat Completions API (`/v1/chat/completions`)
  - Compatible with the OpenAI Python/JavaScript SDKs
  - Works with tools expecting the OpenAI format
  - Full support for streaming, tool calling, and structured output
- OpenAI Responses API (`/v1/responses`)
- Anthropic Messages API (`/v1/messages`)
MIT