llm-api

A FastAPI-based server plugin for the llm CLI that exposes LLM models through API interfaces compatible with popular LLM clients. This allows you to use local or remote LLM models with any client that expects standard LLM API formats.

Installation

As an LLM Plugin

Install this plugin in the same environment as the llm CLI:

# Install from PyPI
llm install llm-api-server

# Or install from GitHub
llm install git+https://github.com/danielcorin/llm-api.git

# Or install from local development directory
cd /path/to/llm-api
llm install -e .

Verify installation:

# Check the plugin is installed
llm plugins

# The 'api' command should be available
llm api --help

Development Installation

For development, use uv:

# Clone the repository
git clone https://github.com/danielcorin/llm-api.git
cd llm-api

# Create a virtual environment and install dependencies
uv venv
source .venv/bin/activate
uv sync --dev

# Install as an editable LLM plugin
llm install -e .

Usage

Start the API server:

llm api --port 8000

The server provides OpenAI Chat Completions API endpoints:

  • GET /v1/models - List available models
  • POST /v1/chat/completions - Create chat completions with:
    • Streaming support
    • Tool/function calling (for models with supports_tools=True)
    • Structured output via response_format (for models with supports_schema=True)
    • Conversation history with tool results
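Because the endpoints follow the OpenAI format, you can also exercise them directly with curl. A quick sanity check, assuming the server is running on port 8000 as started above and gpt-4o-mini is available through your llm install:

# List the models the server exposes
curl http://localhost:8000/v1/models

# Send a simple chat completion request
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]}'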

Features

Basic Usage

from openai import OpenAI

# Point the client at your local llm-api server
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed"  # API key is not required for local server
)

# Use any model available in your llm CLI
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ]
)

print(response.choices[0].message.content)

Streaming is also supported:

stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

Tool/Function Calling

Models that support tools (indicated by supports_tools=True) can use OpenAI-compatible function calling:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed"  # API key is not required for local server
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in San Francisco?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"}
                },
                "required": ["location"]
            }
        }
    }]
)
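If the model decides to call the tool, the call appears on the response message rather than in content. The sketch below shows the standard OpenAI client pattern for reading the call and sending a result back as conversation history; the tool output string here is made up for illustration:

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name)       # e.g. "get_weather"
    print(call.function.arguments)  # JSON string of arguments

    # Continue the conversation with the tool result included
    followup = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "user", "content": "What's the weather in San Francisco?"},
            message,  # assistant message containing the tool call
            {
                "role": "tool",
                "tool_call_id": call.id,
                "content": "Sunny, 18C",  # hypothetical tool output
            },
        ],
    )
    print(followup.choices[0].message.content)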

Structured Output with Schema

Models that support schema (indicated by supports_schema=True) can generate structured JSON output:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Generate a person's profile"}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "person",
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"},
                    "email": {"type": "string"}
                },
                "required": ["name", "age", "email"]
            }
        }
    }
)

# The response will contain valid JSON matching the schema
print(response.choices[0].message.content)
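Because the content is valid JSON matching the schema, it can be parsed directly, for example:

import json

profile = json.loads(response.choices[0].message.content)
print(profile["name"], profile["age"], profile["email"])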

Testing

Run the test suite to verify the OpenAI-compatible API:

python -m pytest tests/test_openai_api.py

Development

Prerequisites

  • Python 3.9+
  • llm CLI tool installed
  • One or more LLM models configured in llm

Code Quality

Format code:

ruff format .

Lint code:

ruff check --fix .

Running Tests

Run all tests:

pytest

Configuration

The server uses the llm CLI tool's existing configuration. Before starting it, make sure you have:

  1. Installed and configured llm with your preferred models
  2. Set up any necessary API keys for cloud-based models
  3. Verified models are available with llm models
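For example, with the llm CLI (the openai key name is just an illustration; set whichever provider keys your models need):

# Set an API key for a cloud provider
llm keys set openai

# Confirm which models the server will be able to expose
llm models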

Supported API Specifications

Currently Implemented

  • OpenAI Chat Completions API (/v1/chat/completions)
    • Compatible with OpenAI Python/JavaScript SDKs
  • Works with tools expecting the OpenAI format
    • Full support for streaming, tool calling, and structured output

Help Wanted

License

MIT
