Generate AI-friendly interfaces from your existing Swagger/OpenAPI specs.
This tool converts OpenAPI/Swagger specifications into Model Context Protocol (MCP) format, making it easy to create AI agents that can interact with your APIs.
This is a monorepo containing two packages:
- `packages/core`: The main OpenAPI-to-MCP CLI tool and library
- `packages/playground`: A web application for testing the tool in your browser
- Generate a universal `mcp.json` from OpenAPI/Swagger files
- Auto-extract a state schema from suitable GET endpoints
- Export to multiple formats:
  - Prompt instructions with state information
  - JSON action templates with a `getState` action
  - Function schemas with a `getState` function (OpenAI compatible)
  - MCP server TypeScript file compatible with Claude Desktop
  - Standalone executor for API interaction (LangChain/Web compatible)
  - Plug-and-play handler files with README
  - LangChain tools with `argsSchema` and a tool loader
  - OpenAI plugin manifest with deployment instructions
- Simulation mode for testing without backend changes
- Build Claude Desktop-compatible tools in seconds: generate an MCP server that works directly with Claude
- Turn any OpenAPI spec into LangChain/AutoGPT tools: use the executor or LangChain tools for AI interfaces in any framework
- Power AI interfaces for existing microservices: no backend changes required, works with existing APIs
- Create ChatGPT plugins effortlessly: generate OpenAI plugin manifests with proper schemas
- Prototype AI agents with minimal setup: use simulation mode to test AI interaction with your API
- Create custom handler logic: extend handlers with preprocessing, caching, or business logic
# Install globally
npm install -g openapi-to-mcp
# Or use directly with npx
npx openapi-to-mcp <swagger-file>
# Basic usage - generates all outputs
openapi-to-mcp path/to/swagger.yaml
# Specify output directory
openapi-to-mcp path/to/swagger.yaml -o ./custom-output
# Generate only specific formats
openapi-to-mcp path/to/swagger.yaml --prompt --functions
# Generate MCP server for Claude Desktop
openapi-to-mcp path/to/swagger.yaml --server --api-url https://your-api.com
# Generate standalone executor for any framework
openapi-to-mcp path/to/swagger.yaml --executor --api-url https://your-api.com
# Generate individual handler files for customization
openapi-to-mcp path/to/swagger.yaml --handlers --api-url https://your-api.com
# Generate LangChain tools for your API
openapi-to-mcp path/to/swagger.yaml --langchain --api-url https://your-api.com
# Generate OpenAI plugin manifest files
openapi-to-mcp path/to/swagger.yaml --openai-plugin --api-url https://your-api.com
# Specify a particular endpoint for state schema
openapi-to-mcp path/to/swagger.yaml --state-endpoint /status
# Simulate AI interaction with your API
openapi-to-mcp path/to/swagger.yaml --simulate "list all available pets" --api-url https://your-api.com
# Simulate using Claude instead of OpenAI (default)
openapi-to-mcp path/to/swagger.yaml --simulate "add a new pet" --provider claude --api-url https://your-api.com
# See all options
openapi-to-mcp --help
Running the generator creates the following files:
- `generated.mcp.json`: The MCP specification for your API
- `prompt.txt`: Prompt instructions for LLMs
- `templates.json`: JSON action templates
- `functionSchemas.json`: OpenAI-compatible function schemas
- `mcp-server.ts`: Ready-to-use TypeScript MCP server implementation for Claude Desktop
- `executor.ts`: Standalone executor for using API actions in any framework
- `handlers/`: Directory with individual handler implementations for each action
- `langchain-tools.ts`: Ready-to-use LangChain tools with Zod validation
- `langchain-toolloader.ts`: Helper for selective tool loading
- `.well-known/ai-plugin.json`: OpenAI plugin manifest file
- `OPENAI-PLUGIN-README.md`: Deployment instructions for the OpenAI plugin
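As a rough sketch of what the generated `functionSchemas.json` might contain, here is a hypothetical excerpt modeled on the OpenAI function-calling schema format (the action names and fields are illustrative, not verbatim tool output):

```typescript
// Hypothetical excerpt of functionSchemas.json for a pet-store spec.
// The actual contents depend entirely on your OpenAPI document.
const functionSchemas = [
  {
    name: "listPets",
    description: "List all pets",
    parameters: {
      type: "object",
      properties: {
        limit: { type: "integer", description: "Max number of pets to return" },
      },
      required: [],
    },
  },
  {
    // The generated getState function takes no arguments.
    name: "getState",
    description: "Retrieve the current API state",
    parameters: { type: "object", properties: {}, required: [] },
  },
];

console.log(functionSchemas.map((s) => s.name).join(","));
```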
- Install dependencies:
npm install @modelcontextprotocol/sdk zod
- Compile the server:
tsc mcp-server.ts --esModuleInterop --module nodenext
- Run with Claude Desktop:
claude tools register mcp-server.js
// Example usage with any framework
import { ApiExecutor } from "./executor";

async function main() {
  const api = new ApiExecutor("https://your-api.com");

  // Get API state
  const state = await api.getState();
  console.log("Current state:", state);

  // Execute an action
  const result = await api.execute("listPets", { limit: 10 });
  console.log("Pets:", result);
}

main().catch(console.error);
// Example usage with LangChain
import { ChatOpenAI } from "langchain/chat_models/openai";
import { AgentExecutor, createStructuredChatAgent } from "langchain/agents";
import { pull } from "langchain/hub";
import { loadTools } from "./langchain-toolloader";

async function main() {
  const model = new ChatOpenAI({
    temperature: 0,
    modelName: "gpt-4-turbo",
  });

  // Load all tools or specify which ones to load
  const tools = loadTools(["listPets", "getPet", "getState"]);

  // createStructuredChatAgent is async and requires a prompt template
  const prompt = await pull("hwchase17/structured-chat-agent");
  const agent = await createStructuredChatAgent({
    llm: model,
    tools,
    prompt,
  });
  const agentExecutor = new AgentExecutor({
    agent,
    tools,
  });

  const result = await agentExecutor.invoke({
    input: "What pets are available and can you show me details of pet with ID 1?",
  });
  console.log(result.output);
}

main().catch(console.error);
Follow the instructions in OPENAI-PLUGIN-README.md to deploy your OpenAI plugin:
- Host your API on a public server
- Copy the `.well-known/ai-plugin.json` file to your server
- Ensure your OpenAPI spec is available at the URL specified in the plugin manifest
- Register your plugin with OpenAI
Simulation mode lets you test AI interaction with your API without any additional setup:
# Set your API key (required for simulation)
export OPENAI_API_KEY=your_key_here
# Or for Claude
export CLAUDE_API_KEY=your_key_here
# Run a simulation
openapi-to-mcp path/to/swagger.yaml --simulate "find pets with tag 'dog'" --api-url https://pet-api.com
This will:
- Parse your OpenAPI spec
- Generate necessary handler files
- Send the request to the LLM with function schemas
- Execute the API call via the executor
- Return the LLM's final response with data
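The steps above can be sketched as a single round of the simulation loop. The interfaces and method names below are illustrative stand-ins for the tool's internals, shown with stub implementations so the flow is clear:

```typescript
// Illustrative single-round simulation loop: the LLM chooses one
// function call, the executor runs it, and the result is returned.
interface FunctionCall {
  name: string;
  args: Record<string, unknown>;
}

interface LLMClient {
  // Given the user request and the available function names, pick a call.
  chooseCall(request: string, schemaNames: string[]): Promise<FunctionCall>;
}

interface Executor {
  execute(action: string, args: Record<string, unknown>): Promise<unknown>;
}

async function simulate(
  request: string,
  llm: LLMClient,
  executor: Executor,
  schemaNames: string[],
): Promise<unknown> {
  const call = await llm.chooseCall(request, schemaNames);
  return executor.execute(call.name, call.args);
}

// Usage with stub implementations (no network required):
const stubLlm: LLMClient = {
  async chooseCall() {
    return { name: "listPets", args: { limit: 10 } };
  },
};
const stubExecutor: Executor = {
  async execute(action, args) {
    return { action, args };
  },
};

simulate("list all available pets", stubLlm, stubExecutor, ["listPets", "getState"])
  .then((r) => console.log(JSON.stringify(r)));
```

In the real tool the stubbed LLM call is an OpenAI or Claude request carrying the generated function schemas.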
The tool integrates state information into all exports:
- MCP JSON: Includes a complete `stateSchema` section with structure and examples
- Prompt Text: Describes the state structure and provides an example
- Function Schemas: Adds a `getState` function for retrieving the current state
- Action Templates: Includes a `getState` action with empty parameters
- Executor: Includes a `getState()` method for retrieving the current state
- Handler Files: Includes a `getState.ts` handler file
- LangChain Tools: Includes a `getState` tool for retrieving the current state
- OpenAI Plugin: Includes the state description in the plugin manifest
The tool automatically searches for suitable GET endpoints to use as state schema sources, with priority given to endpoints with names containing:
- state
- status
- scene
- objects
- tracks
- world
You can also manually specify an endpoint using the `--state-endpoint` option.
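That selection heuristic can be sketched as a keyword scan over the spec's GET paths. This is an illustration of the behavior described above, not the tool's actual code:

```typescript
// Pick the first GET path matching the keyword priority listed above.
// Returns undefined when nothing matches, in which case the
// --state-endpoint flag would be needed.
const STATE_KEYWORDS = ["state", "status", "scene", "objects", "tracks", "world"];

function pickStateEndpoint(getPaths: string[]): string | undefined {
  for (const keyword of STATE_KEYWORDS) {
    const match = getPaths.find((p) => p.toLowerCase().includes(keyword));
    if (match) return match;
  }
  return undefined;
}

console.log(pickStateEndpoint(["/pets", "/status", "/world"])); // → "/status"
```

Note that keywords are tried in priority order, so `/status` wins over `/world` even when both are present.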
- Clone the repository
- Install dependencies:
npm install
- Build all packages:
npm run build
- Run the core package locally:
npm start --workspace=openapi-to-mcp -- path/to/swagger.yaml
- Start the playground:
npm run dev --workspace=openapi-to-mcp-playground
- Log in to npm:
npm login
- Build and publish:
npm run build
npm publish --workspace=openapi-to-mcp --access public
This project uses GitHub Actions for CI/CD to automatically publish new versions to npm when a new tag is pushed:
- Update the version in package.json:
npm version patch --workspace=openapi-to-mcp # or minor/major
This will automatically create a git tag.
- Push the new tag to GitHub:
git push origin --tags
- The GitHub Action will trigger and publish the new version to npm.
If you fork this project, you'll need to set up your own npm publishing:
- Create an npm account and get an access token from npmjs.com → Access Tokens
- Add this token to your GitHub repository as a secret named `NPM_TOKEN`
- Update the package name in package.json to avoid conflicts
We welcome contributions from the community! Please read our contribution guidelines before submitting a pull request.