# MCP Server: Scalable OpenAPI Endpoint Discovery and API Request Tool
## TODO
- The Docker image is 2 GB without pre-downloaded models, and 3.76 GB with them. That is too large; help reducing the size is welcome.
## TL;DR
Why I created this: I wanted to serve my private API, whose Swagger/OpenAPI docs are a few hundred KB in size.
- Claude MCP simply errors out when processing a file of this size.
- I tried converting the spec to YAML: it was not small enough, and the conversion introduced many errors. Failed.
- I tried providing an API category and asking the MCP client (Claude Desktop) to fetch the API docs by group. Still too big. Failed.
Eventually I settled on this solution:
- It uses in-memory semantic search to find relevant API endpoints from a natural-language query (such as "list products").
- It returns the complete endpoint docs (by design, one endpoint is stored as one chunk) in milliseconds, since everything lives in memory.
Boom, Claude now knows which API to call, with the full parameters!
I also had to add a second tool in this server to make the actual RESTful request, because the "fetch" server simply didn't work for me, and I didn't want to debug why.
https://github.com/user-attachments/assets/484790d2-b5a7-475d-a64d-157e839ad9b0
## Technical highlights
```
query -> [Embedding] -> FAISS TopK -> OpenAPI docs -> MCP Client (Claude Desktop)
MCP Client -> Construct OpenAPI Request -> Execute Request -> Return Response
```
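The retrieval step of the pipeline above can be sketched in a few lines of Python. This is an illustration only, not the server's code: a toy bag-of-words embedding stands in for the MiniLM model, and brute-force L2 distance stands in for the FAISS index (`faiss.IndexFlatL2` computes the same distances on the real vectors).

```python
import numpy as np

# One chunk per endpoint, as the server stores them (method + path + description).
endpoint_docs = [
    "GET /products list products in the catalog",
    "POST /orders create a new order",
    "GET /users/{id} get user profile information",
]

# Toy stand-in for the MiniLM embedding: a normalized bag-of-words vector.
VOCAB = sorted({w for doc in endpoint_docs for w in doc.lower().split()})

def embed(text: str) -> np.ndarray:
    words = text.lower().split()
    v = np.array([float(words.count(w)) for w in VOCAB])
    n = np.linalg.norm(v)
    return v / n if n else v

index = np.stack([embed(d) for d in endpoint_docs])  # rows play the role of faiss.IndexFlatL2 entries

def top_k(query: str, k: int = 2) -> list[str]:
    dists = np.linalg.norm(index - embed(query), axis=1)  # brute-force L2, like IndexFlatL2
    return [endpoint_docs[i] for i in np.argsort(dists)[:k]]

print(top_k("list products")[0])  # the /products endpoint ranks first
```

Because the whole index sits in memory and each hit returns the full endpoint chunk, the client gets complete endpoint documentation from a single lookup.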
## Features
- 🧠 Uses a remote OpenAPI JSON file as the source: no local file system access, and no updates needed when the API changes
- 🔍 Semantic search using an optimized MiniLM-L3 model (43 MB vs. the original 90 MB)
- 🚀 FastAPI-based server with async support
- 🧠 Endpoint-based chunking of OpenAPI specs (handles 100 KB+ documents) with no loss of endpoint context
- ⚡ In-memory FAISS vector search for instant endpoint discovery
## Limitations
- Does not support linux/arm/v7 (the build fails in the Transformers library)
- 🐢 Cold-start penalty (~15 s for model loading) if you are not using the Docker image
- [Obsolete] Earlier Docker images did not bundle the models, which created a dependency on Hugging Face: when Claude Desktop started the server, it took some time to download the model, and if Hugging Face was down the server would not start.
- The latest Docker image embeds the pre-downloaded models. If issues come up, I will revert to the old approach.
## Multi-instance config example
Here is a multi-instance config example. I designed it this way so it can flexibly serve multiple sets of APIs:
```json
{
  "mcpServers": {
    "finance_openapi": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "OPENAPI_JSON_DOCS_URL=https://api.finance.com/openapi.json",
        "-e",
        "MCP_API_PREFIX=finance",
        "buryhuang/mcp-server-any-openapi:latest"
      ]
    },
    "healthcare_openapi": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "OPENAPI_JSON_DOCS_URL=https://api.healthcare.com/openapi.json",
        "-e",
        "MCP_API_PREFIX=healthcare",
        "buryhuang/mcp-server-any-openapi:latest"
      ]
    }
  }
}
```
In this example:
- The server automatically extracts base URLs from the OpenAPI docs: `https://api.finance.com` for the finance APIs and `https://api.healthcare.com` for the healthcare APIs.
- You can optionally override the base URL with the `API_REQUEST_BASE_URL` environment variable:
```json
{
  "mcpServers": {
    "finance_openapi": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "OPENAPI_JSON_DOCS_URL=https://api.finance.com/openapi.json",
        "-e",
        "API_REQUEST_BASE_URL=https://api.finance.staging.com",
        "-e",
        "MCP_API_PREFIX=finance",
        "buryhuang/mcp-server-any-openapi:latest"
      ]
    }
  }
}
```
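The base-URL behavior described above can be sketched as follows. `resolve_base_url` is a hypothetical helper, not the server's actual code; it assumes the base URL comes from the OpenAPI 3.x `servers` field unless the override variable is set.

```python
import os

def resolve_base_url(openapi_spec: dict) -> str:
    """Return the base URL for outgoing requests.

    API_REQUEST_BASE_URL, when set, overrides the first entry in the
    spec's `servers` list (OpenAPI 3.x).
    """
    override = os.environ.get("API_REQUEST_BASE_URL")
    if override:
        return override.rstrip("/")
    servers = openapi_spec.get("servers", [])
    if servers:
        return servers[0]["url"].rstrip("/")
    raise ValueError("no servers entry in spec and no API_REQUEST_BASE_URL set")

spec = {"servers": [{"url": "https://api.finance.com/"}]}
print(resolve_base_url(spec))  # https://api.finance.com
os.environ["API_REQUEST_BASE_URL"] = "https://api.finance.staging.com"
print(resolve_base_url(spec))  # https://api.finance.staging.com
```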
## Claude Desktop Usage Example
Claude Desktop Project Prompt:
You should get the API spec details from the tool financial_api_request_schema.
Your task is to use the financial_make_request tool to make the requests and get the responses. You should follow the API spec to add the authorization header:
Authorization: Bearer <xxxxxxxxx>
Note: The base URL will be returned in the api_request_schema response, so you don't need to specify it manually.
In chat, you can then say:
Get prices for all stocks
## Installation
### Installing via Smithery
To install Scalable OpenAPI Endpoint Discovery and API Request Tool for Claude Desktop automatically via Smithery:
```bash
npx -y @smithery/cli install @baryhuang/mcp-server-any-openapi --client claude
```
### Using pip
```bash
pip install mcp-server-any-openapi
```
## Configuration
Customize through environment variables:
- `OPENAPI_JSON_DOCS_URL`: URL of the OpenAPI specification JSON (defaults to https://api.staging.readymojo.com/openapi.json)
- `MCP_API_PREFIX`: customizable tool namespace (default `any_openapi`):
```bash
# Creates tools: custom_api_request_schema and custom_make_request
docker run -e MCP_API_PREFIX=custom ...
```
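How the prefix turns into the two tool names can be sketched like this (`tool_names` is a hypothetical helper for illustration, not the server's API):

```python
import os

def tool_names(prefix=None):
    """Derive the two MCP tool names from MCP_API_PREFIX (default: any_openapi)."""
    prefix = prefix or os.environ.get("MCP_API_PREFIX", "any_openapi")
    return f"{prefix}_api_request_schema", f"{prefix}_make_request"

print(tool_names("finance"))  # ('finance_api_request_schema', 'finance_make_request')
```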
## Available Tools
The server provides the following tools (where `{prefix}` is determined by `MCP_API_PREFIX`):
### `{prefix}_api_request_schema`
Get API endpoint schemas that match your intent. Returns endpoint details including path, method, parameters, and response formats.
Input Schema:
```json
{
  "query": {
    "type": "string",
    "description": "Describe what you want to do with the API (e.g., 'Get user profile information', 'Create a new job posting')"
  }
}
```
### `{prefix}_make_request`
Makes the actual HTTP request and returns the response. Essential for reliable execution with complex APIs where simplified implementations fail.
Input Schema:
```json
{
  "method": {
    "type": "string",
    "description": "HTTP method (GET, POST, PUT, DELETE, PATCH)",
    "enum": ["GET", "POST", "PUT", "DELETE", "PATCH"]
  },
  "url": {
    "type": "string",
    "description": "Fully qualified API URL (e.g., https://api.example.com/users/123)"
  },
  "headers": {
    "type": "object",
    "description": "Request headers (optional)",
    "additionalProperties": {
      "type": "string"
    }
  },
  "query_params": {
    "type": "object",
    "description": "Query parameters (optional)",
    "additionalProperties": {
      "type": "string"
    }
  },
  "body": {
    "type": "object",
    "description": "Request body for POST, PUT, PATCH (optional)"
  }
}
```
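For a sense of what the tool does with these inputs, here is a minimal sketch that translates a `make_request` call into a standard-library HTTP request. The helper name and the exact translation are my assumptions for illustration, not the server's implementation:

```python
import json
import urllib.parse
import urllib.request

def build_request(method, url, headers=None, query_params=None, body=None):
    """Translate a make_request tool call into a urllib Request (not yet sent)."""
    if query_params:
        url = url + "?" + urllib.parse.urlencode(query_params)
    data = json.dumps(body).encode() if body is not None else None
    hdrs = dict(headers or {})
    if data is not None:
        hdrs.setdefault("Content-Type", "application/json")
    return urllib.request.Request(url, data=data, headers=hdrs, method=method)

req = build_request(
    "GET",
    "https://api.example.com/users/123",
    headers={"Authorization": "Bearer token"},
    query_params={"verbose": "true"},
)
print(req.method, req.full_url)  # GET https://api.example.com/users/123?verbose=true
# Sending it would be: urllib.request.urlopen(req); the server then packages
# the status code, headers, and body into the response format below.
```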
Response Format:
```json
{
  "status_code": 200,
  "headers": {
    "content-type": "application/json",
    ...
  },
  "body": {
    // Response data
  }
}
```
## Docker Support
### Multi-Architecture Builds
Official images support two platforms:
```bash
# Build and push using buildx
docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 \
  -t buryhuang/mcp-server-any-openapi:latest \
  --push .
```
### Flexible Tool Naming
Control tool names through `MCP_API_PREFIX`:
```bash
# Produces tools with the "finance" prefix:
docker run -e MCP_API_PREFIX=finance ...
```
### Supported Platforms
- linux/amd64
- linux/arm64
### Option 1: Use Prebuilt Image (Docker Hub)
```bash
docker pull buryhuang/mcp-server-any-openapi:latest
```
### Option 2: Local Development Build
```bash
docker build -t mcp-server-any-openapi .
```
### Running the Container
```bash
docker run \
  -e OPENAPI_JSON_DOCS_URL=https://api.example.com/openapi.json \
  -e MCP_API_PREFIX=finance \
  buryhuang/mcp-server-any-openapi:latest
```
## Key Components
- EndpointSearcher: core class that handles:
  - OpenAPI specification parsing
  - Semantic search index creation
  - Endpoint documentation formatting
  - Natural language query processing
- Server implementation:
  - Async FastAPI server
  - MCP protocol support
  - Tool registration and invocation handling
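The endpoint-centric chunking performed by EndpointSearcher can be illustrated with a simplified sketch. `chunk_endpoints` is a hypothetical function: the real class also folds parameters and response schemas into the indexed text.

```python
def chunk_endpoints(spec: dict) -> list[dict]:
    """Split an OpenAPI spec into one chunk per (path, method) pair."""
    chunks = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            chunks.append({
                "id": f"{method.upper()} {path}",  # path + method as unique identifier
                "text": " ".join(filter(None, [
                    method.upper(), path,
                    op.get("summary", ""), op.get("description", ""),
                ])),
                "operation": op,  # the full endpoint doc is kept intact in the chunk
            })
    return chunks

spec = {"paths": {"/products": {
    "get": {"summary": "List products"},
    "post": {"summary": "Create a product"},
}}}
for chunk in chunk_endpoints(spec):
    print(chunk["id"])  # GET /products, then POST /products
```

Because each chunk carries the complete operation object, a single search hit returns everything the client needs to construct a request.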
## Running from Source
```bash
python -m mcp_server_any_openapi
```
## Integration with Claude Desktop
Configure the MCP server in your Claude Desktop settings:
```json
{
  "mcpServers": {
    "any_openapi": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "OPENAPI_JSON_DOCS_URL=https://api.example.com/openapi.json",
        "-e",
        "MCP_API_PREFIX=finance",
        "buryhuang/mcp-server-any-openapi:latest"
      ]
    }
  }
}
```
## Contributing
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
This project is licensed under the terms included in the LICENSE file.
## Implementation Notes
- Endpoint-Centric Processing: unlike document-level analysis, which struggles with large specs, we index individual endpoints with:
  - Path + method as the unique identifier
  - Parameter-aware embeddings
  - Response schema context
- Optimized Spec Handling: processes OpenAPI specs up to 10 MB (~5,000 endpoints) through:
  - Lazy loading of schema components
  - Parallel parsing of path items
  - Selective embedding generation (omits redundant descriptions)