
OpenAPI to MCP: Turn Your API Spec into a Live MCP Server

InfiWebs · 6 min read

You have an OpenAPI spec. Maybe you also have a Postman collection, or a folder of Swagger 2.0 JSON files from before the team migrated, or just a long list of cURL commands in a Notion doc somewhere.

Good news: that’s most of the work of building an MCP server already done.

Better news: the gap between having an API spec and having a working MCP server can be closed in under ten minutes if you know which translations to make.

This post is the practical walkthrough — what gets translated cleanly, what doesn’t, and how to do the conversion either by hand or with one of the bridge tools.

OpenAPI and MCP: same problem, different shape

Both formats describe how software talks to software. Both define endpoints (or tools), parameters (or arguments), authentication, and response schemas. The conceptual overlap is large enough that most of the translation is mechanical.

The shapes differ in two important ways.

OpenAPI is designed for humans and code generators. It exists so a developer can read the spec, understand the API, and either write a client or generate one. The descriptions, examples, and schemas optimize for human comprehension.

MCP is designed for language models. The same fields exist — name, description, parameters, schema — but they’re consumed by a model that’s deciding whether to call your tool. The descriptions matter more, the names matter more, and the schemas need to be specific enough that the model fills in arguments correctly without round-tripping with the user.

Same data, different reader. The translation isn’t lossy, but it isn’t neutral either.

What translates cleanly

Paths become tools. GET /projects becomes a list_projects tool. POST /projects/:id/tasks becomes create_task_in_project. The naming convention is verb-noun rather than the REST verb-path style, and the conversion is rote.

Parameters become arguments. Query string parameters, path parameters, and request body fields all collapse into a single flat argument list on the MCP side. OpenAPI distinguishes in: query from in: body; MCP doesn’t care — the bridge layer rebuilds the HTTP request from the flat argument list on its way out.
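As a sketch of that bridging step (function and argument names are hypothetical, not any particular bridge's API), the flat argument list can be fanned back out into path, query, and body pieces using the OpenAPI `in:` metadata:

```python
# Hypothetical sketch: rebuild the pieces of an HTTP request from a flat
# MCP argument dict, routing each argument by its OpenAPI "in" location.

def rebuild_request(path_template, param_locations, args):
    """path_template like '/projects/{project_id}/tasks';
    param_locations maps arg name -> 'path' | 'query' | 'body'."""
    path = path_template
    query, body = {}, {}
    for name, value in args.items():
        where = param_locations.get(name, "body")
        if where == "path":
            path = path.replace("{" + name + "}", str(value))
        elif where == "query":
            query[name] = value
        else:
            body[name] = value
    return path, query, body

# One flat argument list fans back out into three locations.
path, query, body = rebuild_request(
    "/projects/{project_id}/tasks",
    {"project_id": "path", "per_page": "query", "title": "body"},
    {"project_id": "proj_8f21", "per_page": 50, "title": "Ship it"},
)
```

The key point is that the model never sees the path/query/body distinction; only the bridge does.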

Response schemas become tool outputs. If your OpenAPI spec defines what 200 OK looks like, the MCP tool can use the same schema for its return type. The model uses this to know what to expect back, which makes downstream reasoning more accurate.

Authentication maps through. OpenAPI’s securitySchemes map cleanly to MCP’s auth options. Bearer tokens, API keys in headers, and OAuth 2.1 all translate without surprise.

What doesn’t translate cleanly

A handful of OpenAPI patterns don’t have direct MCP equivalents, and pretending they do produces brittle servers.

File uploads. OpenAPI handles multipart/form-data natively; MCP tools are JSON-in, JSON-out. The workaround is to expose an upload-URL endpoint that returns a signed URL, then have a separate tool that operates on the uploaded resource by reference.
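A minimal sketch of that two-tool pattern (names, URL, and storage are all hypothetical stand-ins):

```python
# Hypothetical sketch: one tool mints an upload URL, a second tool operates
# on the uploaded file by reference instead of taking bytes as an argument.
import uuid

_uploads = {}  # upload_id -> metadata; stands in for object storage

def create_upload_url(filename):
    upload_id = f"up_{uuid.uuid4().hex[:8]}"
    _uploads[upload_id] = {"filename": filename}
    # A real server would return a time-limited signed URL here.
    return {"upload_id": upload_id,
            "url": f"https://uploads.example.com/{upload_id}"}

def attach_file_to_task(task_id, upload_id):
    meta = _uploads[upload_id]
    return {"task_id": task_id, "attached": meta["filename"]}
```

The user (or their client) PUTs the file to the URL out of band; the agent only ever handles the `upload_id` reference.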

Pagination. OpenAPI describes paginated lists with ?page=2&per_page=50; a model calling an MCP tool doesn’t naturally know to follow pagination. The fix is to make the tool return the first page plus a next_cursor field, and add a separate fetch_next_page tool the model can call when the user asks for more.
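A sketch of the cursor pattern (data source and page size are invented for illustration):

```python
# Hypothetical sketch: a list tool that returns the first page plus a
# next_cursor, and a companion tool that resumes from that cursor.

TASKS = [f"task_{i}" for i in range(120)]  # stand-in for the real backend
PAGE_SIZE = 50

def list_tasks(cursor=0):
    page = TASKS[cursor:cursor + PAGE_SIZE]
    more = cursor + PAGE_SIZE < len(TASKS)
    return {"tasks": page, "next_cursor": cursor + PAGE_SIZE if more else None}

def fetch_next_page(next_cursor):
    return list_tasks(cursor=next_cursor)

first = list_tasks()                            # tasks 0-49, cursor 50
second = fetch_next_page(first["next_cursor"])  # tasks 50-99, cursor 100
```

A `None` cursor tells the model there is nothing left to fetch, which is easier for it to act on than an empty page.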

Streaming responses. Server-sent events and chunked transfer in REST don’t translate to single-shot MCP responses. For tools that need to stream — log tails, long-running jobs — use MCP’s progress notifications.

Complex auth flows. OAuth 2.1 authorization code flow with PKCE is supported, but the actual flow (redirect the user to an auth page, handle the callback, exchange the code) happens before the MCP client connects. The MCP server receives the resulting access token; it doesn’t drive the flow.

The manual path (if you want full control)

If you’re building the MCP server by hand, the minimum structure for one tool is roughly:

{
  "name": "create_task",
  "description": "Create a task in the user's project, optionally assigning it to a teammate.",
  "input_schema": {
    "type": "object",
    "required": ["title", "project_id"],
    "properties": {
      "title": {
        "type": "string",
        "description": "Task title visible to the user"
      },
      "project_id": {
        "type": "string",
        "description": "ID of the project to add the task to"
      },
      "due_date": {
        "type": "string",
        "format": "date"
      },
      "assignee_id": {
        "type": "string",
        "description": "Optional teammate ID; omit to leave unassigned"
      }
    }
  }
}

Multiply that by every endpoint, wire it to a JSON-RPC 2.0 handler, deploy it on a public HTTPS endpoint, and you have a working MCP server.
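The JSON-RPC wiring itself is small. A minimal dispatch sketch (handler registry and tool body are hypothetical; a real MCP server also implements `initialize`, `tools/list`, and proper error responses):

```python
# Hypothetical sketch: dispatch a JSON-RPC 2.0 "tools/call" request to a
# registered handler and wrap the result in a JSON-RPC response envelope.
import json

TOOLS = {
    "create_task": lambda args: {"id": "task_1", "title": args["title"]},
}

def handle(raw):
    req = json.loads(raw)
    tool = TOOLS[req["params"]["name"]]
    result = tool(req["params"]["arguments"])
    return {"jsonrpc": "2.0", "id": req["id"], "result": result}

resp = handle(json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "create_task", "arguments": {"title": "Ship it"}},
}))
```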

Realistic time for forty endpoints, by hand: a long week. Most of that time is description-writing, not code.

The ten-minute path

The faster route — and the one we’d recommend for nine SaaS teams out of ten — is to use a bridge that consumes your OpenAPI file and emits an MCP server automatically.

GetMCP imports OpenAPI 3.0 and 3.1 specs, Swagger 2.0, Postman collections, and individual cURL commands. You upload the file (or paste the URL), and the import generates one MCP tool per endpoint with names, descriptions, and schemas pre-filled from your spec.

That doesn’t mean the work is done — the auto-generated descriptions still need a pass to be model-friendly rather than developer-friendly, and you’ll want to disable any tools the agent shouldn’t have access to. But you skip the boilerplate entirely and start from a working baseline that takes roughly eight minutes to import and ten more to polish.

Tips for making auto-generated tools actually good

A few small moves separate a mechanically imported MCP server from one the model actually picks correctly.

Rename mechanically generated tools. get_projects_id_tasks is what an importer produces from GET /projects/:id/tasks. list_tasks_in_project is what your customers’ AI clients should see.
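The rename rule can even be scripted for a first pass. A rough sketch (the singularization and verb table are naive assumptions, good enough for a starting point, not for every API):

```python
# Hypothetical sketch: derive a verb-noun tool name from an HTTP method and
# path, in place of the mechanical name an importer would emit.
VERBS = {"GET": "list", "POST": "create", "PATCH": "update", "DELETE": "delete"}

def tool_name(method, path):
    # Keep only the literal path segments, dropping :id-style parameters.
    parts = [p for p in path.strip("/").split("/") if not p.startswith(":")]
    noun = parts[-1]
    if method != "GET":
        noun = noun.rstrip("s")  # crude singularization for create/update/delete
    scope = "_in_" + parts[0].rstrip("s") if len(parts) > 1 else ""
    return f"{VERBS[method]}_{noun}{scope}"
```

Even with a script, review the output by hand; pluralization and nesting rules break down fast on real specs.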

Rewrite descriptions in second person. OpenAPI descriptions tend to be third-person and reference-style. MCP descriptions should be instructional: “Use this tool to list tasks belonging to a specific project. Returns up to fifty tasks per call, sorted by most recently updated.”

Scope tools out of the agent’s reach when they shouldn’t be there. The bulk-delete endpoint exists in your API; it should not exist in your MCP surface. Most bridges let you toggle tools on and off without removing them from the underlying API.

Add examples to the description string. “Example: list_tasks_in_project with project_id='proj_8f21' returns the open tasks in that project.” Models pick tools more accurately when they can pattern-match against a concrete example.

Watch the description length. Very long descriptions get truncated by some clients; very short ones leave the model guessing. Two to four sentences is the sweet spot — enough to describe the tool’s purpose, its arguments, and any non-obvious behavior.

Where to go from here

If you have an OpenAPI spec sitting in your repo, the next ten minutes can produce a working MCP server. You can import yours into GetMCP free — unlimited sites, no credit card, no time limit on the free plan.

If you don’t have a spec yet but you do have an API, the practical move is to generate one. Most modern frameworks have an OpenAPI generator built in or one click away. The spec is useful even outside MCP; it doubles as documentation and as the starting point for SDKs.

And if you’re earlier in the journey and still figuring out whether MCP makes sense for your product at all, What Is Model Context Protocol? A SaaS Developer’s Guide is the place to start. If you’re past that and weighing build-vs-buy, How to Add MCP to Your SaaS in 2026 has the full decision matrix.

