REST API to MCP Server Migration: Ship This Weekend

Your REST API works. It handles auth, it scales, your customers rely on it every day. But to an AI agent — Claude, Cursor, ChatGPT Agents — it doesn’t exist.

AI agents don’t browse documentation, construct HTTP requests from scratch, or parse OpenAPI specs on the fly. They interact with tools: named functions with structured inputs and outputs that they can discover, invoke, and reason about. The REST API to MCP server migration is the bridge between what you already have and the fastest-growing distribution channel in software right now. And it’s genuinely a weekend project — not a quarter-long rewrite.

MCP SDK downloads hit 97 million per month as of early 2026. The protocol is backed by Anthropic, OpenAI, Google, and Microsoft (MCP Manager). OpenAI adopted MCP across its Agents SDK, Responses API, and ChatGPT desktop in March 2025. In December 2025, Anthropic donated the protocol to the Agentic AI Foundation under the Linux Foundation, cementing it as a vendor-neutral open standard.

This isn’t a niche experiment — it’s infrastructure. If your API isn’t reachable by agents, your product is invisible to a growing slice of how developers work.

Why Your REST API Is Invisible to AI Agents (And What That’s Costing You)

MCP server downloads grew from roughly 100,000 in November 2024 to over 8 million by April 2025 — an 80x increase in five months (MCP Manager). To put that in context: Zapier’s developer platform took five years to reach 1,400 integrations. MCP hit 1,400 servers in approximately one year (Bloomberry). As of early 2026, over 8,600 MCP servers are indexed across official registries, with unofficial registries like mcp.so indexing over 16,000 (SkillsIndex).

Every one of those servers is a competitor that can now be invoked directly from Claude, Cursor, or a custom agent workflow. If your SaaS product isn’t on that list, you’re not being considered for agentic use cases — no matter how good your REST API documentation is.

The opportunity cost is real. Agents route tasks to whatever tools are available and well-described. First-mover advantage here is significant.

REST vs. MCP — The Mental Model Shift That Makes Everything Click

Before writing a single line of code, you need to internalize one conceptual shift. REST thinks in resources and verbs: a URL identifies a thing, and HTTP methods (GET, POST, PUT, DELETE) describe what to do with it.

MCP thinks in named functions: a tool is a callable with a name, a description, and a JSON Schema that defines its inputs. The description isn’t documentation — it’s the signal the AI model reads to decide whether to call the tool at all.

Here’s the same operation in both paradigms:

REST endpoint:

```
GET /contacts/{id}
Authorization: Bearer {token}
```

Equivalent MCP tool definition:

```json
{
  "name": "get_contact",
  "description": "Retrieve a single contact by ID. Use this when you need details about a specific person — not for listing or searching contacts.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "id": { "type": "string", "description": "The contact's unique identifier" }
    },
    "required": ["id"]
  }
}
```

Notice what changed: the description is now load-bearing. An agent that can’t understand what your tool does will either call it incorrectly or ignore it entirely. Write descriptions as if you’re explaining the tool to a smart colleague who has never seen your API — not as if you’re writing a docstring.

Choosing What to Expose: A Decision Framework for Tools, Resources, and Things to Leave Alone

Not every endpoint belongs in your MCP server. Exposing everything creates noise that degrades agent performance and expands your attack surface. Here’s a practical decision framework:

Expose as MCP Tools (actions with side effects or meaningful inputs):

  • POST, PUT, PATCH, DELETE endpoints that change state
  • Search and filter endpoints where inputs meaningfully shape output
  • Any endpoint an agent might need as part of a multi-step workflow

Expose as MCP Resources (data the agent reads passively):

  • Read-heavy GET endpoints returning reference data (user profile, account settings, plan details)
  • Endpoints that provide context the agent needs before taking action

Leave REST-only, at least initially:

  • High-frequency polling endpoints (health checks, metrics streams) — agents loop, and you don’t want them hammering these at 10–100x human call rates
  • Endpoints with complex binary payloads (file uploads, raw exports)
  • Admin and internal endpoints that should never be agent-accessible

Rule of thumb: if a human would call it from a UI form, it probably makes a good Tool. If a human would call it once to load context, it probably makes a good Resource.

Destructive actions — DELETE, bulk operations, anything irreversible — deserve explicit confirmation patterns in their descriptions. Something like: “This permanently deletes the record and cannot be undone. Only call this after explicitly confirming the action with the user.” Agents respect these constraints when they’re clearly stated.
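As a concrete sketch, here is what that constraint looks like baked into a tool definition — a hypothetical `delete_contact` tool following the same shape as the `get_contact` example above:

```python
# Hedged sketch: a destructive tool's definition. The confirmation constraint
# lives in the description, because that is the text the model actually reads.
delete_contact_tool = {
    "name": "delete_contact",
    "description": (
        "Permanently delete a contact by ID. This cannot be undone. "
        "Only call this after explicitly confirming the deletion with the user."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "id": {"type": "string", "description": "The contact's unique identifier"},
        },
        "required": ["id"],
    },
}
```

The irreversibility warning and the confirmation instruction are separate sentences on purpose: the first tells the agent what the stakes are, the second tells it exactly what to do about them.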

The REST API to MCP Server Migration Playbook — Before and After Code (Python + Node.js)

This is where most tutorials stop being useful. Let’s look at real migration code for both major ecosystems.

Python: FastAPI + fastapi-mcp

If you’re running FastAPI, the migration is almost embarrassingly straightforward. The `fastapi-mcp` library wraps your existing app as an MCP server with as few as three lines of code — zero manual schema duplication required (fastapi-mcp GitHub).

Before (existing FastAPI route):

```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/contacts/{contact_id}")
async def get_contact(contact_id: str):
    return db.get_contact(contact_id)
```

After (MCP layer added):

```python
from fastapi import FastAPI
from fastapi_mcp import FastApiMCP

app = FastAPI()

@app.get("/contacts/{contact_id}")
async def get_contact(contact_id: str):
    return db.get_contact(contact_id)

mcp = FastApiMCP(app)
mcp.mount()
```

Your existing routes are automatically discovered and exposed as MCP tools. FastAPI’s Pydantic models become the JSON Schema. Your OpenAPI `description` annotations become tool descriptions. The REST API keeps working exactly as before — `mcp.mount()` adds a `/mcp` endpoint alongside everything else.

Node.js: Express + MCP SDK

For Express users, you’ll add `@modelcontextprotocol/sdk` and register tools manually — slightly more code, but complete control over descriptions and schema.

```javascript
import express from 'express';
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';
import { z } from 'zod';

const app = express();
app.use(express.json()); // the Streamable HTTP transport expects a parsed JSON body

const mcpServer = new McpServer({ name: 'my-saas-api', version: '1.0.0' });

mcpServer.tool(
  'get_contact',
  'Retrieve a contact by ID. Use for fetching details about a specific person.',
  { id: z.string().describe('Contact unique identifier') },
  async ({ id }) => {
    const contact = await contactService.getById(id); // reuse existing service layer
    return { content: [{ type: 'text', text: JSON.stringify(contact) }] };
  }
);

// Mount MCP at /mcp — REST routes are completely untouched.
// Stateless mode: a fresh transport per request, no session tracking.
app.post('/mcp', async (req, res) => {
  const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
  await mcpServer.connect(transport);
  await transport.handleRequest(req, res, req.body);
});
```

The critical pattern: wrap your service layer, not your HTTP handlers. Reuse the business logic, skip the HTTP ceremony. This keeps your tool implementations DRY and makes testing straightforward.

Running REST and MCP in Parallel Without Breaking Existing Integrations

The strategy here is non-negotiable: never remove, always add. Your MCP server should be a new mount point on your existing server process — not a replacement service, and definitely not a separate repository.

Both examples above follow this pattern deliberately. FastAPI’s `mcp.mount()` adds `/mcp` routes alongside your existing routes. Express’s `app.use(‘/mcp’, …)` leaves every other route untouched.

What this means operationally:

  • Existing API consumers see zero changes
  • You can ship the MCP layer behind a feature flag and enable it per customer
  • If something goes wrong, rollback is removing one line of code
  • Schema drift between two codebases — the failure mode that kills parallel implementations — is impossible when there’s only one codebase

Don’t create a separate MCP microservice. The temptation to “keep things clean” by isolating MCP into its own service sounds reasonable until you’re maintaining two copies of your business logic.

Authentication Done Right: From API Keys to OAuth 2.1 + PKCE in One Weekend

This is the section most tutorials skip entirely. It’s also the one that will determine whether your MCP server is production-safe.

The current state of MCP auth is genuinely concerning. Only 8.5% of MCP servers have implemented OAuth — despite the November 2025 MCP spec mandating OAuth 2.1 + PKCE for public remote servers. Worse, 53% of production MCP servers rely on insecure long-lived static secrets for authentication (Astrix Security). You can do better, and you should.

Stage 1: API key pass-through (Friday night — unblocks testing immediately)

For local/stdio MCP servers used in Claude Desktop or Cursor, passing an API key through environment variables is a reasonable starting point:

```json
{
  "mcpServers": {
    "my-saas-dev": {
      "command": "python",
      "args": ["-m", "my_saas_mcp"],
      "env": { "API_KEY": "sk-your-dev-key", "BASE_URL": "http://localhost:8000" }
    }
  }
}
```

This is not production-ready for a public remote server. It is fine for local development and internal tooling.

Stage 2: OAuth 2.1 + PKCE (by end of weekend)

For a public-facing remote MCP server, you need OAuth 2.1 with PKCE. The minimum viable path if you’re using an existing OAuth provider (Auth0, Okta, Supabase):

  1. Register an OAuth application in your existing auth system
  2. Implement the MCP authorization endpoint — your server returns an `authorization_url` that starts the PKCE flow
  3. Handle the callback and exchange the authorization code for tokens
  4. Validate tokens on every incoming MCP request

Steps 1 and 3 are mostly provider configuration. The MCP SDK handles PKCE challenge/verifier generation.
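Step 4 — validating the token on every request — is the part you own. Here is a stdlib-only sketch of the *shape* of that check, using an HMAC-signed demo token as a stand-in for real JWT validation; in production you verify the JWT your OAuth provider issued (signature via JWKS, issuer, audience, expiry) rather than rolling your own:

```python
import base64
import hashlib
import hmac
import time

# Hypothetical signing key for the demo — real validation uses your provider's keys.
SECRET = b"replace-with-your-signing-key"

def mint_token(exp: int) -> str:
    """Mint a demo token of the form '<payload-b64>.<signature-b64>'."""
    payload_b64 = base64.urlsafe_b64encode(f"exp={exp}".encode()).decode()
    sig = hmac.new(SECRET, payload_b64.encode(), hashlib.sha256).digest()
    return payload_b64 + "." + base64.urlsafe_b64encode(sig).decode()

def validate_token(token: str) -> bool:
    """Reject tampered, malformed, or expired tokens — simplified stand-in
    for full JWT validation. Run this on every incoming MCP request."""
    try:
        payload_b64, sig_b64 = token.rsplit(".", 1)
        expected = hmac.new(SECRET, payload_b64.encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(base64.urlsafe_b64decode(sig_b64), expected):
            return False  # signature mismatch: token was tampered with
        payload = base64.urlsafe_b64decode(payload_b64).decode()
        return int(payload.removeprefix("exp=")) > time.time()  # expiry check
    except (ValueError, IndexError):
        return False  # malformed token: never let parse errors through
```

The two properties worth copying are the constant-time signature comparison and the catch-all rejection of malformed input — a token that fails to parse must fail closed.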

One critical security note before you wire this up: always validate and sanitize OAuth `authorization_endpoint` URLs before using them. A CVSS 9.6 critical vulnerability in the widely-used `mcp-remote` package — with 437,000+ downloads, referenced in Cloudflare, Hugging Face, and Auth0 docs — allowed command injection through unsanitized authorization endpoint URLs (JFrog Security Research). Never construct redirect URLs from unvalidated input.

The Edge Cases Nobody Warns You About (Error Handling, Rate Limits, Schema Gotchas)

Error handling: Don’t swallow HTTP errors

The single most common mistake in MCP migrations is catching HTTP errors and returning empty results. When your API returns a 404, an agent that receives `null` assumes the record doesn’t exist and moves on. When it receives silence on a 500, it either retries forever or gives up. Neither is correct behavior.

Map HTTP status codes to explicit MCP error responses:

```python
# Note: the exact import path for McpError/ErrorCode varies by MCP SDK
# version — check your SDK's documentation.
async def get_contact_tool(id: str):
    response = await api_client.get(f"/contacts/{id}")
    if response.status_code == 404:
        raise McpError(ErrorCode.InvalidParams, f"Contact {id} not found — verify the ID is correct")
    if response.status_code == 429:
        raise McpError(ErrorCode.InternalError, "Rate limit exceeded. Wait 60 seconds before retrying.")
    if response.status_code >= 500:
        raise McpError(ErrorCode.InternalError, f"Upstream API error ({response.status_code}) — this is transient, retry once")
    return response.json()
```

The error message is what the agent reads to decide its next action. Write it like you’re leaving instructions.

Rate limiting: Agents loop

A human developer calls your search endpoint perhaps 20 times per hour. An agent in an agentic loop can hit it 200 times per minute without any user involvement. Your existing rate limits were designed for human call patterns.

Apply a separate, more conservative rate limit policy to your `/mcp` path. A reasonable baseline: 60 tool calls per minute per session, with a clear `Retry-After` duration surfaced in the error message. If you’re behind a gateway like Kong, Zuplo, or AWS API Gateway, this is a configuration change, not a code change.

Schema gotchas

Two issues that trip up nearly every migration:

  • Optional vs. required fields: FastAPI’s Pydantic models infer `required` from Python type hints. If a field is typed `str | None = None`, it’s optional. Double-check your generated schemas match your intent — agents will always try to fill required fields.
  • Enum types: REST APIs often accept string enums. If your MCP schema doesn’t include the `enum` constraint, agents will invent values. They’re creative in the worst way.
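Both gotchas show up in the same place: the `inputSchema`. A sketch for a hypothetical `search_contacts` tool that handles both — the `enum` pins the allowed values, and `limit` stays out of `required` because it has a sensible default:

```python
# Hedged sketch: an inputSchema with both gotchas handled explicitly.
search_contacts_schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string", "description": "Free-text search terms"},
        "status": {
            "type": "string",
            # Without this enum constraint, agents will invent status values.
            "enum": ["active", "archived", "pending"],
            "description": "Filter by contact status",
        },
        "limit": {"type": "integer", "default": 20, "description": "Max results to return"},
    },
    # Only fields with no sensible default belong in required — agents will
    # always try to fill every required field, inventing values if they must.
    "required": ["query"],
}
```

Reviewing your generated schemas against this checklist — every enum constrained, every default-bearing field optional — takes minutes and prevents an entire class of hallucinated inputs.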

Testing, Smoke-Testing with Claude Desktop, and Shipping to Production

Start with MCP Inspector

Before connecting any AI client, use the official [MCP Inspector](https://inspector.mcptools.dev/) to validate your server. It’s a browser-based tool that connects to your local MCP server and lets you browse all registered tools, send test calls with custom inputs, and inspect raw request/response payloads. Run it against every tool before touching Claude Desktop.

Smoke test with Claude Desktop

Once Inspector passes, add your server to Claude Desktop and ask it to perform a real task — not “call the get_contact tool with ID 123”, but “find the contact named Sarah and summarize her recent activity.” Watch the tool call log. If the agent calls the wrong tool, your descriptions need work. If it fails silently, check your error handling.

The weekend timeline

Friday evening (2–3 hours): Set up the MCP SDK, register your first 3–5 tools, get MCP Inspector passing cleanly.

Saturday (6–8 hours): Complete tool registration for all target endpoints. Implement auth (API key pass-through to unblock testing). Write explicit error handling for every tool. Run end-to-end tests with Claude Desktop.

Sunday (4–6 hours): Implement OAuth 2.1 + PKCE if shipping a public remote server. Apply rate limiting to the `/mcp` path. Deploy to staging. Smoke test with a production-like API key and a real agent workflow.

By Sunday evening, you have a tested, production-grade MCP server running alongside your REST API — with auth, error handling, and rate limiting in place.

Conclusion: Complete Your REST API to MCP Server Migration This Weekend

The REST API to MCP server migration isn’t a rewrite — it’s a translation layer that makes your existing product natively callable by every AI agent your customers are already using. The conceptual shift from resources-and-verbs to named functions is the hardest part, and the implementation follows naturally once that clicks.

The teams shipping MCP support this weekend aren’t the ones with the best APIs. They’re the ones who stopped waiting for a perfect migration guide and started wrapping what they already had.

Pick your two best-loved endpoints, wrap them this Friday, and connect them to Claude Desktop before lunch on Saturday. Once you see an agent reason about your API in real time, the rest of the migration writes itself.
