Every conversation about adding MCP to a SaaS product eventually arrives at the same question: how does the AI client authenticate? It’s the question that decides whether you ship or whether you stall in security review for a quarter. And it’s the question with the most subtle answer in the entire MCP specification.
This post is the practical guide: which auth methods MCP supports, which one your SaaS should pick, and the handful of mistakes that turn a clean bridge into a credential leak.
The four auth flows MCP supports
The current MCP specification recognizes four authentication patterns. Each fits a different SaaS shape, and no single one is right for every product.
1. API key in a header
The user generates an API key in your dashboard. They paste it into their MCP client’s configuration. Every request the client makes to your MCP server includes that key in an Authorization header (or a custom header like X-API-Key). Your server validates the key and routes the call.
Best fit: Most SaaS products. If your API already issues keys, you’re already 80% of the way there.
Trade-offs: Keys are long-lived. If a user pastes their key into the wrong place, it’s compromised until they rotate it. Scope your keys narrowly (per-user, per-workspace, with explicit permissions) so the blast radius of a leak is small.
2. Bearer token (short-lived)
The user authenticates through your normal login flow, receives a short-lived token (15 minutes to a few hours), and the MCP client refreshes it as needed. Same header pattern as API keys, but the token is ephemeral.
Best fit: Products that already use short-lived tokens server-to-server. If you’ve built JWT-based auth into your API, this is a near-zero-effort lift.
Trade-offs: You need a refresh mechanism the MCP client can reach. Most modern MCP clients support OAuth 2.1’s refresh flow; older or custom clients may not.
3. OAuth 2.1 with PKCE
The full OAuth dance. The user clicks “Connect” in their MCP client, gets redirected to your authorization page, approves the connection, and the client receives an access token through the authorization code flow with Proof Key for Code Exchange.
Best fit: Enterprise SaaS where individual users need fine-grained scopes, and where security review will reject anything less. Also the right pick if you’re a multi-tenant platform where one customer’s MCP connection shouldn’t touch another’s data.
Trade-offs: Real implementation cost. You need an OAuth authorization server, scope definitions, consent screens, token storage, and refresh handling. The MCP specification documents the flow but doesn’t simplify it.
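The PKCE half of the flow, at least, is small. Per RFC 7636, the client generates a random code_verifier and sends its S256 hash as the code_challenge; at token exchange the server recomputes and compares. A sketch of both sides:

```python
import base64, hashlib, secrets

def make_pkce_pair() -> tuple[str, str]:
    """Client side: generate a code_verifier and its S256 code_challenge."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def verify_pkce(verifier: str, challenge: str) -> bool:
    """Server side, at token exchange: recompute the challenge and compare."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == challenge
```

The expensive parts — authorization server, consent screens, token storage — sit around this exchange, not inside it.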
4. No authentication
The MCP server is open to anyone with the URL. Used for public read-only data — weather, stock prices, public datasets.
Best fit: Public APIs that genuinely have no per-user state. Almost never the right answer for SaaS.
Which to pick: a five-second decision tree
Use OAuth 2.1 if your customers’ security teams will block anything else. Use bearer tokens if you already use JWTs and your MCP client supports refresh. Use API keys for almost everything else. Use no auth only for genuinely public data.
For early-stage SaaS specifically: ship API keys first, plan OAuth 2.1 for your enterprise tier. This is the same path Linear, Notion, Stripe, and every major B2B platform has taken with their API surface — there’s no reason to do it differently for MCP.
The mistake that breaks every auth model: storing user credentials in the bridge
The single biggest security mistake in MCP server design is the bridge layer storing user credentials.
Here’s what goes wrong. You build (or buy) an MCP bridge that needs to know your customer’s API key to forward calls. Naïve implementations ask the customer for the key once and store it in the bridge’s database. Now your bridge is a high-value target — a single breach exposes every customer’s API key for every product they’ve connected.
The right model is end-to-end credential pass-through. The customer configures their key in their MCP client (Claude Desktop, Cursor, etc.). The client sends it on every request. The bridge forwards it to your real API without persisting it. If the bridge’s database is compromised tomorrow, no customer credentials are exposed.
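The pass-through model fits in a few lines. In this sketch, call_upstream is a hypothetical stand-in for your HTTP client — the point is what the function does not do: write the credential anywhere.

```python
def forward_request(incoming_headers: dict, call_upstream) -> dict:
    """Forward the caller's credential to the upstream API without persisting it.

    `call_upstream` stands in for your HTTP client (hypothetical).
    """
    cred = incoming_headers.get("Authorization")
    if cred is None:
        return {"status": 401, "body": "missing credential"}
    # The credential exists only in this request's scope — it is never
    # written to a database, cache, or log line.
    return call_upstream(headers={"Authorization": cred})
```

A bridge built this way has nothing worth breaching: compromise its database and you get routing config, not customer keys.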
If you’re evaluating MCP bridge tools, this is the first question to ask: do you store user credentials, or pass them through? The correct answer is “pass-through, never stored.” GetMCP uses pass-through by design — the credential the customer pastes into their MCP client is forwarded with each call, never written to disk.
Scope: the second mistake
The second mistake is shipping MCP tools with the same scope as your full API. If a user’s API key normally grants read:everything write:everything delete:everything, then a leaked key from an MCP context exposes the entire account.
Two patterns mitigate this.
MCP-specific scopes. Issue a separate key for MCP use that’s limited to a subset of your API surface. Read-only by default; the user explicitly opts in to write or delete capabilities. This works well for API-key-based auth.
OAuth scopes. If you’ve gone full OAuth 2.1, define MCP-specific scopes and ask for them at consent time. A user authorizing Claude to “create tasks and read project status” sees exactly that on the consent screen, and the resulting token can’t do anything else.
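Either way, enforcement is the same check at call time. The scope names and tool names below are hypothetical — substitute your API's own vocabulary:

```python
# Hypothetical tool-to-scope mapping; adapt to your API's vocabulary.
TOOL_SCOPES = {
    "list_tasks": "tasks:read",
    "create_task": "tasks:write",
    "delete_project": "projects:delete",
}

def authorize_tool(tool: str, granted: set[str]) -> bool:
    """A tool call is allowed only if the credential carries its scope.

    Unknown tools are denied by default — fail closed, not open.
    """
    required = TOOL_SCOPES.get(tool)
    return required is not None and required in granted
```

A read-only MCP key ({"tasks:read"}) can list tasks all day and still can't delete a project, no matter what the model asks for.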
Tool-level permissions: the third layer
Even after auth and scope, you have one more layer of safety: per-tool permissions.
Some tools should require explicit confirmation before the AI client invokes them. delete_account, issue_refund, send_email_to_all_customers — these need a human “yes” before they run, regardless of whether the auth token would permit them.
MCP supports this through tool annotations the client honors. Mark high-risk tools as requiring confirmation; Claude, Cursor, and other compliant clients will show the user a prompt before invoking. This is belt-and-suspenders, but it’s the difference between a useful agent and an expensive accident.
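Concretely, annotations ride along with each tool definition in the tools/list response. The field names below (readOnlyHint, destructiveHint, idempotentHint) follow the MCP spec's ToolAnnotations; the tool names are illustrative, and the confirmation rule is one conservative policy a client might apply, not mandated behavior:

```python
# Tool definitions as they might appear in a tools/list response.
tools = [
    {
        "name": "get_project_status",
        "description": "Read-only project summary.",
        "annotations": {"readOnlyHint": True},
    },
    {
        "name": "issue_refund",
        "description": "Refund a customer payment.",
        # Destructive and non-idempotent: a compliant client should
        # ask the user to confirm before invoking.
        "annotations": {"readOnlyHint": False, "destructiveHint": True,
                        "idempotentHint": False},
    },
]

def needs_confirmation(tool: dict) -> bool:
    """A conservative client-side rule: confirm anything not read-only."""
    ann = tool.get("annotations", {})
    return not ann.get("readOnlyHint", False)
```

Remember these are hints, not enforcement — a malicious client can ignore them, which is why scopes and auth sit underneath.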
Logging and audit: what every MCP server needs
Auth is half the story. The other half is being able to prove what happened.
Every MCP server should log, at minimum: the tool called, the arguments passed (with sensitive values redacted), the timestamp, the requesting client, and the outcome. For enterprise-tier products, this is a hard compliance requirement; for everyone else, it’s how you debug agent behavior when something weird happens.
Practical advice: don’t log raw arguments without filtering. If a tool takes a “message” parameter, you’ll quickly accumulate logs full of customer messages, including ones with PII. Either redact predictable fields by name, or hash them, or log only metadata (length, type) for sensitive params.
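The redact-by-field-name approach looks like this. The set of sensitive field names is hypothetical — you'd maintain your own list:

```python
import json, time

SENSITIVE = {"message", "email", "body"}  # hypothetical field names

def redact(args: dict) -> dict:
    """Replace sensitive values with metadata (type and length) before logging."""
    out = {}
    for k, v in args.items():
        if k in SENSITIVE:
            out[k] = {"redacted": True, "type": type(v).__name__,
                      "length": len(str(v))}
        else:
            out[k] = v
    return out

def log_tool_call(tool: str, args: dict, client: str, outcome: str) -> str:
    """One structured audit line per tool call, sensitive args redacted."""
    entry = {"ts": int(time.time()), "tool": tool, "client": client,
             "args": redact(args), "outcome": outcome}
    return json.dumps(entry)
```

The metadata is enough to debug ("the message was 4,000 characters long") without the log itself becoming a PII liability.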
What good auth looks like on a real MCP server
To bring this together — a well-secured MCP server in 2026 looks like this:
API keys (or OAuth tokens) are required on every request. They’re scoped narrowly to the user’s account and to MCP-specific permissions. Credentials are passed end-to-end and never stored in the bridge. High-risk tools are flagged for client-side confirmation. Every tool call is logged with sensitive args redacted, and the logs are queryable by the account owner.
GetMCP ships with all of this built in — pass-through auth (no credentials stored), per-tool permission controls, an audit log on every plan, and OAuth 2.1 on the Agency tier. You don’t have to design this yourself.
Spin up your secure MCP server free →
If you’re earlier in the journey, What Is Model Context Protocol? explains the protocol from scratch. If you’re past auth and into implementation, OpenAPI to MCP walks through generating an MCP server from your spec.