Chat Playground

Test your MCP server with a real AI client, right inside WordPress.

Chat is a built-in playground that drives your GetMCP servers the same way Claude Desktop, ChatGPT, and Cursor do — real provider, real tool calls, real OAuth. Skip the "install a desktop client and reconfigure JSON" round-trip; iterate on your tools in seconds.

4 AI providers · 5 auth strategies · OAuth 2.1 with PKCE · 0 Ajax hops
AI providers

Bring your own model. Any of the four.

Drop in an API key for Claude, ChatGPT, Groq, or Gemini and the chat starts using it. Keys are encrypted with libsodium on disk and decrypted only for your authenticated admin session — calls go directly from your browser to the provider, never through GetMCP servers.

01 · CLAUDE · Anthropic

Claude Sonnet · Opus · Haiku

Direct calls to api.anthropic.com with prompt caching enabled by default — the conversation prefix reuses the cache on every turn at ~10% of normal input cost.

  • Native tool calling (tool_use blocks)
  • Top-level cache_control auto-placement
  • Browser-direct via the anthropic-dangerous-direct-browser-access opt-in header
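A minimal sketch of what such a browser-direct Claude request might look like (Python used purely for illustration; the helper name, model string, and the cache-breakpoint placement — here on the last tool definition, one common pattern — are assumptions, not GetMCP's actual code):

```python
def build_claude_request(model, system, messages, tools):
    """Sketch of a Messages API body. Marking the last tool with
    cache_control lets the stable prefix (tools + system prompt)
    be served from Anthropic's prompt cache on later turns."""
    cached_tools = [dict(t) for t in tools]
    if cached_tools:
        cached_tools[-1]["cache_control"] = {"type": "ephemeral"}
    return {
        "model": model,
        "max_tokens": 1024,
        "system": system,
        "tools": cached_tools,
        "messages": messages,
    }

headers = {
    "x-api-key": "sk-ant-...",            # stored encrypted at rest
    "anthropic-version": "2023-06-01",
    # opt-in header required for calls made directly from a browser
    "anthropic-dangerous-direct-browser-access": "true",
}

body = build_claude_request(
    "claude-sonnet-4-5",
    "You are the GetMCP Chat assistant.",
    [{"role": "user", "content": "What meetings do I have tomorrow?"}],
    [{"name": "list_events", "description": "List calendar events",
      "input_schema": {"type": "object", "properties": {}}}],
)
```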
02 · CHATGPT · OpenAI

GPT-4o · 4-Turbo · o1-mini

Standard Chat Completions endpoint with the same tool list shape Cursor uses. OpenAI's automatic prompt caching kicks in once the prefix crosses 1024 tokens — no client config needed.

  • OpenAI tools + tool_calls
  • Auto-cache for stable prefixes
  • Per-server tool selection at request time
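Wrapping MCP tool definitions in the Chat Completions `tools` shape reduces to a small transform; a sketch, with the helper name and config shapes assumed for illustration:

```python
def build_openai_request(model, messages, mcp_tools):
    """Wrap MCP tool definitions in the OpenAI Chat Completions
    tools shape: each entry is {"type": "function", "function": ...}."""
    return {
        "model": model,
        "messages": messages,
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": t["name"],
                    "description": t.get("description", ""),
                    "parameters": t.get("inputSchema", {"type": "object"}),
                },
            }
            for t in mcp_tools
        ],
    }

body = build_openai_request(
    "gpt-4o",
    [{"role": "user", "content": "List my event types"}],
    [{"name": "list_event_types", "description": "List Calendly event types",
      "inputSchema": {"type": "object", "properties": {}}}],
)
```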
03 · GROQ · Fast

Llama 3.3 70B · 3.1 8B · Mixtral

OpenAI-compatible interface against Groq's LPU runtime. When latency matters more than reasoning depth — quick smoke tests, deterministic schemas — Groq turns each turn into a sub-second round trip.

  • OpenAI-compatible request shape
  • LPU-backed millisecond responses
  • Free tier covers iterative testing
04 · GEMINI · Google

Gemini 2.5 Pro · 2.5 Flash

Native Gemini generateContent with function-declaration tools. Schemas get sanitised on the way through (empty required arrays are stripped), so existing OpenAPI-derived tools just work.

  • Gemini functionCall / functionResponse
  • System instruction + per-call resource
  • Schema sanitiser for cross-provider tools
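The schema sanitiser amounts to a recursive walk that drops the keys Gemini rejects; a sketch of the empty-`required` case (function name assumed):

```python
def sanitise_schema(schema):
    """Recursively drop empty `required` arrays from a JSON Schema.
    Other Gemini-rejected keys could be filtered the same way."""
    if isinstance(schema, dict):
        return {
            k: sanitise_schema(v)
            for k, v in schema.items()
            if not (k == "required" and v == [])
        }
    if isinstance(schema, list):
        return [sanitise_schema(v) for v in schema]
    return schema
```

Nested objects are handled by the recursion, so an OpenAPI-derived schema with `"required": []` at any depth passes through clean while non-empty `required` lists survive untouched.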
Live tool execution

Watch the AI pick a tool, call it, and answer.

When the model decides to call a tool, the chat fires a JSON-RPC tools/call against your MCP endpoint with the configured auth, captures the result, and feeds it back. The whole round-trip renders inline — input arguments, raw JSON output, latency, and the model's final answer.
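Concretely, such a tools/call request might look like the following (a sketch; the request-id scheme and helper name are assumptions):

```python
import itertools

_ids = itertools.count(1)  # monotonically increasing JSON-RPC ids

def build_tools_call(name, arguments):
    """JSON-RPC 2.0 request the chat POSTs to the MCP endpoint."""
    return {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

req = build_tools_call("list_events", {
    "from": "2026-05-13",
    "to": "2026-05-13",
    "timezone": "Asia/Calcutta",
})
```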

mcp.mysaas.com/wp-admin/admin.php?page=getmcp-chats
User
What meetings do I have tomorrow?
GetMCP Assistant · Claude Sonnet 4.5
Discovering tools on calendly
Selected list_events from Calendly
Calling list_events with arguments
list_events
Calendly MCP · mcp/calendly
Success · 287ms
→ Input arguments
{
  "from": "2026-05-13",
  "to": "2026-05-13",
  "timezone": "Asia/Calcutta"
}
✓ Output
📅
Design review — GetMCP Chat
10:00 – 10:30 · [email protected]
evt_8842
📅
1:1 with Marcus
13:00 – 13:30 · [email protected]
evt_8851
You have 2 meetings tomorrow (Wed, May 13). Your first is at 10:00 — design review with Priya. Want me to send anyone a heads-up?
Server authentication

Five auth methods. Including the one Claude Desktop uses.

Point the chat at any GetMCP-built server — or any third-party MCP server — and it negotiates auth the same way ChatGPT and Claude Desktop do. Tokens are encrypted on disk and refreshed silently when they expire.

NONE · Public

Open servers

For test servers and read-only public endpoints. The chat skips auth entirely; tools execute as anonymous calls.

BEARER · Token

Static bearer token

Paste a token; the chat sends it as Authorization: Bearer … on every tool call. Encrypted at rest with libsodium.

API KEY · Custom header

API key header

Configure the header name (X-API-KEY, X-FlowMattic-Key, anything) and the key. Useful for legacy services that don't speak OAuth.

BASIC · User/pass

HTTP Basic

Username + password, base64-encoded into the Authorization header. Both halves are stored encrypted; the base64 happens at request time.
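The first four strategies reduce to header construction at request time; a sketch (the config shape is an assumption, not GetMCP's stored format):

```python
import base64

def auth_headers(config):
    """Build the outbound auth header for a tool call from a
    decrypted credential config."""
    kind = config["type"]
    if kind == "none":
        return {}
    if kind == "bearer":
        return {"Authorization": f"Bearer {config['token']}"}
    if kind == "api_key":
        # configurable header name, e.g. X-API-KEY
        return {config["header"]: config["key"]}
    if kind == "basic":
        # both halves stored encrypted; base64 happens here, at request time
        raw = f"{config['username']}:{config['password']}".encode()
        return {"Authorization": "Basic " + base64.b64encode(raw).decode()}
    raise ValueError(f"unknown auth type: {kind}")
```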

OAUTH 2.1 · PKCE

OAuth 2.1 with PKCE

Click Connect: the chat probes the MCP endpoint for the WWW-Authenticate hint, walks RFC 9728 + 8414 discovery, registers a public client (RFC 7591), redirects you with PKCE, and persists the token bundle. Silent refresh on 401.
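The PKCE step in that flow is standard RFC 7636 material: generate a random code_verifier, send its S256 hash as the code_challenge, and present the verifier at token exchange. A self-contained sketch:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a code_verifier and its S256 code_challenge (RFC 7636).
    Both are base64url-encoded without padding."""
    verifier = base64.urlsafe_b64encode(
        secrets.token_bytes(32)
    ).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

The authorization redirect carries `code_challenge` and `code_challenge_method=S256`; the token request later proves possession by sending the raw `code_verifier`.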

REFRESH · Auto

Silent token refresh

When the MCP server returns 401 mid-conversation, the chat exchanges the refresh token, swaps the access token in place, and retries the same tool call — without bothering the user.
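The retry logic behind that behaviour can be sketched as a small wrapper (function names and the `(status, body)` shape are illustrative assumptions):

```python
def call_with_refresh(do_call, refresh_fn, token):
    """Retry a tool call once after refreshing the token on a 401.
    do_call(token) -> (status, body); refresh_fn() -> new access token."""
    status, body = do_call(token)
    if status == 401:
        token = refresh_fn()          # exchange the refresh token
        status, body = do_call(token)  # retry the same call in place
    return status, body

# Usage with fakes: first call 401s on the stale token, retry succeeds.
seen = []
def fake_call(tok):
    seen.append(tok)
    return (200, "ok") if tok == "new" else (401, "")

status, body = call_with_refresh(fake_call, lambda: "new", "old")
```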

Control & safety

Admin-only. Cost-aware. Auditable.

Chat sessions consume your provider quota and execute MCP tools using stored credentials. The playground is gated to a per-site allowlist, every call is logged, and approvals can be required before any tool fires.

User allowlist

Decide exactly who can chat.

Site administrators always have access. Editors and other non-admin users can be granted access individually from Settings → Security → Chat playground access. The Chats nav item disappears for users who aren't on the list — not just disabled, hidden — and the REST endpoints reject them at the server.

  • Stored in getmcp_settings.chat_allowed_users
  • Per-user opt-in for non-admin editors
  • Sidebar item hidden when not allowed
  • Capability check on every chat REST call
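The gate itself is a simple rule (the real check runs in PHP on every REST call; this Python sketch only illustrates the logic, with names assumed):

```python
def can_use_chat(user_id, roles, allowed_users):
    """Admins always pass; everyone else must be on the stored allowlist
    (chat_allowed_users in the plugin settings)."""
    if "administrator" in roles:
        return True
    return user_id in allowed_users
```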
Settings · Security
[email protected] · administrator
[email protected] · editor
Site admins always have access — list adds non-admins.
Cost controls

Anthropic prompt caching, on by default.

Every Claude request goes out with a top-level cache_control breakpoint. After the first turn, the conversation prefix — tools, system prompt, and every prior message — is served from Anthropic's cache at roughly 10% of normal input price. OpenAI's automatic cache kicks in for the same reason: byte-stable prefix.

  • Top-level cache_control: ephemeral on every Claude call
  • Frozen system prompt + deterministic tool order
  • OpenAI auto-cache on prefixes ≥1024 tokens
  • Approval gate optional per session
usage · turn 4 of 6
input_tokens 412
cache_read_input_tokens 3,841
cache_creation_input_tokens 0
↳ effective input cost ~10% of full
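The arithmetic behind that usage panel: cache reads bill at ~10% of the base input rate, and ephemeral cache writes at ~125% (Anthropic's published multipliers; the helper below is an illustrative sketch, not plugin code):

```python
def effective_input_cost(input_tokens, cache_read, cache_creation,
                         price_per_token=1.0):
    """Approximate effective input cost under Anthropic prompt caching:
    fresh input at 1x, cache reads at ~0.10x, cache writes at ~1.25x."""
    return price_per_token * (
        input_tokens + 0.10 * cache_read + 1.25 * cache_creation
    )

# Turn 4 above: 412 fresh tokens, 3,841 served from cache, 0 written.
cached = effective_input_cost(412, 3841, 0)
full = effective_input_cost(412 + 3841, 0, 0)  # same turn, no caching
ratio = cached / full
```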
Audit trail

Every chat call lands in your existing logs.

The chat identifies itself to the MCP server as GetMCP Chat via the standard initialize handshake — so it shows up in your logs alongside Claude Desktop, ChatGPT, and Cursor. Filter by client, replay calls, export to CSV. Same logs page, no separate dashboard.

  • MCP clientInfo.name: GetMCP Chat
  • Per-call latency, payload, status
  • Filter logs by client type
  • Replay any historical call inline
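The identification happens in the standard MCP initialize request; a sketch of its shape (the protocol version and client version strings are assumptions):

```python
def build_initialize(protocol_version="2025-03-26"):
    """The JSON-RPC initialize handshake that names the client,
    which is what surfaces as the client type in the logs."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": protocol_version,
            "capabilities": {},
            "clientInfo": {"name": "GetMCP Chat", "version": "1.0.0"},
        },
    }

req = build_initialize()
```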
Logs · last 5 calls
200 · list_events · GetMCP Chat · 287 ms
200 · create_event · Claude · 412 ms
200 · list_event_types · GetMCP Chat · 142 ms
200 · get_availability · ChatGPT · 231 ms
200 · list_events · Cursor · 265 ms
The interface

An admin chat that doesn't feel like an admin chat.

Built with the same primitives a real chat product would use — markdown rendering, streaming replies, an approval gate, suggested prompts. Settings are persisted server-side per user, so the playground remembers your preferences across machines.

MARKDOWN · Headings · lists · code

Real markdown rendering

Assistant replies parse ## headings, bullet lists, numbered lists, blockquotes, links, and inline code — rendered as native HTML, not echoed as raw text.

STREAMING · Char-by-char

Streaming replies

Final answers stream in with a typewriter effect (toggleable). The thinking dots, tool trace, and per-tool status indicators all reflect the underlying provider state in real time.

APPROVAL · Per call

Optional approval gate

Turn Auto-approve tool calls off and the assistant pauses before invoking any MCP tool. Approve or deny inline; deny messages are surfaced back to the model so it can try a different approach.

SUGGESTIONS · Per server

Server-aware suggestions

The empty state proposes prompts based on whichever MCP server is active and what tools it advertises — so you have something to click instead of facing a blank composer.

RAW JSON · Toggle

Inspect the raw response

Each tool card has a View raw JSON toggle. Useful when the model's natural-language summary glosses over a field you care about, or when you're debugging a tool's output shape.

PERSISTED · Per site

Server-side settings

UI preferences, provider keys, and per-server credentials all live in the encrypted getmcp_chat_settings WP option — not browser localStorage — so the same settings follow you across devices.

Why it matters

Stop installing five clients to test one server.

The shortest path from "I changed a tool description" to "I see how Claude reacts" used to be: rebuild, redeploy, re-add the server in Claude Desktop, restart the desktop app, retry the prompt. Now it's a tab switch.

ITERATE · Seconds

Edit tool → reload chat

Change a description, schema, or response transform in the GetMCP admin, then ask the chat again. The MCP server picks up the change on the next call — no client restart, no re-pairing.

COMPARE · Side-by-side

Test against any provider

Ask the same prompt to Claude Sonnet, GPT-4o, and Llama 3.3 in three turns. Spot which provider misuses your tool surface — and which one nails it on the first try.

DEMO · Stakeholder-ready

Show your MCP server live

No "let me share my screen with my desktop client" preamble. Open the WordPress admin, send a prompt, watch the tool fire. The whole flow is visible — input args, output JSON, final answer.

ONBOARD · New users

Self-serve evaluation

Customers and prospects can validate your MCP server without setting up Claude Desktop, ChatGPT enterprise connectors, or any external account — just the admin user you grant them.

Built into every plan

Drop in an API key. Start chatting.

The Chat Playground is included in every GetMCP install — no add-on, no extra license. Bring your own provider key and you're one click from a real AI client running against your real MCP server.