MCP Explained: What It Is and Why It Matters

Anthropic's Model Context Protocol — MCP — is an open standard for connecting LLMs to external tools and data sources. It launched in late 2024 and has since become the closest thing the AI tool ecosystem has to a shared plumbing specification. The pitch is "USB-C for AI" — one standardized connector instead of a thousand custom integrations. That framing is useful, mostly accurate, and just misleading enough to be worth taking apart.

What It Actually Does

MCP is a JSON-RPC 2.0 protocol. That's the first thing worth understanding — it's not a framework, not a library, not a product. It's a wire protocol that defines how an LLM client (Claude, Cursor, Windsurf, etc.) talks to an external server that exposes tools, data, or prompt templates. The client says "what can you do?" The server responds with a capabilities list. The client picks a tool, calls it with structured arguments, and gets structured results back. That's it. That's the whole thing at the transport level.

The architecture is client-server. An MCP client lives inside whatever AI application you're using. An MCP server is a separate process — could be local, could be remote — that wraps some external service or data source. The client discovers the server's capabilities through a negotiation step, then calls them as needed during a conversation. The transport layer supports two modes: stdio (for local servers, where the client spawns the server as a child process) and HTTP with Server-Sent Events (for remote servers). A newer "Streamable HTTP" transport has superseded the original SSE approach in current spec revisions, but stdio remains the most common choice for local development.
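
For the stdio transport, framing is about as simple as framing gets: one JSON message per line over the child process's stdin and stdout. A sketch of that newline-delimited framing, with stream objects standing in for the real pipes:

```python
import json
from typing import Iterator, TextIO

def read_messages(stream: TextIO) -> Iterator[dict]:
    """Yield JSON-RPC messages from a newline-delimited JSON stream,
    the way the MCP stdio transport frames them (one message per line)."""
    for line in stream:
        line = line.strip()
        if line:  # tolerate blank lines between messages
            yield json.loads(line)

def write_message(stream: TextIO, msg: dict) -> None:
    """Write one message; the trailing newline is the frame delimiter."""
    stream.write(json.dumps(msg) + "\n")
    stream.flush()
```

In a real stdio server the streams are sys.stdin and sys.stdout, and the client owns both ends because it spawned the process.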

The protocol defines three primitives. Tools are actions — things the LLM can invoke that have side effects. Send an email, create a file, query a database. Tools have JSON Schema definitions that tell the model what arguments they accept. Resources are data — things the LLM can read but not change. A file's contents, a database record, a web page. Resources have URIs and can be listed or read. Prompts are reusable templates — predefined interaction patterns that a server can offer. In practice, tools get 90% of the attention, resources get the rest, and prompts are a nice idea that most servers don't bother implementing.
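
Concretely, a tool definition looks roughly like this on the wire — field names follow the spec's tools/list result, while the send_email tool itself and the validation helper are illustrative:

```python
# A tool definition as a server would advertise it.
send_email = {
    "name": "send_email",
    "description": "Send an email on the user's behalf.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "to": {"type": "string"},
            "subject": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["to", "body"],
    },
}

def check_args(tool: dict, args: dict) -> list[str]:
    """Minimal check against the tool's schema: required keys present,
    declared string fields actually strings. (A real server would run a
    full JSON Schema validator here.)"""
    schema = tool["inputSchema"]
    errors = [f"missing required field: {k}"
              for k in schema.get("required", []) if k not in args]
    for key, spec in schema["properties"].items():
        if key in args and spec["type"] == "string" and not isinstance(args[key], str):
            errors.append(f"field {key!r} must be a string")
    return errors
```

The inputSchema is what the model actually reads when deciding how to call the tool, which is why vague schemas produce vague calls.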

Why does this matter? Because before MCP, every AI tool that wanted to connect to, say, GitHub had to build its own GitHub integration from scratch. Every client reimplemented the same OAuth flow, the same API calls, the same error handling. If you had N AI clients and M external services, you needed N times M custom integrations. MCP reduces that to N plus M — each client implements the MCP client protocol once, each service gets one MCP server, and everything connects. In theory.
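
The arithmetic, with illustrative counts:

```python
def integration_count(clients: int, services: int, with_mcp: bool) -> int:
    """Custom integrations needed: one per client-service pair without MCP,
    one protocol implementation per side with it."""
    return clients + services if with_mcp else clients * services

# Illustrative numbers: 5 AI clients, 20 external services.
assert integration_count(5, 20, with_mcp=False) == 100  # N x M
assert integration_count(5, 20, with_mcp=True) == 25    # N + M
```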

What The Demo Makes You Think

The demos are compelling. You see Claude Code connected to a filesystem server, a GitHub server, a database server — and the model seamlessly reads files, creates pull requests, and queries tables without anyone writing custom integration code. It looks like the connection problem is solved. AI tools can now talk to anything.

Here's what the demo skips.

It skips the part where MCP standardizes the connection, not the quality. A bad MCP server is still bad. If the server doesn't handle errors properly, the model gets garbage responses and hallucinates a fix. If the server doesn't validate inputs, malformed arguments pass through and produce cryptic failures. The protocol doesn't enforce good engineering — it just makes good engineering pluggable.
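
One habit that separates good servers from bad ones is reporting failures in-band instead of crashing or leaking stack traces — the spec's tool results carry an isError flag for exactly this. A sketch, with a hypothetical handler standing in for real tool logic:

```python
from typing import Callable

def safe_tool_result(handler: Callable[..., str], **args) -> dict:
    """Run a tool handler and always return a structured result. On failure,
    put a readable error in the result (flagged with isError) so the model
    sees what went wrong instead of garbage or a dropped connection."""
    try:
        return {"content": [{"type": "text", "text": handler(**args)}],
                "isError": False}
    except Exception as exc:
        return {"content": [{"type": "text",
                             "text": f"{type(exc).__name__}: {exc}"}],
                "isError": True}

def flaky_lookup(key: str) -> str:
    """Hypothetical handler that fails on unknown keys."""
    if key != "known":
        raise KeyError(key)
    return "value"
```

Nothing in the protocol forces a server to do this — which is precisely the point.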

It skips the authentication story, which is — to be generous — incomplete. Most MCP servers handle auth by reading API keys from environment variables. That works for a developer on their laptop. It does not work for a production system that needs OAuth token refresh, credential rotation, or multi-user auth delegation. The MCP spec now defines an OAuth-based authorization framework, but real-world implementation is uneven. Many community servers treat auth as an afterthought.
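
The environment-variable pattern is, for the record, about this much code — which is exactly why it's so common, and why it stops at the laptop (the variable name is illustrative):

```python
import os

def load_api_key(var: str = "GITHUB_TOKEN") -> str:
    """Read a long-lived key from the environment at startup — the pattern
    most MCP servers actually use. No refresh, no rotation, no per-user
    delegation: fine for local development, not for production."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it before starting the server")
    return key
```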

It skips the discovery problem. How do you find MCP servers? There's no npm for MCP. Registries like Smithery and mcp.run exist, but there's no single authoritative source. You end up searching GitHub, reading blog posts, and hoping the server you found still works with the current spec version. The USB-C analogy breaks here — USB-C has a certification program. MCP has a README that might be six months out of date.

And it skips the "USB-C for AI" analogy's biggest failure mode. USB-C actually works universally — you plug in a cable and it charges, transfers data, or outputs video. MCP servers vary wildly in capability, reliability, and maintenance status. It's more like "USB-C if every cable manufacturer could choose which pins to implement." The connector is standard. What comes through it is not.

What's Coming

The MCP specification is actively evolving. The transport layer is getting more mature — the move from raw SSE to Streamable HTTP reflects real-world feedback about connection reliability and statefulness. The authorization model is being formalized, with OAuth-based authorization moving from "some servers do this" to "the spec defines how."

Client adoption is the real signal. As of early 2026, MCP is supported by Claude (desktop and Claude Code), Cursor, Windsurf, Zed, Sourcegraph Cody, and a growing list of AI coding tools. OpenAI added MCP support to ChatGPT and its Agents SDK. When both Anthropic and OpenAI agree on a protocol, the ecosystem tends to follow. The N-plus-M math gets more attractive as N grows.

The server ecosystem is maturing but slowly. The official servers — maintained by Anthropic or its partners — cover the obvious targets: GitHub, Google Drive, Slack, filesystem, PostgreSQL, SQLite. Community servers cover the long tail, with the quality distribution you'd expect from open source: a few excellent ones, many adequate ones, and a large number of weekend projects that work on the author's machine and nowhere else.

What MCP probably won't solve is the hard problem of tool selection. As the number of available servers grows, the model needs to decide which tools to call and when. Current LLMs get confused when presented with too many tools — performance degrades as the tool count increases. MCP makes it easy to connect thirty servers. It doesn't make it wise.
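
Mitigations live at the client layer, not in the protocol. One crude sketch — nothing but keyword overlap, purely illustrative — of pre-filtering the advertised tools before the model ever sees them:

```python
def select_tools(tools: list[dict], task: str, limit: int = 8) -> list[dict]:
    """Rank tools by keyword overlap between the task description and each
    tool's name/description, and expose only the top few to the model.
    A sketch of the idea, not a production tool router."""
    task_words = set(task.lower().split())

    def overlap(tool: dict) -> int:
        text = f"{tool['name']} {tool.get('description', '')}".lower()
        return sum(w in text for w in task_words)

    ranked = sorted(tools, key=overlap, reverse=True)
    return [t for t in ranked if overlap(t) > 0][:limit]
```

Real clients use smarter ranking, but the shape of the fix is the same: shrink the menu before asking the model to order.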

The Verdict

MCP is the right idea at the right time, executed at about 70% of where it needs to be. The core protocol is solid. The N-plus-M math is real. The adoption curve — especially with both Anthropic and OpenAI on board — suggests this will become the de facto standard for LLM-to-tool communication, not because it's perfect but because it's the only credible option.

What it is not: a guarantee of quality. MCP makes connection possible. It doesn't make connection reliable, secure, or well-maintained. The protocol is the easy part. The hard part is still building good servers, handling auth properly, managing state, and maintaining integrations as APIs evolve underneath them.

Use MCP if you're building AI tools that need external connections. Learn the protocol if you're evaluating which AI tools to adopt — MCP support is increasingly table stakes. But don't mistake "supports MCP" for "integrations work flawlessly." The plumbing is standardized. The water pressure still varies.


This is part of CustomClanker's MCP & Plumbing series — reality checks on what actually connects.