MCP: What Actually Connects and What Doesn't

Model Context Protocol is Anthropic's bet on how AI models should talk to the outside world. The pitch is clean: a single open standard for connecting Claude to any external tool, data source, or service. Instead of building custom integrations for every API, you write an MCP server once, and any MCP-compatible client can use it. In theory, this is USB for AI — one protocol to replace a mess of proprietary connectors. In practice, after spending three weeks setting up, testing, and breaking various MCP servers across Claude Code and Claude.ai, the picture is more nuanced. Some connections are genuinely solid. Some work on a good day. Some exist only as README files in abandoned GitHub repos.

What The Docs Say

Per Anthropic's MCP documentation, the protocol follows a client-server architecture. The MCP client (Claude Code, Claude.ai, or any compatible application) communicates with MCP servers that expose tools, resources, and prompts through a standardized JSON-RPC interface. Each MCP server wraps some external capability — file system access, a database connection, a web API — and presents it to Claude as a callable tool. Claude sees the tool's name, description, and parameter schema, decides when to call it based on the conversation context, and handles the response.
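Concretely, a tool invocation is a small JSON-RPC message on the wire. Here is a sketch of a tools/call request based on the public spec; the tool name and arguments are hypothetical:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": { "path": "src/main.py" }
  }
}
```

The server replies with a result object containing content blocks (usually text), which the client hands back to the model as the tool's output.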

The docs describe three transport mechanisms: stdio (for local servers running as child processes), HTTP with Server-Sent Events, and a newer Streamable HTTP transport that more recent spec revisions favor over SSE. The protocol supports tool discovery, resource listing, and prompt templates. Anthropic positions MCP as an open standard — the spec is public, and third-party developers can build both servers and clients. The official MCP server registry lists dozens of available integrations across categories like developer tools, databases, cloud services, and productivity apps.

What's Production-Grade

File system access through MCP is the most reliable integration I tested. Claude Code's built-in file system capabilities run over MCP under the hood, and they work as advertised — reading, writing, and creating files, and searching directories. This is so seamless that most Claude Code users don't even realize MCP is involved. The file system MCP server handles path resolution, permission checks, and file encoding consistently. I threw edge cases at it — symlinks, binary files, deeply nested directories, files with Unicode names — and it handled all of them without complaint. This is the gold standard for what an MCP integration should feel like: invisible.

Git operations via MCP are similarly robust. Claude Code can run git commands, read diffs, check logs, and understand repository state through MCP tooling. I used it daily across multiple repos for two weeks. It occasionally gets confused by unusual git configurations — submodules can trip it up, and repos with thousands of branches sometimes cause timeouts — but for standard git workflows (status, diff, log, commit, branch), it's reliable. The key detail: Claude doesn't just execute git commands blindly. It reads the output, understands what changed, and uses that understanding to inform subsequent actions. This is MCP working as intended — the tool provides data, the model provides intelligence.

Database queries work well when the setup is right. The SQLite MCP server is straightforward to configure and handles read queries reliably. I tested it with databases up to a few hundred megabytes and it handled schema inspection, SELECT queries, and basic aggregations without issues. PostgreSQL integration via MCP servers from the community also works, though setup requires more configuration — connection strings, SSL certificates, and network access all need to be correct before the MCP layer can do its job. The pattern I noticed: MCP adds minimal overhead when the underlying connection is solid. Most "MCP failures" with databases are actually database configuration failures that would break any client.
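Stripped of the protocol layer, what the SQLite server does reduces to plain read queries. A minimal sketch of the two query categories that worked reliably for me, schema inspection and aggregation, run against a throwaway in-memory database (the table and data are invented for illustration):

```python
import sqlite3

# In-memory database standing in for a local SQLite file an MCP server would wrap.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (region, total) VALUES (?, ?)",
    [("east", 120.0), ("west", 80.0), ("east", 40.0)],
)

# Schema inspection: listing tables is a query against sqlite_master.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)  # ['orders']

# A basic read aggregation, the category of query that worked reliably.
for region, total in conn.execute(
        "SELECT region, SUM(total) FROM orders GROUP BY region ORDER BY region"):
    print(region, total)
```

The point is that MCP adds almost nothing here: if these queries work in a SQL shell, they work through the server.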

Web search and web fetching through MCP are functional but with clear boundaries. Claude can search the web and fetch page content through MCP tools, and the results are generally accurate for straightforward lookups. It handles documentation pages, Wikipedia articles, and blog posts well. It struggles with JavaScript-heavy single-page applications, paywalled content, and pages that require authentication. This isn't an MCP limitation — it's a web scraping limitation that MCP inherits.

What's Fragile

Complex API integrations sit in the fragile category. I tested MCP servers for GitHub's API, various cloud providers, and productivity tools. They work — sometimes. The failure modes are instructive. Authentication is the most common breaking point. MCP servers that require OAuth flows, API keys in environment variables, or token refresh logic introduce failure points that have nothing to do with the protocol itself. I set up a GitHub MCP server that worked perfectly for public repos but failed silently when accessing private repos because the token scope was wrong. The error message from MCP was generic; diagnosing the actual issue required reading the server's source code. This is a tooling maturity problem, not a protocol problem, but the distinction doesn't help when you're debugging at 11 PM.
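For classic GitHub tokens, the REST API reports granted scopes in an X-OAuth-Scopes response header, so a scope mismatch like mine is cheap to detect up front. A hypothetical pre-flight helper, not part of any MCP server, that checks such a header against what a workflow needs:

```python
def missing_scopes(scopes_header: str, required: set[str]) -> set[str]:
    """Return the required scopes absent from a comma-separated scope header."""
    granted = {s.strip() for s in scopes_header.split(",") if s.strip()}
    return required - granted

# A token granted only public-repo access, checked against a private-repo workflow:
print(missing_scopes("public_repo, read:org", {"repo"}))  # {'repo'}
```

Surfacing "token is missing the repo scope" before the first tool call beats a generic MCP error at 11 PM.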

Multi-step tool chains are where MCP's elegance meets practical friction. The protocol handles individual tool calls cleanly. But when a task requires calling tool A, using its output to parameterize tool B, then combining both results for tool C, reliability drops with each step. I tested a workflow that involved searching for files (tool 1), reading their contents (tool 2), and writing a summary to a new file (tool 3). Each step individually worked 95%+ of the time. The full chain completed successfully maybe 80% of the time. The failures were usually Claude misinterpreting an intermediate result or a timeout on one step cascading to the next. This is manageable for developer-facing tools where you can retry and adjust. It's not ready for unattended automation where silent failures matter.
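The drop-off is roughly what independent per-step failure rates predict, though in practice my failures were often correlated (one timeout poisoning the next step), so treat this as an idealized model rather than a measurement:

```python
# Per-step success rate observed in testing (roughly).
p = 0.95
steps = 3

# If steps fail independently, chain success is the product of the step rates.
chain = p ** steps
print(f"{chain:.3f}")  # 0.857

# One retry per step lifts the effective per-step rate, and the chain with it.
p_retry = 1 - (1 - p) ** 2
print(f"{p_retry ** steps:.3f}")  # 0.993
```

Even cheap retries recover most of the loss, which is why attended use feels fine and unattended automation does not.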

Server discovery and configuration remain rough. The official getting-started experience involves editing JSON config files, sometimes running Docker containers, and frequently consulting GitHub issues when the documented configuration doesn't work. I spent more time configuring MCP servers than using them in the first week. The config file format is straightforward — JSON with server names, commands, and arguments — but the documentation for individual servers varies wildly. Some have excellent READMEs. Some have a README that was accurate six months ago and hasn't been updated since a breaking change. The Claude Code /mcp command helps, but you still need to know what you're configuring and why.
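For reference, the config shape Claude Code reads is a JSON map of server names to launch commands. This sketch uses the official filesystem server package; the path is a placeholder, and individual servers document their own arguments (sometimes accurately):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```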

What's Vaporware

The MCP server registry and the broader ecosystem contain entries that range from "polished tool" to "someone's weekend experiment that was abandoned in 2024." I audited about 30 MCP servers from public registries and GitHub search results. Roughly a third were actively maintained with recent commits and responsive maintainers. Another third worked but hadn't been updated in months and had open issues with no responses. The remaining third were effectively dead — broken dependencies, incompatible with current MCP spec versions, or so minimally documented that getting them running would require reading the source and reverse-engineering the config.

The problem isn't that these exist — open source is like that. The problem is that MCP's marketing suggests a rich ecosystem of "connect Claude to anything" when the reality is "connect Claude to about a dozen things reliably, another dozen with effort, and the rest with heroic patience." If you go into MCP expecting a plug-and-play app store, you'll be disappointed. If you go in expecting a solid protocol with an early-stage ecosystem, your expectations will be met.

Claude Code vs. Claude.ai MCP

This distinction matters and the docs don't emphasize it enough. Claude Code runs MCP servers locally as child processes. You configure them in your project's .mcp.json or your user-level config, and Claude Code launches them when needed. This means you get full access to local resources — file system, local databases, local git repos, anything your machine can reach. The server runs with your permissions, which is powerful but worth sitting with: a misconfigured or malicious server can read and write anything you can.

Claude.ai's MCP support is more constrained. As of testing, Claude.ai supports remote MCP servers through its integrations panel, but the selection is curated and you can't bring arbitrary servers. The available integrations are generally more polished — Anthropic has presumably vetted them — but the range is narrower than what Claude Code can reach. If you need MCP for serious development work, Claude Code is the path. If you want a few reliable integrations without touching config files, Claude.ai's approach is more appropriate.

When To Use This

MCP is worth setting up when you have a recurring workflow that involves Claude interacting with external systems. If you're a developer who wants Claude to read your codebase, query your database, and interact with your git repo — MCP is already doing that in Claude Code, and it works well. If you have a specific API you interact with frequently and an MCP server exists for it, the setup time pays off after the third or fourth use.

MCP also makes sense as an architecture choice if you're building Claude-powered applications. The protocol is well-designed, the spec is stable enough to build on, and having a standard interface between your model and your tools is better than the alternative of ad-hoc function calling wrappers. For teams building with the API, MCP provides a clean separation between "what the model decides to do" and "how the tool executes it."
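That separation is visible in how a server advertises a tool: the model only ever sees a name, a description, and a JSON Schema for the parameters, never the implementation behind them. A sketch of a tools/list entry based on the public spec; the tool itself is hypothetical:

```json
{
  "name": "query_orders",
  "description": "Run a read-only SQL query against the orders database.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "sql": { "type": "string", "description": "A single SELECT statement." }
    },
    "required": ["sql"]
  }
}
```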

When To Skip This

Skip MCP when a simpler approach works. If you need Claude to process the contents of one file, just paste it into the conversation. If you need data from an API, call the API yourself and give Claude the response. MCP adds value through automation and repeated use — for one-off tasks, it's overhead. Skip the ecosystem servers you find on GitHub unless you're prepared to read their source code and debug their configurations. The maintained, well-documented servers are worth using. The rest are research projects, not tools. And skip MCP entirely if your use case is conversational — chatting with Claude, asking questions, brainstorming. MCP is for Claude-as-tool-user, not Claude-as-conversation-partner.

The protocol itself is genuinely good engineering. The ecosystem around it is where it was always going to be at this stage — uneven, partially built, full of promise and README files. Use the parts that work. Ignore the parts that don't. Check back in six months.


This article is part of the Claude Deep Cuts series at CustomClanker.