The Connector Landscape: What Plugs Into What

Before you promise a client — or yourself — that "the AI can connect to X," check this. The marketing pages say "integrates with 1,000+ tools." The reality is that integration quality exists on a spectrum from production-grade to vaporware, and the number on the marketing page doesn't tell you where your specific need falls. This article is the reference for what actually connects to what, how well, and through which mechanism. It will be out of date by the time you read it. That's the nature of connectors — they're a moving target. But the framework for evaluating them doesn't change.

What It Actually Does

The AI connector ecosystem has four layers, and understanding which layer you're looking at determines whether an "integration" is real or aspirational.

Native integrations are built into the AI tool by its maker. Claude's MCP tool use, ChatGPT's plugins and GPT Actions, Gemini's Google Workspace access — these are first-party, maintained by the platform team, and generally the most reliable. "Generally" is doing some work in that sentence. Native integrations still break, still have gaps, and still make tradeoffs about what's supported. But when they work, they work without you maintaining anything.

MCP servers are the standardized connector layer. An MCP server wraps an API or data source in the Model Context Protocol, letting any MCP-compatible client interact with it. The quality range here is enormous — from Anthropic-maintained reference servers to weekend projects that haven't been updated in months. MCP servers are where the ecosystem is growing fastest and where the quality variance is highest.
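Under the hood, MCP is JSON-RPC 2.0, and a tool call is a small, predictable message. Here is a sketch of the wire shape, hand-built for illustration (real clients and servers use an MCP SDK rather than constructing messages by hand; the tool name and arguments below are invented):

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # the MCP method for invoking a server tool
        "params": {"name": tool_name, "arguments": arguments},
    })

# Ask a hypothetical GitHub MCP server to search issues.
msg = json.loads(build_tool_call(1, "search_issues", {"query": "label:bug"}))
print(msg["method"])  # tools/call
```

The point is that the protocol layer is simple; the quality variance lives in what the server does when it receives this message.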

No-code connectors — Zapier, Make, Pipedream, n8n — bridge AI tools to services through their automation platforms. These have the widest coverage (Zapier alone claims 7,000+ integrations [VERIFY]) but add a layer of indirection, ongoing cost, and their own failure modes. They're connectors to connectors — your AI talks to Zapier, which talks to the service.

Direct API calls are the fallback. If no connector exists, you write code that calls the API. Full control, full responsibility, no abstraction layer to blame or lean on.
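"Full responsibility" mostly means writing the retry and error handling a connector would otherwise hide. A minimal sketch of that burden, with an exponential-backoff wrapper (the flaky function is a stand-in for a real API call, simulated here so the example is self-contained):

```python
import time

def with_retries(call, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call `call()`, retrying on exceptions with exponential backoff."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the caller
            sleep(base_delay * (2 ** attempt))

# Simulate a flaky API: fails twice, then succeeds.
state = {"calls": 0}
def flaky_fetch():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient failure")
    return {"status": "ok"}

result = with_retries(flaky_fetch, sleep=lambda s: None)
print(result, state["calls"])  # {'status': 'ok'} 3
```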

Every "integration" you encounter is one of these four. Knowing which one matters because the reliability, maintenance burden, and failure modes are completely different.

The Major AI Clients: What Connects Natively

Claude (via Claude.ai and Claude Code) supports MCP natively. Claude Code can connect to any MCP server — filesystem, GitHub, databases, Google Drive, Slack, and the growing ecosystem of community servers. Claude.ai on the web and mobile supports MCP integrations through the Integrations menu [VERIFY], with a curated set of connectors. The Claude API supports tool use, which is the underlying mechanism, but MCP server integration requires a client that speaks the protocol. Claude's connector story is the strongest for MCP-based workflows, weakest for proprietary plugin ecosystems.

ChatGPT has GPT Actions (the successor to the retired plugins program), which let custom GPTs call external APIs. The OpenAI ecosystem also has a growing list of built-in integrations — web browsing, DALL-E, Code Interpreter, and some third-party actions. ChatGPT does not support MCP natively [VERIFY]. Connecting ChatGPT to external services means either using GPT Actions (which require building an OpenAPI spec and hosting an endpoint) or routing through a no-code platform. The plugin ecosystem has been through several iterations; the current state is more stable than the original plugins marketplace, but less open than MCP.
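The OpenAPI spec requirement is concrete: you describe each endpoint the model is allowed to call. A minimal sketch of such a spec, built as a Python dict for readability (the host, path, and operation are placeholders, not a real service):

```python
import json

# Minimal OpenAPI 3.1 spec for one GET endpoint a GPT Action could call.
spec = {
    "openapi": "3.1.0",
    "info": {"title": "Ticket lookup", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com"}],  # placeholder host
    "paths": {
        "/tickets/{ticket_id}": {
            "get": {
                "operationId": "getTicket",  # the name the model invokes
                "parameters": [{
                    "name": "ticket_id",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "Ticket details"}},
            }
        }
    },
}

print(json.dumps(spec)[:40])
```

You still have to host the endpoint the spec points at, which is the part the marketing pages tend to gloss over.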

Gemini connects natively to Google Workspace — Docs, Sheets, Gmail, Calendar, Drive. For Google-ecosystem workflows, this is the tightest integration available. Outside of Google services, Gemini's extension system supports a smaller set of connections — Google Maps, YouTube, Flights, Hotels [VERIFY]. Custom extensions are available through Vertex AI for enterprise users. If your workflow is Google-centric, Gemini has an edge. If it's not, you're looking at API middleware.

Cursor supports MCP and has become one of the more popular MCP clients alongside Claude Code. It can connect to MCP servers for code-adjacent workflows — databases, documentation, APIs. Its native integrations are focused on the development workflow: file system, terminal, git. The model underneath (Claude, GPT, or others depending on configuration) handles the tool calls, Cursor handles the context.

Windsurf (Codeium's IDE) supports MCP as well, with a similar profile to Cursor — development-focused, MCP-compatible, growing connector support. The MCP client implementation varies between these IDE tools, and not every MCP server works identically across all clients. Testing with your specific client matters.
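Client differences usually surface at configuration time. Most MCP-capable clients accept a config of roughly this shape, though the file location and exact keys vary per client, so check your client's docs (the dict below mirrors the common "mcpServers" JSON layout; the mounted path is illustrative):

```python
import json

# The common "mcpServers" config shape several MCP clients accept.
config = {
    "mcpServers": {
        "filesystem": {
            "command": "npx",  # launch the server as a subprocess
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
        }
    }
}
print(json.dumps(config, indent=2))
```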

MCP Server Availability: The Current Map

Here's where things get specific. This is a snapshot — the ecosystem moves weekly — but the categories are stable.

Production-grade MCP servers (actively maintained, reliable, used in real workflows):
- Filesystem — Reference server. Reads and writes local files. Works everywhere.
- GitHub — Official reference server. Repos, issues, PRs, code search. Well-maintained.
- PostgreSQL / SQLite — Database access. The official servers handle basic queries reliably.
- Google Drive — Community and third-party servers exist. Quality varies. The OAuth setup is the hard part.
- Slack — Community servers available. Basic message reading and sending works. Advanced features (threads, reactions, channel management) are inconsistent across implementations.
- Brave Search / Web search — Several implementations. The Brave-backed one is the most maintained.
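To make the database entries above concrete: at its core, a SQLite MCP server wraps a function that takes a query and returns rows, then exposes it as a tool over the protocol. A stdlib-only sketch of that core (the table and data are invented for the demo; the real reference servers add read-only guards, schema discovery, and protocol plumbing):

```python
import os
import sqlite3
import tempfile

def run_query(db_path: str, sql: str) -> list:
    """Execute a query and return rows as dicts: the core a SQLite
    MCP server exposes as a tool."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row  # rows addressable by column name
    try:
        return [dict(r) for r in conn.execute(sql).fetchall()]
    finally:
        conn.close()

# Demo: seed a throwaway database file, then query it.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
seed = sqlite3.connect(path)
seed.execute("CREATE TABLE issues (id INTEGER, title TEXT)")
seed.execute("INSERT INTO issues VALUES (1, 'auth token expires')")
seed.commit()
seed.close()

rows = run_query(path, "SELECT id, title FROM issues")
print(rows)  # [{'id': 1, 'title': 'auth token expires'}]
```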

Works-but-fragile (functional, but expect maintenance):
- Google Calendar — Community servers exist. OAuth is a pain. Event creation works; complex scheduling queries are hit-or-miss.
- Notion — Community servers. Notion's API is complex and the MCP wrappers simplify aggressively, losing features.
- Linear — Community server. Works for basic issue management. The API surface is large and the MCP server covers a subset.
- Jira — Community servers exist. Atlassian's auth is its own adventure. Basic issue CRUD works. JQL queries through MCP are unreliable.
- Confluence — Same Atlassian auth headaches. Read-mostly works. Write is flaky.

Demo-only (impressive README, unreliable in practice):
- Most email MCP servers beyond basic Gmail — full email management through MCP is fragile. Reading works; composing and sending are risky.
- Salesforce — Community attempts exist but Salesforce's API complexity makes a generic MCP server nearly useless without heavy customization.
- HubSpot — Similar to Salesforce. The CRM data model is too complex for a generic wrapper to be useful.
- Twitter/X API — The API changes too frequently and the access tiers are too restrictive for community MCP servers to stay current.

Gaps (no good MCP server exists, as of March 2026):
- QuickBooks / accounting software — The auth is complex, the API is poorly documented, and the consequences of bugs are financial. Nobody has shipped a production-grade MCP server for accounting.
- Healthcare/EHR systems (Epic, Cerner) — Regulatory and access barriers. Not happening through community MCP servers.
- Most fintech APIs (Plaid, banking APIs) — Compliance requirements make open MCP servers impractical.
- Adobe Creative Cloud — No MCP server for Photoshop, Illustrator, or Premiere API access.
- Internal/proprietary tools — By definition, these need custom MCP servers. This is where building your own makes sense.

No-Code Connector Coverage

Zapier has the widest coverage at 7,000+ app integrations [VERIFY]. Quality tiers within Zapier: "Built-in" integrations are maintained by Zapier's team and are generally reliable. "Partner-built" integrations are maintained by the app developer and vary. "Community" integrations are maintained by whoever built them, which might be nobody. The Zapier AI Actions feature lets LLMs trigger Zaps, bridging the AI-to-Zapier gap. The cost scales with usage — free tier is limited, and at volume the per-task pricing becomes significant.

Make (formerly Integromat) has fewer integrations than Zapier but offers more complex workflow logic — branching, iteration, error handling built into the visual builder. Make's integrations tend to be deeper than Zapier's for the services they cover, with more granular control over API parameters. The learning curve is steeper. If you need "when X happens, do Y," Zapier is faster. If you need "when X happens, check condition A, iterate over B, handle error C, then do Y," Make is often better.

n8n is self-hosted (or cloud-hosted), open-source, and has 400+ integrations [VERIFY]. The advantage is no per-task pricing — you pay for hosting, not usage. The disadvantage is that you maintain it yourself. For AI workflows specifically, n8n has native LLM nodes that call OpenAI, Anthropic, and other model APIs directly within the workflow. This makes it a natural fit for AI-in-the-middle pipelines.

Pipedream is developer-focused. Every step in a Pipedream workflow can include arbitrary code (Node.js, Python), which means the "integration" for any service with an API is "write the API call." Pipedream provides pre-built actions for common services, but the real value is the code-first approach for developers who find Zapier's visual builder constraining.

Connection Quality Tiers

When evaluating whether "AI connects to X," use this framework:

Tier 1 — Production-grade: The connection is maintained by the platform team or a dedicated team. It handles auth renewals, retries transient failures, has been tested under load, and has a track record of months or years of reliable operation. Examples: Claude's filesystem MCP server, Stripe's Zapier integration, Gemini's Google Workspace access.

Tier 2 — Works with supervision: The connection functions but requires periodic attention. Auth might need manual refresh. Edge cases aren't handled. Updates from the source API might break things. You can use it, but you need monitoring. Examples: most community MCP servers for major platforms, Make integrations for mid-tier SaaS tools.

Tier 3 — Demo-only: It worked when someone recorded the demo. It might work for you today. It probably won't work next month. No maintainer is actively watching for breakage. Examples: most MCP servers with fewer than 50 GitHub stars and no commits in 90 days.

Tier 4 — Vaporware: The README describes what it will do. The code doesn't work, or doesn't exist, or hasn't been updated since the API it targets changed. The integration is announced but not shipped. Examples: more common than anyone admits.
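When triaging an unfamiliar connector, the tiers reduce to a few observable signals: who maintains it, when it was last updated, and whether anyone besides the author has confirmed it works. A rough scoring sketch (the thresholds are this article's rules of thumb, not an industry standard, and the function name is invented):

```python
def rough_tier(platform_maintained: bool, days_since_update: int,
               confirmed_by_others: bool) -> int:
    """Map observable connector signals to the quality tiers (1 = best)."""
    if platform_maintained and days_since_update <= 90:
        return 1  # production-grade: first-party and actively maintained
    if days_since_update <= 90 and confirmed_by_others:
        return 2  # works with supervision
    if confirmed_by_others or days_since_update <= 90:
        return 3  # demo-only: recent but unverified, or verified but stale
    return 4  # vaporware: stale and unconfirmed

print(rough_tier(True, 10, True))    # 1
print(rough_tier(False, 30, True))   # 2
print(rough_tier(False, 200, True))  # 3
print(rough_tier(False, 200, False)) # 4
```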

What's Coming

The connector landscape is consolidating around MCP as the standard for AI-specific integrations. Anthropic's push, combined with adoption by Cursor, Windsurf, Zed, Sourcegraph, and others, means MCP server availability will continue to grow. The emerging registries — Smithery, mcp.run, the official MCP server list [VERIFY] — are starting to provide discovery and quality signals, though they're not yet reliable enough to replace manual evaluation.

OpenAI hasn't adopted MCP and may not [VERIFY]. If OpenAI builds its own competing standard, the ecosystem fragments. If OpenAI adopts MCP, the standard wins. This is the single biggest variable in the connector landscape's future.

On the no-code side, both Zapier and Make are adding AI-native features — AI-triggered workflows, LLM steps, natural language workflow creation. The line between "connector platform" and "AI agent platform" is blurring, and it's unclear whether that's good (more capability) or bad (more abstraction hiding more failure modes).

The Verdict

The connector landscape in early 2026 is wide but shallow. Coverage exists for most major services through some combination of MCP servers, no-code platforms, and direct APIs. But "coverage exists" and "works reliably" are different things, and the gap between them is where projects get stuck.

Before committing to any integration path, verify three things: that the specific connection you need actually exists (not just the platform, the specific endpoint or action), that it's been updated within the last 90 days, and that someone other than the author has confirmed it works. If any of those checks fail, budget time for building or fixing the integration yourself.
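The 90-day check is mechanical enough to script. For a GitHub-hosted MCP server, the repo's `pushed_at` timestamp (an ISO-8601 string returned by the GitHub REST API) is the relevant signal; here the timestamps are supplied by hand so the sketch needs no network access:

```python
from datetime import datetime, timezone

def is_stale(pushed_at: str, now: datetime, max_age_days: int = 90) -> bool:
    """True if the repo's last push is older than max_age_days."""
    last_push = datetime.fromisoformat(pushed_at.replace("Z", "+00:00"))
    return (now - last_push).days > max_age_days

now = datetime(2026, 3, 1, tzinfo=timezone.utc)
print(is_stale("2026-01-15T12:00:00Z", now))  # False (45 days old)
print(is_stale("2025-10-01T12:00:00Z", now))  # True (about five months old)
```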

The most honest thing about the connector landscape is that it's still early. The standards are settling, the ecosystem is growing, and quality is improving. But right now, "AI connects to everything" is a marketing claim, not an engineering statement. What AI connects to, reliably, is a shorter list — and knowing what's on that list before you start building is the difference between a project that ships and one that stalls at the integration layer.


This is part of CustomClanker's MCP & Plumbing series — reality checks on what actually connects.