The Tool I Can't Quit (And Why That Bothers Me)
The tool is Claude. And before this turns into a love letter, let me explain why it bothers me: the reasons it bothers me are more useful than the reasons I keep using it.
How I Got Here
I didn't start with Claude. I started with GPT-3.5, like most people, moved to GPT-4 when it launched, and used ChatGPT as my primary AI tool for about eight months. It was fine. GPT-4 was the best available model at the time, the interface was clean, and the capabilities were impressive enough that I didn't have a reason to look elsewhere. Then Claude 2 came out, and I tried it out of professional curiosity — I review AI tools, so ignoring a major new model would have been negligent.
Claude 2 was interesting but not a switch-worthy improvement. What changed was Claude 3 Opus, followed by 3.5 Sonnet, followed by the progression that led to the current model. At some point in that progression — I can't identify the exact moment — Claude became the tool I opened first every morning, used most throughout the day, and recommended most to other people. The shift wasn't dramatic. It was the slow accumulation of moments where Claude gave me a better answer than what I'd been getting, handled a harder task than what I expected, or produced output that felt like it was written by someone who understood what I was trying to do rather than someone who was pattern-matching on my prompt.
Today, Claude handles my writing assistance, code review, research synthesis, strategic thinking, and most of my professional AI usage. I have a Claude Pro subscription. I have Claude Code for development work. My daily workflow — which I wrote about recently — has Claude at the center. This is the dependency that bothers me.
What Claude Does Better
I need to be specific here, because "I like Claude better" is useless without explaining why. There are three capabilities where Claude consistently outperforms what I've used elsewhere, and understanding them is important for understanding the dependency.
First, writing quality. When I ask Claude to help with written content — editing, feedback, drafting sections, restructuring arguments — the output has a quality that I don't consistently get from GPT-4o or Gemini. It's hard to quantify, but I'll try: Claude's writing has better paragraph-level coherence. The transitions between ideas are more natural. The word choice is more precise. And — this one matters to me — Claude is better at varying sentence length and rhythm in a way that makes prose readable rather than monotonous. GPT-4o tends toward uniformity in sentence structure. Claude doesn't. For someone whose work is primarily text, this difference is significant.
Second, reasoning depth. When I bring Claude a complex problem — architectural decisions, strategic analysis, multi-factor evaluations — the reasoning is more thorough and more honest about uncertainty. Claude is more likely to say "there are two reasonable approaches and the right choice depends on a factor you haven't specified" than to confidently recommend one option and ignore the tradeoffs. I've tested this across dozens of complex prompts, and the pattern is consistent enough that I trust it. GPT-4o is more confident, which sounds like a strength but often isn't — because confident and wrong is worse than uncertain and honest, especially when I'm relying on the output for decisions.
Third, instruction following. When I give Claude a detailed brief — specific format, specific constraints, specific tone, specific things to include and exclude — Claude follows the brief more precisely than other models. This matters because my prompts are often complex: I'm not asking for a simple output, I'm asking for a specific output under specific conditions, and the distance between "mostly followed the brief" and "exactly followed the brief" is the distance between output I can use and output I need to rewrite.
These three advantages — writing quality, reasoning depth, instruction following — compound over time. Each positive interaction reinforces the habit. Each time Claude gives me a better result than I expected, the default gets stronger. And that's the mechanism of dependency: not a single decision to commit, but a thousand small confirmations that accumulate into an assumption.
Why the Dependency Bothers Me
The dependency bothers me for three reasons, in order of how much they keep me up at night.
First, vendor lock-in is real even when the vendor is good. Anthropic is a company. Companies change. They change pricing, they change terms of service, they change model behavior, they change strategic priorities. When I was using GPT-4, OpenAI changed the model's behavior in ways that made it worse for my use cases — the outputs became more cautious, more hedged, more generic. If Anthropic does the same thing — and model behavior changes are not hypothetical, they're a consistent pattern across every LLM provider — I have no fallback that gives me equivalent quality. My workflows, my habits, my muscle memory are all calibrated to Claude. Switching would cost me weeks of productivity while I recalibrate.
This is exactly the dependency I warn people about when they build on any single platform. I write about vendor risk, about the importance of exit strategies, about not building critical workflows on a foundation you don't control. And then I look at my own stack and see Claude at the center of everything. The hypocrisy is not lost on me.
Second, single-model dependency limits my perspective. When I evaluate other AI tools, I'm comparing them to Claude — which means Claude is both my tool and my benchmark. If Claude has a systematic weakness that I've adapted to without noticing — a blind spot I've compensated for so thoroughly that I no longer see it — I wouldn't know, because my evaluation framework is built on top of the tool I'm evaluating. This is circular and I know it's circular. I try to correct for it by periodically using other models for the same tasks, but the correction is imperfect because the tasks I choose and the way I evaluate the outputs are still shaped by my Claude-trained expectations.
Third — and this is the one that actually bothers me philosophically — I don't like needing a tool this much. I'm a builder. My professional identity is built on self-reliance, on understanding the tools I use well enough to replace them, on never being captive to a single vendor or platform. The fact that Claude has become load-bearing infrastructure in my daily work — that removing it would meaningfully degrade my output — conflicts with that identity. I self-host my automation platform because I don't want to depend on Zapier. But I depend on Claude more than I ever depended on Zapier. The inconsistency is uncomfortable.
What I've Done About It
The honest answer is: not enough. But here's what I have done.
I maintain proficiency in GPT-4o and Gemini by using them for specific tasks on a weekly basis. Not as primary tools — that would be forcing myself to use worse tools out of principle, which is a different kind of irrational — but as regular check-ins. Once a week, I take a task I'd normally give Claude and give it to GPT-4o instead. I compare the output. I note where GPT-4o is better (faster response times, better at certain creative tasks, stronger browsing integration) and where Claude is better (the three areas I mentioned above). This keeps my evaluation skills calibrated and ensures that if Anthropic makes a change that degrades Claude, I'll have a current alternative ready.
I've also built my workflows to be tool-agnostic where possible. My prompts are stored as plain text files that work with any LLM — they reference my preferences and constraints but don't use Claude-specific features like Projects or Artifacts in ways that would be hard to replicate elsewhere. This means switching tools would require re-running my prompts in a new interface, not rewriting them from scratch. It's an imperfect migration strategy, but it's better than being locked to a proprietary feature set.
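To make that portability concrete, here is a minimal sketch of what tool-agnostic prompt storage can look like. Everything in it is a hypothetical stand-in, not the actual setup described above: the file layout, the provider names, and the stub functions are illustrative. The idea is simply that prompts live as plain .txt files and each provider is a callable with the same signature, so switching vendors means swapping one function rather than rewriting prompts.

```python
from pathlib import Path
from typing import Callable, Dict

# A provider is any function that takes prompt text and returns model output.
# Real implementations would wrap a vendor SDK behind this same signature.
PromptRunner = Callable[[str], str]


def load_prompt(prompt_dir: Path, name: str) -> str:
    """Read a prompt from a plain .txt file; no vendor-specific features."""
    return (prompt_dir / f"{name}.txt").read_text(encoding="utf-8")


def run_prompt(prompt: str, provider: PromptRunner) -> str:
    """Dispatch the prompt to whichever provider is currently the default."""
    return provider(prompt)


# Stub providers standing in for real API calls (hypothetical, for illustration).
providers: Dict[str, PromptRunner] = {
    "claude": lambda p: f"[claude] {p}",
    "gpt": lambda p: f"[gpt] {p}",
}
```

Switching tools in this sketch is a one-line change: look up a different key in the provider table, and every stored prompt keeps working unchanged. That is the whole migration strategy in miniature, and it only works because nothing in the prompt files assumes a particular vendor's feature set.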
And I've started being more deliberate about what I use Claude for versus what I do myself. The thinking-partner role — where Claude helps me reason through a problem — I've kept. The execution role — where Claude produces the actual output I use — I've been gradually pulling back on. Not because Claude's output is bad, but because producing the work yourself exercises skills that atrophy when you outsource them. I noticed my first-draft writing getting lazier over the past few months, and I traced it to the habit of relying on Claude for structural feedback rather than developing the structural instinct myself. The feedback loop had become a crutch. I'm correcting that.
The Broader Point
Every person reading this who uses AI tools daily has a version of this dependency. Maybe it's ChatGPT. Maybe it's Cursor. Maybe it's Midjourney. Whatever tool you open first, use most, and would miss most if it disappeared — that's the one worth thinking about.
The dependency isn't necessarily bad. Depending on good tools is rational. The question is whether the dependency is conscious and managed, or unconscious and growing. If you know you're dependent, if you've evaluated the alternatives, if you have a migration plan even if you never use it — that's informed dependency. If you've never seriously tried the alternative, if switching would be catastrophic, if you couldn't articulate what you'd do without your primary tool — that's captured dependency. The difference is awareness.
I'm in the informed-dependency category for Claude. I know I depend on it. I know why. I've identified alternatives and kept my proficiency with them current. And I still don't love it, because informed dependency is still dependency, and the person who controls your tool controls a piece of your capability.
The tool I can't quit is the best tool I've used. That's why I can't quit it. And the fact that "it's the best available option" and "I'm uncomfortable depending on it" can both be true at the same time — that's not a contradiction. That's just what it feels like to be a builder in 2026, using tools you don't own, built by companies you don't control, to do work you care about.
This article is part of the Weekly Drop at CustomClanker — one take, every week, no fluff.