One Year of AI Tools — What Survived, What Didn't

A year ago I had 14 active AI subscriptions. I know this because I went back through my credit card statements — a practice I recommend and also find slightly nauseating. Fourteen tools, most of which I was paying for monthly, each of which seemed essential when I signed up. Today I have six. This is the story of what survived, what got cut, and what I learned from watching eight tools fail the only test that matters: whether I kept using them.

The Starting Roster

In March 2025, my AI tool stack looked like this: Claude Pro, ChatGPT Plus, Midjourney, Cursor, n8n (self-hosted but with cloud costs), ElevenLabs, Perplexity Pro, Notion AI, Jasper, Descript, Runway, Copy.ai, a Stable Diffusion cloud instance, and Otter.ai. Some of these were monthly subscriptions. Some were annual plans I'd committed to during a Black Friday sale. All of them, at the time of purchase, solved a problem I was convinced I had.

Looking at that list now, I can sort them into three categories that predict survival almost perfectly: tools I used daily, tools I used weekly, and tools I used "whenever I get around to it." The daily tools survived. The weekly tools survived if they were genuinely the best option for their task. The "whenever" tools are all gone. Every single one.

What Survived (And Why)

Claude Pro was never in danger. I use Claude for writing assistance, code review, document analysis, and a dozen other tasks that touch every part of my work. It's the hub of my AI usage, and it has been since it replaced GPT as my primary model in late 2024. The extended thinking feature, the longer context window, the consistency of output quality — Claude earned its slot on merit and keeps earning it. The competition got better over the year, but Claude kept pace or stayed ahead on the tasks I care about.

Cursor survived because I write code and Cursor makes me faster at it. Simple as that. Agent mode improved significantly over the year. The integration with Claude's models tightened. The autocomplete went from "sometimes helpful" to "usually right." I tested alternatives — Windsurf, Claude Code as a standalone, Copilot — and kept coming back. Not because Cursor is perfect. It hallucinates import paths. It sometimes suggests code that's syntactically correct and logically wrong. But the net effect on my productivity is positive enough that it earns its monthly fee comfortably.

ElevenLabs survived because voice synthesis is a specific capability that nothing else in my stack provides. I use it for audio content, voice prototyping, and — increasingly — for generating narration that would otherwise require scheduling recording time. The quality improved noticeably over the year. The pricing became more reasonable. The voice cloning got good enough that I stopped thinking of it as a novelty and started thinking of it as a production tool.

n8n survived because automation, once set up correctly, runs without attention. My n8n workflows handle content distribution, data syncing, and monitoring tasks that would otherwise require manual intervention every day. The key word is "correctly" — I spent significant time in the first quarter of the year rebuilding workflows that were overengineered. The workflows that survived are simple. Five to ten nodes, clear logic, minimal branching. The 47-node cathedral I built in early 2025 got torn down and replaced with three simple workflows that do the same job.

Midjourney survived but barely. I use it less than I used to — GPT-4o's image generation and Flux handle many tasks that used to go to Midjourney. What keeps Midjourney alive in my stack is aesthetic quality for specific styles. When I need something that looks like art rather than like a functional image, Midjourney still produces the best results. But it's on notice. If GPT-4o's image generation improves at its current rate, Midjourney might not survive the next year.

Perplexity Pro survived by the narrowest margin — it almost got cut multiple times and kept proving itself useful at the last minute. I use it for research tasks that need current information with source citations. Claude can't browse the web natively in the Pro tier. GPT's browsing is inconsistent. Perplexity's entire product is "search with citations," and for that specific use case, it's the best option. The $20/month stings, but the alternative — manually Googling, opening tabs, verifying sources — costs more in time.

What Died (And Why)

ChatGPT Plus was the biggest psychological hurdle to cancel. I'd been a subscriber since GPT-4 launched. It felt like canceling a founding membership. But when I looked at my usage honestly, I was using ChatGPT maybe twice a week, always for tasks Claude could handle, mostly out of habit. The image generation in GPT-4o almost saved it — it's genuinely good. But I couldn't justify $20/month for a tool I used as a secondary option. Canceled in June 2025. Haven't missed it.

In retrospect, Jasper was dead on arrival. I subscribed because I was writing marketing copy and Jasper was "the AI for marketers." What I learned quickly was that any general-purpose LLM writes marketing copy as well as Jasper, and Claude writes it better. Jasper's templates were constraints disguised as features — they limited what the model could do rather than expanding it. Canceled after two months.

Copy.ai — same story as Jasper, slightly different packaging. The specialized AI writing tools can't compete with general-purpose models that do the same thing without the wrapper. Both of these tools are products of an era when prompt engineering was hard and wrappers added value. That era is over.

Notion AI was the most instructive cancellation. I use Notion daily. I pay for Notion happily. Notion AI — the add-on that puts AI features inside Notion — is a textbook example of an AI feature that sounds useful in a product demo and adds almost nothing in practice. "Summarize this page" — I can read the page. "Generate a to-do list from these notes" — the to-do list it generates is always wrong enough to need complete revision. "Ask questions about your workspace" — the answers are based on a limited subset of my data and are unreliable. Canceled. Notion is better without it.

Runway was a victim of the video generation reality. AI video is still in the "impressive demo, disappointing practice" phase. I generated maybe 15 video clips with Runway over six months. Used two of them. The rest were interesting but not usable — uncanny motion, artifacts on faces, physics that didn't quite work. I'll revisit AI video when it crosses the quality threshold, but as of March 2026, it hasn't.

Otter.ai got replaced by functionality built into other tools. Meeting transcription is now available natively in Zoom, in Google Meet, and through various free options. Paying separately for transcription stopped making sense when the platforms I was already using added it.

Descript was the hardest cut because it's genuinely a good product. But I wasn't using it enough. My audio editing needs are met by simpler tools, and Descript's full feature set — video editing, podcast editing, screen recording — overlapped with too many things I was already doing elsewhere. It's a great tool for someone whose primary workflow is audio/video production. That's not me.

The Stable Diffusion cloud instance was a cathedral. I set up a cloud GPU instance to run Stable Diffusion with various fine-tuned models. The setup was interesting. The image quality was good. The maintenance was constant. Every model update required reconfiguration. Every new checkpoint required testing. I was spending more time maintaining the infrastructure than generating images. Replaced entirely by Midjourney and GPT-4o's image generation, which produce slightly worse results with zero maintenance overhead.

The Patterns

Three patterns emerged from this year-long natural experiment.

Pattern one: daily use is the survival test. If you're not using a tool daily — or at minimum several times a week for a specific recurring task — you won't keep using it. The "I'll use this when I need it" tools never get used, because when you need something, you reach for the tool you already have open. The tool you already have open is the tool you use daily.

Pattern two: wrappers die, platforms survive. Jasper, Copy.ai, and to some extent Notion AI are all wrappers around LLM capabilities. They add a UI, some templates, some integrations, and charge a premium. When the underlying LLMs got good enough that the wrapper stopped adding value, the wrapper became dead weight. The tools that survived are either the LLMs themselves (Claude) or tools that do something the LLMs can't (Cursor's IDE integration, ElevenLabs' voice synthesis, n8n's automation execution).

Pattern three: the six-tool limit is real. I didn't set out to reach six tools. I didn't follow the hex framework when I started — I developed it after observing this pattern. But six is roughly the number of AI tools a single person can use actively, maintain proficiency in, and keep updated on. Beyond six, tools start to rot — you stop learning the new features, you stop optimizing your usage, you stop getting full value. The tools become subscriptions instead of tools. You're paying for access you're not using.

The Math

Fourteen subscriptions at an average of roughly $25/month — some were more, some less — came to around $350/month or about $4,200/year. Six subscriptions at roughly the same average come to about $150/month or $1,800/year. The savings are real, but they're not the point. The point is that eight of those tools weren't making me more productive. They were making me feel more equipped, which is a different thing. The gap between "equipped" and "productive" is where most AI tool spending lives.
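
If you want to run the same back-of-the-envelope check on your own stack, here's a minimal sketch of that arithmetic, assuming the rough $25/month average above applies across both lists. The helper name and constants are mine, purely for illustration; this isn't pulled from any billing tool.

```python
# Back-of-the-envelope subscription math. Assumes the rough $25/month
# average quoted above holds across both the old and the new stack.
AVG_MONTHLY_COST = 25  # rough average per subscription, in dollars

def yearly_cost(tool_count: int, avg_monthly: float = AVG_MONTHLY_COST) -> float:
    """Annual spend for a stack of tool_count subscriptions."""
    return tool_count * avg_monthly * 12

before = yearly_cost(14)  # the March 2025 stack: ~$4,200/year
after = yearly_cost(6)    # the current stack:    ~$1,800/year
print(f"Before: ${before:,.0f}/yr | After: ${after:,.0f}/yr | Saved: ${before - after:,.0f}/yr")
# Before: $4,200/yr | After: $1,800/yr | Saved: $2,400/yr
```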


This article is part of The Weekly Drop at CustomClanker.

Related reading: The Tool You Should Cancel This Month, The Hex Explained, The $20/Month AI Budget