The Hex in Practice — My Six Tools and Why
I've written about the hex constraint in the abstract — why six tools, why constraint matters, why the tool collector's instinct works against you. This is the concrete version. These are my six tools, why each one earns its slot, what I've tried to replace them with, and what I'd change if I had to rebuild the stack from zero tomorrow. No theory. Just the setup that actually runs.
Slot One: Claude Pro
Claude is my primary LLM. It handles writing assistance, document analysis, code review, research synthesis, brainstorming, and about two dozen other tasks that used to be spread across multiple tools or done manually. If I could only have one AI tool, it would be Claude. Everything else in the hex is built around it.
Why Claude over GPT: consistency. Claude produces output that requires less editing across a wider range of tasks. GPT-4o is better at specific things — image generation being the most obvious — but as a general-purpose reasoning and writing tool, Claude has been more reliable in my testing over the past year. The extended thinking feature is genuinely useful for complex analysis. The context window handles the long documents I work with regularly. And the conversation quality stays high over long interactions, whereas GPT's tends to degrade.
What I've tried to replace it with: ChatGPT Plus, Gemini Advanced, local models through Ollama. ChatGPT was my primary for about a year before I switched. The switch wasn't dramatic — Claude was incrementally better for my use cases, and incremental advantages compound over daily use. Gemini's context window is impressive but the output quality for writing tasks trails both Claude and GPT. Local models are interesting for privacy and cost but can't match the quality on tasks that matter to me.
What would make me switch: if another model significantly outperformed Claude on writing assistance and code review simultaneously, for the same price. That hasn't happened yet.
Slot Two: Cursor
Cursor is my code editor. It replaced VS Code plus GitHub Copilot in my workflow and has remained the strongest option in the AI-assisted coding space for my use case — which is working on real codebases with real complexity, not scaffolding greenfield projects from scratch.
Why it earns the slot: agent mode handles multi-file edits that would take me three times as long manually. The autocomplete is right often enough to be worth the occasional wrong suggestion. The Claude integration means I'm using the same model in my editor that I use everywhere else, which creates a consistency that matters more than I expected. When I describe a pattern to Cursor and then discuss the same pattern with Claude in conversation, they're drawing from the same understanding. That coherence reduces friction.
What I've tried to replace it with: Windsurf (close competitor, slightly less polished), Claude Code standalone (powerful but I prefer a visual editor), GitHub Copilot (fell behind on agent capabilities). Each has strengths. Windsurf is cheaper. Claude Code is more powerful for agentic workflows. Copilot has the broadest editor integration. None of them pulled me away from Cursor as a daily driver.
The honest limitation: Cursor burns through tokens faster than I'd like, especially in agent mode. A complex refactoring session can use $5-10 of API credits on top of the subscription. I track this and it's consistently the most variable cost in my stack. Some months it's reasonable. Some months I wince.
Slot Three: ElevenLabs
ElevenLabs handles voice synthesis and audio generation. This is the most specialized tool in my hex — it does one thing that nothing else in the stack can do.
Why it earns the slot: I produce audio content regularly enough that voice synthesis saves me measurable time. My voice clone is production-quality for narration. The TTS for drafts and prototypes is better than any competitor I've tested. When I need to hear how a piece of writing sounds before finalizing it, I generate audio and listen. This catches problems that reading silently doesn't — awkward phrasing, sentences that are too long to speak naturally, tonal shifts that look fine on paper but sound wrong out loud.
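Listening is the real test, but some of what it catches can be pre-screened in text before spending characters on generation. This is a rough heuristic of my own, not anything ElevenLabs provides; the 30-word threshold is an arbitrary proxy for "too long to speak in one breath":

```python
import re

def flag_long_sentences(text: str, max_words: int = 30) -> list[str]:
    """Return sentences exceeding max_words, a crude proxy for hard-to-narrate prose."""
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if len(s.split()) > max_words]
```

Anything this flags gets rewritten before I bother generating audio; everything subtler still needs the listening pass.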
What I've tried to replace it with: PlayHT (good quality, less intuitive interface), Bark (open source, lower quality), macOS text-to-speech (usable for proofing, not for output). ElevenLabs wins on voice quality and the voice cloning feature specifically. If I didn't use voice cloning, PlayHT would be a viable alternative at a lower price point.
The honest limitation: the pricing scales awkwardly. The Creator plan gives me enough characters for my current usage, but if I scaled audio production significantly, I'd hit the next tier quickly. The per-character pricing model means I'm always slightly aware of how much audio I'm generating, which adds a friction that flat-rate pricing wouldn't.
Slot Four: n8n (Self-Hosted)
n8n is my automation platform. It runs content distribution, data syncing, monitoring, and a handful of operational workflows that would otherwise require daily manual attention.
Why it earns the slot: automation, when it works, is the closest thing to free productivity. My n8n workflows run every day without my involvement. They move content between platforms, sync data between tools, and send me notifications when things need attention. The total time these workflows save me is roughly 30-45 minutes per day — time I'd spend on repetitive tasks that don't require human judgment.
Why self-hosted: cost and control. Cloud n8n pricing escalates with workflow complexity and execution volume. Self-hosting on a $20/month VPS gives me unlimited executions and full control over the environment. The tradeoff is maintenance — I spend about an hour per month updating the instance and occasionally debugging workflows that break after a dependency update. That hour per month is worth the savings.
What I've tried to replace it with: Make (simpler but more expensive at scale and can't self-host), Zapier (significantly more expensive, less flexible), Pipedream (good for developers but the AI nodes are less mature). n8n wins on the combination of flexibility, self-hosting, and active community. The learning curve is steeper than Make or Zapier, but I'm past the curve now and the payoff is clear.
The honest limitation: n8n's AI nodes are mediocre. I don't use n8n for AI tasks — I use it for moving data between systems. When I need AI in a workflow, I call Claude's API through an HTTP node rather than using n8n's built-in AI features. This works fine but it means n8n isn't my "AI automation" tool — it's my automation tool that sometimes calls AI as a step.
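The HTTP-node pattern is just a plain POST to the Messages API, which is easy to reproduce outside n8n too. A minimal sketch of the request such a node sends; the model name is illustrative and you would substitute your own key:

```python
import json
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_claude_request(prompt: str, api_key: str,
                         model: str = "claude-3-5-sonnet-latest") -> urllib.request.Request:
    """Build the POST an HTTP node would send to Claude's Messages API."""
    payload = {
        "model": model,  # illustrative; pin whichever model your workflow uses
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        method="POST",
    )

# In a workflow step you would then do something like:
#   with urllib.request.urlopen(build_claude_request("Summarize this item", key)) as r:
#       reply = json.load(r)["content"][0]["text"]
```

In n8n itself this is just an HTTP Request node with the same URL, headers, and JSON body; nothing about it depends on n8n's AI nodes.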
Slot Five: Midjourney
Midjourney is my image generation tool. It produces the visual content I need for projects, articles, and presentations.
Why it earns the slot: aesthetic quality. For images where the visual quality matters — not diagrams, not screenshots, but images that need to look like they were created with artistic intent — Midjourney still produces the best results. The v6 model handles composition, lighting, and style with a sophistication that the competition hasn't matched for my use cases. When I need a hero image that sets a tone, Midjourney is where I go.
Why it's on notice: GPT-4o's image generation is catching up fast, and it comes bundled with a ChatGPT Plus subscription (I've since dropped mine, but for anyone still paying, the image generation is effectively free). The convenience of generating images in a chat conversation versus going to Midjourney's Discord or web app is significant. Midjourney's separate interface is its biggest weakness. If GPT-4o's aesthetic quality matches Midjourney's within the next six months, Midjourney loses its slot and I drop to five tools.
What I've tried to replace it with: DALL-E 3 (good but not as aesthetically refined), Flux (excellent for photorealism, weaker for stylized work), Stable Diffusion (too much maintenance overhead), Leonardo AI (close competitor, slightly less consistent). Midjourney holds the slot by a margin, not a landslide.
Slot Six: Perplexity Pro
Perplexity is my research tool. It handles the specific use case of "I need current information about a topic, with sources I can verify."
Why it earns the slot: Claude doesn't browse the web. GPT's web browsing is inconsistent. When I need to research something current — a tool's pricing change, a recent platform update, what the community is saying about a new feature — Perplexity gives me sourced answers faster than manual search. The citation quality is good enough that I can verify claims without starting from scratch.
Why it almost got cut: $20/month for a search tool feels expensive. Every few months I try replacing Perplexity with "just Google it and paste results into Claude," and every few months I come back because the time savings are real. The integrated search-plus-synthesis workflow is worth the price. Barely.
What would make me cut it: if Claude Pro added reliable web browsing. That's it. The moment Claude can search the web and cite sources in the same interface I'm already using, Perplexity loses its slot. This is the most precarious position in my hex — one feature addition by a competitor kills it.
The Hex as a Whole
The six tools together cost me roughly $140-170/month depending on Cursor's token usage and n8n's hosting costs. They cover: general AI (Claude), code (Cursor), audio (ElevenLabs), automation (n8n), image generation (Midjourney), and research (Perplexity). Each tool handles a distinct capability. No two tools overlap significantly. Each one gets used at minimum weekly, most of them daily.
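The arithmetic behind that range breaks down roughly like this. Individual prices here are illustrative placeholders (plans change, and my VPS and Midjourney tiers are approximations), but the shape of the total matches:

```python
# Illustrative monthly costs in USD; actual plan prices vary over time.
fixed = {
    "Claude Pro": 20,
    "Cursor": 20,
    "ElevenLabs Creator": 22,
    "n8n VPS": 20,
    "Midjourney Standard": 30,
    "Perplexity Pro": 20,
}
base = sum(fixed.values())       # 132 in fixed subscriptions
cursor_tokens = (10, 40)         # variable agent-mode API spend, the swing factor
low, high = base + cursor_tokens[0], base + cursor_tokens[1]
print(f"${low}-{high}/month")    # prints "$142-172/month"
```

The only line item that moves month to month is the Cursor token spend, which is why the total is a range rather than a number.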
The tools I don't have are as defining as the ones I do. No dedicated video generation tool — AI video isn't production-grade enough to earn a slot. No dedicated writing tool beyond Claude — the wrapper products don't add value. No second LLM subscription — one primary model, used well, beats two models used casually.
This stack isn't permanent. I expect at least one swap in the next six months — most likely Midjourney out for something that integrates better with my workflow, or Perplexity out if Claude adds web search. The hex isn't about these specific six tools. It's about the discipline of six. Having a number forces the evaluation that most people skip: does this tool earn its place against everything else I could use instead?
If you want to build your own hex, don't copy mine. My tools reflect my work — writing, code, content, automation. Your work is different. The exercise is the same: list every AI tool you pay for, ask which ones you use daily, ask which ones do something no other tool in the list can do, and cut until you hit six. The tools that survive are your hex. The tools that don't were subscriptions, not tools.
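The cutting exercise above is mechanical enough to sketch. This is a toy with hypothetical field names, standing in for the two survival questions; the real evaluation is obviously more qualitative:

```python
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    used_daily: bool
    unique_capability: bool  # does something no other tool on the list does

def build_hex(inventory: list[Tool], slots: int = 6) -> list[str]:
    """Rank tools by the two survival questions, then cut everything past the slot limit."""
    ranked = sorted(inventory,
                    key=lambda t: (t.used_daily, t.unique_capability),
                    reverse=True)
    return [t.name for t in ranked[:slots]]
```

Daily-use tools with a unique capability survive first; anything that is neither daily nor unique is a subscription, not a tool.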
This article is part of The Weekly Drop at CustomClanker.
Related reading: The Hex Explained, One Year of AI Tools — What Survived, The Tool Collector's Guide to Actually Shipping