The Solo Dev Hex: Shipping Products With Six Tools

Every developer with a side project has a version of this story. You start building the thing on a Saturday morning. By Saturday afternoon, you're not building the thing — you're configuring the tools you'll use to build the thing. By Sunday, you're evaluating whether a different set of tools might be better. The thing itself hasn't gained a single feature. You've spent the weekend on infrastructure that serves the process of development rather than the product of development.

One solo developer lived this loop for eight months. Applied the hex constraint. Shipped the project in three. The constraint didn't make them a better developer. It made them a developer who actually develops.

The Profile

Software developer with a full-time day job, building a SaaS side project on evenings and weekends. The project was a relatively simple tool — a dashboard for tracking freelance income and expenses, aimed at solo consultants. The MVP required user auth, a data model, a basic UI, Stripe integration, and a landing page. Nothing that a competent developer couldn't build in a few months of focused weekend work.

The developer started the project in January. By August, the landing page existed as a placeholder, the auth system had been rebuilt twice with different frameworks, and the core dashboard had been started in three different front-end libraries. Not because the developer lacked skill. Because every Saturday morning began with the same ritual: open the project, notice something that could be improved, wonder if a different tool would make the improvement easier, spend two hours researching the tool, spend three hours integrating it, realize it doesn't quite fit, revert to the previous setup, and close the laptop having written zero lines of product code.

The toolchain by August looked like this. Cursor for AI-assisted coding. GitHub Copilot — because it was already running from the day job and "might as well leave it on." Claude Code for planning and architecture decisions. A local LLM running on an M2 MacBook for "privacy-sensitive" operations that, in retrospect, were not meaningfully more sensitive than anything being sent to Claude. A custom agent framework built on LangChain for automating test generation — which had taken three weekends to build and generated tests for a codebase that barely existed. A monitoring stack for the agents — observability for the AI tools that were supposed to be building the product, rather than observability for the product itself.

The developer estimated that 60% of weekend "development time" was spent configuring, evaluating, switching between, and debugging AI tools. The remaining 40% was split between actual product development (roughly 25% of total time) and the recovery time needed to refocus after each tool-related context switch (the other 15%). At that effective rate, the SaaS that should have taken three months of weekends would have taken two years — if the developer's motivation survived that long, which it wouldn't, because nothing kills motivation like eight months of effort with nothing to show a user.

The Hex Applied

The hex question for a developer is: does this tool produce shipping code? Not "does it help me think about code" or "does it make the development environment nicer" or "could it theoretically speed things up." Does it result in code that ends up in production, used by a customer, doing the thing the product is supposed to do?

Three tools survived the first pass. The developer gave themselves six slots but could only fill three with tools that passed the shipping test.

Cursor stayed. It was the primary code-writing environment, and the AI assistance — inline completions, chat-based refactoring, file-level edits — directly produced code that shipped. The developer had gotten good at Cursor's patterns over eight months of use, and that accumulated skill was worth preserving.

Claude — via the web interface, not Claude Code — stayed for planning and architecture. When the developer needed to think through a data model, evaluate a technical tradeoff, or outline an implementation approach, Claude was the thinking partner. The key distinction was that Claude produced plans that became code, not code that competed with Cursor's output. The developer stopped using Claude Code specifically to eliminate the "should I write this in Cursor or Claude Code" decision that preceded every coding session.

Vercel stayed for deployment. Push to main, Vercel builds and deploys. No configuration rabbit holes. No custom CI/CD pipeline that needed its own maintenance. The developer had spent two weekends building a GitHub Actions pipeline before realizing that Vercel's zero-config deployment did the same thing out of the box.

That left three empty hex slots. GitHub Copilot got cut — not because it was bad, but because running it alongside Cursor created a constant low-level interference. Two AI systems suggesting different completions in the same editor. The developer spent cognitive energy choosing between suggestions instead of writing code. Removing Copilot didn't make Cursor's suggestions better. It made the developer's response to suggestions faster because there was only one to evaluate.

The local LLM got cut. The "privacy-sensitive" operations it handled were API integrations with Stripe and a database schema — neither of which contained user data, because no users existed yet. The developer had been solving a privacy problem they didn't have, using a tool that ran slower than the cloud alternatives, to protect data that didn't exist. The local LLM was security theater for an empty theater.

The LangChain agent framework got cut, and this was the painful one. The developer had invested three weekends — roughly 30 hours — building a system that generated unit tests from code comments. It worked. The tests it generated were reasonable. The problem was that the framework itself required maintenance, the generated tests occasionally needed manual correction, and the total time spent maintaining the agent plus correcting its output exceeded the time it would take to write the tests manually. The developer was maintaining an automation that cost more time than the manual process it replaced. That's the Cathedral — but in developer tooling, it's harder to see because building infrastructure is literally what developers do. The line between "building the product" and "building tools to build the product" is invisible until you draw it on purpose.

The Productivity Shift

The developer started the hex on a Saturday in September. By the following Saturday, something felt wrong. The developer described it as "anxiety" — the weekends had always been full of activity, and now there was an unfamiliar quiet. Open Cursor. Open Claude in a browser tab. Start writing the feature. That's it. No tool evaluation. No "but what if" detour. No configuring a new integration. Just the work.

The anxiety passed after two weekends. What replaced it was a feeling the developer hadn't experienced in eight months of the side project: progress. Visible, measurable, show-someone progress. Features that worked. Pages that rendered. Stripe integration that accepted test payments. The product was growing at a rate that felt disproportionate to the effort — not because the developer was working harder, but because 90% of the effort was now going to product code instead of 25%.

By October, the developer had built more of the product in four weekends than they had in the previous eight months. The math was that stark. The same developer, the same hours, the same skill level — but a tool environment that funneled effort toward the product instead of dispersing it across the toolchain.

The Temptation Pattern

New code generation tools launch constantly. For a developer running the hex, every launch is a test of discipline. The developer described the pattern: see the announcement on Hacker News, watch the demo video, feel the pull. The demo always shows the new tool solving a problem in thirty seconds that takes five minutes manually. The developer's brain does the math — "if I switch, I save four and a half minutes per occurrence, times twenty occurrences a week, that's ninety minutes saved" — and the math feels compelling in the way that all hypothetical math does.

The hex response: which of my three tools does this replace, and is the cost of migration — the lost muscle memory, the new learning curve, the configuration time, the risk of new bugs — worth the marginal improvement? The answer, evaluated honestly, was no every time. Not because the new tools weren't good. Because the switching cost was real and the improvement was speculative. The demo showed thirty seconds. The demo didn't show the thirty hours of adaptation.
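The tradeoff above can be made concrete with back-of-the-envelope arithmetic. The figures are the ones from the anecdote (4.5 minutes saved per occurrence, 20 occurrences a week, 30 hours of adaptation); the break-even framing is an illustration, not something from the developer's own notes:

```python
# Break-even point for adopting a new tool, using the anecdote's numbers.
# The demo promises ~4.5 minutes saved per occurrence, ~20 times a week;
# migration (lost muscle memory, learning curve, config) costs ~30 hours.

minutes_saved_per_occurrence = 4.5
occurrences_per_week = 20
weekly_savings_min = minutes_saved_per_occurrence * occurrences_per_week  # 90 min/week

migration_cost_min = 30 * 60  # 30 hours of adaptation, in minutes

weeks_to_break_even = migration_cost_min / weekly_savings_min
print(weeks_to_break_even)  # 20.0 -- five months of weekends before any net savings
```

And that twenty-week figure assumes the demo's savings actually materialize at full strength from day one, which the speculative side of the math never guarantees.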

The developer added one rule to the hex that isn't in the original constraint PDF: a 30-day waiting period for any new tool. See it, bookmark it, wait 30 days. If after 30 days the developer still believes the tool would be worth switching to, evaluate it properly. If the developer has forgotten about it — which happened with every tool during the first six months — the evaluation isn't needed. This filter alone eliminated what the developer estimated to be four to six hours per month of "just checking out" new tools that would never have been adopted anyway.
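The waiting-period rule is mechanical enough to sketch in a few lines. This is an illustration of the rule, not a tool the developer actually built; the bookmark structure and names are hypothetical:

```python
from datetime import date, timedelta

# Hypothetical bookmark list: each entry records a tool name and the date
# it was first seen. A tool only becomes eligible for a proper evaluation
# once 30 days have passed -- before that, it stays in the queue.

WAITING_PERIOD = timedelta(days=30)

def ready_for_evaluation(bookmarks: list[dict], today: date) -> list[str]:
    """Return names of tools bookmarked at least 30 days ago, oldest first."""
    due = [
        b for b in bookmarks
        if today - date.fromisoformat(b["seen"]) >= WAITING_PERIOD
    ]
    due.sort(key=lambda b: b["seen"])
    return [b["name"] for b in due]

bookmarks = [
    {"name": "shiny-new-agent", "seen": "2024-11-30"},
    {"name": "faster-completions", "seen": "2024-10-01"},
]
print(ready_for_evaluation(bookmarks, date(2024, 12, 15)))
# Only "faster-completions" has cleared the 30-day filter.
```

In practice the filter mostly empties itself: by the time a bookmark comes due, the pull the developer felt on announcement day has usually evaporated, and the evaluation never happens.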

The Result

The side project launched in December, three months after the hex started; the eight pre-hex months of tool-fiddling had produced only an incomplete auth system and a placeholder landing page. At launch, the product had user auth, the income tracking dashboard, Stripe billing for the premium tier, and a documentation site. Not polished. Not feature-complete. But shipped — which, for a side project, is the only metric that separates a product from a hobby.

The developer's retrospective: "I wasn't unproductive for eight months. I was productively building the wrong things. The AI tools, the agents, the monitoring — all of it worked. I had a functioning AI development environment with observability and automated testing. I just didn't have a product. The hex didn't add skill. It subtracted distraction."

The three tools are still the same three tools seven months later. Cursor, Claude, Vercel. The empty hex slots remain empty — not as a point of pride, but because nothing has passed the filter. The developer has considered adding a dedicated database GUI and a design tool for the landing page, but in both cases the existing tools — Cursor's built-in database viewer and a plain HTML/CSS template — are doing the job well enough that the addition isn't justified.

The constraint, for a solo developer, works differently than it does for a publisher or a freelancer. A publisher's tool sprawl wastes money. A freelancer's tool sprawl wastes client time. A developer's tool sprawl wastes the rarest resource of all: the finite hours between the day job and sleep during which a side project either gets built or doesn't. Every hour spent configuring a tool is an hour not spent shipping a feature. The hex makes that tradeoff visible — and once you see it, the choice is obvious.


This is part of CustomClanker's Hex in the Wild series — real setups from real people. Start with The Hex Explained if you haven't downloaded the constraint PDF yet.