The Consultant Hex: Advising Clients While Practicing the Constraint

An independent AI consultant — the kind who helps small businesses figure out which tools to adopt — applied the hex constraint to their own practice. This created a tension that turned out to be the entire point: the person whose job is knowing about every tool chose to use only five. The result was better client outcomes, higher close rates, and a business model that stopped depending on complexity for its revenue.

The Paradox

The consultant's job requires landscape knowledge. When a client asks "should we use ChatGPT or Claude for our customer service drafts," the consultant needs to have tested both, recently, with tasks similar to what the client actually does. When a new tool launches — and in 2025-2026, that's roughly weekly — the consultant needs to evaluate it fast enough to have an opinion before clients ask. Staying current across the AI tool landscape is not optional. It's the product.

But using every tool you evaluate is a different thing entirely. The consultant had fallen into the pattern common among people in advisory roles: testing a tool, keeping the subscription "in case a client needs it," slowly accumulating active accounts across 23 different platforms. Monthly spend: $680. Active daily use: 4 tools. The other 19 sat in a browser bookmark folder labeled "AI Stack" that hadn't been reorganized in eight months.

The gap between "tools I know about" and "tools I use" had collapsed. Everything was in one undifferentiated pile — tools the consultant relied on, tools they'd tested once, tools they'd meant to cancel, tools they kept because a client might ask about them someday. The hex forced the distinction back into existence.

The Hex as Credibility

Here's the thing the consultant didn't expect: constraint made them more persuasive. The old pitch to clients was implicitly "I use all these tools and I can help you use them too." The new pitch was "I've tested 40 tools and I use 5 — let me explain why these 5 and not the other 35." The second version closes better. It closes better because it signals judgment, not just familiarity.

Clients hiring an AI consultant are — whether they articulate it or not — hiring someone to reduce their decision space. They're drowning in options. A consultant who says "here are 12 tools you should consider" is adding to the problem. A consultant who says "here are 3 tools that will actually help, and here's why I rejected the other 9 you've been reading about" is solving it. The hex gave the consultant a framework for being that second person, backed by a practice that matched the advice.

The credibility shift showed up in close rates. Before the hex, the consultant's proposal-to-engagement conversion was roughly 30-35% — typical for independent consultants in the tech advisory space. After six months of constraint-based positioning, it moved to approximately 45%. The consultant attributes this partly to the hex framing and partly to the clarity it forced in recommendations. When you've committed to using only five tools yourself, your recommendations to clients get sharper by default. You stop hedging with "well, it depends on your use case" and start saying "use this one, here's why."

The Personal Stack

The five tools that survived the hex audit, and why each one earned its slot:

Claude — primary LLM for client deliverables, proposal drafting, research synthesis, and the consultant's own content creation. Picked over ChatGPT after a month of parallel testing on real client tasks. The reasoning: Claude produced output that required less editing for the consultant's specific use cases — advisory memos, implementation guides, and comparison documents. ChatGPT was faster for quick questions but worse for sustained analytical writing. The consultant's work is almost entirely sustained analytical writing.

Notion — project management, client knowledge bases, and deliverable tracking. The consultant had tried Linear, Asana, and ClickUp. Notion won not because it was the best project management tool — it probably isn't — but because it doubled as the client-facing deliverable platform. One tool doing two jobs meant one fewer tool in the stack.

Loom — async communication with clients. Replaced Zoom for most interactions. A 5-minute Loom explaining a recommendation replaced a 30-minute meeting. Clients preferred it because they could watch it at 2x speed. The consultant preferred it because it eliminated scheduling friction.

Canva Pro — slide decks, one-pagers, social graphics. Not the best design tool. Not the best AI image tool. But good enough across all three use cases, which meant it replaced Figma, Midjourney, and Gamma simultaneously. The hex rewards versatility in a way that pure tool-quality rankings don't.

Google Workspace — email, docs, sheets. The boring answer. Also the correct one. The consultant had been paying for three tools that did things Google Workspace does natively, just with more features they didn't use.

Everything else — the 18 remaining subscriptions — got cancelled over a two-week period. Total monthly spend dropped from $680 to $140. The consultant did not notice a reduction in capability for any client engagement.

The Testing Protocol

Dropping to five tools didn't mean ignoring the other 35. The consultant still needs landscape knowledge. The solution was separating testing from adoption — a distinction that sounds obvious but that the consultant had never formalized.

The protocol: one afternoon per week — Thursday, 2-5pm — is dedicated to testing new tools. The consultant signs up for free tiers or trials, runs them through a standardized evaluation (three real client tasks, timed, output quality scored), documents the results, and cancels the subscription. The documentation becomes client-facing content: "I tested X so you don't have to." The testing never bleeds into the production stack.
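The protocol is mechanical enough to sketch in code. Here's a minimal, hypothetical version of the evaluation record — the tool name, task labels, scoring scale, and the cut-off are all illustrative assumptions, not the consultant's actual rubric:

```python
from dataclasses import dataclass, field
from datetime import date
from statistics import mean


@dataclass
class Evaluation:
    """One time-boxed tool trial: three real client tasks, each scored 1-10."""
    tool: str
    tested_on: date
    task_scores: dict[str, int] = field(default_factory=dict)

    def verdict(self, bar: float = 7.0) -> str:
        # A tool is only worth considering for a production slot if it
        # clears the bar on average across all three tasks; otherwise
        # the write-up becomes content and the subscription gets cancelled.
        avg = mean(self.task_scores.values())
        return "consider for stack" if avg >= bar else "documented and cancelled"


# Hypothetical Thursday-afternoon trial
trial = Evaluation(
    tool="SomeNewTool",
    tested_on=date(2026, 3, 6),
    task_scores={"advisory memo": 6, "comparison doc": 5, "proposal draft": 7},
)
print(trial.verdict())  # prints "documented and cancelled"
```

The design choice worth noting is that the verdict is binary: there is no "keep the subscription in case" state, which is exactly the ambiguous middle the protocol exists to eliminate.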

This separation solved the real problem, which was not ignorance but contamination. Before the hex, testing a new tool meant installing it, integrating it into a workflow, using it for a week, and then either keeping it or — more commonly — forgetting to cancel it. The testing window creates a firewall. Tools exist in one of two states: production (the five) or evaluation (time-boxed, disposable). Nothing lives in the ambiguous middle anymore.

Client Reactions

The consultant started explicitly sharing the hex framework with clients — not as a product to sell, but as a lens for making recommendations. "Here's what I recommend, and here's the constraint I used to arrive at this recommendation." The transparency changed the dynamic.

Clients stopped pushing back with "but what about [tool they saw on LinkedIn]." When the consultant could say "I tested it on March 6th, here's my evaluation, here's why it didn't make the cut," the conversation moved forward instead of sideways. The hex created a shared vocabulary for constraint — a way to talk about why fewer tools is a position, not a limitation.

The more interesting client reaction was downstream. Clients who adopted constraint-based stacks — three to five tools, chosen deliberately, everything else rejected — reported back with the same pattern: less time managing tools, more time doing the work the tools were supposed to enable. One client — a marketing agency with eight employees — dropped from 14 AI subscriptions to 6 and reported a 20% reduction in project turnaround time. The consultant can't prove causation, but the correlation showed up consistently enough across engagements to become part of the pitch.

The Business Model Alignment

This is where the hex stopped being a personal productivity framework and became a business strategy. The old model — implicitly — was: the more tools the consultant knew, the more valuable the consultant was. This model scales toward complexity. It rewards staying subscribed to everything, recommending more rather than fewer tools, and building elaborate implementation plans that justify ongoing advisory fees.

The hex model inverts this. The value proposition is reduction, not expansion. "I will help you get from 15 tools to 5" is a different service than "I will help you adopt 5 new tools." The first is bounded, deliverable, and measurable. The second is open-ended and creates dependency. Clients prefer the first. They also refer more — because "she cut our AI spend by 60% and nothing broke" is a story people tell, while "she helped us add some tools" is not.

The consultant's revenue didn't drop when the recommendations got simpler. It went up — partly from higher close rates, partly from referrals, and partly from the credibility premium that comes with having a clear, practiced point of view rather than a menu of options. The hex wasn't a sacrifice. It was a positioning decision that happened to also reduce the consultant's own expenses and cognitive load.

The honest caveat: this works for a specific type of consultant selling to a specific type of client. Small businesses overwhelmed by options respond to constraint-based advice. Enterprise clients with procurement processes and multi-stakeholder evaluations may need a different approach — one where comprehensive landscape knowledge matters more than curated recommendations. The consultant's market is the former, and the hex fits it as if it were designed for it. Which, in a sense, it was.


This is part of CustomClanker's Hex in the Wild series — real setups from real people.