The Nonprofit Hex: Doing More With Less (For Real This Time)
"Do more with less" is a phrase that nonprofits hear so often it has lost all operational meaning. It usually arrives attached to a budget cut or a well-intentioned board member's suggestion, and it usually results in staff working longer hours rather than working differently. One three-person nonprofit adopted AI tools because a tech-forward board member said they should "leverage AI to scale impact." The result was eight AI tools — six free-tier, two paid — none integrated, each owned by a different staff member, and a collective sense that "using AI" was now a thing the organization did. Nobody could point to a specific outcome it had improved.
The hex constraint did what enthusiasm couldn't. It replaced the scattered collection with one shared tool that actually helped.
How the Tools Got There
The nonprofit operates at the local level — community services, grant-funded, three full-time staff covering program delivery, communications, fundraising, and administration. The budget is tight in the way that small nonprofit budgets are always tight: not zero, but close enough that every subscription gets scrutinized by someone on the board during the annual audit.
The AI adoption started at a board meeting. One board member — a tech industry professional who genuinely wanted to help — presented a slide deck about AI tools for nonprofits. The deck mentioned ChatGPT, Canva with AI features, Mailchimp's AI writing assistant, an AI grant writing tool, an AI social media scheduler, and two more tools the board member had seen at a conference. The board voted to "explore AI opportunities." Nobody specified what that meant. Nobody allocated a budget. Nobody designated a person to own the initiative. The vote was aspirational, which is how most nonprofit technology adoption starts.
What happened next was predictable. Each staff member interpreted "explore AI" through the lens of their own work. The communications person signed up for ChatGPT (free tier) and Canva's AI features (already included in their Canva subscription). The fundraising person signed up for a different ChatGPT account, an AI grant writing tool (free trial that converted to $39/month), and a donor management AI add-on ($29/month). The program director signed up for a third ChatGPT account and an AI scheduling tool for program logistics.
Six free-tier accounts across three services. Two paid subscriptions totaling $68/month. No shared logins. No shared workflows. No documentation of what any tool was being used for or whether it was producing results. The board member's slide deck had started something. It hadn't started anything organized.
The Audit
Six months after the board meeting, the executive director asked a question that should have been asked at the beginning: what has AI actually produced for us? The question triggered a series of uncomfortable conversations.
The communications person had been using ChatGPT to draft social media posts and newsletter copy. The drafts were usable — they saved about thirty minutes per week compared to writing from scratch. Canva's AI features were generating social media graphics, though the communications person admitted that the AI-generated graphics "look AI-generated" in a way that was beginning to feel off-brand for an organization that prided itself on personal, community-level communication.
The fundraising person's experience was more revealing. The AI grant writing tool had been used for four grant applications over six months. The tool produced first drafts that were structurally sound — correct sections, appropriate language, proper formatting. But the editing required to bring those drafts to submission quality was roughly equivalent to writing from scratch. Grant writing is not a formatting problem. It's a persuasion problem — specific to the funder's priorities, specific to the organization's track record, specific to the community's needs. The AI tool produced generic nonprofit language that any funder had seen a hundred times. The fundraising person estimated that AI-assisted grant drafts saved zero net time compared to human-written first drafts once editing was factored in. The grant success rate hadn't changed.
The donor management AI add-on was analyzing donor behavior patterns and suggesting optimal outreach timing. For an organization with 340 active donors, the "patterns" were not complex enough to require AI analysis. The fundraising person could name the major donors, knew roughly when they gave each year, and had personal relationships with about fifty of them. The AI tool was producing sophisticated analysis of a simple dataset — the equivalent of using a satellite navigation system to find the grocery store two blocks away.
The program director had barely used the AI tools at all. The scheduling tool had been tested once, found confusing, and abandoned. The ChatGPT account was used occasionally to draft program descriptions but had not become part of any regular workflow.
The Hex Applied
The executive director presented the hex constraint to the staff not as a mandate but as a relief. "We tried eight tools. Most of them aren't helping. Let's pick what's working and drop what isn't." The conversation lasted forty-five minutes — shorter than any of the three staff members expected — because the audit had already made the answer obvious.
One tool survived in a modified form: a single shared Claude account replaced all three individual ChatGPT accounts and the AI grant writing tool. Claude was chosen over ChatGPT for one practical reason — the Projects feature allowed the nonprofit to create persistent contexts for each recurring task. One project for newsletter drafts, loaded with the organization's voice guidelines, past newsletters, and audience description. One project for donor communications, loaded with templates and donor relationship notes. One project for grant writing, loaded with the organization's boilerplate, past successful applications, and a summary of the program outcomes data that funders ask about.
The communications person used the shared Claude account for the same tasks they'd been using ChatGPT for, with better results. The persistent project context meant Claude already knew the organization's voice, the audience, and the format requirements. The communications person stopped spending the first five minutes of every session re-explaining who the organization was and what it did. The time savings were modest — about fifteen minutes per week — but on a three-person team, fifteen minutes is not trivial.
The fundraising person used the same account for donor thank-you letters, event invitations, and grant narrative sections that didn't require deep customization. The grant writing tool was not replaced one-for-one. The fundraising person acknowledged that grant writing was going to remain primarily human work, and that the AI's role was limited to "give me a first sentence for this section" rather than "write this section." That honest scoping produced better results than the grant writing tool had: the tool had promised complete grant sections and delivered drafts that needed complete revision, while Claude was asked for sentence-level suggestions and delivered ones that could be used as-is roughly 70% of the time.
Everything else got cut. Canva's AI features remained available because they were included in the existing Canva subscription, but the staff stopped using them for auto-generated graphics and returned to template-based design. The donor management AI add-on was canceled. The scheduling tool was canceled. The cost went from $68/month in paid subscriptions, plus the time cost of managing eight tools, to $20/month for a single shared Claude Pro subscription.
The Communication Win
The area where AI genuinely helped was the one nobody had initially prioritized: routine communications. Donor thank-you letters. Event announcement emails. Social media posts for program updates. Newsletter drafts. Board meeting summaries.
These tasks share three characteristics that make them good AI candidates. They have a clear format. They have an established voice. And they are repetitive in structure while varying in content. A thank-you letter to a donor who gave $500 has the same structure as a thank-you letter to a donor who gave $50 — gratitude, impact statement, warm close. The AI writes the structure. The staff member customizes the content. Total time: five minutes per letter versus fifteen minutes starting from a blank page.
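The fixed-structure, variable-content pattern behind those letters is the same one a basic mail-merge script captures. A minimal sketch for illustration (the template wording, donor names, and amounts are all invented, not taken from the organization described above):

```python
def render_letter(name, amount, impact):
    """Fill the fixed thank-you structure (gratitude, impact
    statement, warm close) with donor-specific content."""
    return (
        f"Dear {name},\n\n"
        f"Thank you for your gift of ${amount}. {impact}\n\n"
        f"With gratitude,\nThe Team"
    )

# Two letters: same structure, different content.
letters = [
    render_letter("Alex", 500,
                  "Your gift funds a month of after-school tutoring."),
    render_letter("Sam", 50,
                  "Your gift stocks the community pantry for a week."),
]
for letter in letters:
    print(letter)
    print("---")
```

The AI-assisted version works the same way, except the "template" lives in a project's context and the impact statement is drafted rather than hard-coded; the staff member still supplies and verifies the donor-specific content.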
The communications person estimated that Claude saved four to five hours per week across all routine communication tasks. On a team of three — where "communications" is roughly one-third of one person's job, not a dedicated role — those hours represented a significant capacity increase. The newsletter went from monthly to biweekly. Social media posting went from sporadic to consistent. Donor acknowledgment went from "when I get around to it" to "within 48 hours of receiving a gift." None of these improvements required new tools. They required one tool used well for tasks it was actually good at.
The Grant Writing Discovery
This finding is worth stating plainly because it runs counter to the marketing of every AI grant writing tool: AI-assisted grant drafts needed the same amount of total editing time as human-written first drafts. The time savings from faster first-draft generation were consumed by the editing required to make the AI output specific, persuasive, and accurate.
The exception — and it's a real one — was boilerplate sections. Organizational history, mission statements, program descriptions that appear in every grant application with minor variations. These sections are high-repetition, low-variation, and AI handles them well because the "right answer" already exists in the organization's past applications. Loading Claude's grant writing project with past successful applications and asking it to "draft the organizational background section for a grant to [funder]" produced usable boilerplate in under a minute. That specific, narrow use case saved genuine time. The mistake had been expecting AI to handle the entire grant narrative — the sections that require original argument, specific data, and authentic organizational voice — with the same efficiency.
The Lesson
"Do more with less" has a specific, operational meaning when applied through the hex constraint. It means fewer tools, each one doing more, with the staff's limited time going to the work instead of to the tools. The nonprofit's eight-tool phase was not "doing more with less." It was "doing more with more, except the 'more' wasn't producing anything." The one-tool phase is doing more with less — genuinely, measurably, in a way the board can see in the quarterly report.
The broader pattern for resource-constrained organizations: free tools are not free. They cost attention, learning time, context-switching, and the cognitive load of maintaining multiple accounts across multiple platforms. A small nonprofit doesn't have surplus attention to spend. Every hour a staff member spends figuring out an AI tool is an hour not spent on programs, fundraising, or community relationships — the things the organization exists to do. The hex constraint makes that tradeoff visible and forces the organization to choose the tools that earn their attention cost, not just the tools that have a free tier.
One shared account. Templates for recurring tasks. Everything else dropped. The nonprofit is still doing more with less. It's just doing it honestly now.
This is part of CustomClanker's Hex in the Wild series — real setups from real people. Start with The Hex Explained if you haven't downloaded the constraint PDF yet.