The Educator Hex: Teaching With AI Without Drowning in Tools
A community college instructor teaching three courses per semester started using AI tools for course prep, quiz generation, syllabus drafting, slide creation, and student resource compilation. Within one semester, the tools designed to save time were consuming more time than the manual processes they replaced. The hex constraint cut the stack from six tools to two, and the instructor got something back that no tool could provide: hours in the week to actually teach.
The Accumulation
The instructor — adjunct, three sections of introductory courses, responsible for all course materials — started the way most educators start with AI. ChatGPT for generating quiz questions. It worked well enough to justify the subscription. Then Claude for drafting syllabus language, because a colleague mentioned it produced more nuanced academic prose. Then Gamma for slide decks, because building slides manually was the single most tedious part of course prep. Then a specialized education-focused AI tool for rubric generation. Then another for creating differentiated student handouts. Then a third for generating discussion prompts tailored to specific readings.
Six tools. Each one adopted because it solved a real problem. Each one adding its own interface, its own login, its own workflow quirks, its own learning curve. The total subscription cost was modest — under $80/month, with several on free tiers — but the time cost was not modest at all.
The instructor tracked it during a particularly heavy week in the fall semester. Six hours spent on AI-assisted course prep. The largest line items: 1.5 hours generating and reviewing quiz questions across two tools, 1 hour building a slide deck in Gamma and then fixing the formatting issues it introduced, 1 hour generating rubrics and handouts and then editing them to match the department's standards, 45 minutes creating discussion prompts and then rewriting the ones that were too generic, and 45 minutes of pure overhead — logging in, navigating interfaces, copying outputs from one tool into another, troubleshooting a tool that changed its UI.
The previous semester — before AI tools — the same prep work took four hours. Done manually, in Word and PowerPoint, with templates built up over years of teaching. The AI tools had added two hours to the process while creating the subjective feeling of being more efficient.
The Time Audit
The instructor didn't need the hex framework to spot the problem. The time tracking made it obvious. But the hex provided something the raw numbers didn't: a decision framework for what to cut and what to keep.
The hex question for an educator is a variation of the standard one: does this tool directly improve learning outcomes, or does it improve the production of materials around the learning? It's a subtle distinction, but it matters. A tool that generates better quiz questions improves assessment quality, which feeds back into learning. A tool that makes prettier slides improves presentation aesthetics, which, beyond a baseline of legibility, has at best a limited measurable impact on student comprehension or retention.
Applied to the six tools, the audit sorted fast. The quiz and rubric generators were doing work the instructor could evaluate and use directly. The slide generator, discussion prompt tool, and handout creator were producing outputs that required so much editing they functioned more like rough drafts than finished products. And rough drafts from an AI tool take longer to fix than rough drafts from your own head, because AI-generated content fails in ways you don't expect — wrong emphasis, slightly off tone, technically correct but pedagogically useless.
What Survived
Two tools. One LLM — Claude, chosen over ChatGPT after a semester of using both — and the college's existing learning management system. That's it.
The LLM handles everything text-based: quiz questions, syllabus language, rubric drafts, discussion prompts, student email templates, recommendation letter frameworks. One tool, one interface, one set of prompts refined over time. The key insight was that a single well-prompted general-purpose LLM replaced most of the specialized stack, and with better results, because the specialized tools were essentially wrappers around the same underlying models, with added constraints that didn't match what the instructor actually needed.
The quiz generation example is illustrative. The specialized quiz tool offered templates: multiple choice, true/false, short answer, matching. It had a nice interface. It also generated questions that were, roughly 40% of the time, either too easy, ambiguously worded, or testing recall of trivial details rather than conceptual understanding. The instructor would generate 20 questions and keep 8. With Claude, using a prompt refined over several weeks that specified the course level, the type of reasoning being assessed, and common student misconceptions to test against, the keep rate went to about 70%. Not perfect. Still requiring review. But meaningfully better, and all in one place.
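The keep-rate difference compounds: at a lower keep rate, far more drafts must be generated and reviewed to net the same number of usable questions. A minimal sketch of that arithmetic, using the keep rates reported above (the `questions_to_generate` helper is illustrative, not any real tool):

```python
import math

def questions_to_generate(target_usable: int, keep_rate: float) -> int:
    # Expected draft count needed to net the target number of usable
    # questions, assuming the keep rate holds as a simple average.
    return math.ceil(target_usable / keep_rate)

# Keep rates from the article: ~40% with the specialized quiz tool,
# ~70% with a refined general-purpose LLM prompt.
for label, rate in [("specialized tool", 0.40), ("refined LLM prompt", 0.70)]:
    drafts = questions_to_generate(20, rate)
    print(f"{label}: ~{drafts} drafts to net 20 usable questions")
```

At a 40% keep rate, a 20-question bank means reviewing roughly 50 drafts; at 70%, about 29. The review burden, not the generation step, is where the time goes.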
The LMS — which the college already paid for and required instructors to use — handled everything the slide tool and handout tool were doing, just without the AI-generated polish. Slides went back to simple bullet-point decks built in the LMS's native presentation tool. They looked worse. Students did not perform differently on assessments covering material taught with AI-polished slides versus plain ones. The instructor checked. Three semesters of grade data. No statistical difference.
The Pedagogical Application
Something unexpected happened when the instructor internalized the hex constraint: it changed how they advised students about AI use. Previously, the guidance was the standard institutional boilerplate — "AI tools can be used for brainstorming but not for final submissions" with a list of approved and prohibited uses. After living the hex, the instructor replaced that with something more honest and more useful.
The new guidance: pick one AI tool. Learn it well. Use it for specific, defined tasks. Do not collect tools. The instructor shared their own experience — the six-tool sprawl, the time audit, the realization that more tools meant less teaching. The framing, predictably, resonated with students. They were living their own version of the same problem: ChatGPT for essays, Grammarly for editing, Quillbot for paraphrasing, a citation generator, a summarization tool, a flashcard maker. Six tools for tasks that one LLM and their own brain could handle.
The constraint framework — applied to students — produced a side benefit the instructor hadn't anticipated. Students using one tool well developed a better understanding of what AI actually does and where it fails. Students using six tools treated each one as a black box and never learned the limitations of any of them. The hex, as a pedagogical principle, taught more about AI literacy than any lecture on the topic.
The Outcome
The numbers are straightforward. Course prep time dropped from 6 hours per week (with six tools) to 3.5 hours (with two tools). That's 2.5 hours reclaimed — every week, for a 15-week semester — totaling 37.5 hours over the term. For an adjunct instructor being paid per course, 37.5 hours is not an abstraction. It's the difference between a sustainable workload and burnout.
The subscription cost dropped from $78/month to $20/month for the Claude Pro subscription. The LMS is institutionally provided. The savings are small in absolute terms but meaningful for an adjunct salary.
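The arithmetic behind those two paragraphs is worth making explicit, since the per-week figures understate the semester-scale effect. A back-of-the-envelope sketch using the numbers reported above:

```python
weeks_per_semester = 15               # semester length from the article
hours_before, hours_after = 6.0, 3.5  # weekly prep: six-tool vs two-tool stack
cost_before, cost_after = 78, 20      # monthly subscription cost in USD

hours_reclaimed = (hours_before - hours_after) * weeks_per_semester
monthly_savings = cost_before - cost_after

print(f"hours reclaimed per semester: {hours_reclaimed}")
print(f"subscription savings per month: ${monthly_savings}")
```

The weekly delta looks modest; multiplied across a semester it approaches a full work week of reclaimed time.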
Course evaluation scores — the instructor's primary formal feedback mechanism — showed no decline. Student comments about course materials didn't change in tone or frequency. The materials were simpler, less polished, and produced in half the time. Nobody noticed. Or more precisely: the things students notice about a course — clarity of explanation, fairness of assessment, responsiveness of the instructor — are downstream of the instructor's time and attention, not downstream of the tools used to produce handouts.
The instructor made an observation about this that stuck: "The AI tools were a way of performing thoroughness. The slides looked more professional. The rubrics were more detailed. The handouts had better formatting. None of that translated to better teaching. It translated to me feeling like a better teacher, which is a different thing."
The Larger Pattern
The educator hex maps onto a tension that exists across higher education right now. Institutions are encouraging AI adoption — sometimes mandating it — without providing frameworks for discriminating between tools that help and tools that add overhead. The default mode is accumulation: hear about a tool at a conference, try it, keep it, add another. The result is instructors spending more time managing AI tools than they saved by adopting them.
The hex offers a corrective that's simple enough to be institutional policy: each instructor gets a constrained number of AI tool slots. Not a prohibition — a constraint. If the institution provides an LLM through its LMS, that's one slot used. An instructor who wants to add a second tool — for any purpose — needs to articulate what it does that the first one doesn't. This isn't bureaucratic gatekeeping. It's the same logic that the instructor applied individually, formalized for a department.
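The slot policy described above is mechanical enough to sketch in code. This is a hypothetical illustration (the `ToolStack` class and its names are invented here, not any real institutional system): tools past the cap are admitted only with an articulated justification.

```python
from dataclasses import dataclass, field

HEX_SLOTS = 2  # the cap from this case study: one LLM plus the institutional LMS

@dataclass
class ToolStack:
    tools: dict = field(default_factory=dict)  # tool name -> justification

    def add(self, name: str, does_what_others_dont: str = "") -> None:
        # Past the cap, a tool is only admitted with a stated justification
        # of what it does that the existing tools don't.
        if len(self.tools) >= HEX_SLOTS and not does_what_others_dont:
            raise ValueError(f"{name}: stack is full at {HEX_SLOTS} slots; state what it adds")
        self.tools[name] = does_what_others_dont or "baseline slot"

stack = ToolStack()
stack.add("Claude")
stack.add("LMS")
# stack.add("Gamma")  # would raise ValueError until a justification is supplied
```

The point of the check is not enforcement but friction: the act of writing the justification is the audit.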
The instructor has since shared the framework with four colleagues in the same department. Three adopted a version of it. Two reported similar time savings. The fourth — the most tech-enthusiastic member of the department — increased their tool count. Which is fine. The hex is a constraint, not a commandment. But the instructor notes, with the kind of dry precision that comes from tracking one's own time for three semesters, that the colleague with the most tools also reports the most hours spent on course prep. The correlation is consistent and the instructor has stopped being polite about pointing it out.
This is part of CustomClanker's Hex in the Wild series — real setups from real people.