My Actual Daily AI Workflow (No Fluff)
Every "AI workflow" article I read follows the same pattern. Someone lists 15 tools, explains how each one feeds into the next, and presents a beautiful diagram of interconnected services that make their life seamlessly productive. It reads like a fantasy. It is a fantasy. Nobody actually uses 15 AI tools in a coordinated daily workflow. The operational overhead alone would eat the productivity gains.
This is what I actually use. Every day. Tested over months, not assembled for a screenshot. The boring version that works.
The Morning: Research and Triage
I start the day with two tools open: Perplexity and Claude. That's it.
Perplexity handles my information-gathering. I read across several domains — AI tool releases, automation platform updates, developer community discussions, and general tech news. Instead of checking 20 sources individually, I ask Perplexity targeted questions: "What shipped in the n8n ecosystem this week," "What are the common complaints about [tool] in the last month," "What changed in [platform's] pricing since [date]." The answers come with citations I can verify, which is why I use Perplexity instead of asking Claude — I need to know the information is current and sourced, not plausible and confabulated.
The Perplexity session takes about 20 minutes and produces a list of things worth paying attention to and things I can ignore. I paste the relevant findings into a running note — a plain text file, not a fancy app — that becomes my daily context. This is the least glamorous part of the workflow and the most valuable. Knowing what happened while I was asleep means I don't spend the rest of the day discovering things reactively.
Claude handles the thinking work that follows. Once I have the day's context, I open a Claude conversation — usually in Claude Code if I'm going to be writing or building, or the web interface if I'm just reasoning through something. The first prompt of the day is almost always a briefing: I paste my notes and ask Claude to identify what's actually significant versus what's noise. Claude is good at this because the evaluation criterion — "does this change how a tool works in practice, or is it just a press release" — is the kind of judgment call where an LLM with good reasoning adds real value. It doesn't catch everything, but it catches enough to make the triage faster.
This morning block — Perplexity for gathering, Claude for evaluating — takes 30-40 minutes. It replaced a process that used to take 90 minutes of manual browsing, tab-hopping, and trying to remember what I'd already read. The time savings are real and consistent.
The Build Block: Claude Code and Cursor
The middle of my day is building. Writing code, writing content, building automations, testing tools. This is where the "AI workflow" actually happens — not as a pipeline of 15 tools, but as two tools I switch between depending on the task.
Claude Code is my primary tool for anything that involves multi-file work, complex reasoning, or tasks where I need the AI to understand a broad context. When I'm working on a project that spans multiple files — which is most real projects — Claude Code's ability to read, reason about, and edit across an entire codebase is the capability that matters. I give it a task, it reads the relevant files, proposes changes, and makes them. I review, adjust, and move on. The workflow is conversational: I describe what I want, Claude Code does it, I evaluate the result, and we iterate.
The key habit I've built is giving Claude Code explicit context about what I'm trying to accomplish, not just what I want it to do. "Add error handling to this function" is a worse prompt than "This function currently silently fails when the API returns a 429. Add retry logic with exponential backoff, and after three failures, log the error and return a meaningful error object that the calling function can handle." The second prompt takes longer to write and produces dramatically better output. Specificity is the whole game with AI code tools — the more precise the instruction, the less time you spend on review.
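To make that concrete, here is a minimal sketch of the behavior the second prompt asks for. The `request_fn` interface (a callable returning a `(status, body)` pair) and the injectable `sleep` are assumptions made for illustration, not part of any real API:

```python
import logging
import random
import time
from dataclasses import dataclass

logger = logging.getLogger(__name__)

@dataclass
class RetryError:
    """The 'meaningful error object' the prompt asks the caller to handle."""
    status: int
    attempts: int
    message: str

def call_with_backoff(request_fn, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call request_fn; retry on HTTP 429 with exponential backoff.

    After max_attempts failures, log the error and return a RetryError
    the calling code can handle, instead of failing silently.
    """
    for attempt in range(max_attempts):
        status, body = request_fn()
        if status != 429:
            return body
        if attempt < max_attempts - 1:
            # Delays of 1s, 2s, 4s... plus jitter so retries don't stampede
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
    logger.error("Rate limited after %d attempts", max_attempts)
    return RetryError(status=429, attempts=max_attempts, message="rate limited")
```

Note how much of that structure is specified by the prompt itself: the trigger condition (429), the backoff shape, the failure budget, and the return contract were all decisions the vague prompt would have left to the model.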
Cursor handles the quick stuff. When I'm editing a single file, when I need autocomplete that understands my codebase, when I want to make a small change without starting a whole conversation — Cursor is faster. The tab-to-complete workflow is almost frictionless for small tasks: I type a comment describing what I want, tab, and the implementation appears. It's wrong maybe 20% of the time, but correcting a wrong autocomplete is faster than typing the correct code from scratch, so the math works out.
I switch between Claude Code and Cursor throughout the day, and the decision is simple. If the task touches multiple files or requires reasoning, Claude Code. If the task is a single-file edit or a quick implementation, Cursor. I don't think about the decision anymore — it's become instinctive, the way you reach for a screwdriver for screws and a wrench for bolts.
The Automation Layer: n8n Running in the Background
n8n doesn't figure into my daily workflow in the sense of "I open n8n and do things." It figures in by running. I have roughly 20 workflows that execute automatically on schedules or webhook triggers. They handle things I'd otherwise do manually: monitoring uptime, pulling data into spreadsheets, sending notifications when certain conditions are met, syncing content between platforms.
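These checks live in n8n, not in scripts, but the logic of one such background workflow — poll a URL on a schedule, notify when it fails — could be sketched like this. The site list and the print-as-notification are placeholders, not my actual configuration:

```python
import urllib.error
import urllib.request

def site_is_up(url: str, timeout: float = 10.0) -> bool:
    """True if the URL answers with an HTTP 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

def sites_to_alert(results: dict[str, bool]) -> list[str]:
    """Pick out the sites that failed their check."""
    return [url for url, ok in results.items() if not ok]

if __name__ == "__main__":
    # Placeholder list; in n8n a schedule trigger fires this periodically
    sites = ["https://example.com"]
    for url in sites_to_alert({u: site_is_up(u) for u in sites}):
        print(f"ALERT: {url} failed its check")  # n8n sends a real notification
```

A production monitor would also track state so it alerts once per outage rather than on every run — which is exactly the kind of plumbing that's easier to wire up as nodes in n8n than to maintain as scripts.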
The important thing about n8n in my workflow is that I don't think about it most days. It runs. When a workflow fails — which happens a few times a month, usually because an upstream API changed something — I get a notification, open n8n, fix the issue, and move on. The total maintenance time is maybe an hour per week, which is vastly less than the time the automations save.
I built most of these workflows over the course of several months, adding one or two per week as I identified manual tasks that were worth automating. The discipline I follow — which I've written about before — is that I never automate something I haven't done manually at least four times. If I've done it manually four times and it's still a regular task, it's earned automation. If I've done it once and it was annoying, that's not enough to justify the time investment of building a workflow.
The Writing Block: Claude Again
I write daily — articles, documentation, notes, correspondence. Claude is involved in about 60% of this, but not in the way most people assume. I don't ask Claude to write for me. I use Claude as a thinking partner and an editor.
The thinking partner role looks like this: I describe what I want to write — the topic, the angle, the key points I want to make — and ask Claude to identify what I'm missing, what's weak in my argument, and what the strongest counterargument is. This conversation produces a better outline than I would have written alone, because Claude is good at generating the thing I didn't think of. The writing itself is mine. I type it. The voice, the opinions, the specific examples — those come from me. Claude didn't live through the experiences I'm writing about.
The editor role looks like this: after I've written a draft, I paste it into Claude and ask specific questions. "Where does this argument lose coherence?" "Which paragraph is weakest?" "Am I repeating myself?" "Is the opening interesting enough to keep reading?" Claude's feedback on these questions is surprisingly useful — not because it has perfect taste, but because it catches the structural problems that are hard to see when you're inside your own prose. I act on maybe 40% of the suggestions, ignore the ones that would flatten the voice, and the final product is consistently better than it would be without the review.
What I Don't Use
The list of what I don't use daily is longer than the list of what I do, and it's more informative.
I don't use Midjourney daily. I generate images when I need them for specific projects, which is a few times a week, not every day. I don't use GPT-4o — not because it's bad, but because Claude handles my text and code needs better and I don't need two general-purpose LLMs. I don't use any "AI productivity" apps — the Notion AIs, the Mem.ais, the various AI-enhanced note-taking tools. My notes live in plain text files. The AI that helps me think is Claude, in a separate window, not embedded in my note-taking app.
I don't use AI agents for anything other than Claude Code's agentic mode. I tested several autonomous agent systems — CrewAI, AutoGPT and its descendants, Devin — and none of them produced output reliable enough to earn a place in a daily workflow. They're interesting experiments. They're not daily tools. When autonomous agents work reliably, I'll adopt them. Until then, the human-in-the-loop approach — me driving, AI assisting — produces better results.
And I don't use AI for decisions. I use it for information (Perplexity), for reasoning (Claude), for building (Claude Code, Cursor), and for editing (Claude). But the decisions about what to build, what to write, what to prioritize, and what to ignore — those are mine. The day I start outsourcing judgment to an LLM is the day I'm not a builder anymore. I'm a prompt operator, and that's a different job with a lower ceiling.
The Honest Numbers
Total AI subscriptions: Claude Pro ($20/month), Cursor Pro ($20/month), Perplexity Pro ($20/month). Total: $60/month. Plus $12/month for the VPS running n8n. Grand total: $72/month.
Time saved per day, estimated honestly: 2-3 hours. Some of that is real time savings — tasks that took an hour now take 20 minutes. Some of it is quality improvement that I'm counting as time savings because the alternative was spending more time to get the same quality. The number is imprecise and I'm not going to pretend it's scientific.
Tools I interact with directly on a typical day: three (Perplexity, Claude/Claude Code, Cursor). Tools running in the background: one (n8n). Total active tools: four.
That's it. Four tools. Not fifteen. Not a beautiful interconnected diagram. Just the things that actually work, used in the ways they actually work, every day.
This article is part of the Weekly Drop at CustomClanker — one take, every week, no fluff.