Not every painful task is worth automating. Score yours across five dimensions and get a clear recommendation — automate, semi-automate, or leave it alone.
Built from patterns we’ve seen across real automation engagements. Includes five common traps that waste time and budget.
Think of one specific task your team does repeatedly. Answer five questions about it.
Count every instance across your team, not just your own.
From the moment someone starts to the moment it’s done. Include context-switching time.
Automations break when the underlying process changes. How often does yours?
The difference between “follow the checklist” and “it depends.”
Not “has it ever gone wrong” — what’s the blast radius when it does?
Answer all five questions to see your results.
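As a rough illustration of how five answers can turn into a recommendation, here is a minimal scoring sketch. The 1–5 answer scales, equal weights, and thresholds are all assumptions for illustration, not the rubric this assessment actually uses:

```python
# Hypothetical scoring sketch for the five dimensions above.
# Scales, weights, and thresholds are illustrative assumptions only.

def recommend(frequency, time_per_instance, stability, rule_clarity, error_tolerance):
    """Each argument is 1-5, where 5 favors automation
    (e.g. stability=5 means the process rarely changes;
    error_tolerance=5 means mistakes are cheap to catch and fix)."""
    score = frequency + time_per_instance + stability + rule_clarity + error_tolerance
    if score >= 20:
        return "automate"
    if score >= 13:
        return "semi-automate"  # keep a human in the loop
    return "leave it alone"

# A frequent, stable, rule-based task with low error cost:
print(recommend(5, 4, 4, 5, 4))  # automate
```

The point of the sketch: no single dimension decides it. A high-frequency task with unstable rules or expensive errors can still land in "leave it alone."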
Patterns we’ve seen repeatedly in real engagements. The score gets you started. These keep you from building the wrong thing.
If the manual process has workarounds, undocumented exceptions, or steps that exist because “we’ve always done it that way,” automating it just makes a bad process run faster. Fix the process first. Then automate the fixed version.
Signal: nobody can explain the full process end-to-end without saying “well, sometimes we also...”
A task can happen 50 times a day and still be a bad automation candidate — if every instance is different enough to require judgment. High volume plus high variance means you need a human-in-the-loop system, not a fully automated one. Those are different builds with different costs.
Signal: the team says “it’s the same thing over and over” but can’t write down the rules that cover 90% of cases.
Five minutes per instance sounds trivial. But 5 minutes × 8 times a day × 5 days a week = 3.3 hours every week. That's nearly two full days per month of senior time on a task that probably doesn't need senior judgment. Do the multiplication before deciding it's not worth fixing.
Signal: the person doing it shrugs it off as “just part of the job” — they’ve stopped noticing the cost.
Building the full automation across every workflow before testing it on one is the most expensive way to learn it doesn’t work. Start with a single workflow, a single team, or a single task variant. Prove it works there. Then expand.
Signal: the project plan jumps straight from “requirements” to “full rollout” with no pilot phase.
Some tasks are painful because they require judgment under ambiguity — reading between the lines of a vague request, deciding which stakeholder to loop in, interpreting tone. That’s not a task that’s waiting to be automated. That’s a task that’s hard because it genuinely needs a person.
Signal: when you ask “what makes this annoying?” the answer is about ambiguity, not repetition.
Download the one-pager version of this assessment. Includes the scoring rubric, all five traps, and a blank scorecard you can use with your team.
Download the One-Pager
PDF — free, no strings attached
We build AI automations for GTM teams — scoped, priced, and delivered in days, not months.
Book a Discovery Call