We've all had that boss who measures success through activity instead of productivity. The one who says "do more" without explaining what success actually looks like, then gets frustrated when the team doesn't deliver what they had in mind. The one who assumes everyone can read their mind and knows exactly what they expect.
Sound familiar? If you've ever used AI tools, it should.
Think about the last time you gave ChatGPT or Claude a vague prompt and got a terrible result. You probably didn't blame the AI for being lazy or incompetent. You recognized that the output was only as good as the input you provided. You refined your prompt, added more context, gave specific examples, and tried again.
But when your team produces work that misses the mark, the default assumption is usually that they're not motivated enough, not skilled enough, or not paying attention. What if the real problem is the same one you solve every day with AI tools? What if your instructions are just bad?
Here's a simple test: If you wouldn't send this instruction to ChatGPT expecting good results, don't send it to your team either.
The Instruction Quality Gap
Most managers have gotten very good at prompting AI tools because the feedback is immediate. Give a vague prompt, get vague output. Give a detailed prompt with context and examples, get exactly what you need. The cause-and-effect relationship is obvious, so you learn to be more specific.
With human teams, the feedback loop is longer and the variables are more complex. A project fails or misses expectations, and there are dozens of potential explanations: unclear priorities, competing deadlines, resource constraints, communication breakdowns, skill gaps, or motivation issues. It's easier to blame capacity or capability than to examine whether the original instructions were actually actionable.
But the same principles that make AI prompts effective also make human instructions effective: clarity about the desired outcome, specific examples of good work, measurable success criteria, and enough context to make good decisions independently.
The difference is that humans are more forgiving than AI tools. They'll try to guess what you mean, fill in gaps with their own assumptions, and attempt to deliver something even when your instructions are incomplete. That forgiveness makes the instruction quality problem harder to spot, because teams will produce something, even if it's not what you actually wanted.
What Good AI Prompts Teach About Management
When you've learned to prompt AI effectively, you've already developed the core skills of clear instruction-giving. You just need to apply them consistently with human teams.
You provide context, not just tasks. A good AI prompt explains what you're trying to achieve, why it matters, and how the output will be used. The same applies to human teams. "Write a blog post about our new feature" produces very different results than "Write an 800-word blog post targeting technical buyers who are evaluating solutions like ours. Focus on the specific integration challenges this feature solves, and include at least one concrete example from our pilot customer feedback."
You give examples of good output. With AI, you might say "write in the style of this article" or "structure it like this template." With teams, you can share examples of similar work that hit the mark, explain why those examples worked, and clarify what success looks like for this specific project.
You specify constraints and parameters. AI prompts often include length requirements, tone guidelines, and format specifications. Human instructions should be just as specific about scope, resources, timeline, and decision-making authority.
You test understanding before full execution. With AI, you might ask for an outline or approach before requesting the full output. With teams, you can ask for initial thinking, draft approaches, or early feedback before significant work begins.
You iterate based on feedback. When AI output isn't quite right, you refine your prompt and try again. When team output misses the mark, the same approach works: clarify what was unclear, provide additional context, and adjust the instructions for next time.
The Marketing Process Problem
Marketing teams are particularly vulnerable to unclear instructions because marketing work often feels subjective and creative. But that perceived subjectivity can mask the lack of structure and process that leads to expensive mistakes and frustrated teams.
I've seen marketing departments where ad campaigns launch without clear success metrics, content gets created without defined target audiences, and projects get approved without measurable objectives. Then when results are disappointing or budgets get burned through quickly, the assumption is that the team needs to work harder or be more creative.
But the real problem is usually process breakdown. If you don't have well-documented structure and repeatable systems, your ads can run away with your cash, your team feels dazed and confused, and mistakes become inevitable.
Think about how you'd prompt AI to create an ad campaign: you'd specify the target audience, define the conversion goal, set budget parameters, provide brand guidelines, and clarify success metrics. You wouldn't just say "make some ads" and expect good results.
The same specificity that makes AI prompts effective also makes marketing processes effective. Clear briefs, defined success metrics, documented approval processes, and systematic feedback loops aren't bureaucracy. They're the structure that allows teams to execute efficiently instead of guessing what you want.
The "Do More" Trap
The pressure to "do more" usually intensifies when initial results are disappointing. But doing more of the wrong thing, or doing the right thing without clear direction, just amplifies the underlying instruction quality problem.
When you run before you walk, when you don't slow down enough to think through strategy and measurement, when you assume teams can read your mind about end goals, you create exactly the conditions that lead to frantic activity without meaningful progress.
The solution isn't more activity. It's better instructions.
AI tools have made this pattern obvious because they respond literally to what you tell them. If your prompt is unclear, the output is unclear. If your prompt lacks context, the output lacks relevance. If your prompt doesn't specify success criteria, the output can't be evaluated effectively.
Human teams are trying to solve the same puzzle: what exactly do you want, why does it matter, and how will we know when we've succeeded? When those questions don't have clear answers, even motivated and capable teams will struggle to deliver what you actually need.
Making Instructions AI-Quality
The next time you're about to give instructions to your team, apply the same standards you use for AI prompts:
Is the desired outcome specific and measurable? "Improve our conversion rate" is like prompting AI with "write something good." "Increase trial-to-paid conversion from 12% to 15% within 60 days" gives the team something concrete to work toward.
Have you provided enough context? Teams need to understand not just what to do, but why it matters, how it connects to larger goals, and what constraints or resources they're working within.
Are there examples of success? Show what good looks like. Share previous work that hit the mark, explain why it worked, and clarify what aspects should be replicated or adapted.
Can success be measured? Define what "done" looks like and how progress will be tracked. If you can't measure whether the work succeeded, the instructions probably aren't clear enough.
Is the timeline realistic? Just like AI prompts that try to do too much in one request, human instructions that pack multiple complex objectives into unrealistic timeframes set everyone up for frustration.
When was the last time you tested understanding? Before major work begins, confirm that your instructions were understood the way you intended. Ask for initial thinking, proposed approaches, or clarifying questions.
The Communication Investment
Improving instruction quality takes more upfront effort than just saying "do more." You have to think through what you actually want, why it matters, and how success will be measured. You have to provide context and examples. You have to confirm understanding before work begins.
But that investment pays off exponentially. Clear instructions reduce rework, minimize frustration, and improve results. Teams spend their energy on execution instead of interpretation. Projects hit targets because everyone understands what the targets actually are.
Most importantly, you stop assuming that underperformance means people aren't trying hard enough. When instructions are clear and teams still struggle, you can focus on the real barriers: skills, resources, priorities, or process issues that can actually be addressed.
Your team isn't lazy. They're responding to the instructions they've been given, just like AI tools respond to the prompts they receive. The quality of the output depends on the quality of the input.
If you want better results, start with better instructions.
Want more insights on leadership and team effectiveness? Explore my collection of practical resources at resources.taneilcurrie.com