Effective Prompting: From Vague Instructions to Usable Results

Why “bad prompts” aren’t laziness

An employee types into Copilot: “Summarize this report.” The result is generic, too long, and misses the point. The employee concludes: “AI doesn’t work.” Sound familiar?

The problem isn’t laziness or lack of talent. The problem is that prompting is a communication skill that nobody’s been taught. We expect employees to communicate effectively with a tool they don’t understand, in a format they’ve never used before.

Imagine giving a new colleague an assignment. “Write a summary” is too vague. That colleague would ask: of which report? For whom? How long? Which parts matter? AI doesn’t ask those questions. It guesses, and delivers generic output.

The solution isn’t a prompt collection that employees copy and paste. The solution is teaching employees how to communicate with AI. That’s a skill you can learn and improve.

The 4 building blocks of an effective prompt

In our courses, we use the TASK/CONTEXT/ROLE/FORMAT framework (TCRF). Four building blocks that together form a clear instruction:

1. TASK: What should AI do?

Start with a verb. Be specific about the end result.

  • Vague: “Write something about the project”
  • Specific: “Write a progress report of no more than 300 words for the management team on Project Atlas”

2. CONTEXT: What background information is relevant?

AI knows nothing about your situation. Provide the information a colleague would also need.

  • Without context: “Write an email to a client”
  • With context: “Write an email to Marieke de Vries (HR manager at BNG Bank). Last week we ran a pilot with 15 participants, score 4.8/5. Goal: schedule a follow-up.”

3. ROLE: What perspective should AI take?

By assigning a role, you steer the tone, level of detail, and expertise of the output.

  • Without role: “Give feedback on this report”
  • With role: “You are a senior editor with experience in business communication. Give feedback on clarity, structure, and persuasiveness.”

4. FORMAT: What should the result look like?

Specify the format you expect. This saves you from having to reformat after generation.

  • Without format: “Give tips for the meeting”
  • With format: “Give 5 tips as a numbered list. Per tip: a title in 3 to 5 words, followed by an explanation in 1 sentence.”
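For teams that generate prompts programmatically (for example, in an internal tool or script), the four building blocks can be assembled mechanically. A minimal sketch in Python; the function name, parameter names, and the example values are illustrative assumptions, not part of any official Copilot API:

```python
def build_prompt(task: str, context: str = "", role: str = "", fmt: str = "") -> str:
    """Assemble a prompt from the TCRF building blocks.

    Empty blocks are skipped, so the function also works when you
    only have a task and some context.
    """
    parts = []
    if role:
        parts.append(f"ROLE: {role}")
    if task:
        parts.append(f"TASK: {task}")
    if context:
        parts.append(f"CONTEXT: {context}")
    if fmt:
        parts.append(f"FORMAT: {fmt}")
    return "\n\n".join(parts)


# Hypothetical example values, echoing the building blocks above
prompt = build_prompt(
    task="Write a progress report of no more than 300 words for the management team on Project Atlas",
    context="The project is in week 6 of 12; milestone 2 was delivered a week late.",
    role="You are an experienced project manager.",
    fmt="Three sections: status, risks, next steps.",
)
print(prompt)
```

The point of the sketch is the structure, not the code: a complete prompt is just the four labeled blocks, and leaving one out is a deliberate choice rather than an oversight.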

A detailed explanation of each building block, with more examples, can be found in our Copilot Fundamentals course.


Two examples from HR/L&D practice

Example 1: Writing a course proposal

Weak prompt:

Write a proposal for an AI training course.

Result: a generic document that doesn’t fit the organization, with no budget, no target audience, no measurable goals.

Strong prompt:

TASK: Write a 1-page proposal for an AI skills course for our HR department (12 employees).

CONTEXT: We’ve had Microsoft 365 with Copilot licenses for 3 months. Adoption is low: 4 out of 12 actively use Copilot. Management wants the entire team at baseline level by Q2. Budget: EUR 5,000.

ROLE: You are an L&D consultant with experience in digital transformation at mid-sized organizations.

FORMAT: Structure: rationale (3 sentences), objective (SMART), approach (3 steps), investment, expected outcome. Tone: professional, persuasive for the management team.

Result: a targeted proposal ready for management, with concrete figures and an implementation plan.

Example 2: Designing an evaluation form

Weak prompt:

Create an evaluation form for a course.

Result: a standard smiley form with questions like “How satisfied are you?” Not very useful for improvement.

Strong prompt:

TASK: Design an evaluation form that measures both satisfaction and learning impact after an AI skills course.

CONTEXT: Participants are knowledge workers (bachelor’s degree and above), the course is 1 day, focused on prompting and setting up an AI workspace. We want to measure whether participants apply the skills in practice.

ROLE: You are a learning consultant specialized in Kirkpatrick’s evaluation model.

FORMAT: 10 questions: 4 on reaction (5-point scale), 3 on learning (open + closed), 3 on behavioral intent. End with 1 open question. Conversational tone.

Result: a professional form that distinguishes between satisfaction and actual learning impact.

The conversation after the first prompt

A common mistake: giving up after the first result, or accepting it without adjustments. Effective prompting is iterative. You work in a back-and-forth with AI.

How iteration works

  1. Send your initial prompt (with the 4 building blocks)
  2. Evaluate the result: what’s good, what’s missing?
  3. Give targeted feedback: not “make it better” but specifically what you want changed
  4. Repeat until the result is usable
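Under the hood, this back-and-forth is simply a growing message history: each follow-up is sent together with everything that came before. A sketch of the four steps above, using the role/content message format common to many chat APIs; `call_model` is a hypothetical stand-in for a real API call, so the example stays runnable:

```python
# Hypothetical stand-in for a real chat API call; it only echoes,
# so this sketch runs without any external service.
def call_model(history):
    return f"(model reply based on {len(history)} messages)"


# Step 1: send the initial prompt (with the 4 building blocks)
history = [{"role": "user", "content": "TASK: ... CONTEXT: ... ROLE: ... FORMAT: ..."}]
reply = call_model(history)
history.append({"role": "assistant", "content": reply})

# Steps 2-3: evaluate the result, then give targeted feedback
# (not "make it better" but exactly what should change)
history.append({"role": "user", "content":
    "Add concrete examples to points 2 and 4. Make the conclusion 50% shorter."})
reply = call_model(history)
history.append({"role": "assistant", "content": reply})

# Step 4: repeat until usable; the accumulated history is what lets
# the model connect your feedback to its earlier output
print(len(history))  # 4 messages after one iteration
```

This also explains the "start over" advice further down: when the history grows very long or points in the wrong direction, every new turn drags that baggage along, and a fresh conversation with a reformulated initial prompt is cheaper than correcting course.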

Feedback that works

  • Instead of “This isn’t good”, say: “The tone is too formal for this audience. Make it more personal, use ‘you’ instead of formal language.”
  • Instead of “Make it better”, say: “Add concrete examples to points 2 and 4. Make the conclusion 50% shorter.”
  • Instead of “Try again”, say: “Rewrite the introduction from the manager’s perspective, not the employee’s.”

When to start over?

Sometimes iterating isn’t enough. Start a new conversation when:

  • The direction is fundamentally wrong (AI interpreted the task differently)
  • You want to change your approach midway
  • The conversation gets too long and AI loses context

Rule of thumb: 2 to 3 iterations is normal. After 5 iterations without improvement: reformulate your initial prompt or start over.

Breaking down complex tasks

AI performs better on focused tasks than on broad assignments. A mega-prompt of 500 words asking for 10 things at once almost always produces mediocre results.

The 3-step method

Step 1: Let AI determine the structure

“I want to write an onboarding program for new employees. What sections should this program contain? Give me a table of contents.”

Step 2: Work through each section

“Develop section 3, ‘First Work Week.’ Context: [specific information]. Format: daily schedule with activities and responsible persons.”

Step 3: Combine and refine

“Here are the completed sections [paste them together]. Smooth out the transitions, remove overlap, and ensure the tone is consistent.”
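The three steps can also be scripted as a chain of prompts, where each step’s output feeds the next. A minimal sketch; the function names, the onboarding topic, and the section contents are illustrative placeholders:

```python
def outline_prompt(topic: str) -> str:
    """Step 1: ask the model for the structure only."""
    return (f"I want to write {topic}. What sections should it contain? "
            "Give me a table of contents.")


def section_prompt(section: str, context: str, fmt: str) -> str:
    """Step 2: one focused prompt per section from the outline."""
    return f"Develop section '{section}'. Context: {context}. Format: {fmt}."


def combine_prompt(sections: list[str]) -> str:
    """Step 3: merge the drafted sections and ask for a consistency pass."""
    joined = "\n\n".join(sections)
    return ("Here are the completed sections:\n\n" + joined +
            "\n\nSmooth out the transitions, remove overlap, "
            "and ensure the tone is consistent.")


p1 = outline_prompt("an onboarding program for new employees")
p2 = section_prompt("First Work Week",
                    "new hires in the HR department, hybrid schedule",
                    "daily schedule with activities and responsible persons")
p3 = combine_prompt(["<section 1 text>", "<section 2 text>", "<section 3 text>"])
```

Each prompt in the chain asks for exactly one thing, which is the whole point of the method: the model never has to juggle structure, content, and consistency in a single turn.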

You get better results and more control over the final product.

What managers should look for

As a manager or L&D professional, you don’t need to read every prompt. But you can spot whether employees are developing the skill:

Signs of growth:

  • Employee spends slightly more time on the prompt but gets usable results faster
  • Output requires less manual adjustment
  • Employee can explain why a prompt worked well or poorly
  • Employee shares working prompts with colleagues

Signs of stagnation:

  • Employee keeps typing short, vague instructions
  • Output is routinely rewritten entirely
  • Employee says “AI doesn’t work for my job”
  • No saved prompts, starting from scratch every time

Observable behavior per level

Prompt quality
  • Starter: writes simple, short instructions
  • Basic: uses all 4 building blocks (TCRF) in initial prompts
  • Proficient: creates reusable prompt templates for recurring tasks

Iteration
  • Starter: accepts first output or gives up
  • Basic: refines output with at least 2 targeted follow-ups
  • Proficient: solves output problems methodically, knows when to start over

Workflow
  • Starter: types prompts directly into the chat interface
  • Basic: drafts initial prompts externally (in a document) before entering them
  • Proficient: maintains a prompt library, optimizes prompts for specific use cases

Knowledge sharing
  • Starter: uses AI individually, doesn’t share approach
  • Basic: saves working prompts for personal reuse
  • Proficient: shares prompts and best practices with the team

How to measure this:

  • Starter: intake assessment or initial observation
  • Basic: after the Copilot Fundamentals course (portfolio assignment: a developed initial prompt using all 4 building blocks, tested and refined)
  • Proficient: after the AI Workflow Training and certification

The pitfall of prompt collections

One final warning. It’s tempting to hand employees a list of “the best prompts” and call it a day. That doesn’t work.

Prompt collections are useful as reference material, but they don’t replace the skill. A copied prompt works as long as the situation matches exactly. The moment the context changes (different client, different document, different tone), the employee needs to be able to adapt the prompt. That requires understanding the building blocks, not copying examples.

You can give someone a recipe, and it’ll work. But a cook who understands the principles (why you bake something, what flavor balance means) can improvise when an ingredient is missing. That’s the difference between a prompt collection and prompting proficiency.

So invest in the skill, not just the collection.

Next step: from prompt to context

You now know how to give effective instructions to AI. But the quality of your output isn’t determined by your prompt alone. It’s determined by the information AI has at its disposal.

Read on: Context Management: How to teach AI to understand your organization

Or go back to the overview: The 3 AI Skills Your Organization Needs

Written by Robert Vos

Co-founder & Lead Trainer

10+ years of training design & facilitation. 2,500+ AI training participants, 100+ programs.
