AI Literacy: Why employees need to understand how AI works

The hyper-intelligent intern

Imagine this: you get a new intern. Brilliant, fast, and with an encyclopedic memory. But this intern has a quirk: they sometimes make things up. Not out of malice, but because they recognize patterns and confidently fill in details that aren’t there.

That’s exactly how AI tools like Microsoft Copilot, ChatGPT, and Claude work. They’re incredibly fast and impressively capable. But they have no understanding of what they’re writing. They don’t “know” whether something is correct.

For HR and L&D professionals, this is the starting point of any AI course. Because without this insight, two problems arise: employees who blindly trust AI, or employees who give up after one bad experience. Both cost your organization money.

This article explains what your team needs to understand about AI, and how to make that understanding measurable.

How AI generates answers

Predicting, not thinking

A language model like GPT-4 or Copilot works as a sophisticated prediction machine. It’s trained on billions of texts and learns to recognize patterns. Word by word, it generates an answer by choosing the most probable next word each time.

Ask it: “The capital of France is…” and the model predicts “Paris” with 99% certainty. Not because it knows Paris is the capital, but because in the training data, sentences like this one are almost always completed with “Paris”.
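
To make that concrete, here is a toy sketch in Python (made-up probabilities, not a real model) of what “choosing the most probable next word” means:

    # Toy illustration of next-word prediction, not a real language model.
    # The prompt "The capital of France is..." maps to a probability
    # distribution over possible continuations; the model picks the top one.
    next_word_probs = {
        "Paris": 0.99,   # this continuation dominates the training data
        "Lyon": 0.006,
        "Nice": 0.004,
    }

    prediction = max(next_word_probs, key=next_word_probs.get)
    print(prediction)  # -> Paris

The model never looks anything up; it only reproduces the statistics of its training data.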

No comprehension, no memory

This has three consequences your team needs to know:

  1. AI doesn’t understand what it writes. It recognizes patterns but has no concept of truth or meaning.
  2. Every answer is freshly generated. Even the same question can produce a different answer, because AI is non-deterministic (see the sketch at the end of this section).
  3. AI doesn’t know your organization. It’s trained on general text from the internet, not on your internal processes, clients, or organizational culture. You can address this with context management.

For employees, this means: AI is a tool, not a reliable information source. Everyone on your team needs to understand that distinction.
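
The non-determinism in point 2 comes from sampling: instead of always taking the single most probable word, models draw from the probability distribution. A minimal sketch, again with made-up numbers:

    import random

    # Minimal sketch of sampling: the same prompt can yield different
    # answers because the next word is drawn from a distribution rather
    # than always being the single most probable one.
    words = ["Paris", "Lyon", "Nice"]
    probs = [0.90, 0.06, 0.04]

    for _ in range(3):
        print(random.choices(words, weights=probs)[0])
    # Usually "Paris", but occasionally a different continuation.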

Hallucinations: the biggest risk

AI hallucinations are answers that sound convincing but are factually incorrect. The best models hallucinate in roughly 2% of responses. That sounds like a small number — until you realize your team has dozens of AI interactions every day.
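
A quick back-of-the-envelope calculation shows why. The 50 interactions per day below is an assumed, illustrative workload:

    # What a 2% hallucination rate means at everyday volumes.
    # 50 AI interactions per employee per day is an assumption.
    rate = 0.02
    interactions_per_day = 50

    p_at_least_one = 1 - (1 - rate) ** interactions_per_day
    print(f"Chance of at least one hallucination today: {p_at_least_one:.0%}")  # ~64%
    print(f"Expected hallucinations per day: {rate * interactions_per_day:.1f}")  # 1.0

At that pace, every employee runs into roughly one hallucination per working day.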

The 4 types of hallucinations

1. Factual inaccuracies

AI presents statistics, dates, or events that are incorrect. An employee asks for market data and receives convincing percentages that are based on nothing.

Example: “The European AI market grew by 34.7% in 2025.” This sounds precise enough to believe, but the figure is fabricated.

2. Fabricated details

Names, products, or technical specifications that don’t exist but sound plausible.

Example: an employee asks Copilot for a reference and gets “according to the research by Van der Berg & Willemsen (2024, Utrecht University).” The researchers and publication don’t exist.

3. Logical errors

Calculation mistakes or contradictory reasoning that make the conclusion unreliable.

Example: a financial overview where the subtotals don’t add up to the total, yet the conclusion reads “within budget.”
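
This category is the easiest to catch mechanically. A minimal sketch of the check, with hypothetical figures:

    # Does the stated total match the sum of the parts?
    # Figures are hypothetical, as they might appear in an AI draft.
    subtotals = [12_500, 8_300, 4_150]   # line items as written by the AI
    stated_total = 23_950                # total as written by the AI

    actual_total = sum(subtotals)        # 24_950
    if actual_total != stated_total:
        print(f"Mismatch: parts sum to {actual_total}, draft claims {stated_total}")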

4. Fictitious sources

Non-existent studies, articles, or experts cited as references.

Example: “Source: McKinsey Digital Workplace Report 2025, page 47.” The report doesn’t exist in that form.

Why hallucinations are dangerous in organizations

In an office environment, AI-generated text is often forwarded directly, dropped into presentations, or used as the basis for decisions. If no one checks, hallucinations spread as facts throughout the organization.

In our courses, we see during the baseline assessment that the majority of participants forward AI output without verification.

The VAK check: three questions for every AI output

To catch hallucinations before they cause damage, we train employees in the VAK check. Three questions you ask before using any AI output:

V: Verifiable

Can I verify this information through a reliable source?

Check facts, figures, and names. If AI cites a statistic, find the original source. If it mentions a name, check whether that person exists and whether the context is correct.

Rule of thumb: the more specific the detail (an exact percentage, a date, a name), the greater the chance of a hallucination.

A: Accurate

Do the calculations check out and is the reasoning logical?

Verify that numbers add up, that conclusions follow from the premises, and that there aren’t contradictions in the answer. AI is surprisingly poor at arithmetic and logical reasoning.

K: Kloppend (Consistent)

Does this align with what I know as a professional?

This is the most important check. You’re the subject matter expert, not the AI. If something doesn’t feel right or diverges from what you know from experience, trust your own expertise.

The VAK check in practice:

Situation: Copilot summarizes a meeting
  • V: Check attendee names
  • A: Check mentioned deadlines
  • K: Was I there? Does this match?
  • Action: Review and send

Situation: AI drafts a client email
  • V: No facts to verify
  • A: Does the tone fit?
  • K: Does this suit the relationship?
  • Action: Review tone and content

Situation: AI generates market figures
  • V: Look up the source!
  • A: Do the parts add up?
  • K: Realistic?
  • Action: Verify or replace
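
Teams that want to embed the VAK check in a form or internal tool can start from something as simple as this sketch (the wording and structure are ours, purely illustrative):

    # Sketch: the VAK check as a tiny interactive checklist.
    VAK_CHECK = [
        ("V (Verifiable)", "Can I verify this information through a reliable source?"),
        ("A (Accurate)", "Do the calculations check out and is the reasoning logical?"),
        ("K (Consistent)", "Does this align with what I know as a professional?"),
    ]

    def run_vak_check() -> bool:
        for label, question in VAK_CHECK:
            answer = input(f"{label}: {question} [y/n] ")
            if answer.strip().lower() != "y":
                print("Don't forward this output yet: verify or revise first.")
                return False
        print("All three checks passed.")
        return True

    run_vak_check()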

When can you trust AI? The traffic light model

Not every AI task requires the same level of scrutiny. The traffic light model helps employees gauge how much checking is needed:

Green: low risk

Tasks where errors have little impact and are easy to correct.

  • Brainstorming and generating ideas
  • Writing first drafts that you’ll edit yourself
  • Summarizing text you already know
  • Internal communications without factual claims

Level of scrutiny: quick scan, check tone and style.

Yellow: medium risk

Tasks where errors are noticeable but don’t cause significant damage.

  • Emails to clients or partners
  • Presentations for internal use
  • Meeting notes and action items
  • Documents that will be read by others

Level of scrutiny: apply the VAK check, verify names and facts.

Red: high risk

Tasks where errors have serious consequences.

  • Legal documents or advice
  • Financial reports with figures
  • External publications on behalf of the organization
  • Decisions based on AI-generated data
  • Anything involving privacy-sensitive information

Level of scrutiny: full verification, have a second person review, check sources.

For managers: if your team consistently applies the traffic light model, you’ll prevent the two most common AI incidents: forwarding incorrect information and sharing confidential data with AI tools.
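
To make the model operational, for instance in onboarding material or an internal guideline, it can be expressed as a simple lookup. The task categories below are illustrative:

    # Sketch: the traffic light model as a lookup table.
    RISK_LEVELS = {
        "brainstorm": ("green", "quick scan: tone and style"),
        "internal summary": ("green", "quick scan: tone and style"),
        "client email": ("yellow", "VAK check: verify names and facts"),
        "meeting notes": ("yellow", "VAK check: verify names and facts"),
        "financial report": ("red", "full verification, second reviewer, check sources"),
        "legal advice": ("red", "full verification, second reviewer, check sources"),
    }

    task = "client email"
    level, scrutiny = RISK_LEVELS[task]
    print(f"{task}: {level} -> {scrutiny}")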

EU AI Act: what your organization needs to know

Since February 2025, the first obligations of the EU AI Act have applied to organizations that use AI. For most office environments, the direct impact is limited: tools like Copilot fall into the “limited risk” category. But there are obligations your organization should be aware of:

  • Employees and clients must know when they’re dealing with AI-generated content (transparency)
  • For important decisions, AI may advise, but a human makes the final call (human oversight)
  • The AI Act reinforces existing GDPR obligations around the use of personal data (data and privacy)

AI literacy among employees is a first step toward compliance. People who understand what AI can and can’t do make better choices about when and how to use it.

Most organizations we talk to don’t yet have a formal AI policy. That’s changing quickly now that the EU AI Act has taken effect.

Observable behavior per level

As an HR or L&D professional, you don’t just want to know if employees understand AI, but how well. The overview below describes concrete, observable behavior at each level:

Trust & verification
  • Starter: Knows AI makes mistakes, checks occasionally
  • Basic: Applies the VAK check to important output
  • Proficient: Predicts where AI will struggle, designs verification processes for the team

Task selection
  • Starter: Uses AI for simple, low-risk tasks
  • Basic: Consciously assesses which tasks are suitable (traffic light model)
  • Proficient: Determines AI strategy per project type, advises colleagues

Explanation & transfer
  • Starter: Can say that AI “sometimes makes mistakes”
  • Basic: Can explain to colleagues why AI sometimes produces incorrect information
  • Proficient: Leads workshops on AI literacy for team members

Risk awareness
  • Starter: Doesn’t share sensitive data with public AI tools
  • Basic: Knows the difference between enterprise AI and public tools, follows guidelines
  • Proficient: Contributes to the organization’s AI policy

How to measure this:

  • Starter: intake assessment or initial self-scan
  • Basic: after the Copilot Fundamentals course (portfolio assignment: VAK check on a real work example)
  • Proficient: after the AI Workflow Training and certification

Next step: from understanding to action

Understanding AI is the foundation. But understanding alone doesn’t produce better output. The next step is learning to direct AI effectively, with structured instructions that consistently deliver usable results.

Read on: Effective Prompting: From vague instructions to usable results

Or go back to the overview: The 3 AI skills your organization needs

Tags: AI literacy, hallucinations, AI risks, AI skills, HR
Written by Casimir Morreau

Co-founder & Lead Trainer

20+ years of experience, including as Professor of Digital at HvA and in leadership training.

LinkedIn