The AI Competency Model
Every employee is somewhere on the path from beginner to advanced AI user. Our model makes that visible and measurable.
Observable behavior at each level
No vague scores. For each skill, we describe exactly what someone does at each level.
Beginner
Checks AI output for errors and knows that answers can be convincing but incorrect.
Writes simple instructions and experiments with different phrasings.
Provides minimal context in prompts and notices that adding background information improves output.
Proficient
Recognizes hallucinations and deliberately chooses which tasks are suitable for AI.
Deliberately applies task, context, role and format, and builds a library of proven prompts.
Organizes reference documents and feeds the model exactly the information it needs.
Advanced
Predicts where AI fails and designs workflows with built-in verification.
Analyzes which prompt component is missing and builds reusable frameworks for the team.
Manages project-wide context across multiple conversations and consistently saves 40%+ time.
Developed based on 2,500+ participants at 100+ organizations.
How we measure progress

Baseline assessment
Before each course, participants complete a short quiz (5–10 min). That tells us where your team stands so we can tailor the session.
Observable behavior
The model above describes concrete behavior at each level. After the course, you can demonstrate which level your team has reached.
Optional certification
A 20-minute exam with an AI skills certificate for each participant.
Three choices behind the model
Sector-specific behavioral indicators
We tailor examples and exercises to your sector, so participants see their own work reflected.
Tool-independent
We teach skills, not buttons. The model works with Copilot, ChatGPT and Claude.
EU AI Act as starting point
Risk classification and responsible use are built in as a foundation, not an afterthought.
Frequently asked questions
What goes wrong when employees stay at beginner level?
They treat AI as a search engine: type a question, accept the first answer and paste it in without checking. Errors in reports, emails and analyses slip through unnoticed. The productivity gains AI promises get wiped out by rework.
Why isn't prompt training alone enough?
A good prompt without AI understanding leads to blind trust in the output. And without context management, you keep getting generic answers that still need rewriting. The three skills reinforce each other: AI becomes reliably useful only when you train them together.
What's the cost of skipping context management?
People restart every AI session from scratch, repeat the same instructions and get output that doesn't fit their project. Teams that learn context management consistently save 40%+ time on AI tasks, simply by giving the model the right background information.
How do I know if my team is making progress?
The competency model describes concrete, observable behavior at each level. After the course, you can see whether someone checks output (beginner), systematically builds prompts (proficient) or designs workflows with built-in verification (advanced).
This model in practice
See how we apply the model in our training program, or book a consultation.
Developed by Robert Vos and Casimir Morreau