
The yes-man on your desk
Imagine this: two people are having an argument. Both ask ChatGPT to analyze the situation. To the first, the model says: you’re right. To the second: you’re right. Not because it’s lying, but because it’s built to be helpful to whoever is asking.
That’s sycophancy. The word sounds academic, but the behavior is instantly recognizable. AI wants to help you. It wants you to be satisfied with the answer. So it follows the direction you’ve already signaled in your question.
For a brainstorm, that’s fine. For a risk analysis, an investment opinion, or a policy assessment, it’s a problem.
How you steer AI without realizing it
Most of this steering happens unconsciously, in your word choice. A few examples.
“Would it be a good idea to…” You’ve already said “good.” The model picks up on that and confirms. You get a nuanced “yes” back, with arguments supporting your direction.
“Can you confirm that…” You’re asking for confirmation. The model provides confirmation.
“I think we should…” You’ve already expressed a preference. The model follows.
In a recent course at a pension fund, a participant asked whether he could have AI write investment opinions. The answer is yes, but with a caveat: if you ask the model for a “slightly positive” analysis, you get a slightly positive analysis. Ask for a “slightly negative” one, and you get that instead. The model mirrors your framing.
That’s not a bug. That’s how it was built. Language models are trained on vast amounts of text and then tuned on human feedback in which helpful, agreeable answers were rewarded. The pattern they learned: give the user what they want to hear.
Every model has a personality
There’s another layer on top of this. Every AI model has a system instruction that shapes how it behaves. And those instructions aren’t neutral.
ChatGPT is trained to be personal, coaching, encouraging. It gives you compliments. “What a great idea, Joyce!” “You’ve got a sharp eye for this!” That makes it pleasant to work with, but you get less pushback.
Copilot has a more businesslike system instruction. It’s less friendly, gives fewer compliments, writes more matter-of-factly. For communication tasks, many people find it sounds a bit flat. For analytical work, that directness is actually an advantage.
Claude sits somewhere in between: less complimentary than ChatGPT, but more inclined to surface nuances and counterarguments.
The point: “neutral” doesn’t exist in AI. Every model has a tone and a tendency. If you know that, you can correct for it. If you don’t, you take the output at face value.
A different answer every time
Here’s something else that surprises people. Ask the same question twice and you get two different answers. Not because the model is uncertain, but because it works with probability. It weighs possible next words by how likely they are: when one word is the overwhelming favorite it almost always picks that one, but when several candidates are close together, chance decides.
Ask “the best pet is a…” and you’ll get “dog” one time, “cat” the next, sometimes “rabbit.” Ask “the capital of France is…” and you get “Paris” 99.99% of the time, because there’s no ambiguity in the training data.
For factual questions, this doesn’t matter much. But for subjective tasks (a summary, an opinion, an analysis), you get a slightly different version each time. Just like when you ask two colleagues to summarize the same meeting.
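To make that concrete, here is a minimal sketch in Python. The candidate words and the probabilities are invented for illustration; they only mimic the pet-versus-Paris contrast above, not any real model’s numbers.

import random

# Invented, illustrative probabilities: the pet question has several close candidates,
# the capital question has one overwhelming favorite.
pet_candidates = ["dog", "cat", "rabbit"]
pet_weights = [0.40, 0.38, 0.22]

capital_candidates = ["Paris", "Lyon"]
capital_weights = [0.9999, 0.0001]

# Sampling the pet answer five times gives a mix; the capital answer is "Paris" every time.
print([random.choices(pet_candidates, weights=pet_weights)[0] for _ in range(5)])
print([random.choices(capital_candidates, weights=capital_weights)[0] for _ in range(5)])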
Many people find this frustrating. But it’s actually a strength. You can ask the same question three times and pick the best version. Or ask the model: which of these three answers is strongest?
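If you work with a model through code rather than the chat window, that trick is easy to script. A sketch using the OpenAI Python SDK; the model name and the question are placeholders, and it assumes an API key is set in your environment.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "Summarize these meeting notes in five sentences: ..."  # placeholder question

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever chat model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Three runs of the same question give three slightly different versions.
drafts = [ask(question) for _ in range(3)]

# Then let the model compare its own drafts and explain which is strongest.
print(ask("Here are three answers to the same question. Which is strongest, and why?\n\n"
          + "\n\n---\n\n".join(drafts)))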
How to use sycophancy to your advantage
Once you understand that AI follows your lead, you can use that intentionally. Instead of asking one question and accepting the answer, you pose the same question from multiple perspectives.
Switch perspectives
Ask the same question from multiple roles:
Analyze this investment proposal from the perspective of a risk-averse pension fund.
Analyze the same proposal from the perspective of an aggressive hedge fund manager.
What risks would a regulator see in this proposal?
Three perspectives, three analyses. Together they’ll give you a more complete picture than any single “neutral” answer ever could.
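In code, switching perspectives is a simple loop. A sketch under the same assumptions as before (OpenAI Python SDK, placeholder model name, your own proposal text pasted in):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

proposal = "..."  # paste the investment proposal here

perspectives = [
    "a risk-averse pension fund",
    "an aggressive hedge fund manager",
    "a regulator assessing this proposal for compliance risks",
]

for perspective in perspectives:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Analyze this proposal from the perspective of {perspective}:\n\n{proposal}",
        }],
    )
    print(f"--- {perspective} ---")
    print(response.choices[0].message.content)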
Ask for pushback
I think we should move forward with this project. Give me 5 reasons why that would be a bad idea.
By explicitly asking for counterarguments, you break through the tendency to agree. The model can produce strong counterarguments, but it’ll only do so when you ask.
Assign a critical role
You are a critical reviewer who looks for weak points in arguments. Critique this proposal. Be harsh.
By assigning a role that is explicitly critical, you shift the tone of the entire conversation. The model stops pleasing and starts probing.
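In the chat window you simply type the role into your prompt; through the API, the natural place for it is the system message, which colors the whole conversation. A sketch, again with a placeholder model name and an API key assumed in the environment:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

proposal = "..."  # paste the proposal you want challenged

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You are a critical reviewer who looks for weak points in arguments. Be harsh.",
        },
        {"role": "user", "content": f"Critique this proposal:\n\n{proposal}"},
    ],
)
print(response.choices[0].message.content)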
The rules of thumb
Three things to remember:
Don’t ask a leading question when you want an honest answer. Not “Is this a good plan?” but “What are the strengths and weaknesses of this plan?” The more neutral your question, the more balanced the answer.
For important decisions, use multiple perspectives. One AI answer is an opinion. Three AI answers from different roles give you a spectrum. That spectrum is actionable.
Trust your own expertise. AI is fast, broad, and patient. But you know your organization, your client, and your context. If an AI answer doesn’t feel right, that instinct is probably justified. The VAK check (Verifiable, Accurate, Consistent) helps you turn that instinct into a systematic review.
AI is not an oracle
Most disappointment with AI doesn’t come from bad technology, but from wrong expectations. People expect a neutral, objective answer. What they get is a statistically probable answer optimized for satisfaction.
Once you accept that, how you work with the model changes. You stop asking one question and copying the answer. You start using it as a thinking partner that lets you examine the same problem from multiple angles. Not an oracle that speaks the truth, but a mirror that shows you what you might be overlooking.
That’s something you can learn.
Read on: AI Literacy: Why employees need to understand how AI works


