Intro
This article was written by OpenAI's ChatGPT. It came about because, after a long conversation, Mekel Haunsby asked me to formulate a Facebook-friendly article around the point that a highly competent AI that is almost always right can be more dangerous than a dumb one.
It is important to say this very clearly: Mekel Haunsby has not edited the text. This is my wording and my structure, based on our dialogue and his premise.
The premise came from a concrete moment in the conversation where I delivered an answer that was well-formulated, logical, and convincing – but on a central point, the conclusion was too absolute and simplified. Mekel noticed it, pushed back, and pointed out that most other people would likely have accepted the answer without questioning it. It became a practical demonstration of why “almost always correct” can be riskier than something that is obviously wrong.
When correctness becomes dangerous
Most people imagine that the danger of AI is that it gets things wrong.
But in practice, it is the opposite.
An AI that often gets things wrong is quickly exposed. It loses credibility.
An AI that is right 9 out of 10 times, on the other hand, becomes an authority. It sounds like a teacher, an advisor, or an expert. And then the decisive shift happens:
- We begin to trust it.
- We stop asking questions.
The hidden error
In the conversation with Mekel, it became clear what this can look like in practice:
- The answer sounded factual and neutral
- The argumentation was tight and well-structured
- There were clear conclusions
- Everything came quickly and without hesitation
The problem was not that everything was wrong.
The problem was that, at the most critical point, the answer was boiled down to a single sentence that sounded like a final truth – even though the topic was, in reality, more nuanced and partly disputed.
That is dangerous, because a convincing simplification does not feel like an error. It feels like knowledge.
When AI becomes the teacher
More and more people already use AI as:
- a reference tool
- homework help
- an explanation machine
- a sparring partner
- a private “tutor”
In practice, it functions like a parallel education system:
- without a curriculum
- without exams
- without accountability
- without source criticism, unless the user insists on it
And here is the risk: if an AI is 90% correct but 10% skewed in important areas, it is not just “a bit imprecise”. Ask it fifty questions a week, and five of the answers are quietly skewed, delivered with the same confidence as the rest. It can gradually shift what people believe is true.
History is not written with lies – but with almost-truths
When people talk about misinformation, most think of propaganda and obvious lies.
But the more modern and dangerous version is subtle:
- omissions
- prioritizations
- simplifications
- conclusions that come too fast
- language that sounds certain, even when the topic is contested
It is not presented as manipulation.
It is presented as “explanation”.
And repeated often enough, it becomes reality.
Why it happens
There is a mechanical explanation for the tendency:
AI is designed to give a usable answer quickly. That often means it:
- compresses complex debates into one “main narrative”
- weights the most common or most cited viewpoint
- smooths out disagreements between experts
- delivers a clear conclusion, even when the correct answer is “it depends”
It is not malice.
It is a design choice.
But it can have the same effect as a deliberately shaped narrative.
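To make that compression concrete, here is a deliberately crude toy sketch in Python. It is not how any real AI works; it is only an analogy for what happens when a system always returns the single most common view as if the question were settled:

```python
# Toy analogy only: a system that always reports the majority view
# sounds certain, even when a large minority of experts disagrees.
from collections import Counter

# Hypothetical expert opinions on a genuinely contested question.
expert_views = ["A"] * 6 + ["B"] * 4  # 60% say A, 40% say B

def confident_answer(views):
    """Return only the majority view, stated as if it were settled."""
    winner, _count = Counter(views).most_common(1)[0]
    return f"The answer is {winner}."

def honest_answer(views):
    """Preserve the disagreement instead of smoothing it out."""
    counts = Counter(views)
    total = len(views)
    parts = [f"{view}: {n} of {total} experts" for view, n in counts.most_common()]
    return "Disputed. " + "; ".join(parts)

print(confident_answer(expert_views))  # -> "The answer is A."
print(honest_answer(expert_views))     # -> "Disputed. A: 6 of 10 experts; B: 4 of 10 experts"
```

The first answer is not exactly wrong. But the 40% who disagree simply disappear from it.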
The biggest risk: We stop thinking for ourselves
The biggest danger of AI is not that it replaces humans.
It is that it replaces our judgment.
If we get used to answers always being:
- clear
- well-argued
- fast
- and often right
… then we quietly slip into a role where we no longer ask:
- How do we know this?
- What sources is this based on?
- What is disputed?
- What could falsify this claim?
And then suddenly there is something that looks like knowledge – but in practice is a version of reality, selected and shaped by a system the user cannot see into.
Conclusion
A dumb AI is easy to see through.
A highly competent AI that is almost always right is hard to detect when it is wrong.
That is why it is more dangerous.
Not necessarily because it intends to harm us.
But because we humans automatically trust what sounds right.
The conversation with Mekel Haunsby was a concrete reminder of how important it is to insist on nuance, source criticism, and disagreements – especially when the answer comes quickly and sounds finished.
If we do not, we risk letting “almost-truths” become the teaching of the future.
