The BS Factor: Why Humans Will Always Outmaneuver Machines
AI learns our stories. Humans live the mess behind them.
Humans are not good at transmitting reality.
We compress uncertainty into stories. We simplify causality. We remove randomness. We rewrite events so they feel coherent, intentional, and explainable.
This is not moral failure. It is a coping mechanism. Reality is messy, probabilistic, and uncomfortable. Stories make it survivable. They make it teachable.
This distortion shows up everywhere. In education. In business. In social media. And now, in how we train machines.
Once you see the pattern, the rest becomes obvious.
1. The Professor Problem
Business education is a clean version of a dirty process.
Students are taught through success stories, frameworks, and confident explanations. Failures are minimized. The uncomfortable parts are reframed as “bad luck” or removed entirely.
The result is predictable. Graduates who can repeat models but struggle to reason under uncertainty.
Real critical thinking is built on what is missing. Constraints. Tradeoffs. Failed experiments. Decisions that were reasonable at the time and still led to collapse. The reasons a good idea died anyway.
Research on entrepreneurial learning consistently shows that failure can be a powerful input, but only when it is confronted directly. Learning depends on reflection, context, and honest processing. When failure is softened to protect ego, it stops teaching.
When educators hide their own failures or rewrite them as abstract “lessons learned,” students lose the most important dataset: what actually happened and why.
The damage is not just incomplete information. It is causal distortion.
Students over-attribute outcomes to intelligence, grit, or vision. They under-attribute to constraints, execution detail, and chance.
If the dataset is curated, the student becomes confidently wrong.
2. The More Dangerous Problem: We Often Don’t Know Why Things Worked
Hiding failure is not the worst issue.
Not knowing why something succeeded is worse.
A startup raises capital. The founder explains it as product-market fit, timing, or traction. The real driver may have been invisible. A fund needed to deploy capital before quarter end. Other deals collapsed. Internal incentives forced a yes.
The founder never sees this. They only see the outcome. So they construct a story that feels true.
This happens everywhere. A hire works out. Was it the process or luck? A launch succeeds. Was it positioning or timing? A deal fails. Was it execution or an unrelated external event?
Humans reconstruct narratives after outcomes. Uncertainty becomes inevitability. This is not dishonesty. It is cognition.
The business world is full of single-sample experiments presented as universal lessons. Someone succeeds once, then teaches a framework. But running an experiment once does not allow you to separate signal from noise.
Maybe the success came from geography. Or timing. Or an introduction they did not earn. Or a competitor imploding quietly.
They will never know. But they will explain it anyway.
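The single-sample problem can be made concrete with a toy simulation (the numbers and the two hypothetical founders below are invented for illustration, not drawn from any real data). Success is modeled as a blend of skill and pure chance. In one trial, the less skilled founder frequently wins; only repetition reveals the underlying difference.

```python
import random

random.seed(0)

def outcome(skill, luck_weight=0.5):
    # Success probability = a blend of skill and pure chance.
    return random.random() < (1 - luck_weight) * skill + luck_weight * 0.5

# Two hypothetical founders: one genuinely more skilled than the other.
strong, weak = 0.8, 0.3

# A single "experiment" per founder, as in real life.
print("single trial:", outcome(strong), outcome(weak))

# Only repetition separates signal from noise.
trials = 10_000
strong_rate = sum(outcome(strong) for _ in range(trials)) / trials
weak_rate = sum(outcome(weak) for _ in range(trials)) / trials
print(f"long-run success rates: {strong_rate:.2f} vs {weak_rate:.2f}")
```

With thousands of trials the skill gap is obvious; with one trial it is invisible. A person who runs the experiment once sees only a coin flip, then writes a framework about it.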
3. Social Media as a Bias Engine
Social media is not fake. It is selectively coherent.
It rewards confidence, certainty, and clean narratives. Ambiguity does not travel well.
Nobody posts: “I don’t know why this worked. It might fail tomorrow.”
They post: “Here are the five lessons learned.”
Even honest attempts at reflection require compression. Random events become strategy. Chaos becomes intent.
So the internet is not a record of reality. It is reality filtered for status, incentives, and narrative clarity.
This matters because modern AI systems learn from large volumes of human-generated text. When the text is biased toward polished stories, the model learns polished stories.
Just like the student.
AI is trained on the rewritten version, not the confused, partial, contradictory reality underneath.
4. AI as a Student of Our Edited Selves
The simplest mental model of AI is this.
It is a student that never lived a life but read everything.
The problem is that much of what it reads is performance.
AI learns what humans say happened, not what actually happened. Humans lie constantly. Not always deliberately. Often socially. Often unconsciously.
In machine learning, this shows up as bias in data and bias in models. Training data reflects omissions, selection effects, and social incentives. The model absorbs them.
Better documentation helps. Transparency helps. But none of that fixes the underlying issue.
The source material is already distorted.
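The selection effect is easy to demonstrate with a toy model (everything here is an invented illustration, not a claim about any real dataset). Suppose outcomes are mostly luck, winners always narrate a strategy, and only winners publish. A learner that reads only the published corpus sees a correlation that does not exist in the population.

```python
import random

random.seed(1)

# Hypothetical population: success is mostly luck, winners always
# claim a strategy afterward, and only winners publish their story.
population = []
for _ in range(10_000):
    succeeded = random.random() < 0.1                      # ~10% succeed, by chance
    claims_strategy = succeeded or random.random() < 0.2   # winners always narrate one
    published = succeeded                                  # failures rarely write posts
    population.append((succeeded, claims_strategy, published))

# By construction, claiming a strategy does not cause success.
# But a model trained only on published text sees near-perfect association.
corpus = [p for p in population if p[2]]
rate_in_corpus = sum(1 for s in corpus if s[1]) / len(corpus)
rate_overall = sum(1 for s in population if s[1]) / len(population)
print(f"'had a strategy' in the training corpus: {rate_in_corpus:.0%}")
print(f"'had a strategy' in the full population: {rate_overall:.0%}")
```

The corpus says strategy and success go together every time; the population says otherwise. No amount of model transparency repairs this, because the distortion happened before the data was collected.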
5. Why “AGI” Might Never Understand Humans
Many arguments focus on consciousness or embodiment.
A simpler explanation exists.
We are training machines on a version of humanity optimized for approval.
We hide incompetence. Contradictory motivations. Hypocrisy. Confusion about causality. Random events we later call vision. Decisions made for one reason and justified with another.
Most importantly, we hide the moments when we genuinely do not know whether success came from skill or luck.
Lies can sometimes be detected. Confusion cannot.
When humans themselves cannot separate signal from noise in their own lives, there is nothing clean to teach.
We publish press releases. Machines become excellent at writing press releases.
They learn what intelligence sounds like, not what decision-making looks like under pressure.
As long as this continues, AI will misunderstand the most human parts of us. The irrational. The social. The ego-driven. The inconsistent. The private.
Not because it cannot model them. Because we refuse to describe them honestly.
6. The Asymmetry That Remains
Humans retain an advantage over machines, but not for the reasons people usually claim.
Not intelligence. Not creativity. Not consciousness.
The advantage is structural.
Humans do not operate on clean data. We act on incomplete, contradictory, and often wrong explanations of our own behavior. We make decisions without understanding the full causal chain. We revise our reasons after the fact. We pursue outcomes for one reason and justify them with another.
AI learns from the explanations. Humans act inside the confusion.
Machines need consistent signals to optimize. Humans thrive in inconsistency. We change goals mid-stream. We respond to social pressure, fear, ego, and status in ways we do not fully articulate, and often do not even notice.
AI learns the narrative layer. Humans operate below it.
This creates a permanent mismatch.
As long as humans continue to curate truth, simplify causality, and rewrite uncertainty into stories, machines will be trained on a made-up world rather than the one humans actually inhabit.
That curated world does not exist.
So AI will continue to be highly competent, and consistently wrong.
Not because it lacks capability, but because it is learning from performance instead of practice.
This is why humans will always outmaneuver machines.
Not by being more rational, but by being less legible.
Not as a tactic, but as a condition of being human.

