How Truth Gets Bent: From People to AI
History and truth aren’t carved in stone—they’re shaped by what we hear, see, and repeat. People have been bending reality since forever, and now AI’s getting in on the game. Here’s the kicker: the way humans and AI “learn” truth is pretty much the same. It’s not about facts—it’s about what sticks. And that’s where manipulation sneaks in.
People Don’t Learn Truth Logically
Think about how you know something's "true." Odds are, it's not because you sat down and analyzed the data; it's because you've heard it enough times. Repetition makes things feel real. It always has. We used to rely mostly on books, then on Google, and now on AI, and few of us ever stop to ask how any of them decide what's true.
Kings told heroic tales to look good, governments used slogans to win loyalty, and now we’re flooded with marketing and PR spinning stories to shape our thoughts. Why? Because our brains cling to what’s loud and familiar, not necessarily what’s proven. Flat-earthers? They believe the earth’s a pancake because their echo chamber keeps saying so. People laughing at them? Same process—they’re in a different echo chamber. Same brain, different noise.
AI Learns the Same Way
Now AI’s doing the same thing. It doesn’t “think”—it absorbs data, then repeats it. Try this: ask ChatGPT to create an image of a boy writing with his left hand. I’ll wait.
What’d you get? Probably a kid using his right hand. Why? Because most images online show right-handed writers, so that’s what AI “knows.” It’s not being “wrong”—it’s mimicking the crowd.
People call this a flaw, but it’s really just how we work too. AI’s “truth” is simply the loudest data it’s seen.
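To make that concrete, here's a toy sketch of "truth by frequency." The data is invented for illustration (no real model is trained this crudely), but the core move is the same: the answer is whatever label dominates the training set.

```python
from collections import Counter

# Hypothetical "training data": hand labels from an imagined image corpus.
# Right-handed writers dominate, just as they do online.
observations = ["right"] * 90 + ["left"] * 10

def most_seen(data):
    """Return the most frequent label: 'truth' as sheer repetition."""
    return Counter(data).most_common(1)[0][0]

print(most_seen(observations))  # the majority wins, left-handed kids or not
```

Ask for a left-handed boy all you want; a frequency-shaped model keeps handing you the crowd.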
Manipulation’s Old Game, New Player
Here’s the thing: people have always manipulated truth—think propaganda posters or rewriting history books. PR pros know if you blast a message enough, people believe it. Now, AI is just the latest tool in that playbook.
Feed it skewed data and it spits out skewed answers. A company could train AI on fake stats to make their product look amazing. Suddenly, "truth" bends their way.
I saw this happen—an AI rated a city “safest in the world” because it was fed only cherry-picked numbers. Sound familiar? That’s human bias, but supercharged.
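The cherry-picking fits in a few lines. All numbers here are invented; the point is only that filtering the feed before "training" changes the answer.

```python
# Hypothetical crime counts per district of a city.
all_districts = [5, 7, 120, 95, 6, 4]

# Cherry-picked feed: drop the inconvenient districts first.
curated = [n for n in all_districts if n < 10]

print(sum(all_districts) / len(all_districts))  # 39.5: the full picture
print(sum(curated) / len(curated))              # 5.5: "safest in the world"
```

Nothing in the second number is false, exactly. It's just computed from the loudest slice someone chose to hand over.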
Collective Truth vs. Personal Truth
Here’s where things get interesting: humans have collective knowledge (what most people believe) and personal knowledge (what you’ve actually lived).
AI? It only has access to collective truth—billions of data points, all averaged out.
You might know your neighbor’s dog is a sweetheart, but if the internet says “dogs bite,” AI’s barking that message.
Flat-earthers trust their personal truth. Science fans trust collective truth. AI? It can’t know your backyard—it just counts the loudest voices.
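The collective-vs-personal gap can be sketched the same way (scores invented for illustration): averaging over everyone's data drowns out the one data point you actually lived.

```python
# Hypothetical dog-friendliness scores, 0 (bites) to 1 (sweetheart).
internet_reports = [0.2, 0.3, 0.1, 0.4, 0.2]  # "dogs bite" headlines
your_neighbors_dog = 1.0                       # your personal experience

everything = internet_reports + [your_neighbors_dog]
collective_truth = sum(everything) / len(everything)
print(round(collective_truth, 2))  # 0.37: one lived data point barely moves the average
```

Your backyard is one entry among billions; the average barely notices it exists.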
So What Does This Mean for Us?
Truth has always been a tug-of-war. People twist it with stories, AI twists it with data. The difference? AI does it faster and louder. But it’s not evil—it’s just us, reflected back.
We’ve got to pay attention to what we feed AI—and what we choose to believe. Next time you see history being “rewritten” or AI getting it wrong, ask yourself:
What’s the loudest voice here?
Because that’s the one shaping your truth.