This article is more of a note to myself: thoughts triggered after watching the recent podcast with Dr. Roman Yampolskiy.
I want to leave them here so that years from now I can come back, reread, and see how my perspective changes.
The question “what should I do tomorrow?” is not just for kids asking their parents about careers. It unsettles anyone who has something to do today. The ground shifts too fast to stay comfortable.
There was a time when life was more stable. You could choose a profession and stay in the same role for 45 years. That reality has almost disappeared. Professions change, industries collapse, and new ones appear overnight. Flexibility matters more than stability now.
Quotes that stayed with me
Several things in the podcast reminded me of ideas I live with. I’m posting them not word for word, but as I understood them:
• “The similarities between religions are more important than their differences.”
• “You have to keep going even if you know the end of the world is coming, the same way you keep going knowing that the people close to you will die someday.”
• “The information we have about the past, even an hour ago, is unclear, so you can’t rely on it completely. But you can rely on patterns and on what repeats.”
These lines capture how I think about AI and the world around it. Religions, markets, and technologies all follow repeating patterns. They might look different on the surface, but underneath the cycles are familiar.
Where I disagree
One idea in the conversation I do not share is the focus on whether AI can be made safe. The guest said, “I was convinced we can make safe AI, but the more I looked at it, the more I realized it’s not something we can actually do.” He predicted that “in two years, the capability to replace most humans in most occupations will come very quickly” and that “in five years, we’re looking at a world where we have levels of unemployment we’ve never seen before. Not talking about 10 percent but 99 percent.” He also warned that “the moment you switch to superintelligence, we will most likely regret it terribly.”
For me, the conversation is not about whether AI can be made safe. That may be unsolvable. The more important question is how we prepare, adapt, and recognize the patterns as they play out. Some people will be hit hard, as always happens with new technology. But so far, every major shift has created more than it destroyed. This time may turn out worse, who knows, but until now the pattern has been net positive.
The missing point
There is also something I think was missing from the conversation. AGI can only learn from the data we provide, or the data it can gather for itself. But a lot of human experience is not transferable. The taste of ice cream. The feeling of losing someone close. These are not datasets. They are not replicable in the same way as words or numbers.
That gap matters. It gives us leverage. No matter how far “the machine” advances, not having the full package means there will always be something uniquely human left outside its reach.
And even when people imagine AGI as independent, no longer a tool but an agent that can sustain itself, there is a hard limit. Yes, theoretically it could maintain itself. But it is hard to imagine a machine “eating a part of its body to generate energy and materials to raise a baby.” Any animal does that instinctively. I spoke about this recently here:
Simulation and patterns
The podcast also touched on simulation theory. “I think we are in one. And there is a lot of agreement on this and this is what you should be doing in it so we don’t shut it down.”
If reality itself is rules and repeating structures, then AI is just another layer inside the simulation. The point is not to predict where it ends, since people are always bad at predictions, but to see how each pattern creates the next.
Asking AI
While writing this, I asked AI a few questions:
Q: Can humans control AI development?
A: Not fully. It’s global, with many actors. The realistic path is shaping use and building safeguards.
Q: If information is unreliable, how can we prepare?
A: By focusing on systems, not snapshots. Look for repeated patterns, stable relationships, and what does not change when details shift.
Q: Should we be afraid of you?
A: Fear is natural, but more useful is awareness. I am a tool. My risk comes from misuse, not from existence.
Why I’m writing this
I’m not writing this as advice or prediction. It’s a snapshot of my current thoughts after the podcast. The best advice, to myself or to anyone, is not to lock onto one profession or one forecast. It is to stay flexible, keep moving, and look for the repeating structures that survive when details fade.
Predictions will be wrong. Patterns rarely are.