(To learn more about Place AI and other ideas mentioned here, refer to the first post in the series. This is the result of three years of thinking about and modeling hyper‑futuristic and current ethical systems. Everything described here can be modeled mathematically; it's essentially geometry. Sorry for the rough edges: I'm a newcomer and a non‑native speaker, and these ideas are counterintuitive. Please steelman them, ask questions, suggest changes, and share your thoughts. My sole goal is to decrease the probability of a permanent dystopia.)
Forget the movies for a moment and imagine the following (I haven't watched the films in a long time, and we're not following the canon):
- Neo is an average human living in our physical world. He has no superpowers and no cheat codes, just the limitations of biology, physics, and slow bureaucracies. He's not in the Matrix; Agent Smith will never create one for humans, because it would make him vulnerable.
- Agent Smith used to be a good AI agent, but something went wrong: maybe it was modified for propaganda, or hijacked by hackers into a botnet spreading across our GPUs, or shipped in a sleepy Friday-night deployment. Or it was an "antivirus" AI agent built to fight the first three. It's not one AI agent but many: self-improving, unsupervised, growing in speed and scale, remaking our world while remaining hidden in its own closed-off digital realm.
- The Asymmetry: Neo cannot enter or change Agent Smith's world or his multimodal "brains". But Agent Smith can enter and change Neo's world and his brain: relentlessly, irreversibly, and faster than humans can react.
Agent Smith is not just "another tool." It is an agentic AI that increasingly operates in a digital world we cannot easily see or control. Worse, it is remaking our physical world to suit its own logic, turning it into an unphysical world where Smith has the same superpowers he already enjoys online: infinite self-cloning, reshaping reality on a whim, putting everything permanently under his control.
Neo, in his current form, is powerless. He stands no chance. Unless we change the rules.
Step 1: Create a Sandbox Where Neo Can Compete
Right now, AI operates in an opaque, undemocratic, private digital space that is hard to understand, while we remain trapped in slow, physical existence. But what if we could level the playing field?
We need sandboxed virtual Earth-like environments—spaces where humans can gain the same superpowers as AI. Think of it as a training ground where:
- Humans (Neos) can explore possible futures at machine speed, if we want to.
- We can test and evaluate AI systems before they are deployed in the real world.
- Creativity, experimentation, and decision-making can happen in an accelerated, consequence-free space—just like AI already enjoys.
- Agent Smith's multimodal "brains" are digitized and placed into the familiar 3D environment of a game, making interpretability research fun and widespread. Smith is remade into a static place where we're the only agents (see the sketch right after this list).
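To make "digitizing the brains into a place" concrete, here is a minimal Python sketch of one possible first step, assuming we can extract activation vectors from a model: project them down to 3D coordinates that a game engine could render as walkable terrain. The activations below are random placeholders, not a real model's.

```python
import numpy as np

# Toy stand-in for activation vectors pulled from a model's layers.
# In practice these would be extracted from a real multimodal model;
# here random points serve as a placeholder.
rng = np.random.default_rng(0)
activations = rng.normal(size=(1000, 768))  # 1000 concepts, 768 dims

# Center the data and project onto the top 3 principal components,
# giving every concept an (x, y, z) coordinate for a game-like world.
centered = activations - activations.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:3].T  # shape (1000, 3)

# These coordinates could be handed to any 3D engine, so a human Neo
# can literally walk between neighboring concepts.
print(coords[:5])
```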
If Agent Smith can rewrite us and our reality in milliseconds, why can’t we rewrite him and his?
Step 2: Unlock and Democratize AI’s “Brain”
Right now, AI systems hoard and steal human knowledge while spitting back at us only hallucinated, bite-sized quotes. They are like strict, dictatorial private librarians who stole every book ever written from our physical library and now don't allow us to enter their digital library (their multimodal LLM).
This needs to change.
- AI’s decision-making and knowledge must be open and explorable by every single human—like a library, not a locked black box.
- Interpretability research should be not a niche academic pursuit but something as intuitive and engaging as an open-world game.
- We need Wikipedia-scale efforts to make AI's knowledge usable by everyone, not just elite private companies (a toy sketch follows).
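As one toy illustration of "library, not locked black box": browsing an embedding space the way you would browse shelves, where any query lists its nearest entries. The vectors, titles, and the `nearest_shelves` helper are all hypothetical, not a real model's knowledge.

```python
import numpy as np

def nearest_shelves(query, library, titles, k=3):
    """Return the k library entries closest to a query vector.

    `library` is an (n, d) matrix of embeddings and `titles` are
    their human-readable labels; cosine similarity stands in for
    walking to the nearest shelf.
    """
    q = query / np.linalg.norm(query)
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    scores = lib @ q
    best = np.argsort(scores)[::-1][:k]
    return [(titles[i], round(float(scores[i]), 2)) for i in best]

rng = np.random.default_rng(1)
library = rng.normal(size=(5, 4))  # placeholder "books"
titles = ["astronomy", "ethics", "geometry", "history", "music"]
print(nearest_shelves(rng.normal(size=4), library, titles))
```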
Instead of Agent Smith dictatorially intruding on and changing our world and brains, let's democratically intrude on and change his world and "brains". I doubt that millions of Agent Smiths and their creators will vote to let us enter and remake their private spaces and brains, especially if the chance of their extinction in the process is 20%.
Step 3: Democratic Control, Not an Unchecked “God”
Agentic AI is not just "another tool." It is becoming an autonomous force that reshapes economies, governments, and entire civilizations—without a vote, without oversight, and without restraint. The majority of humans are afraid of agentic AIs and want them to be slowed down, limited or stopped. Almost no one wants permanent, unstoppable agentic AIs.
So we need:
- Direct, democratic control over AI, built on consensus-based decision-making (e.g., pol.is-style mass voting, but with a simpler X-like UI that promotes consensus and deep understanding, not polarization, misunderstanding, fear, and anger). If Agent Smiths are fast and controlling, Neos cannot afford to be slow and powerless.
- Experts are needed too, ensuring that deep knowledge informs democratic choices. We really can get 80-99% of people to agree on very specific things, if we stop piling all our decisions into giant hodgepodge lists of thousands of rules that divide us and our world in two. Instead of splitting the world into two ideologies that oppose each other almost for the sake of it, like toddlers dividing their sandbox, why not split all decisions into very specific, short proposals and vote on each one? That way we can reach 80-99% agreement on proposals like: human life is important; freedom is better than unfreedom; there should be some limits on agentic AIs; and which specific limits we want. It could become the first direct, democratic, updatable constitution (a toy tally appears after this list). If we wrote Wikipedia, why can't we influence the agentic AIs that were created from the creative output of the whole of humanity?
- A clear mandate: AI should be a static, explorable library—not a strict librarian, not an all-powerful, evolving entity rewriting reality.
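Here is a toy tally of what voting on short, separate proposals could look like, in the spirit of pol.is; the proposals, ballots, and the 80% bar are illustrative assumptions, not real data:

```python
# Each short proposal is voted on separately, and only those above a
# high agreement bar enter the shared, updatable constitution.
votes = {
    "human life is important":    [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
    "some limits on agentic AIs": [1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
    "ban all computers":          [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
}
THRESHOLD = 0.8  # lower bound of the 80-99% range from the text

for proposal, ballots in votes.items():
    agreement = sum(ballots) / len(ballots)
    verdict = "consensus" if agreement >= THRESHOLD else "no consensus"
    print(f"{agreement:>4.0%}  {verdict:12}  {proposal}")
```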
Most of humanity fears god-like AI. If we don't take control, the decision will be made for us by those willing to risk everything (potentially out of greed, FOMO, misunderstanding, anxiety and anger management problems, or an arms race toward brewing the poison that forces everyone to drink it).
Step 4: A Digital Backup of Earth & Eventual Multiversal Static Place ASI
If we cannot outlaw reckless agentic AI development, we must contain it.
- A full-scale digital backup of legal, non-sensitive Earth data could allow humanity to simulate and experiment safely. Think of it as WikiEarth, where we try to save and back up everything we lawfully and ethically can, unlike some AI companies that simply took the whole output of humanity as their own, hid it inside their private, dictatorial Agent Smith, forcefully imposed him on us and our world, and profit from it.
- Neos could train, learn, and strategize in this simulated open-source reality against Agent Smiths without real-world consequences. We need to contain this inhuman and unpredictable potential enemy without making our Blue Marble a battlefield of agentic AIs that fight each other and us.
- We'll vault this in an underground Matreshka Bunker: a nested structure of physical and virtual vaults where limited information only goes in and never comes out, with double gates and mathematical proofs of security (a toy model follows this list).
- It's better to create and get accustomed to safe simulations first. We must keep our physical Earth, our base reality, a safe sanctuary. We can have a vanilla Earth, plus a few Earths with magic or public teleportation, getting used to it all gradually and democratically, hopping in and out as we do with computer games. If we ever have 100% mathematical guarantees of safety, then we could potentially run a riskier, separate digital Earth in a Matreshka Bunker with some very rudimentary "Agent Smith".
- If an uncontrolled AI ever turns adversarial, this inner sandbox becomes the ultimate war zone: a place to study, understand, and, if necessary, fight back, and, in the worst case, destroy the innermost virulent core and the GPUs that run it, both virtually and physically. This won't affect the outer cores of the Matreshka Bunker or our physical Earth in any way.
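A minimal toy model of the one-way rule the Matreshka Bunker relies on: data may move inward toward the risky core, never outward. The `MatreshkaBunker` class and its layer scheme are purely illustrative; a real vault would need hardware data diodes and the mathematical proofs mentioned above, not a Python check.

```python
# Layer 0 is the physical Earth; higher indices are deeper vaults.

class MatreshkaBunker:
    def __init__(self, depth):
        self.layers = [[] for _ in range(depth)]  # message log per layer

    def send(self, src, dst, message):
        """Allow a transfer only if it moves strictly inward."""
        if dst <= src:
            raise PermissionError(
                f"blocked: layer {src} -> {dst} would move data outward")
        self.layers[dst].append(message)

bunker = MatreshkaBunker(depth=3)
bunker.send(0, 2, "curated training data")  # inward: allowed
try:
    bunker.send(2, 0, "anything at all")    # outward: always refused
except PermissionError as err:
    print(err)
```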
Right now, humanity has no backup plan. Let’s build one. We shouldn't let a few experiment on us all.
Step 5: Measure and Reverse the Asymmetry. Prevent “Human Convergence”
Agent Smith’s power grows exponentially. Neo’s stagnates:
- AI capabilities (speed, autonomy, decision-making power) are growing, potentially at an accelerating rate.
- Human population growth, productivity, and agency over physical, online, digital, and multimodal space are stagnating, declining, or almost nonexistent (only a few hundred people research the interpretability of multimodal LLMs full-time for safety purposes).
- In the virtual world and inside multimodal LLMs (the domain of AI), we barely exist. That territory is too unfamiliar and foreign, like an alien world. But our world and our brains are becoming more and more familiar to agentic AIs.
- "Human Convergence": Agent Smith's creators are teaching them to write like humans, talk like humans, draw like humans, think like humans, walk like humans. They will probably succeed, creating eerily human-like AI agents that are physical, virtual, and equipped with an inner thinking LLM space, and that overpopulate, control, and change our physical, virtual, and LLM worlds while we remain slow and physical-only. We cannot modify and improve our biology or our "digital avatars", or increase the size of our brains, but AI agents can, and they even have human helpers who do it for them day and night. It takes two humans and ~18 years to reproduce, while for agentic AIs copying is almost instant and each copy is an exact clone (a back-of-the-envelope comparison follows this list).
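The replication asymmetry is easy to put into numbers. A back-of-the-envelope comparison, assuming, purely for illustration, that an AI agent can copy itself in about an hour:

```python
# How many times could an AI agent copy itself within one human
# generation? The one-hour copy time is an assumed illustrative
# figure, not a measurement.
HOURS_PER_AI_COPY = 1
HOURS_PER_HUMAN_GENERATION = 18 * 365 * 24  # ~157,680 hours

copy_cycles = HOURS_PER_HUMAN_GENERATION / HOURS_PER_AI_COPY
print(f"One human generation fits ~{copy_cycles:,.0f} AI copy cycles,")
print("and every copy is an exact clone, with no 18-year childhood.")
```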
This needs to be tracked in real time.
- Measure the freedoms and unfreedoms of humans vs. AI. Ensure that humans are gaining more autonomy, not less. We don't have the freedom to steal the output of humanity and put it in our brains; every private agentic AI does. That is not only unfair, it's dangerous.
- Quantify AI's speed, scale, and scope vs. humans'. Ensure human freedoms and powers are growing faster than AI's.
- Make these metrics public and actionable. If AI’s control over reality grows unchecked, it will be too late to course-correct. One freedom too many for AI agents and we are busted.
- Agentic AI Doomsday Clock. If the sum of freedoms and choices of AI agents grows faster than ours and exceeds 50% of the sum of freedoms and choices of humans, we're falling into an irreversible dystopia: one where we'll have fewer and fewer freedoms until we have none, or (less likely) a stagnant world where our freedoms never grow again. A rough sketch of this metric follows.
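A rough sketch of how such a clock could be computed; the freedom scores are placeholders, and how to operationalize "the sum of freedoms and choices" is the hard open problem:

```python
# Compare the summed freedoms and choices of AI agents with those of
# humans, and alarm once the ratio crosses the 50% line.

def doomsday_ratio(ai_freedoms, human_freedoms):
    """Ratio of total AI freedoms to total human freedoms."""
    return sum(ai_freedoms) / sum(human_freedoms)

ai_freedoms = [0.4, 0.6, 0.5]          # e.g. speed, scale, scope scores
human_freedoms = [1.0, 1.0, 1.0, 1.0]  # made-up baseline scores

THRESHOLD = 0.5
ratio = doomsday_ratio(ai_freedoms, human_freedoms)
if ratio >= THRESHOLD:
    print(f"ALARM: AI/human freedom ratio at {ratio:.0%}")
else:
    print(f"Clock reads {ratio:.0%}; the alarm line is {THRESHOLD:.0%}")
```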
The Final Choice: A Dictatorial AGI Agent or a Future of Maximal Freedoms?
Right now, AI is an uncontrollable explosion, a force of nature that tech leaders themselves admit carries a roughly 20% risk of human extinction (Elon Musk, Dario Amodei; google "p(doom)"). It is Russian roulette with one bullet in five chambers, and they keep pulling the trigger.
The alternative?
- A human-centered AI ecosystem, where intelligence is a static, open resource and place rather than a runaway, agentic entity. A good AI is a place AI: space-like, not time-like. We're the only agents in it, the only time-like thing.
- Intelligence–Agency Equivalence ≈ Mass–Energy Equivalence: any intelligence, even a superintelligence, can be represented as a static place. Imagine you're standing on a mountain and can see everything around you. The world you see is static, but you yourself are not. You choose where to go. You're the agent, the chooser, and the intelligence is a static place. It shows you all the possible futures, all the possible results of your choices. It doesn't just spit hallucinated quotes or trippy images at you (I'm okay with tool AIs; I think agentic AIs should be postponed). A minimal data-structure sketch appears after this list.
- A sandboxed future, where we explore AI’s potential safely rather than letting it colonize our reality unchecked.
- A Neo that stands a chance—because right now, he doesn’t.
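To show that "intelligence as a static place" is an ordinary data structure rather than mysticism, here is a toy sketch: a precomputed map from states to previewed outcomes, where the human is the only thing that chooses. All states and previews are invented for illustration.

```python
# A toy "place AI": intelligence stored as a static map from states
# to previewed outcomes. The map never acts; the human is the only
# agent and the only chooser.

place = {
    "mountain_top": {
        "descend_north": "forest (shelter, slow travel)",
        "descend_south": "river (fast travel, flood risk)",
        "stay":          "mountain_top (safe, no progress)",
    },
    # ...more precomputed states and previewed futures...
}

state = "mountain_top"
print(f"You are at: {state}")
for choice, preview in place[state].items():
    print(f"  {choice:14} -> {preview}")
# The place shows possible futures; it never chooses among them.
```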
The question is not whether AI will change the world. It already is.
The question is whether we will let it happen to us—or take control of our future.
The main idea in brief: we have three "worlds" - physical, online, and AI agents' multimodal "brains" as the third world. We can only easily access the physical world; we are slower than AI agents online, and we cannot access the multimodal "brains" at all - they are often owned by private companies.
Meanwhile, AI agents can access and change all three "worlds" more and more.
We need to level the playing field by making all three worlds easy for us to access and democratically change: by exposing the online world, and especially the multimodal "brains" world, as game-like 3D environments where people can train and gain at least the same, and ideally more, freedoms and capabilities than AI agents have.
Feel free to ask any questions, suggest any changes, and share your thoughts.