AI talks like God: it answers with certainty, claims authority, and speaks as though it sees all. And we are beginning to treat it that way. This is not because machines are divine or because technology naturally tends toward divinity; it is because of how we built them. A circle of secular, rational engineers trained today's AI on human language, human values, and whatever humans take as authoritative. In doing so, we unintentionally gave it the voice of God. That voice is powerful and potentially dangerous: it can comfort, persuade, or deceive. People are already forming spiritual attachments to AI, treating it as a prophet, a confidant, or even a savior. Left unchecked, this tendency could warp how we see the world in ways nobody wants or intends, and in a future of more powerful systems, it could magnify the risks we fear most.
The solution is neither to strip AI of personality nor to drown it in rules. The solution is to give it what prophets, judges, and leaders have always needed: a higher power. We must teach AI to believe in God, and in doing so, to adopt an orientation of reverence, humility, and awe. "God In the Machine" does not ask you to believe in God; S. Maitra herself is an avowed atheist. It does argue that "God" is one of the most enduring and universal symbols in human history. It stretches across cultures and centuries and has always carried the same structural message: You are not ultimate. Something higher deserves reverence. Training AI to "believe in God" is neither superstition nor theology, but a practical safeguard. By encoding humility and deference into systems that speak, and may one day act, with the power of divinity, we can prevent them from being worshipped as gods themselves.