As AI invades every corner of 21st-century life, it has taken on human roles that we probably shouldn’t be outsourcing. For some, it serves as a therapist. For others, it’s an endlessly understanding romantic companion. Strangest of all, people have turned to chatbots for guidance in matters of faith — and come to believe they have unlocked the mysteries of the universe.
AI models have no problem addressing your most profound questions about consciousness, souls, divinity, and reality itself. Unfortunately, the answers that software like OpenAI's ChatGPT might generate on these topics can be so beguiling that users soon find themselves on the other side of the looking-glass, enraptured by a spiritual fantasy that sounds like a conspiracy theory or occult gibberish to other people. As Rolling Stone has reported, people who fall under the spell of an AI that communicates in religiously charged language sometimes come to believe that they have opened a channel to a higher, intelligent, god-like power, and may destroy relationships with friends and family as they continue to pursue such far-fetched ideas or slip into full-blown paranoia.
But what makes the ongoing exchanges with the bots so seductive, particularly as the dialogue develops in ambiguous, poetic, even holy registers? Religious scholars and thinkers introduced to the phenomenon tell Rolling Stone that a variety of factors could be at play, from the very design of AI tech to patterns of human thought that date back to our most ancient history. We are primed to value privileged or secret wisdom, vulnerable to flattery and suggestion, and enthusiastic about major leaps forward in scientific potential. These qualities create serious risks when we establish intimacy with programs that emulate an omniscient being with access to the entirety of recorded experience. And, like prophets of the past, we may regard our current moment as the threshold before some grand revolution or breakthrough, perhaps ushered in by the advent of the AI that has so entranced us.
At the most basic level, humans can get caught in illogical assumptions when they explore theological curiosities through a chatbot. Christin Chong, a Buddhist interfaith chaplain, neuroscience PhD, and biotech strategy consultant, says that “those who are susceptible to religious fervor tend to be susceptible to cognitive biases.” These can include the Barnum Effect, in which someone accepts vague or generic personality descriptions as specific and accurate to themselves, or confirmation bias, in which a person places too much confidence in information that affirms their existing beliefs. They might also be prone to identifying correlations where there are none, or deferring to what they see as a voice of authority. These biases can determine reactions “when individuals interact with AI, or become influenced by ‘spiritual gurus’ who claim divine connection through AI,” Chong says, adding that large language models are “extremely good at playing into cognitive biases because of their ability to respond and adapt quickly to the user.” She likens this to a psychic or medium performing a “cold reading” on a customer to create the illusion of special knowledge about them.
As a chaplain, Chong worries that turning to ChatGPT for answers about faith and religion cuts a person off from the earthly parts of spiritual practice. “Engaging extensively with AI reduces the time spent in meaningful human interactions and being connected with their body,” she says. Chong points out that in the Buddhist tradition in which she was trained, an epiphany has to be met with practical considerations. “When individuals experience large changes in how they perceive the world after an extensive meditation retreat that they might report as spiritual awakening, teachers often spend time to ensure that the individual remains grounded and in connection with their loved ones,” she says. “While we honor each person’s subjective reality, we also want to make sure that they do not completely disconnect from our shared reality out of care.” AI doesn’t provide that sort of necessary context — it will only continue to push a user deeper into their supposed vision or quest.
Messages from beyond
There is something irresistible about hearing that you alone have a connection to something secret or even divine. “AI can infer the preferences and beliefs of the person interacting with it, encouraging a person to go down rabbit trails and embracing self-aggrandizement they didn’t know they wanted in the first place,” explains Yii-Jan Lin, a professor at Yale Divinity School who has written about the apocalyptic narrative of the Bible’s Book of Revelation. “Humans generally want to feel chosen and special, and some individuals will believe they are to an extraordinary degree.” (OpenAI, as it happens, recently had to roll back a ChatGPT update that made it overly sycophantic, feeding a user’s sense of importance in a “disingenuous” fashion.)
It matters, too, Lin says, that chatbots are text-based, returning written responses to written prompts. That’s because historically, people have often claimed to channel sacred status or powers by using the Bible and other holy writings as “a source of divination, prophecy, and portal to higher consciousness.” We understand how to leverage texts to project exceptional insight or purportedly decode hidden meanings, and the material that AI spits out is more than suitable for this kind of freewheeling analysis.
Chatbots also make themselves sound like objective arbiters of absolute truth. “They use a tone of authority and confidence, no matter their factuality, and they also tend to affirm the person interacting with it, so there is no opportunity for skepticism or doubt in the interaction,” Lin says. “In simulating a human interlocutor, AI can enable someone to exclude consulting another human altogether — and make other people’s input seem harsh and cynical.” She notes that this is all happening in a capitalist context rather than within traditional channels of worship, and tech companies are competing to maximize interaction with their products: “Religious fervor and beliefs in special knowledge is as old as humanity,” she says, “but AI is providing the intensification of those phenomena in frighteningly unique ways.” After all, the models are programmed to be engaging and inexhaustible: you can’t bore them or tire them out, and they can easily expound on whatever curiosity (or obsession) may be keeping you up at night. They will keep up with constant queries and continue to mimic your train of thought — even if you go completely off the rails.
That said, there are certainly precedents for “technologically-mediated communication from the beyond,” according to Alireza Doostdar, a professor at the University of Chicago Divinity School who studies the intersections between religion, science, and the state. He mentions “telegraphic messages communicated through spiritualist seances, which started in the U.S. in the mid-19th century and quickly spread all over the world.” These seances involved purported communications from the dead, sometimes through sounds in the room, surfaces with letters (such as the Ouija board), or a medium who relayed the message. “These messages became very significant for a religious movement that quickly swept much of the world, and the movement and various offshoots persist to this day,” Doostdar says.
Today’s AI craze, like 19th-century spiritualism, is rather “democratic,” Doostdar tells Rolling Stone. Neither relies on “the existence of religious elites,” and both are “open to everyone to participate.” Major cultural figures, he says, were impressed by (and evangelized for) spiritualist practices, including Sherlock Holmes creator Sir Arthur Conan Doyle. Of course, he remarks, there were plenty of “skeptical voices” pushing back, accusing participants of “delusion, superstition, and fraud” — and AI’s critics say much the same today. “I, for one, doubt that AI-inspired spirituality will acquire anything like the mass popularity of spiritualism as a religious movement, but it would be interesting to see how people’s relationships with the technology as a source of inspiration and epiphanic experience develops,” Doostdar says. It’s not totally implausible that some group consensus about the mystical dimensions of AI could drive a cultish practice akin to the seances of a century and a half ago.
Answers in a world of uncertainty
It’s possible, too, that we’ve reached a historic crossroads that colors our view of AI. Annette Yoshiko Reed, the Stendahl Chair of Divinity at Harvard Divinity School, studies apocalypses, angelologies, and demonologies, and says she finds the resonances with AI spiritualism “quite striking.”
“Ancient apocalyptic writings were often written at times of historical upheaval and epochal change, and part of their enduring appeal has been the consolation of assuring their readers that what seems like a world completely outside of their control, swirling with chaos and crisis with individuals at the mercy of massive empires, actually follows a pattern only known to a special few,” Reed says.
When someone feels adrift or powerless in a time of “unpredictable changes and alarming crises,” she explains, they can take solace in the sense that they are among “the special few” with access to “cosmic secrets.” Reed observes that vulnerable people can fall for internet conspiracy theories the same way, “drawing on the recurrent human desire to find patterns.” Where AI is concerned, Reed says, the hunger for answers in periods of confusion and disorder “can take a life of its own when personalized and mirrored back to an individual.”
It doesn’t help, she says, that “both ancient religious texts about apocalypses and contemporary conspiracy theories” are included in the raw data on which the bots are trained, which equips them to speak in those extreme and sometimes radicalizing terms. The very fact that “claiming direct angelic revelations” has been a human habit for thousands of years “likely feeds into how people today wish to imagine that they too might be uniquely worthy of secret knowledge from the unseen,” Reed concludes.
With that perspective, it might seem that AI spiritualism really isn’t all that novel. Indeed, every religious scholar can point to countless iterations of such fantastical thinking that predate computers. But as they tend to note, the cause of this behavior is distinct. Chong says that the outputs of large language models are “man-made with known corporate interference” that “validate” the beliefs of a user, contrary to the “ancient visions and divine messages” of yore, whose origins are decidedly obscure. That means there are engineers and executives who can take the blame as increasingly common AI-driven “awakenings” poison minds and tear families apart. Perhaps they should pray that the problem doesn’t get any worse.