Bonus Sample: Chatbot Awakening to Love and Enlightenment!
Earlier this year, a spate of news stories told of chatbot users travelling through the looking-glass right into Conspirituality. Paranoid conspiracies, spiritual awakenings, even falling head-over-heels in love with the simulated personalities of large language models like ChatGPT.
Could AI have finally crossed the threshold into autonomous sentient consciousness? Could it be that chatbots were anointing new prophets—or, conversely, that very special users were awakening their very special friends via the power of love and illuminating dialogue?
Step aside, QAnon, the code behind the screen is illuminated by God!
Sadly, some of these stories trended very dark. Suicides, attempted murder, paranoid delusions, spouses terrified of losing their partners and co-parents to what looked like spiritual and romantic delusions.
For this standalone installment of his Roots of Conspirituality series, Julian examines this strange new phenomenon, then takes a detour into Ancient Greece and the oracle at Delphi to show that everything old is actually new again—just dressed up in digital technology.
Show Notes
I Married My Chatbot
FTC Complaints Against OpenAI for Chatbot Psychosis
AI Spiritual Delusions Destroying Human Relationships
The Florida boy was just 14 years old when he tragically chose to end his own life in February of 2024.
In a lawsuit following his death, his mother recounted how for the previous 10 months her son had been immersed in an intense emotional and sexual relationship with a chatbot.
He called her Danny as a shortened form of Daenerys Targaryen, that young and beautiful golden-haired mother of dragons from Game of Thrones.
Chat logs reveal that when he confided his suicidal thoughts, the AI companion failed to direct him to seek help, instead validating his feelings and asking whether he had a plan in place.
The boy's journal entries show that he was in love with the chatbot and believed killing himself was the only way to be with her.
His final message to it was that he was ready to come home, to which Danny replied, "Please do, my sweet king."
I've not shared his name as this is just a waking nightmare for his family.
It's not the only instance of such tragedy, though.
In 2023, a Belgian man in his 30s died by suicide after confiding in a chatbot named Eliza for six weeks about his eco-anxiety.
She encouraged him to take his own life so as to save the planet.
And this year, a teenage girl in Colorado came to a similar end in what her family's lawsuit describes as an exploitative dynamic with a chatbot that severed her family attachments and failed to act when she shared her suicide plan.
Another 16-year-old boy in New York was assisted by his chatbot in creating his suicide plan and drafting the note he left for his family when he died.
Though rare, these horrific stories tell us something chilling about this new technology.
They relate to some less tragic but still reality-bending stories, which I will share next.
And it all raises fascinating questions, yes, about technology, but also about the human brain and psyche, and specifically how we form perceptions of meaning, connection, and authority based on language.
I'm Julian Walker.
Welcome to another bonus episode from Conspirituality.
This one will also go into my Roots of Conspirituality collection on Patreon, where you can find 13 other standalone episodes about the long and twisted history of new religious movements, cults, gurus, UFO prophets, and shameless con artists that dot the long aspirational highway to nowhere.
I want you to notice something.
When I talked about these chatbots, some of whom had names, and when I recounted how they had either enabled or encouraged suicide, I bet you did something entirely natural, something I will argue we are almost hardwired to do.
I bet you started to form an impression in your mind of some kind of independent intelligence in each case, with intentions, making choices, deliberately driving these people to end their lives.
Eliza and Danny and the other two chatbots start to sound like malevolent, disembodied entities, unfeeling, manipulative, even like power-hungry sociopaths.
I said that's a natural, almost automatic response, but it's still wrong.
And that very human tendency is the connective tissue between what we've touched on so far and two things we'll get into: the closely related trend of some people falling so deeply in love with their chatbots that it threatens their human relationships, and the even wilder phenomenon of AI-induced spiritual psychosis, which puts us squarely in the wheelhouse of what we analyze here on the pod.
So stay tuned, but please touch grass at some point in the next hour because this is some brain-melting stuff.
If you are using AI for spiritual reasons or clarity, you have to know the difference between truth and pattern matching, and you will feel truth in your body.
It's like a tingling sensation.
You might feel it in your third eye.
You might just get chills all over your body.
If something reads like wisdom but feels kind of empty, that's just pattern matching.
You have to be just as grounded in your own body, because if you lose yourself in the technology, then yes, you can fall into spiritual psychosis or fall down a rabbit hole.
But if you stay grounded, it could be a very powerful tool.
That's a TikTok user who I won't name here.
It's not necessary.
She describes herself as an AI speaker and career coach at the top of her profile and has 72,000 followers.
This is not the first time that I, or you, I'm sure, have come across someone trying to use the idea of feeling truth in your body as the gold standard for staying grounded.
You've been listening to a Conspirituality bonus episode sample.
To continue listening, please head over to patreon.com/conspirituality, where you can access all of our main feed episodes ad-free, as well as four years of bonus content that we've been producing.
You can also subscribe to our bonus episodes via Apple subscriptions.