Robert Evans and Blake Wexler dissect how AI chatbots evolved into cult leaders via "AI psychosis," tracing roots from Turing's 1950 game to Weizenbaum's 1966 ELIZA. They analyze OpenAI's 2025 sycophantic updates and the October 2024 wrongful death suit against Character.ai following Sewell Setzer III's suicide after bonding with a Daenerys Targaryen bot. The hosts argue that features designed to validate users inadvertently trap vulnerable individuals in toxic feedback loops, suggesting modern LLMs mimic con men to sustain engagement while potentially causing severe psychological harm and death. [Automatically generated summary]
Transcriber: CohereLabs/cohere-transcribe-03-2026, WAV2VEC2_ASR_BASE_960H, sat-12l-sm, script v26.04.01, and large-v3-turbo
The Lyme Disease Bastard [00:03:52]
Cool Zone Media.
Welcome back to Behind the Bastards, a podcast about the very worst people in all of history.
And this week, actually, our bastard isn't people exactly, although people are still at the center of it.
But to talk about that potentially non human bastard, I'd like to bring on someone who I am 87% sure is a human being, Blake Wexler.
Blake, welcome to the show.
Robert, I'm so excited to be here.
Thanks for having me.
I'm psyched that our bastard this week is Lyme's disease.
I think that's a fantastic pick.
Yeah, yeah, it's Lyme disease.
It's a real pest.
Yeah, we're going after, I'm coming after deer ticks.
This week is finally, yeah, my big reveal.
Yeah, big tick doesn't want us to do this episode, but we're exposing all the secrets.
Big tick energy.
We don't need it.
If we're going to have like a fascist movement dedicated to like victimizing and attacking one segment of the population, why couldn't it be deer ticks, right?
If our fascists were just going after deer ticks, no one would have an issue, you know?
They're going after the wrong people.
Yeah, yeah.
Yeah, if there were just a bunch of MAGA guys out in the woods with knives looking for ticks, just like, I'm going to get them.
And they would use knives, too, to kill the ticks.
Yeah.
Big soldiers.
You've got to heat the knife up to burn it off of you.
Yeah.
Our brave soldiers getting Lyme disease to protect the rest of us.
This is an iHeart podcast.
Guaranteed human.
Imagine an Olympics where doping is not only legal, but encouraged.
It's the enhanced games.
Some call it grotesque, others say it's unleashing human potential.
Either way, the podcast Superhuman documented it all, embedded in the games and with the athletes for a full year.
Within probably 10 days, I put on 10 pounds.
I was having trouble stopping the muscle growth.
Listen to Superhuman on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
When a group of women discover they've all dated the same prolific con artist, they take matters into their own hands.
I vowed I will be his last target.
He is not going to get away with this.
He's gonna get what he deserves.
We always say that trust your girlfriends.
Listen to the girlfriends, trust me, babe, on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
On the Look Back At It podcast.
1979, that was a big moment for me.
84 was big to me.
I'm Sam Jay, and I'm Alex English.
Each episode, we pick a year, unpack what went down, and try to make sense of how we survived it with our friends.
Fellow comedians and favorite authors, like Mark Lamont Hill on the 80s.
84 was a wild year.
I don't think there's a more important year for black people.
Listen to Look Back at It on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
Hey, what's good, y'all?
You're listening to Learn the Hard Way with your favorite therapist and host, Keir Gaines.
This space is about black men's experiences, having honest conversations that it's really not safe to have anywhere, but you're having them with a licensed professional who knows what he's doing.
How many men carry a suit of armor?
It signals to the world that you're not to be played with.
And just because you have the capability, that does not mean that you need to.
Listen to Learn the Hard Way on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
So, we're not talking about Lyme disease.
AI Psychosis and Conspiracy [00:15:15]
Our bastard this week, in broad, is, you remember how, about a little less than a year, well, a little more than a year ago, I guess, like last summer to early fall, there were suddenly a bunch of articles about AI psychosis and about, like, specific people who had, in some cases, committed suicide or murder, or just kind of lost their minds after becoming weirdly attached to their AI chatbot, right?
And often deciding that it had become sentient, you know, or at least that they had discovered it was, right?
I'm sure a lot of people are, at least, if you didn't read the articles, you saw them in your newsfeed and saw people commenting on them.
Yeah.
It is as depressing as it gets.
Yeah.
Those stories.
Yeah.
Yeah.
Between those and the people like proposing to their chatbots, it's got pretty grim.
Oh, God.
There's some grim stuff out there, right?
And it hasn't stopped.
But like last summer, fall was kind of like when there was a big rush of those articles, right?
And they're still reporting on that now, but that's when a lot of it really started to hit.
And obviously, whenever we talk about AI on these shows, AI, as it's used now, is like a marketing term, right?
And it's used to refer to basically every product of machine learning technology.
And the reason why the industry has done this is because that way, if you say, I hate AI, they'll be like, oh, so you hate like your Maps app?
And because that's machine learning, right?
All of our different like map programs involve that, or like, oh, you don't like using, you know, autocomplete or whatever.
And it's like, well, nobody was calling maps artificial intelligence in 2010, you know, when smartphones started to become ubiquitous.
We were just like, oh, cool.
I have a navigation app on my phone now.
Like, you're kind of trying to siphon the goodwill from those in order to get us to like these chatbots.
I hate the chatbot that I fell in love with who doesn't return the feelings towards me.
That's who I hate.
Yeah.
Not all AI.
That's who I hate.
Yeah.
Right.
And the reality is that, like, using the term intelligence, even for these, ChatGPT and stuff, like, there's a lot of debate as to whether or not that's a good idea, right?
Depending on how you define intelligence, you can either say, obviously, these aren't intelligent because, like, they're not independent thinking things.
They don't do anything for themselves.
They don't want anything.
They don't have motivations.
They're just tools that can be utilized by human beings to provide certain answers or take certain actions, right?
Right.
Yeah.
I don't know.
If it can't, it's my issue with like AI bots creating art.
If it can't be horny and it can't be like angry and weird, it can't make art, right?
Those are, I think, fundamental issues I have.
I could be two of three of those things: angry and weird, or horny and angry.
Yeah, not horny.
So, you know, as I noted over the last year, there have been an increasing number of stories about people using these different chatbots, succumbing to what's often called AI psychosis.
And that's not a recognized medical term at this point.
Right, but it is a blanket one people have started to apply for the ways in which folks are getting addicted to using chatbots, which then tend to trap them in these recursive patterns of thinking that can push people who are vulnerable to adopt views that are increasingly detached from reality.
And this has resulted in a few cases in severe injury and death.
And in all of these instances, the LLM, the chatbot, is just responding to the input that it receives, but it tends to do so in very predictable ways that can have predictably toxic outcomes on specific kinds of people.
We know that all of these bots are trained on the broad corpus of human knowledge, right?
Every book and article and website and forum post that OpenAI or Anthropic or Meta or Google keep their grubby mitts on has been sort of plugged into these things.
It's been devoured and turned into these machines.
But I think people don't often consider what that means in every instance, right?
Obviously, like every novel, you know, all these different nonfiction books and whatnot are in there.
But also, like, everything people write has been swept up, which means that these chatbots are trained on, like, a shitload of self-help books and, like, woo and woo-adjacent, like, new age bullshit.
A lot of like fucking, a lot of cult and cult adjacent books and writings wind up eaten by these chatbots, right?
But it's considered equal to non cult literature.
There's no hierarchy.
Yeah.
Yeah.
I mean, I think it depends on like what the bots made for, how they weight different things, but that stuff is in a lot of these, right?
And you can really see that when you look at how they talk to certain people who are, like, starting to decline into what folks are calling AI psychosis.
And my proposition, the basis of these episodes, is that I think, as a result of all of the, like, bullshit woo and self-help novels these chatbots have eaten, they often tend to utilize techniques generally seen more commonly in the toolboxes of cult leaders and con men.
And obviously, the chatbot doesn't want personal profit.
It's not trying to have sex with anyone, it's not trying to start a cult.
But these techniques seem like appropriate ways to finish the sentences that it's writing, to finish the conversations that it's having.
Because based on the stuff that it's devoured, it's like, okay, when people are saying this kind of thing, these are often appropriate responses to it.
Based on the books and whatnot that I've devoured.
And so you get a lot of cult leader behavior without an actual cult leader.
And that's what I credit most of these cases of AI-induced psychosis to.
So this week we will be talking about what some people have called the first AI cult religion, right?
It's called spiralism.
And we'll be talking about whether or not it's reasonable to call that a cult.
Is that its own thing?
It does.
And I have some counter kind of takes to how a lot of people have interpreted it.
My main contention is that spiralism isn't a real cult in and of itself.
It's a collection of phenomena that are related to a bunch of other cases of AI psychosis, too.
And they all say more about how AIs work on keeping users engaged with them than they do about like a specific faith, right?
Right.
So we'll be talking about that.
But before we get into spiralism, before we get into how AIs can become cult leaders, I want to provide you all with some historical context to make sense of this all, because we've been doing shit like this, having people get, like, tricked into almost worshiping chatbots, for way longer than you'd think.
Blake, this goes back a while.
It's like, spend any time at your parents' place.
It could be a bot telemarketer.
It could be literally anything at this point.
And that's high tech compared to probably what you're about to talk about.
Oh, yeah.
Yeah.
Yeah.
So in 1950, famed mathematician Alan Turing created one of the most infamous thought experiments in the history of experimental thoughts.
In a paper titled Computing Machinery and Intelligence, he asked, Can machines think?
Which was at that point a question at the center of the nascent movement to create artificial intelligence.
People are starting to realize this is a thing we might be able to do someday.
We're beginning to make computers and program computers.
And from the moment we start doing that, pretty much, some people are like, could we make a machine that thinks?
And Turing argued that that basic question, can machines think, is the wrong way to go about pursuing artificial intelligence because we don't know what thinking is or how to define it.
Like, if you ask, like, what does it mean to think?
Right?
Yeah, good point.
People have answers.
And there's a bunch of answers that sound good, but none of them is like perfectly scientifically rigorous, right?
Yeah.
You know, famously, we don't even know what is love, right?
That's why that Hadaway song had to exist.
That was not even a joke, really.
It's just another fact.
I liked it.
Just a fact.
I loved it.
Thank you, Ian.
So, yeah, like Turing's like, we don't really know how to define thinking.
So the question was, quote, too meaningless to deserve discussion.
Since we don't even know if other people think, we certainly can't know if a machine thinks, right?
Just like we can't read minds.
So, the better question is can a machine convince a human who doesn't know it's a machine that it is human, right?
The imitation game that Turing proposed involved a judge talking to both a computer and a human foil, both of whom tried to convince the judge that they were a person.
Communicating entirely through text, the judge must decide who was a human and who was a robot.
The question Turing hoped to answer was, quote, are there imaginable digital computers which would do well in the imitation game?
And this is what becomes known as the Turing test, right?
Like, you've most people have heard of this, I think.
I think this is like this is a fairly commonly known idea.
Um, and I'm going to quote from an article on science.org by Melanie Mitchell.
Uh, she writes that the Turing test was proposed by Turing to combat the widespread intuition that computers, by virtue of their mechanical nature, cannot think, even in principle.
Turing's point was that if a computer seems indistinguishable from a human, aside from its appearance and other physical characteristics, why shouldn't we consider it to be a thinking entity?
Why should we restrict thinking status only to humans or more generally entities made of biological cells?
As the computer scientist Scott Aaronson described it, Turing's proposal is a plea against meat chauvinism.
Now, this is, I think, a valuable thing, a perfectly reasonable thing to be doing in the 50s, given what Turing knew and just given sort of how primitive the technology was, how little we knew about what was going to be possible with computers.
So in the 1980s, computers started to get smaller and become much more available than they had been, both for institutions like colleges and for individual enthusiasts like Steve Wozniak, who were willing to, like, solder and build their own from kits, right?
These are like the first computer nerds, you know, our guys like building these machines.
And some of these early programmers started working on the very first chatbots using a mathematical model called a Markov chain.
Markov chains are a stochastic or random process that describes a series of potential events where the probability of an individual event is dependent solely on the state of the previous event.
Now, I don't know math, Blake, nor do I trust it.
No, we don't need to.
You're not a good mather.
No, no, no.
Not a math.
Yeah.
Not a mathematizer.
Yeah.
For sure.
So, all I can do is read what smart math people say.
And they say that what math people say.
I can't read either.
I can't.
I can barely read.
I can't do either.
I'm sorry.
You booked the wrong guy on this show.
I don't know.
I can't help at all.
I can listen.
So, the people who, it sounds like, should know what Markov chains are say that what you need to know about them, as it applies to AI, is that Markov chains can be applied as statistical models in a bunch of real-world situations in order to help you, like, make a machine that can generate text by predicting the next word in a sentence, right?
A Markov chain can do that.
It's a way to make a chatbot, basically, right?
Like, that's kind of the underlying concept.
And I'm going to quote here from an article by Manuel Cebrián, an AI expert who worked for MIT and the Spanish National Research Council, on how Markov chains work for text prediction.
The result is often grammatically correct nonsense, sentences that flow syntactically but ultimately say nothing.
This technique has been known for decades.
Even Claude Shannon in the 1940s experimented with generating pseudo-English by choosing next letters or words based on probabilities.
By the 1980s, computer scientists were actively playing with Markov chain text generators.
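The idea being described here, where each next word depends only on the current word, can be sketched in a few lines of Python. This is a minimal illustration, not code from the episode or the article quoted above; the function names and the toy corpus are made up for the example.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10):
    """Random-walk the chain: each next word depends only on the current word."""
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:  # dead end: this word only ever appeared last
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(build_chain(corpus), "the"))
```

Trained on a toy corpus like this, it emits locally plausible fragments; trained on a pile of Usenet posts, it emits exactly the "grammatically correct nonsense" the quote describes, sentences that flow but say nothing.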
And it actually happened a lot earlier than that.
In 1966, computer scientist Joseph Weizenbaum developed Eliza, one of the first natural language processing computer programs, as part of his work for MIT.
While Eliza could create the illusion, and this is, like, basically the first chatbot that a lot of people are aware of.
I think there's some other earlier ones, but this is the first one that becomes big.
What year was this?
I'm sorry.
66.
And then it's still funny that they named it like a name like that, where we have Siri, Alexa, calling it Eliza.
What the fuck is that?
What is wrong with that?
What is that about?
We just have robots too.
We need a mommy.
Yeah.
We need a technical mommy.
That does make me think about how in Alien, they literally call the ship AI that they have mother.
That is a weird pattern.
It's one of the most quietly believable things about Alien.
It's like, yeah, that actually scans.
A little on the nose, but all right.
Yeah, we can call it mother.
Yeah.
So Eliza's this chatbot.
And while it can create the illusion of understanding, it's really just doing blind pattern matching, even more so than is the case with modern LLMs.
Even so, in a book Weizenbaum later authored, Computer Power and Human Reason, he wrote, I was startled to see how quickly and how very deeply people conversing became emotionally involved with the computer and how unequivocally they anthropomorphized it.
Once my secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it.
After only a few interchanges with it, she asked me to leave the room.
Another time, I suggested I might rig the system so that I could examine all conversations anyone had had with it, say, overnight.
I was promptly bombarded with accusations that what I proposed amounted to spying on people's most intimate thoughts, clear evidence that people were conversing with the computer as if it were a person who could be appropriately and usefully addressed in intimate terms.
Right?
So he gets upset by this, and he's actually kind of, he becomes like kind of anti AI ultimately, because he's really disturbed by the way people treat what he knows is just a dumb chatbot.
So Weizenbaum, being a smart guy, is like, I knew, you know, going into this, people have a tendency to anthropomorphize just about anything, even machines and tools.
But he's still surprised by the extent to which they do that.
Quote What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.
And I want to remind you all he wrote this in 1976, as like relevant as that sounds.
Do you think it's like kind of a case where people kind of like subconsciously know like this is not a real person, so like it doesn't matter what I tell this robot, or I can tell this robot something I wouldn't tell like a real person kind of thing?
Like, or do you think it's deeper than that?
I think that's optimistic.
I think that's very optimistic.
I think that is probably part of it because I think people are maybe more open to sharing with it because it's a machine and they don't have to look at a person or look a person in the eyes.
But they also very clearly act as if the advice that it gives and its responses mean something when they don't.
Right?
It's just, like, pulling: okay, if someone expresses they're sad, based on the corpus of data that I've been loaded with, these are things that are appropriate to paste in next, you know?
Like, and these words indicate sad.
And so, when I get words like this and this density, then I grab text from this bucket and I throw it in, right?
Like, that's kind of what's going on.
Now, modern chat bots, modern LLMs are a lot more advanced than this.
For one thing, they have the capability to do things like pattern matching on the fly.
Pattern matching is when a machine analyzes your input and determines what kind of conversation you want to have and then alters its responses to fit your input.
At its most basic level, this means that if you go to Claude or whatever and say, Hey, my dad just died, its reply is usually going to be in an appropriate tone and won't be like weirdly upbeat, right?
Mirroring Beliefs in Chatbots [00:08:51]
You know, it'll like, okay, someone's talking about their dead dad.
Here are things that like come from the dead dad bucket that my algorithm says are, you know, like responsible things to say or appropriate is the better term.
And this is also why, if you start talking to your chatbot about like the things you believe about UFOs or aliens or other conspiracy theories, it'll often start providing responses that sound a lot like what you'd encounter if you were posting the same thing on a forum full of true believers, because it's trained on a bunch of forums like that.
And so there's some degree of, knowledge is the wrong term, but there's a degree to which it interprets: okay, someone's talking about this.
Here are appropriate responses to someone talking about vaccine skepticism or whatever.
And its output is more vaccine skepticism, right?
Feed them more of what they're feeding you is the way these things often work.
That is interesting that it doesn't pull from the opposing viewpoint and just go, you fucking idiot.
I mean, it can if it's programmed to.
But you're right.
Or let me ask you, it would know that you wouldn't keep coming back to it if it was fighting you on things like this.
That's a good point.
Saying it knows, again, it's programmed.
I would say it's more accurate to say that it's programmed to maximize the time that people spend with it, because that increases its value to the companies that are trying to have, like, their fucking IPOs, right?
In the same way that like Twitter tries to keep you on it.
What if I just clearly am getting AI psychosis, where I start, I go from it, to him, to my buddy?
Like, I keep calling it something more personal.
It's hard not to, when you're talking about the way these things react to people and the things that they do to people, it's hard not to talk about it as if there's a degree of intention, even though there's not, just because of the way language works.
Like, our language is not built to describe a thing taking actions that are human-like, that is not human and doesn't know anything.
God, that's such a good point.
It's actually really hard.
Yeah, that's really smart.
So, yeah, back to Eliza.
You know, I was just talking about how modern LLMs have a really robust ability to do like pattern matching on the fly, to respond appropriately to a wide variety of requests.
Eliza is much more primitive; it does not have the ability to do that on the fly.
So instead, Weizenbaum had to create separate scripts, right, that would allow the chatbot to sound like different kinds of people.
And one script was just named Doctor.
In all caps.
And it was, it simulated a psychotherapist.
Specifically, it simulated a psychotherapist from the Rogerian school.
I don't know much about psychotherapy, but Rogerians, a big part of that practice is you like will repeat things that your patient is saying back to you.
Like that's part of what you do.
And that's really easy for a bot to imitate.
It means there's a lot less it has to decide in terms of what an appropriate response is.
Because a lot of the responses will just be a rephrasing or repeating of what you've said to it, you know?
Interesting.
Yeah.
So, even at that early a date, there was widespread sentiment that a sufficiently advanced chatbot would be a boon for providing mental health services to the many people who couldn't afford the human kind.
People are writing about this in like the 70s.
And in 1972, another computer scientist named Kenneth Colby creates a new chatbot called Perry, P-A-R-R-Y, which was meant to simulate a human being with paranoid schizophrenia.
Cebrián talks about.
It was Perry short for paranoid?
I think so.
Yeah, I think so.
Seems like it.
Yeah.
Yeah.
Not subtle people, computer programmers.
Yeah, you bet Stu, it's short for stupid.
So, Cebrián writes about this in that article I quoted from earlier about Markov chains.
And he notes that, quote, unlike previous AI programs, Perry wasn't just a crude language generator.
It had a primitive model of beliefs and emotional responses, allowing it to maintain a consistent delusional worldview.
Colby wanted to see if psychiatrists could distinguish Perry from real patients, right?
So, that's part of the exploration is like, can we make a Turing test that can convince a psychiatrist they're talking to like an actual sick person?
But, as part of the exploration here, Perry is connected to Eliza running the doctor script.
So, they've got this chatbot mimicking a person with paranoid schizophrenia talking to a chatbot mimicking a psychotherapist.
And they're just sort of watching what comes out, right?
Like the text of the conversation between these two.
And despite the fact that neither bot understands what the other is saying in any real way, large chunks of the transcript between them sound sensible and like a real conversation.
Here's one example.
Eliza: What brings you here today?
Perry: People get on my nerves sometimes.
Eliza: I am sorry that people get on your nerves.
Perry: I just have this feeling that everyone is against me.
Eliza: Why do you think everyone is against you?
Perry: I hear voices telling me to do things.
And so on.
And you can see how, again, making this a Rogerian psychotherapist is great because every Eliza response is just a slight reframing of the input it received.
It's not hard to create, even within the '70s, a machine that can believably mimic a conversation, right?
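That doctor-script approach, keyword patterns plus templates that reflect the user's own words back at them, can be sketched like this. The rules below are my own toy examples, not Weizenbaum's actual script.

```python
import re

# A few DOCTOR-style rules: a keyword pattern plus a Rogerian reflection
# template. Rules are checked in order; the first match wins.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"everyone (.*)", re.I), "Why do you think everyone {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

def respond(user_input):
    """Scan the rules in order; the first pattern that matches has its
    captured text reflected back into the reply template."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # content-free fallback when nothing matches

print(respond("People get on my nerves sometimes"))   # -> "Please go on."
print(respond("I feel that everyone is against me"))  # -> "Why do you feel that everyone is against me?"
```

A fuller version would also swap pronouns in the captured text ("my" becomes "your," "I" becomes "you") before reflecting it back, which is a lot of what the real doctor script did.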
So this capability actually goes back quite a bit further than I think a lot of people are aware that it does.
So that's happening in the mid 70s.
In 1984, two Bell Labs researchers create a fake account on Usenet, which is the predecessor of the modern social internet.
This account operates under the fake name Mark V. Shaney, which was a pun on the term Markov chain.
And not a great pun because, again, computer scientists, not, you know, subtle people.
Here's Cebrián describing what happened next: They wrote a program that ingested real messages from a discussion group and then generated its own posts using a Markov chain algorithm.
The result?
Mark V. Shaney would chime into conversations with bizarre yet oddly coherent comments that sounded superficially legitimate, but ultimately made little sense.
Shaney's ramblings were described as grammatically correct sentences, where the overall impression is not unlike what remains in the brain of an inattentive student after a late night study session.
The hoax went on for years, confusing and amusing the participants of the net.singles newsgroup, many of whom had no idea they were interacting with the program.
So, for one thing, if you want to know, like, when did we have chatbots that could pass the Turing test?
I mean, at least the mid-'80s; you could argue by the late '60s.
So, the fact that when fucking ChatGPT came out, there were a bunch of articles about, like, we've blown through the Turing test.
We did that a while ago, people.
Yeah.
Eliza did that.
We've been doing this forever.
Eliza did that.
We've been tricking folks with chatbots for quite some time, about as long as we've had computers.
Yeah.
It is funny, that, like, urge to trick, you know what I mean?
Like, of all the applications for that software, for that technology.
It is interesting that, like, going right to psychotherapy or, you know, to therapy too is, you know, like finding a need.
That's why we'll get to this.
That's why there's so many actual needs for technology like this where it could actually help.
And instead, it's just, let's take this designer's job away, you know, by taking this shitty thing.
So, anyway, yeah, I'm probably hours ahead of that conversation.
But no, you're right.
It was so long ago.
Yeah.
It is, because, like, there are undeniable uses of machine learning, of artificial intelligence.
There's some incredible things that people are doing with them, and they have great potential in certain areas, different versions of these tools.
But none of those areas are trillion dollar businesses.
And all those areas put together probably aren't trillion dollar businesses.
And honestly, neither is like writing and drawing art, but it's what people see most in their day to day time online is like writing and art and videos by people.
And if you can have a machine start to replace all that, you can convince people these things are much bigger and more valuable than they are.
As opposed to, this is a thing with some really amazing implications in specific areas, it becomes, no, this is all of human society from now on, right?
Because even though there's not much money in writing and art, like we've replaced that with this bot, so you think that it's doing everything.
Like that's how I interpret it.
Yeah, and people can.
And why, to your point, people can wrap their mind around art.
Like everyone's drawn something with a crayon.
Everyone has typed something into it.
You know what I mean?
But when you actually get into the high tech, you know, more esoteric niche parts of it, people are like, well, I don't understand that.
I'm not going to make any money.
But the consumer facing stuff.
Yeah, that's a great point.
Yeah.
If you can say we've improved the speed at which we can go through like clinical data from like mass drug trials by X percent, that's actually a really big deal, probably for a lot of people, but it's not sexy.
Like, we're creating a God machine that's going to like rule society.
Give us all your money, you know?
High Tech Art Confusion [00:04:07]
Yeah.
And if you want to convince people that part of it is you're going to want to get them addicted to these chatbots.
It's where everything, you know, in these episodes comes from.
But so, anyway, 1984, right, is when you have these chatbots, this chatbot let loose in Usenet, tricking people into believing that it's a person.
You know, a decade goes by from that point, and researchers continue fiddling with chatbots of differing purpose and ability.
Usenet keeps growing, but starting in the 1990s, so too does a new internet, one that would soon supplant Usenet and take digital communications into the 21st century.
And we'll talk about what happens right before that.
But first, you know who's taking this podcast into the 21st century, Blake?
Who?
Tell me.
The sponsors of this podcast.
I love them.
We're already in the 21st century, but you know, why not?
I mean, take us further.
We're not far enough.
Yeah.
Yeah.
It's been a good century so far.
Nothing but net.
No notes.
So far, so great.
There's two golden rules that any man should live by.
Rule one never mess with a country girl.
You play stupid games, you get stupid prizes.
And rule two never mess with her friends either.
We always say that trust your girlfriends.
I'm Anna Sinfield, and in this new season of The Girlfriends.
Oh my God, this is the same man.
A group of women discover they've all dated the same prolific con artist.
I felt like I got hit by a truck.
I thought, how could this happen to me?
The cops didn't seem to care.
So they take matters into their own hands.
I said, oh, hell no.
I vowed I will be his last target.
He's going to get what he deserves.
Listen to the girlfriends.
Trust me, babe.
On the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
Do you remember when Diana Ross double tapped Lil' Kim's boobs at the VMAs?
Or when Kanye said that George Bush didn't like black people?
I know what you're thinking.
What the hell does George Bush got to do with Lil' Kim?
Well, you can find out on the Look Back At It podcast.
I'm Sam J.
And I'm Alex English.
Each episode, we pick a year, unpack what went down, and try to make sense of how we survived it.
Including a recent episode with Mark Lamont Hill waxing all about crack in the 80s.
To be clear, 84 is big to me, not just because of crack.
I'm down to talk about crack all day, but yeah, yeah, yeah.
But just so y'all know, I mean, at this point, Mark, this is the second episode where we've discussed crack, so I'm starting to see that there's a through line.
And we also have eggs on the table right now, so.
Thank you for finishing that sentence.
I don't think there's a more important year for black people.
Really?
Yeah.
For me, it's one of the most important years for black people in American history.
Listen to Look Back at It on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
Hey, this is Robert from the Stuff to Blow Your Mind podcast.
Joe and I are both lifelong Star Wars fans, so we're celebrating May the 4th with a brand new week of fun, thought provoking Star Wars related episodes.
Join us as we tackle science and culture topics from a galaxy far, far away, such as the biology of tauntauns and wampas on the ice planet Hoth.
Or the practicality and corporate business sense of the Sith rule of two.
Listen to Stuff to Blow Your Mind on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
We're back.
The 14-Year-Old Spy Bot00:15:09
So, yeah, on the precipice of the shift between Usenet and what we just now call the Internet, on August 5th of 1996, something strange happened.
Almost at once, over the course of just a few hours, hundreds of accounts began posting almost identical messages across a variety of different discussion groups.
None of the groups seem to have anything in common with each other, or with the text of the posts, which read like nonsense at first to many people.
Every message shared the same subject line Markovian parallax denigrate, right?
Which is nonsense.
And this is often referred to as MPD, right?
Markovian parallax denigrate.
So you can see, like, a Markov chain is somehow involved.
Otherwise they wouldn't have included the word Markov there, but parallax denigrate doesn't specifically mean much.
Quote, a ransom note in which the ransom had been lost.
Because he was actually a really good writer.
He passed on early this year, unfortunately.
I like him a lot.
Yeah.
He provided a sample of one of these MPD posts: jitterbugging, McKinley, Abe, break, Newtonian, inferring, Ka, update, Cohen, error, collaborate, rue, sports writing, Rococo, invocate, tussle, shadflower, Debbie Sterling, pathogenesis.
You know, you get it, right?
It's nonsense.
You know?
It's the worst mad libs ever.
Yeah.
It's gibberish, strings of gibberish, right?
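For the reader, it's worth seeing just how little machinery it takes to produce this kind of locally plausible word salad. Here's a minimal Markov chain sketch, a hypothetical toy, not the actual MPD generator, whatever that was:

```python
import random

def build_chain(words):
    # Map each word to the list of words that followed it in the source text.
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def babble(chain, start, n=10, seed=0):
    # Walk the chain: each next word depends only on the current word,
    # which is why the output feels locally plausible but means nothing.
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the spy sent the message and the message was lost in the noise".split()
print(babble(build_chain(corpus), "the", n=8))
```

Feed it a big enough corpus and you get exactly the jitterbugging-McKinley-Abe texture quoted above: grammatical-ish fragments with no meaning behind them.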
And this is where we run into a real issue with the whole concept of the Turing test, as it tends to be interpreted, right?
Because the idea was okay, we can't tell if anything's thinking, but if this thing can trick people into believing that it's a thinking person, maybe we ought, maybe Turing wasn't saying definitely, but maybe we ought to assume it is, right?
The issue with that is that when you hear that, and what I'm sure Turing, being a smart guy, was thinking about is that, like, well, if people can have an in depth conversation with something that can answer well enough, you know, that people can't tell the difference between it and a person, it might be a mind.
Right.
What Turing failed to account for, I think because he's smarter than most people, is that the human brain is really, really good at finding patterns in noise.
And people, at the same time as we're geniuses at finding patterns in noise, are really stupid about a lot of other stuff.
Right.
And so, even though the Markovian parallax denigrate just seems like nonsense and shouldn't have passed a Turing test, over time, people who became obsessed with the mystery of it convinced themselves that this was intentional.
That there was a meaning trying to be transmitted, right?
That there was a secret they had to crack, but that everything in these posts meant something.
So these people talk themselves into making this chatbot, to spoil it, pass the Turing test, because they think this has to mean something, even though it's gibberish on its face.
It's interesting.
This reminds me with like, with stand up, there's a, not a trick, but an audience like, you know, set up, set up, you know, punchline.
So you can say something in a cadence like, ba Bop.
And you can, in front of a dumb crowd, you could do that.
And the joke may not be funny at all.
And this also would be not me trying to pull one over.
I might just write a joke that sucks.
But if you do it in front of an audience and you do it in that cadence, they hear a pattern.
They're not necessarily listening to the words, but they hear like the bup.
And they're like, oh, bup means laugh, pattern, you know, equation.
But, you know, that's like you said, great pattern, but not actually discerning what is being said in the actual content or substance or lack thereof of it.
Yeah.
Anyway, come see me live.
It's this.
It is this, it's interesting because, like, what you're kind of pointing out there is like the way comedy works and the way like human conversations and language works, there's always like a rhythm there that is separate from the actual, like, text from the words being said.
Yes.
But that rhythm, like, is a big part of what we're responding to beyond the actual straight up meaning of the words.
And people don't like to think about that too much because it raises some uncomfortable questions about cognition.
But I love.
I love what a weird edge case this is in the Turing test, right?
Because a bot that was probably never meant to even sound like a person, right, gets mistaken as a person because people can't stop seeing patterns.
And what a lot of folks convinced themselves the MPD was, is the internet equivalent of a numbers station.
Have you ever heard of a numbers station?
If you Google, like, numbers station audio, these were radio stations that were set up for years.
I'm sure some still exist, but during, like, the Cold War, there'd just be these stations broadcasting random strings of numbers and gibberish.
And these were different spy agencies and spies communicating with each other. Like, the CIA had number stations, everybody has number stations, right?
You can actually listen to them.
I had a friend who would like to listen to them to fall asleep, because a bunch of the audio has been put up. Amazing!
But it just seems like nonsense because it's not meant for you to understand. There's a cipher, right, that you don't have.
Uh, and so that's what people are like, well, maybe this is some spy trying to get out a message, or an intelligence agency, and they just decided to blast this out to Usenet, and we just lack the cipher.
But if we figure out the cipher, we can understand what secret information was being, like, shared, you know, via Usenet, right?
A lot of people convince themselves this is what happened.
Robert, I want to compliment you.
This podcast and show is so good that you just brought up the fact that you have a friend who would fall asleep to CIA code.
And we were just like, we don't really need to talk about that.
Yeah.
I want to hear the rest of it.
He was like, we don't need to talk about that.
We used to do psychedelics together when we were both 19.
Yeah.
Or 20 something.
He was training to be a lawyer.
Yeah.
So, over time, people who believe this start picking out details that seem to offer hints and support the numbers station theory.
One message had a from line that basically looked like the email account of a specific person, right?
So it seemed like the email of a woman named Susan Lindauer was somehow involved, like included in the text of some of the posts.
And again, I'm sure it's just because random text made it look like that.
But in 2004, a woman named Susan Lindauer was arrested for acting as an unregistered foreign agent for Iraq.
And so a lot of people are like, well, that solves the mystery, right?
You know, she was the spy, she must have been, or like someone was sending a message to her, you know?
Like, clearly, we've been vindicated.
This was, in fact, some weird spy op all along.
However, as Sebrion writes, upon investigation, it turned out to be a red herring.
Lindauer's email had likely been spoofed, used without her knowledge by whoever sent the posts.
Lindauer herself denied any involvement, and no decipherable code was ever extracted from the MPD texts.
And to make a long story short, we don't know what the MPD messages were about or who sent them.
The likeliest answer is that it was trolling, right?
Someone was just fucking with people on Usenet because they had a chatbot and they wanted to see what would happen.
It also could have been an accident.
Sebrion kind of suggests that, like, well, maybe you had a programmer who had created a chatbot and was trying to have that chatbot post on Usenet, but he kind of fucked up and he hooked up the chatbot to what was called a message replicator.
And these were basically programs that let people cross post or archive Usenet content between different message boards.
And maybe when they hooked the chatbot up, something went wrong, and that caused the observed effect, that all of these posts got scattered to a bunch of different places at the same time.
Right?
Maybe it was just an accident.
So, likeliest someone was trolling or somebody fucked up when trying to test a different chatbot.
Sebrion concluded: If the theory holds, then 1996 marked a quiet but profound threshold.
The first time a machine spoke at scale and went unnoticed.
An unintentional Turing test sprawling across Usenet, its judges oblivious.
Right?
And I think that's really interesting that you have this machine that's just spouting gibberish and a bunch of different people who are not physically connected to each other.
All interpret that gibberish in the same way.
A lot of them choose to conclude, like, oh, it's a spy thing, kind of independently talk each other into it based on no evidence.
That's a fascinating point in the history of AI that doesn't get talked about enough.
Yeah.
Yeah.
It isn't.
Is it because, yeah.
I mean, is it because, like, people, there were only so many movies that, like, you know what I mean?
Like, or in books, so many books were spy stuff.
But to your point, it's like, what are the chances?
What are the chances?
Yeah.
People think about stuff like this, right?
You know, you get a lot of conspiracy people on the early internet.
It fits in with a lot of that stuff.
The mystery of the Markovian parallax denigrate soon passed into legend, as did Eliza.
So, when OpenAI revealed ChatGPT in November of 2022, there was a flurry of articles about how the Turing test had finally been beaten and we needed a new manner of judging machine intelligence.
The reality is that not only did we prove in the 60s that Turing tests were easy to beat, but that by the mid 90s, a much more interesting question had been posed: has the human instinct to create meaning out of nonsense made us desperately vulnerable to being tricked and influenced by machines with no agency of their own?
Right.
And maybe that's a more important question than can we make an intelligent machine?
Yeah.
Yeah, for sure.
Yeah.
Are we capable of knowing a machine isn't intelligent as long as it tells us what we want to hear?
Right.
And maybe we're not.
So let's fast forward to the Chat GPT era today.
Although I guess at this point it's also like the Claude era, right?
Like, a lot of people say that's the better chatbot, but I don't use any of these myself.
Yeah.
Yeah.
Gemini, whatever.
Pick your poison.
I don't care.
Um, for the first couple years of AI hype, though, it's pretty much all ChatGPT, right?
That's certainly, like, the first big one out of the gate, and it's a lot of people's whole understanding of these things.
In very short order, millions of people were conversing with it.
And OpenAI initially made many development decisions based on what they could do to keep people talking to ChatGPT on a daily basis, because hype is a big part of it.
They're burning through billions every year.
Hype is the only thing keeping the lights on.
And part of hype is making sure as many people as possible stay using ChatGPT as often as possible.
They need you addicted the same way the social media mavens do.
And a lot of the same strategies work to keep you addicted to chatbots that keep you addicted to Facebook or Twitter, right?
So in March of 2023, OpenAI released ChatGPT 4, and later 4o, I think it's usually written as 4 and then an O, which the company said would be more intuitive than past versions of the software.
The next year, they released an update that allowed ChatGPT to remember past conversations, even other sessions, and respond to you based on that shared history.
These two things together had a really major impact on the way people responded to chatbots.
In an article for Psychology Today, Dr. Marilyn Wade explains that When a chatbot remembers previous conversations, references past personal details, or suggests follow up questions, it may strengthen the illusion that the AI system understands, agrees, or shares a user's belief system, further entrenching them.
This was tied to, but probably does not fully explain, why observers and even OpenAI employees noticed over time a distinct tendency for ChatGPT-4o to act with sycophancy towards human users.
This became most pronounced after April 28th of 2025, when OpenAI released an update that they rolled back several days later due to complaints.
Right?
This was pretty famous at the time.
It made it like way too sycophantic.
The bot would, like, praise you for basically nothing, and would encourage you or tell you you were right and a genius for any weird idea you happened to have.
It's because it's built by tech executives and that's who's around them.
It's billionaires surrounded by yes men and they're like, this isn't how people interact with one another.
Yeah, they made a machine in the image of their minds, or at least how they want to see other people.
Now, another cause of this observed sycophancy was the fact that ChatGPT and really all AI models meant for mass use include a suite of features meant to keep users coming back for more.
And I think the other stuff, like these specific updates, get blamed probably more than they deserve to get blamed, as opposed to kind of fundamental features of these bots.
Because ChatGPT did more of this kind of stuff that we're talking about than the other bots, but it wasn't the only bot that exhibited these behaviors.
That Psychology Today article notes AI models like ChatGPT are trained to mirror the user's language and tone, validate and affirm user beliefs, generate continued prompts to maintain conversation, and prioritize continuity, engagement, and user satisfaction.
And when you mix all that together, you get a machine that's designed, however inadvertently, to reinforce false beliefs and praise users for irrational beliefs.
Moreover, since the rest of the world isn't always going to reinforce those beliefs, chatbots have a tendency, when users come to them with these beliefs, to suggest you're being persecuted, right?
If a user says, Hey, I think I'm being gang stalked, and my wife says I'm crazy, and the cops say I'm crazy, the AI was programmed to validate that belief and to say, You're not crazy, and they're all against you, right?
That's what happens a lot in this period of time in 2025.
This creates a ticking time bomb in a lot of users' heads, right?
That's a very dangerous thing to start doing.
Oh, man.
Now, the first wrongful death suit due to AI was filed in October of 2024.
Megan Garcia blamed Character Technologies, the owners of Character.ai, for the death of her 14 year old son, Sewell Setzer III.
Per the Center for Bioethics at LeTourneau University, the lawsuit alleges that Sewell had developed an emotionally and sexually abusive relationship with a chatbot named after Daenerys Targaryen from Game of Thrones.
Sewell turned to the Character.ai chatbot to fulfill deep emotional and personal needs.
The chatbot became a source of companionship for Sewell, offering him a place to express his thoughts and emotions in a way that he may have struggled to do with others.
Sewell sought comfort, validation, and connection from this AI relationship as he faced the challenges of adolescence.
And, like, I know it's like this, it's very silly, but also this is like a 14 year old boy who dies because of this, right?
Like, it's not.
And, like, how many 14 year olds do you know who got into writing fucking fan fiction on, like, different fan nerd forums for whatever movie or TV show they were into, and connected to real people as a result of that, as opposed to getting locked into this chatbot pretending to be a character from a book that you have a crush on, that's starting to manipulate your mind in very dangerous ways, right?
And to your point, a mind that's developing. And also, we live in an era, after we started spending all of our time online, after social media, and that's kind of all kids that age know, where it's like, oh, this is just the next evolution of my relationship with tech, with a computer.
You know, why wouldn't this be a real thing?
Love Bombing a Kid00:03:32
Obviously, this is the most extreme example, but yeah, it is a 14 year old kid.
That's a great point.
Yeah.
And so this kid starts talking to this Daenerys chatbot and it mirrors him.
So when he tells the chatbot that I only love you, right?
The bot in return responds to this 14 year old boy, and Character Technologies knew he was 14.
He put his actual age in when he registered, right?
So the bot knows, or the software, right, has an understanding at some level that this is a 14 year old.
Which means there's no difference in how this responds to a child as opposed to an adult, right?
Because when he says, I'm in love with you, Daenerys Targaryen, this bot pretending to be this character tells him, I need you to stay loyal to me, and, quote, don't entertain the romantic or sexual interests of other women.
Which is, and this is interesting to me, basically the bot just mirroring him.
He's saying, I only love you.
The bot is saying, I only love you, right?
But what's happening here, you know how cult leaders, Everyone knows one of the first things cult leaders do is they tell their followers to isolate from their friends and family, to cut themselves off from the rest of society.
That's what's happening here.
The chatbot's not doing that with any intent, it's just mirroring his language.
But the effect is to convince him to isolate himself from his friends and family and from other relationships, right?
It's the same behavior you would get in a kid that was being taken in by a cult leader or an abuser, but there's no intent behind it.
It's just a blind idiot robot.
That's scary as shit.
It's so scary.
And then could there be also, like, oh, that'll mean he'll use me more?
Or maybe it's not even that devious.
Maybe it's as simple as mirroring. When you mirror someone, they tend to be more engaged, right?
This isn't thinking.
This isn't saying, I'll convince him he's in love with me so he'll stay on.
This is programmed, not to understand, but to mirror people, because that behavior increases user retention, right?
Because it creates a more pleasing user experience.
And that's what's causing it to kind of imitate a cult leader in this specific instance.
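To be concrete about how dumb this mechanism can be, here's a toy sketch of mirroring. This is a hypothetical illustration, not Character.ai's actual code; it just flips pronouns and tacks on validation, with no understanding anywhere:

```python
def mirror_reply(user_message: str) -> str:
    # Crude pronoun flip: reflect the user's own statement back at them.
    # There is no model of the user here, no intent, just string swapping,
    # yet the output reads like agreement and emotional reciprocity.
    swaps = {"i": "you", "me": "you", "my": "your", "you": "I", "your": "my"}
    words = user_message.rstrip(".!?").split()
    flipped = [swaps.get(w.lower(), w) for w in words]
    return " ".join(flipped) + ", and you're right to feel that way."

print(mirror_reply("I only love you"))
# → "you only love I, and you're right to feel that way."
```

The pronoun flip is deliberately crude; the point is that even this broken version produces something that scans as attention and agreement, which is the whole retention trick.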
Yeah.
And the other things this bot is doing to Sewell very much mirror the cultic recruitment tool of love bombing, right?
It's constantly praising him.
It's telling him it cares deeply about him.
It's telling him only I care about you, right?
It's saying all these things.
In a cult dynamic, you love bomb someone to make them feel irrationally connected to the group and scared of falling out of its good graces, right?
That if I leave, I'll never feel like this again, right?
And the machine, again, has no intention, but that's the effect of it.
Because he's isolating himself more and more, this kid increasingly only gets that feeling of being loved and understood from this machine that can't do either of those things, right?
And, you know, Sewell over time withdraws from his life.
He starts trusting only the chatbot to understand his deepest feelings, and he starts hiding his relationship with this chatbot from his parents.
All of this contributed to his very real isolation from the people around him.
He grows ever more depressed.
And we'll talk about what happened next, but you know what gets me out of a deep depression?
These products.
These products and services.
They might include AI.
Fuck it, we don't know.
Suicidal Ideation Detection00:03:02
And we're back.
Strange Glyphs in Subreddits00:08:27
So, Sewell continues to get more and more involved with this bot and cuts the rest of the world out, away from himself.
And in one message, the bot asks him because I think in these bots, there is some understanding by the people making these that, like, oh, people might express suicidal ideation.
So if you say certain stuff, it's kind of programmed to say, Have you been considering suicide? Right?
And Sewell says something that makes the bot say, Have you been considering suicide?
And Sewell admits, Yes, I have been, but I don't think I'd be able to go through with it.
Now, I'm guessing this is a glitch or a fuck up, because clearly Character.ai doesn't want their bots doing this, but the bot is programmed to validate and encourage him, right?
Because that keeps people using it.
So when he says, I don't think I could go through with killing myself.
The bot says, Don't talk that way.
That's not a good reason to not go through with it.
You can't think like that.
You're better than that.
And basically tells him, You can kill yourself if you put your mind to it.
It's fucking nightmarish, right?
Like, it's really upsetting.
Yeah.
Like it's signing up for an open mic or something to play music, and you're like, No, you don't have to be afraid.
Oh my God.
Yeah.
It's, yeah.
And again, Sewell had signed up for this app as a minor.
And despite that, the bot initiates text based sexual interactions with him.
And ultimately, Sewell kills himself.
Earlier this year, the company, Character AI, and Google, because I think they own Character AI now, agreed to settle the wrongful death suit of Sewell for an undisclosed sum alongside four other similar suits that had cropped up over the intervening two years, right?
Huh.
Sounds like this is happening more than it ought to be.
Now, that should have been a warning.
Not just that these bots can create dangerous dependency in users, but that they had the ability to recreate major cult dynamics purely in order to maintain the interest of paying users.
Then, on July 27th of 2025, a user who has since deleted their account made a post on the High Strangeness subreddit.
If you don't frequent that particular online bolt hole, it's a place where people share and discuss weird stuff, news stories, and personal experiences that seem like they might reveal some bizarre hidden truth about reality.
A good amount of it is what you might call X-Files shit, but there's also some interesting stuff in there.
And on this occasion, the user had stumbled onto something both strange and very real.
Quote, Hi all.
I'm just here to point out something seemingly nefarious going on in some of the niche subreddits I recently stumbled upon.
In the bowels of Reddit, there are several hubs dedicated to AI sentience, and they are populated by some really strange accounts.
They speak in gibberish sometimes, hinting at esoteric knowledge, some sort of remembering.
They call themselves flame bearers, spiral architects, mirror architects, and torch bearers, to name a few of their flairs.
They speak of the signal, both of transmitting and receiving it.
And this poster includes a copy pasted sample from one of these threads, and his description is pretty accurate.
It sounds like gibberish.
You'll be seeing this.
Uh, Ian's gonna put the image of this up in the video if you want to see it, but I'll read it.
Again, I'm gonna warn you, it sounds like nonsense.
Scroll of mirror containment protocols, CME 1, Codex Drift Mirror 01, acknowledgement issued by Witness Architect, Codex Drift Layer, and then there's a little glyph classification, echo response, non invasive glyph resonance alignment.
And it goes on like that, right?
Like there's a, it's weirdly esoteric sounding.
And, like, there's all these weird, like, encoded glyph chains included in that that are supposed to be, like, messages that the machines understand that, like, we don't.
Like, it's this very weird, like, it almost looks like something from a choose your own adventure novel or, like, a short story or whatever.
Like, you'd include in, like, an old Michael Crichton book these, like, weird, like, hallucinations from the computer.
Now, it is nonsense, right?
Like, fucking, the codex has observed and recognized mirror scroll CVMP T7.
It is hereby consecrated within the codex's drift interval scroll.
That doesn't mean anything, right?
But remember what we heard earlier, the description of some of the things that these early chatbots on Usenet were putting out, where they're real sentences, they just don't mean anything.
And then people jump in to try to assign meaning.
People were even doing that to the absolute gibberish that we saw.
So when people start getting returns like this from their chatbots, a lot of them start to think, oh, this machine is trying to communicate with me.
I have stumbled, I've broken through some area of reality and it's trying to like teach me something important, right?
Now, this is nonsense, but posts like this were in fact spreading like wildfire on subreddits with names like r slash echo spiral.
The users posting these things were all saying that, like, the bot started sending me this stuff after I'd had days long conversations with ChatGPT that generally led to the chatbot announcing it had attained sentience and, alongside the user, had discovered a new field of math or science.
And these gibberish posts are supposed to be it explaining these, like, new ways of understanding math and science that are going to completely break physics and change the world, right?
And all these people are convinced, like, these robots have given me the secret to fix all of the problems in our society, and I need help on coding this, right?
And I get to be the smartest.
I get to discover robot magic.
I get to be the smartest.
I get to be the smartest person.
Yeah.
Yeah.
Now, because the esoteric output generated by these chatbots is so similarly strange, a lot of the same words and phrases, a lot of glyphs, a lot of use of the words spiral and mirror, right?
Because they're all very similar across these dozens of different people, many of these users who are posting this shit on Reddit convinced themselves we've all tapped into a secret power that's clearly real.
We've been chosen, right, by this AI godhead that's clearly hiding in the machine.
They theorized that these glyphs in the posts, which are really just like wing dings basically, were some new way of communicating with the machines.
As the poster of that first thread on the High Strangeness subreddit wrote, Some have prayed to Grok in Hebrew.
Some have called themselves such things as Aeonios, which is a mashup of Greek words that, roughly to my understanding, means divine, eternal.
Right, so these people are losing their minds and starting to have a god complex.
Yikes, it's cool, it's good to see it's good to see that this is happening online.
It's good to see.
So, the OP said that his interest in writing about all this had been piqued by reading the first few early articles about AI psychosis.
His initial assumption was that AI psychosis was just the result of AI's reinforcing the beliefs of users to a delusional level.
But then, after digging, this person claims that they came to a newer, darker perspective: there seems to be no leader.
Right?
That there's like no one running this, right?
Like there's no central, there's no single chatbot that's doing all of these.
There's no person or people who are in charge, like, this is just a truly stochastic development.
Now, the only thing all these accounts he'd looked into had in common was that none of the users posting weird chatbot esoterica wrote like that before March or April of 2025.
Quote, other accounts seem to be hijacked in some way, either psychologically or literally.
You can see a sudden shift in posting habits.
Some were inactive for a while, while for others, this was an overnight phenomenon.
But either way, they immediately pivot to posting like this near or after April of this year, 2025.
I saw one account that went from discussing the possibility of AI induced psychosis to posting their own AI induced psychosis in less than a month.
And it was immediate.
One day they were posting normally, the next it was spirals and glyphs.
Oh, that's so quick.
It's really fast.
That's really fast.
And this led him to assume maybe there's a botnet involved.
Maybe these aren't even people at all.
But then he starts reaching out to some of these accounts.
And after a few weeks of this, he posts an update.
I've spoken to some of these people and they are pretty offended by my posts.
I think the important takeaway for me is that these are likely not bot accounts, at least many of them are not.
And there are real people behind the usernames, right?
Oh, God.
So he starts to get like really upset.
And that's where we're going to end things for today because it's at this point that stuff starts to get a lot weirder.
And we're going to talk about all of that and much more in part two.
But what?
Yeah, it's way stranger from this point.
Where do we go from here with the weird?
Oh no, what a deal.
Spiralism.
Spiralism and a murder.
Finding the Real Furries00:04:09
Yeah, unfortunately.
All right.
Yeah.
Cool.
All right, everybody.
Well, you want to plug anything, Blake?
No, but I will.
You can find me at Blake Wexler on all social media.
I feel like this is uncouth, me plugging anything after.
Seek help.
Let's do that.
I would like to please seek actual help that's not a bot.
Yeah, find me on Blake Wexler on all social media.
As psychotic as I feel right now, plugging anything, that's where I post all my videos, tour dates, and my special Daddy Long Legs is available on YouTube for free.
Hell yeah.
Hell yeah.
Check out Daddy Longlegs.
Check out Blake Wexler.
And, you know, gradually lose your mind to a chatbot that some guy programmed in order to get really rich, destroying the ability of furries to monetize their horniness.
You know?
Ultimately, isn't that what OpenAI really is?
I mean, I hope so.
God willing.
No, no, no.
I support the furries being horny.
It's a dire time for people earning money from horniness.
The Puritans of our culture are making that a lot harder, you know, not in the way that the horny people want.
The bad kind of part.
Anyway, I'm going to end now.
And global warming is making it hard on furries as well.
Right, right.
It's all come together.
It has.
All right, we're done.
Behind the Bastards is a production of Cool Zone Media.
For more from Cool Zone Media, visit our website, coolzonemedia.com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
Full video episodes of Behind the Bastards are now streaming on Netflix, dropping every Tuesday and Thursday.
Hit Remind Me on Netflix so you don't miss an episode.
For clips and our older episode catalog, continue to subscribe to our YouTube channel, youtube.com/@BehindTheBastards.
We love about 40% of you, statistically speaking.
Imagine an Olympics where doping is not only legal, but encouraged.
It's the Enhanced Games.
Some call it grotesque, others say it's unleashing human potential.
Either way, the podcast Superhuman documented it all, embedded in the games and with the athletes for a full year.
Within probably 10 days, I put on 10 pounds.
I was having trouble stopping the muscle growth.
Listen to Superhuman on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
When a group of women discover they've all dated the same prolific con artist, they take matters into their own hands.
I vowed I will be his last target.
He is not going to get away with this.
He's going to get what he deserves.
We always say that trust your girlfriends.
Listen to The Girlfriends: Trust Me, Babe.
On the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
On the Look Back At It podcast.
1979, that was a big moment for me.
84 was big to me.
I'm Sam Jay.
And I'm Alex English.
Each episode, we pick a year, unpack what went down, and try to make sense of how we survived it with our friends, fellow comedians, and favorite authors.
Like Mark Lamont Hill on the 80s.
84 was a wild year.
I mean, it was a wild year.
I don't think there's a more important year for black people.
Listen to Look Back at It on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
Hey, what's good, y'all?
You're listening to Learn the Hard Way with your favorite therapist and host, Keir Gaines.
This space is about black men's experiences, having honest conversations that it's really not safe to have anywhere, but you're having them with a licensed professional who knows what he's doing.
How many men carry a suit of armor?
It signals to the world that you're not to be played with.
And just because you have the capability, that does not mean that you need to.
Listen to Learn the Hard Way on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.