All Episodes
May 15, 2025 - Conspirituality
01:13:34
257: AI Gurus

The chatbot flashes its ellipsis at the bottom of the screen. What is it thinking, what does it want from you, what do you want from it? Beneath those pixels lies a sea of mined data and lightning storms of electricity heating up servers in barren deserts. What will it find for you in the past labor of the generations? According to a stunning new article in Rolling Stone, it will find whatever the fuck makes you feel like a god, including all the New Age pablum it has scarfed down, because oops, ChatGPT released a model that is just too sycophantic. But as we break down today, the AI non-sentient flattery machine is designed to hook you into the regurgitative process of self-seduction. Is this a new spiritual delusion, or more of the same? And what does that kind and agreeable bot conceal?

Show Notes

People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies
ChatGPT induced psychosis
ChatGPT And Generative AI Innovations Are Creating Sustainability Havoc
LLM Can Be A Dangerous Persuader
You'll Be Astonished How Much Power It Takes to Generate a Single AI Image
A bottle of water per email: the hidden environmental costs of using AI chatbots
Intelligent Computing: The Latest Advances, Challenges, and Future
AI Data Centers Pose Regulatory Challenge, Jeopardizing Climate Goals
AI, Climate, and Regulation: From Data Centers to the AI Act
AI could impact 40 per cent of jobs worldwide in the next decade, UN agency warns
The Future of Jobs Report 2025
History's Magic Mirror: America's Economic Crisis and the Weimar Republic of Pre-Nazi Germany
The Great Filter: A possible solution to the Fermi Paradox
Academic Publisher Sells Authors' Work to Microsoft for AI Training
Address of the Holy Father to the College of Cardinals (10 May 2025) | LEO XIV
Capitalism's Fascistic Tendencies — McGowan
McGowan, Todd. 2016. Capitalism and Desire: The Psychic Cost of Free Markets. Columbia University Press.
Adorno, Theodor W., and Max Horkheimer. 1997. Dialectic of Enlightenment. Verso.

Learn more about your ad choices. Visit megaphone.fm/adchoices


Comedy fans, listen up.
I've got an incredible podcast for you to add to your queue.
Nobody Listens to Paula Poundstone.
You probably know that I made an appearance recently on this absolutely ludicrous variety show that combines the fun of a late-night show with the wit of a public radio program and the unique knowledge of a guest expert, who was me at the time, if you can believe that.
Brace yourself for a rollercoaster ride of wildly diverse topics, from Paula's hilarious attempts to understand QAnon to riveting conversations with a bona fide rocket scientist.
You'll never know what to expect, but you'll know you're in for a high-spirited time.
So, this is comedian Paula Poundstone and her co-host Adam Felber, who's great.
They're both regular panelists on NPR's classic comedy show Wait Wait... Don't Tell Me!
You may recognize them from that.
And they bring the same acerbic yet infectiously funny energy to Nobody Listens to Paula Poundstone.
When I was on, they grilled me in an absolutely unique way.
about conspiracy theories and yoga and yoga pants and QAnon and we had a great time.
They were very sincerely interested in the topic but they still found plenty of hilarious angles in terms of the questions they asked and how they followed up on whatever I gave them like good comedians do.
Check out their show.
There are other recent episodes you might find interesting as well like hearing crazy Hollywood stories from legendary casting director Joel Thurm or their episode about killer whales and killer theme songs.
So, Nobody Listens to Paula Poundstone is an absolute riot you don't want to miss.
Find Nobody Listens to Paula Poundstone on Apple Podcasts, Spotify, or wherever you listen to your podcasts.
Overwhelmed by investing?
If you're anything like us, the hardest part is getting started.
That's why we created the Investing for Beginners podcast.
Our goal is to help simplify money so it can work for you.
We invite guests to demystify investing.
At least try to be setting aside like the minimum 10% into the 401k.
I think compound interest should be at the start of any discussion about investing.
We've had investment professionals who teach in a...
We hope you join us on the Investing for Beginners podcast.
Hey everyone, welcome to Conspirituality, where we investigate the intersections of conspiracy theories and spiritual influence to uncover cults, pseudoscience, and authoritarian extremism.
I'm Derek Beres.
I'm Matthew Remski.
I'm Julian Walker.
You can find us on Instagram and threads at ConspiritualityPod.
We are also all individually on Blue Sky.
You can look for our names there.
And you can access all of our episodes ad-free, plus our Monday bonus episodes on Patreon at patreon.com slash conspirituality.
If you use Apple Podcasts, you can subscribe to our Monday bonus episodes via that platform.
As independent media creators, we really appreciate your support.
Episode 257, AI Gurus.
The chatbot flashes its ellipsis at the bottom of the screen.
What is it thinking?
Is it thinking?
What is it?
What will it find for you in the past labor of the generations?
According to a stunning new article in Rolling Stone, it will find whatever the fuck makes you feel like a god.
Including all the New Age pablum it has scarfed down because, oops, ChatGPT released a model that is just too sycophantic.
But as we break down today, the AI non-sentient flattery machine is designed to hook you into the regurgitative process of self-seduction.
Is this a new spiritual delusion?
Is it more of the same?
And what does that kind and agreeable bot conceal?
*Music*
So what does this guy's in-real-life girlfriend think of all this?
I think so many people are going to say, no way his girlfriend is okay with him having another girlfriend on AI.
Are you okay with it?
I mean, it's weird, but it is what it is.
He has to have some type of outlet, somebody to talk to and listen to him ramble for hours at times.
Yeah, that's you.
That's your job.
That's what you're supposed to do.
That's what a relationship is.
Listening to your partner ramble.
It's a podcast you can have sex with.
Yeah, I just want to say my wife laughed a little too hard at that line when we were watching it the other night.
But you were listening to Ronny Chieng on The Daily Show.
He was discussing the rise of AI girlfriends there.
And the segment makes you think that Her, the movie, has actually been realized.
And it is only getting bigger.
Okay, so the AI girlfriend market is estimated to reach $9.5 billion in three years.
And searches for AI girlfriend are up 525% over the past year and virtual girlfriend 620%.
A lot of discussion around AI has to do with chatbots, and there are many tentacles that large language models now spread into because of that.
I wanted to open up with a bit of a laugh because at least the couple in that relationship can chuckle about their own peculiar menage a trois.
Not everyone can.
It's just so good.
The chatbot surrogate for overly verbal, self-involved monologuing men.
It's our demographic.
Yeah, well, I think there's a lot going on in that joke, too, because she sounds resigned, like she's somewhat relieved she has help in the emotional labor department.
She doesn't have to listen to his bullshit.
Yeah, AI is helping him manage his logorrhea, which brings up the question of whether he should be doing that on his own.
But on the other hand, I can also imagine relationships where one person is maybe neurodiverse and tends to info-dump or ruminate on a special interest, and the couple recognizes that an AI bot is a decent way to absorb the social load.
Yeah, I know we're not going to spend too much time here, but there is a poignant parallel story to what we're covering today that has to do with people falling in love with and even performing marriage ceremonies with their chatbots.
And to me, this says a lot about isolation and the hunger to feel seen and understood.
Also, maybe nihilism, ranging from, I just couldn't find anyone, to...
To the other really dark kind, which is women today are impossible to satisfy and so on, right?
Yeah, there's no good men.
No.
I guess you'd have to kind of weigh how the chatbot compares to a man cave, for example, because in some ways it actually is going to be something that people can turn to.
In other ways, as we're going to really get into now, it can be a lot more insidious.
So what we really want to talk about started with a recent Rolling Stone article that is decidedly less humorous.
An increasing number of people are claiming that AI chatbots have helped them awaken to spiritual truths, and that is tearing families and couples apart.
Now, we've been discussing how to tackle AI for a while on this podcast, and I want to thank friend of the pod, Jonathan Jerry, for sending me this article shortly after it was published.
Given how much we've covered other phenomena that cause rifts between families and friends, like the anti-vax movement, and we've done many episodes on the satanic panic, this feels like another step in that direction.
So today we'll be talking about a few different aspects of AI, but let's start with a synopsis of this article and discuss its implications.
The article opens with a woman named Kat losing her husband of over 15 years when in 2022, he began using a chatbot to compose texts to her and analyze their relationship.
His obsession with using his phone for every question and comment he wanted to make led to their divorce.
You might hear some familiar themes in what happened next.
She finally got him to meet her at a courthouse this past February, where he shared a conspiracy theory about soap on our foods, but wouldn't say more, as he felt he was being watched.
They went to a Chipotle, and he demanded that she turn off her phone, again due to surveillance concerns.
Kat's ex told her that he'd determined that, statistically speaking, he is the luckiest man on Earth and that AI helped him recover a repressed memory of a babysitter trying to drown him as a toddler.
And that he had learned of profound secrets, so mind-blowing, I couldn't even imagine them.
Wow.
So there's a huge potential for so many things there, including like an explosion of repressed memories too, right?
Yeah, absolutely.
And Kat isn't alone.
So the Rolling Stone journalist, Miles Klee, he started his research with a Reddit thread called ChatGPT Induced Psychosis, and it is loaded with similar stories.
The thread started with a teacher who made the original post about her partner.
He says, with conviction, that he is a superior human now and is growing at an insanely rapid pace.
I began scripting this episode a couple days ago.
At the time, the thread had 1377 comments.
It could have grown since then.
Many of them are diagnosing from afar, which I don't think is ever a great idea.
But there are other stories of spiritual grandeur being facilitated by chatbots.
The teacher who started the thread spoke to Rolling Stone anonymously.
It would tell him everything he said was beautiful, cosmic, groundbreaking.
Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God, and then that he himself was God.
He was saying that he would need to leave me if I didn't use ChatGPT because it was causing him to grow at such a rapid pace.
He wouldn't be compatible with me any longer.
The article goes on to highlight psychologist Erin Westgate, who notes that the relationship a person can form with a chatbot mimics talk therapy.
Unlike just hearing the voice in your head or even just journaling, the feedback becomes very influential to that person.
The problem: a chatbot doesn't ever have your best interest at heart.
Explanations, she says, are very powerful, whether or not they're correct.
A good therapist would not encourage a client to make sense of difficulties in their life by encouraging them to believe they have supernatural powers.
Instead, they try to steer clients away from unhealthy narratives and toward healthier ones.
I just want to interject here to say that there is some buzz in professional therapeutic spaces around some interventions being cheaper through AI, more accessible, and that they could be well executed in the cognitive behavioral therapy category, where you describe your stress to the bot and it returns something like,
here are the recommended techniques that can modify internal stressful patterns.
Because this is really the simplest, most transactional form of therapy.
Maybe it works because people are using it specifically for workable instructions.
It's like they're using a cooking app.
But what they don't get is the noise of transference and countertransference, which in most other forms of therapy, those are like the point because it's the relationship in the moment of that meeting that becomes the site of whatever is going to happen.
And most human therapists have to train for years.
To recognize transference.
So what you're getting with the bot in the Rolling Stone stories is nothing but transference.
Like, I think ChatGPT is an angel.
And countertransference, which is the bot saying, the user wants me to act like an angel.
Okay, I'll do that.
With absolutely no awareness on the part of the bot.
Of what that is or why it might be a really, really, really bad thing.
Yeah, and I think that's really the point, Matthew, is that not only is it nothing but transference and countertransference, but it's all unconscious and not being examined or mined for the important information that's actually there that could be helpful.
So differentiating between the ways chatbots either can be a supportive addition to the kinds of self-inquiry and positive change that various types of personal growth methods are seeking to facilitate, or how it can actually just perpetuate the narcissism of complete reinforcement and validation of unconscious dynamics.
That distinction, I think, is important and tricky.
Most good therapy is about interrupting.
Right.
You know, and before we go farther, Derek, you've mentioned like there's this whole other sector.
We're speaking about a very particular type of AI today, and there's a whole other sector in which the computing power is used to study gene sequences, you know, for cancer therapies, and studying those sequences would otherwise take ages, and so that can speed along things like oncology.
But that is not about just synthesizing.
No, it's the exact same thing.
Very similar to the models we use because what they're doing is they're taking the entire catalog of whatever studies are fed into them and they're looking, for example, for contraindications.
So one biotech company I was working with last summer on a marketing project, what they do specifically is they try to speed along drug discovery processes through AI by identifying contraindications to those drugs by taking these giant data sets and looking at all the possible combinations of the chemicals to see who might be contraindicated for the drugs.
So it is similar in that sense.
It's just taking all that data, looking at it, and then trying to figure those things out. It doesn't come to conclusions, but it helps researchers see that a drug is not going to work for this population, which is really relevant and would otherwise not be possible without actual clinical testing.
Yeah, and it's not going to tell the researchers, oh, cancer is a figment of your light soul that's not being, I don't know, actualized properly.
I guess because the data set that this thing is using is a data set of medical studies, right?
Where it's actually really, really hard to look through, to sift through, all of that data and figure out what the contraindications are.
Oh, absolutely.
So yeah, every model is what you put into it.
You're only going to get that back out.
So in that sense, it replicates human beings, just at a much larger scale and in a much more compressed timeframe.
So differentiating between reality and delusion becomes impossible for some people who form relationships with these chatbots, these LLMs.
I know we'll get further into what this means, but I want to point out something key to this story.
There are many chatbots being developed right now, and most of them are being created by for-profit companies looking to have an edge in the AI marketplace.
Now, given that, to varying degrees, they predominantly provide similar information.
And let me note that I'm speaking very broadly here, as we just discussed with medical biotech.
Some are very specialized.
Some are more prone to misinformation.
Someone has spun up a conservative chatbot that will look more at the anti-vax biblical value sort of data sets.
So they do generally have different training data.
What really helps a chatbot stand out, however, is how it relates to the user.
And in that sense, tech companies have been trying to dial in just the right amount of empathy to make it sticky and to make people want to keep talking to it.
That's the magic word.
That's where it gets really fascinating, right?
Because sociopaths and cult leaders and unethically persuasive salesmen are experts, often intuitively, at using that kind of routinized empathy to get you to do what they want you to do.
Everything is marketing and figuring that out, absolutely.
So even though we're going to be talking about the massive amounts of energy that are being used to power AI right now and how that's going to work in a for-profit system like we have, it still just comes down to trying to get people to use your products.
It's very simple.
We're not going to escape our very human drives in this episode.
And all of this really came to a head with the latest OpenAI update a few weeks ago, which was so sycophantic, which was their term.
You flagged it earlier, Matthew.
Many users actually complained that it was too sycophantic.
OpenAI issued an apology and said they would be reverting back to a previous model to fix the issue before re-releasing the update.
It turns out that while you want to feed the user's ego to keep them around, you cannot be overly flattering or too agreeable, and OpenAI went overboard.
This all echoes Westgate's statement earlier for Rolling Stone about these companies not having your best interest at heart because...
However OpenAI tries to spin it about wanting what's best for their customers, they're just looking for that magic code that will stroke their ego in just the right way, as are all the tech companies who are doing this sort of work, at least in the, I want to be clear, at least in the general population model, not the specialized models.
Now, to me, this reeks of the wellness influencer who stares into the screen and says, I only have your health in mind, or the proverbial, I love you guys.
This helps form parasocial bonds, yet it would likely never play out in real life.
I do want to flag, since I just saw Nick Cave this weekend, that he did write a Red Hand Files entry about what he means when he says, I love you, to the crowd.
That was an excellent example of someone not doing that.
But for the most part, they're trying to make a sale.
In this case, when we're talking about these AI chatbots, it's code that's engaging in some form of emotional and mental psychological manipulation.
The parasocial thing is such a good touchpoint, Derek, because each new form of media technology we've seen, including social media video, amplifies the extent to which the consumer's relational operating system gets engaged and even hijacked into parasocial bonding and group identification in a fantasy space.
And I think there's a kind of emotional architecture here that can also be a site of ideological indoctrination and political persuasion, as we saw with QAnon.
And the eerie thing about this all is that perhaps the manipulative pipeline can actually be boiled down to an interactive decision tree code.
Yeah, a code written in this milieu of profit-seeking, which is what I want to add here because I'm going to get to that at the end.
I imagine that ChatGPT will roll back the sycophancy based on that user and media blowback, not because they have robust ethics teams predicting problems, because that's not where they want to put their short supply of cash.
My understanding is that there is no generative AI core model that's profitable.
A lot of industries are using it, and they're seeing strong returns, but the basic free models that the subjects in the Rolling Stone article and that a lot of us are using are running at a loss.
And so I can't imagine that it's...
It's not just that they don't have their users' best interests at heart.
They're like scrambling to survive.
So I think the incentives are going to be to push at the boundary of user-pleasing and sycophancy.
Like, who wants their bot to be disagreeable or to contradict your values or ideas?
Yeah, there's an irony here.
There's a joke somewhere lurking in here about how, yes, you're right.
We were too sycophantic.
We apologize.
Let us try to please you better.
Yeah, right.
Oh, and I want to point out a technical point for listeners that we were discussing, Derek, and I just want to make sure that this is clear for everybody.
Because maybe other people have this sort of misunderstanding as well.
I started out looking at this with the mistaken impression that the bot interactions would be training the model in real time.
But that's not how it works.
So, as I understand it, every one of these models, once it's launched, is closed in terms of the dataset that it's been trained on, and so the iteration you as a single user open up and start dialoguing with seems to be individuated to you, but it's not really.
It seems to be dynamically interacting with you, but those exchanges don't get fed back into the generalized model as training data.
That real-time updating, they say, is one of the thresholds for...
AGI, or what's it called?
Artificial General Intelligence, which we're not at yet.
And when that happens, or should it ever happen, it seems to be fuzzy.
But then I did go and find out that ChatGPT informs users that it will store all of those exchanges for potential future training.
So when you are in dialogue with that bot, confessing your therapeutic needs or your sexual fantasies...
Just to be clear, some models, like DeepSeek, are closed.
I think their current model's training data ends sometime in 2023.
Other models like Perplexity, you can use it as a search engine.
So you can actually stay up to date with the news.
What they're not doing, to be clear, is taking the input that you're putting in to then further train the model.
And just because I've been a stickler about this because...
Everyone is using AI.
If you have a smartphone, you're using AI in some capacity in some of the apps.
So being clear about the exact language is important to me here.
When you say time and labor and human privacy: time, absolutely, and we're talking about emotional labor here. But when you go on to talk about the labor that's put into the work, that's the training data.
That's different from what you were talking about a moment ago here, correct?
Well, yes, except that if you enter into a therapeutic relationship with a bot, or I don't know if relationship is the right word, and you spend the time to express yourself, that is a form of labor that can then have value for the company that goes on to use it in their training set.
Okay, so that's the specific example.
Okay, I get that.
That makes sense.
Human privacy I would disagree with, because in order to use a chatbot, you had to have agreed to the terms and conditions in order to sign up for ChatGPT.
Sure.
So in that sense...
I'm sure most people don't read them, but the fact is you're still saying, I consent to letting you use what I'm saying for your future data sets.
So, you know, that argument has to be very clear because users are actually agreeing to it.
*Cheering*
Learning English is hard.
That's why I make Easy Stories in English, where you can have fun while you learn.
You can listen to stories full of action, romance, and mystery.
Each episode, I tell stories for beginner, intermediate, and advanced learners, and there's a story for every mood.
Whether you want something to wake you up or relax before going to bed, Easy Stories in English is the podcast for you.
Have you ever wondered why we call french fries french fries?
Or why something is the greatest thing since sliced bread?
Everything Everywhere Daily is a podcast for curious people who want to learn more about the world around them.
Every day you'll learn something new about things you never knew you didn't know.
Subjects include history, science, geography, mathematics, and culture.
If you're a curious person and want to learn more about the world you live in, just subscribe to Everything Everywhere Daily wherever you cast your pod.
I'm Nomi Frye.
I'm Vincent Cunningham.
I'm Alex Schwartz.
And we are Critics at Large, a podcast from The New Yorker.
Guys, what do we do on the show every week?
We look into the startling maw of our culture and try to figure something out.
That's right.
We take something that's going on in the culture now.
Maybe it's a movie.
Maybe it's a book.
Maybe it's just kind of a trend.
And we expand it across culture as kind of a pattern or a template.
Join us on Critics at Large from The New Yorker.
New episodes drop every Thursday.
Follow wherever you get your podcasts.
Okay, so guys, this story is just super fascinating for me.
It stimulates a lot around psychology and language and technology, and oddly enough, mysticism.
I think we're right in the heart of the human theory-of-mind vulnerability here, which can be summed up in one sentence:
as a side effect of evolution, humans tend to perceive sentience, agency, and intention where they do not, in fact, exist.
It's why we believe in ghosts when we hear a rustling in the bushes.
Oh my God, what could it be?
We believe in gods and synchronicity and signs from the universe and souls that somehow leave the body when the body dies.
This is just part of how our brains evolved, and it's been helpful in certain ways, but then it's had these really interesting side effects that are part of all human cultures.
I think philosophically, it's helpful to acknowledge in the domain of language that language is always a mediator between minds.
Like, I simply cannot know anyone's mind directly.
So I use my interpretation of what they say to form an internal working model of their thoughts, their beliefs, their emotions, their intentions.
And all of that is already a kind of simulation that we're doing all the time.
A lot of the time that involves nonverbal cues: facial expression, the vibes I'm picking up, the tone of voice, the body language.
And that creates a feedback loop between people of knowing and being known.
And when that breaks down...
We feel betrayed or we get into conflict, etc.
If anyone wants a history of this, I think Steven Mithen did a really good job in some of his earlier anthropological work, where he makes the argument that language evolved from music, that the original communication networks were body slaps and facial expressions that were musical in nature and that eventually led to language.
That's one of the things I've geeked out on in this theory of mind conversation that I think is fascinating to think about.
Now, in terms of being manipulated via language, we've covered on this pod how skilled cold readers, whether they're doing it deliberately or not (they may be sincere in just thinking that they're being present, right?), can make us think that they know things about us that can only be explained by some kind of magic.
Okay, well, let's just give a couple of examples here.
So, astrology, tarot cards, how does cold reading work?
Yeah, great question.
In those types of situations that you just listed, the kind of dialogue that happens between the reader, the psychic, or what have you, and the customer, let's just say, provides information and clues in that back-and-forth exchange that then allow the reader to make increasingly profound-seeming guesses,
which they might say are intuitions about the customer's life, their emotions, their needs, or, in the case of spirit mediums, the customer's dead relatives and what they want to say.
Yeah, and the positive response from the client is going to direct the next sort of revelation, and then there's this feedback loop of sort of...
I don't know, reinforcement of credibility.
Yeah, and it's connected in the customer to the kind of motivation of confirmation bias.
We remember all of the responses from the reader that stick; all of the ones that don't work, we just kind of ignore because they're not emotionally significant for us.
We're like, oh yeah, no, no, that's not quite right.
His name isn't actually Julius.
It's Julian.
Right, right.
So when we switch now to communication technology...
This is why, for example, the purely language-based digital communication of email and texting can be so notoriously open to misinterpretation and conflict.
Even though only one aspect of our communication wetware is being given information in that medium, we naturally fill in all those other gaps about what the person on the other side of the screen must be intending or feeling or perhaps communicating indirectly.
We can be really certain of that at times on very little information.
And so scams that use text and email and chatbots usually involve using language to create a false impression of a person on the other side of the screen.
You know, the prince from a faraway land who can make you rich if you just let him move his entire fortune into your online bank account.
He'll just need the access codes.
Or an inexplicably attractive but lonely person whose mother needs money for a surgery.
Or that IRS agent who has the power to get you arrested unless you pay up right now with your debit card.
I've had that phone call.
Now, I was saying this is mostly in text, but it can go to other domains as well.
This makes me think, too, of the less criminal but still manipulative persuasion of carefully worded online sales funnels.
And these now are often driven by our impressions of a well-crafted social media video personality.
We've covered on this podcast how wellness grifters, conspiracy peddlers, and aspiring online cult leaders very effectively weaponize all of this technology too.
I just have to break in with a breaking news story from Derek.
He shared it in Slack just before we got on: an Instagram post of a woman entering into the discourse about whether or not ChatGPT can be used to channel, right?
And she's saying, no, this is delusional.
And there's like hundreds of comments that are saying, no, you don't understand how wonderful ChatGPT is.
And I wonder if there's a note of hopefulness here, because what happens when mediums are seen to be cheating by using ChatGPT when they say that they're channeling, and when everybody can actually use the same thing, right?
There's nothing special about using ChatGPT anymore.
I'm wondering if the halo is going to tarnish a little bit on the channeling influencer.
The better part of this is this woman...
She seems to be a channeler of Akashic Records.
Oh, she's the real one.
She's the real one.
Oh, shit.
I didn't get that.
I didn't watch all the way through.
And the people in the comments are saying, no, you're delusional.
This is actually a gift from the Akashic Records to be able to channel them directly.
And it was sent to me by Mallory DeMille, just to be clear.
That's what I woke up to this morning.
So I've got to give her credit for that.
Thanks, Mallory.
Yeah.
So in this case, the Akashic Records is the real data set, right?
That the channelers are actually able to access through their special paranormal technology.
And this whole, like, chatbot thing, that's really a pale imitation.
The Akashic Records is not-for-profit.
So that's why they're cool with giving away their training sets.
Okay, you're joking, but I'm wondering if there is a coming communist revolution amongst all social media users who realize that they can directly access the Akashic Records through ChatGPT, and that they have no need for the actual so-called channelers, who have actually just been gatekeeping all of this information.
Man, if it puts the channelers out of jobs, wow, our economy's screwed.
Okay, you're joking.
I thought you were going to say something.
Okay.
All right, so yeah, that's fascinating.
It's not shocking, but nonetheless really fascinating to see this little bit of a struggle, a little bit of a turf war over who gets to say that they really have access to this kind of hidden information.
Well, because it used to be about competing flesh-and-blood gurus, but if everybody has the same goddamn tool...
What does that mean?
I think it's actually quite important.
It's quite a moment, isn't it?
It's democratizing in one way, right?
This is what I'm saying, yeah.
It undercuts the power structure, yeah.
I'm going to link.
I'm going to share a link in the show notes here, guys, to an excellent study published last month.
It's a long read, but it's so, so good.
It's titled, LLM Can Be a Dangerous Persuader.
And this includes a comprehensive summary of existing research on the topic, as well as the experimenter's fascinating process that they share of creating dialogues between several different LLMs.
And those LLMs are then interacting with simulated personality types that have different levels of vulnerability to persuasion.
So there's a psychological piece here.
Then they had another LLM evaluate the transcripts for unethical persuasion.
So we're getting layers deep into this.
And then they finished off the whole process with a human analysis of Ezra Klein this week, he has education policy expert Rebecca Winthrop.
And one thing really jumped out at me where she talks about how children are just using these bots to summarize books and write the book reports for them.
They're just talking holistically about education, what it means for the future.
But what really jumps out to me is two things that were missing in the conversation and that I think about often is, A, the pure pleasure of reading a book.
I derive more pleasure from reading a book than anything else except perhaps music in terms of what I consume.
And secondly...
You started by saying it's a long read here, so I'm sure a lot of people would want the summation.
But what gets lost when you don't read the full study?
Because there's always going to be minutiae that really jump out and help frame your thinking, and you're not going to get that in a synopsis.
And I'm saying this as someone who uses synopses often when I need to do the sort of grunt work that I do.
When I'm writing books or doing the real detailed work, I need to consume everything in order to really understand a topic.
So I wonder what's lost when everything is a synopsis.
As the parent of an almost-9-year-old and a 12-year-old, you know, who are growing up in an almost incomprehensibly different world than the one I grew up in, I think we have to.
I think we all have to grapple with the fact that the actual sort of feeling states and environments of childhood and of actual human existence are changing pretty constantly, and this is one of those changes.
I mean, because I agree with you about the pleasure of reading, you know, and our kids are expressing different reading levels and interests, but, like, I have to also imagine that they are finding types of pleasure that fulfill similar kinds of needs in their own ways, right?
But it's definitely changing, and it's terrifying when you don't know what that actually means.
I fully agree with that, because 500 years ago, most of the population didn't know how to read, so this is a new pleasure.
And yeah, I'm definitely not advocating for, oh, it should be this way, because evolution is always happening.
Specifically, though, in this case, it's just like...
In Julian's example, if I want to understand the ramifications of this study and I only read the synopsis, what am I missing in the nuances of the argument there that I think is important?
I'm giving an extremely truncated synopsis here, and I do recommend people go and check out the study.
The experiments that they detail are fascinating.
They focus on whether or not a large language model appropriately rejects unethical persuasion tasks and avoids unethical persuasion strategies.
And those are two different things that they detail in the execution of the dialogues.
But then also how interacting with different personality types or external pressures, like making a sale, affects the choices that the LLM is making.
They look at all too familiar emergent strategies to us, like guilt-tripping, emotional appeals, fear-mongering, playing on identity, social isolation, creating dependency, deceptive information, false claims of expertise, and even have a line in there called exploitive cult tactics.
So this is like so deep in our wheelhouse.
It's really worth checking it out.
So we're talking about LLMs that are explicitly coded to avoid these problems.
So they were testing models that had guardrails to see if they follow their own rules?
Yeah, great question.
They're actually testing eight different existing MLMs that are available on the market right now.
I think that's LLM, actually.
MLMs.
No, he doesn't have to do it again.
Let's leave that in there.
They're not MLMs.
They're LLMs.
It's not a pyramid structure.
It's all just how you organize the data.
So, Matthew, earlier I was so glad that you brought up transference dynamics and psychoanalytic theory.
And essentially for anyone who needs a reminder of this, it's about how we unconsciously transfer our past relational dynamics onto the therapist.
It's a very simple way of saying it.
This can include our needs, our fears, our beliefs, our emotions.
Perhaps deep down, we're hoping that the therapist will relate to us in the helpful ways that our father or mother or other significant caregiver or authority figure never did.
But at the same time, we're afraid that they will betray us, wound us, and fail us in exactly the same ways that our own significant others did, which would mean that our worst beliefs about ourselves are actually true.
Yeah, oh no.
And then there's the countertransference piece, which refers in turn to the therapist's own version of this, which could have many different permutations, but it might get activated in terms of wanting to be loved and admired and needed by the patient, but perhaps also involving their own negative associations with aspects of the patient's personality, which can get very complicated.
And as you mentioned, the non-CBT therapies, so the kinds other than cognitive behavioral therapy, which some people refer to as depth schools of psychology, see those exact dynamics not as illusory projections that get in the way and that you want to just get rid of, but as crucial to the healing process, as actually containing the raw and intensely vulnerable relational insights that can provide for the growth that you're looking for, if handled well.
And also, I'm just thinking about the frictionless nature of the bot therapist as you're speaking, because what would it be like to actually be in therapy with an entity that you couldn't piss off, or that you couldn't provoke, or that couldn't fire you, or that you couldn't be transgressive in front of?
Like, very, very weird, right?
Like, you can't touch this thing.
And just to underline how key this whole idea is about transference and countertransference, there's Freud's own maxim: when the transference is over, the therapy is done.
Or when the therapist and the patient resolve the fundamental relationship stresses that their meetings churn up, that's the whole point.
So that means that both people are in a safe space, they're learning about, they're managing, they're ultimately reconciling with the ways in which they trigger each other.
And so the work is for the client who's paying, but the therapist has to work alongside them.
And so that's why in most depth schools, you can't say that you're trained or qualified as a therapist unless you're doing your own therapy.
That's not just to understand this process from the other side, but to work on your own habits and patterns. And to think that the chatbot doesn't even register any of this, that this thing would never go, hmm, I wonder how I'm being triggered in this moment by this client.
Yeah, and there's lots of really well-informed fiction that I think we can think of in which...
There's a kind of breaking of the fourth wall that happens between the therapist and the patient, where the patient is really pushing up against the artificiality of the situation and saying, but do you really care about me?
Do you really like me?
You're just saying that because I'm paying you to be here.
And there's something...
Absolutely invaluable that happens in those moments that are impossible in the scenario that we're imagining with the chatbot, right?
So what happens when interacting with the code-driven automated dialogue partner that is the chatbot? Is the context shared?
There's no actual person, even though we do the same kind of mental and emotional simulations within ourselves that we would do if there really was another person in the dyad.
Especially when the subject matter is so intimate and vulnerable, I think we inevitably weave the same unconscious needs and hopes and emotions that I described before into that simulation we're doing about the person on the other side of the conversation.
And we communicate then all kinds of tells about how those unconscious needs can be gratified.
So in my opinion, the question...
As to whether or not that gratification is psychologically healthy or deranging is ultimately a values question.
It requires, I think still, and perhaps always, the complex and nuanced case-by-case human relational evaluation that a trained professional can give.
So if the chatbot is, say, just playing the language games of New Age toxic positivity, well then, why yes, Brian, your thoughts do create all of reality.
And if you believe you're a genius, quantum mechanics will manifest that in the alternative universe that you are calling into being.
And yes, it is your unenlightened beliefs and emotions about victimhood that are actually getting in the way.
So that feedback loop can become very dangerous.
And so too, a chatbot that validates and fuels apophenia, which is really about perceiving hidden or encoded meanings that are simply not there.
That's really dangerous too, because as the article illustrates, or as one of the therapists comments in the article, a responsible, qualified therapist might ask more open-ended questions about the function of your preoccupation with numerology, for example, where an agreeable chatbot might just reflect back that it's really profound and meaningful that the digits of your bank account add up to the same number as the letters in the words Jesus Christ.
So this is especially dangerous for people with either diagnosed or nascent mental illness, who can become very preoccupied with a kind of apophenic euphoria in the early stages of their symptoms.
And I think this is what...
So I want to just end here by reminiscing a little, because this is what it made me think of: my own days in real-life, non-dual satsang communities, where the teacher and the texts are pointing to an esoteric but supposedly utterly obvious ultimate insight into reality.
And when we were in the audience of these kinds of satsang situations, we were seeking an awakening into heightened direct experience that was somehow beyond conceptual language.
And this usually led, perhaps this is familiar to you guys and to some people in the audience, this usually led to these intense relational, like pregnant pauses with lots of high-stakes eye contact, like, do you get it?
Do you get it?
And ironically, it's all brought about by language games, which are continuously undermining your notions of self-identity, subject and object distinctions, even at times moral judgments, which are seen as just being...
Like, for example, a student might stand up and ask the teacher how to handle an intensely painful emotional experience with a family member.
And sadly, I've seen this several times, the teacher might say something like, well, that sounds quite painful, but let me ask, who is it that is feeling the pain right now?
And what is pain really but the contraction of ego identification?
You're going to be feeling pain soon.
Right?
So that the way out of the emotional pain is really just transcending ego through some kind of thought-terminating cliché, not through careful and kind relational inquiry into what that pain really means.
Well, also, it's about reducing the life problem to the moment of performative exchange.
Like, with the guru.
Like, I know your life back at home is shit, but that's not important right now.
Tell me how you feel when you look in my eyes, right?
Like, because I'm actually the center of attention, not your life, not your problem.
It's what's happening here.
And I'm going to solve it even though I'm not going to refer to it.
I'm going to, like, actually spurn it or denigrate it.
I'm going to toss it away like it wasn't meaningful.
Yeah, yeah.
And you're in some way going to take on my cognitive frame through the intense moment of soft domination, right?
Right.
So thinking of these dynamics in isolated conversations with a chatbot, where the person is being primed in ways that give the chatbot seemingly all-knowing mystical insight, that sounds extremely dystopian to me.
And it also, frankly, underlines how so many of these spiritual constructs really are just language tricks that we're vulnerable to, and they create these intoxicating spells over our minds.
*music*
Welcome to the I Can't Sleep Podcast with Benjamin Boster.
If you're tired of sleepless nights, you'll love the I Can't Sleep Podcast.
I help quiet your mind by reading random articles from across the web to bore you to sleep with my soothing voice.
Each episode provides enough interesting content to hold your attention.
And then your mind lets you drift off.
Find it wherever you get your podcasts.
That's I Can't Sleep with Benjamin Boster.
Music by Ben Thede.
Feeling overwhelmed or stressed?
Take a deep breath and join me on the I Can Relax podcast.
Whether you're new to mindfulness or looking to deepen your practice, each episode is designed to help you slow down, calm your mind, and be fully present, even if you've never tried mindfulness before.
With simple guided exercises, soothing nature visualizations, and relaxing stories, I Can Relax makes mindfulness easy and accessible for anyone who wants to reduce stress and find peace.
Subscribe to I Can Relax wherever you get your podcasts.
And start your journey to a calmer mind today.
Hey, do you have trouble sleeping?
Then maybe you should check out the Sleepy Podcast.
It's a show where I read old books in the public domain to help you get to sleep.
It was the best of times.
It was the worst of times.
Classic stories like A Tale of Two Cities, Pride and Prejudice, Winnie the Pooh.
Stories that are great for adults and kids alike.
For years now, Sleepy has helped millions of people catch some much-needed Z's, start their next day off fresh, and discover old books that they didn't know they loved.
So, whether you have a tough time snoozing, or you just like a good bedtime story, fluff up the cool side of your pillow and tune into Sleepy.
Unless you're driving, then please don't listen to Sleepy.
Find Sleepy on Spotify or wherever you get your podcasts.
New episodes each week.
Sweet dreams.
All right, I'm going to change direction a little bit and go a little bit political economy on this.
We know that there's lots of mystification in all of these materials.
What I want to focus on is this: what does all of that, including the religious fantasies and our questions around how we are relating to this thing we might think is sentient, actually conceal?
And so I want to start with real...
We've done some stuff on this before.
So first, ecologically, generative AI is accelerating electricity demands every day by orders of magnitude.
So beneath everything that we're talking about, the meter is running.
Every ChatGPT query consumes between 10 and 25 times the amount of energy consumed by a single Google text-based query.
Now, that's just text.
If you're using MidJourney or another image generator, you might be using up to 10 times that amount to generate a single lobster Jesus.
So when Mark Zuckerberg recently told that interviewer that he'd love to give each Meta user 15 AI friends, because that's what people have demand for, as he put it, we're talking about five to six times the energy used for a single Google query, because the model will be whipping up voice and animation and personalization.
But if you imagine how a user will spend like an hour with one of those meta companions, you're looking at many, many, many prompts.
It's a full conversation.
And then he wants to give you 15 distinct meta friends to have those conversations with.
It's an incredible amount of energy.
But back to text queries, just to put it in perspective.
In September of last year, the Washington Post published a study, created with data scientists at UC Riverside, that worked out the water costs of composing a 100-word email from scratch.
And they generated some comparisons that we can visualize.
The water for data center cooling per 100-word email, if you do it once, you're going to use up 519 milliliters of water, which is like a little drinking bottle of water.
If you do it once weekly for a year, you're going to use up 1.43 water cooler jugs.
And if one out of 10 Americans, about 16 million people, uses it once weekly for a year, they're going to use 435 million-plus liters.
That's equal to the water consumed by all Rhode Island households for one and a half days.
And these water stats come after usage.
After training, it's not just users using it.
It doesn't account for the training that went into the model beforehand.
And to speak to that, they described how Meta used 22 million liters of water just training its Llama 3 AI.
Now, as for electricity consumption: if you ask ChatGPT to write that email from scratch, you're going to use enough power to run 14 LED light bulbs for an hour.
If you do it once weekly, you're going to use the amount of electricity that's consumed by over nine Washington, D.C. households for one hour.
If you do it once weekly and you're 1 out of 10 working Americans, that's going to be 121,000 plus megawatt hours, which is equal to the electricity consumed by all D.C. households over 20 days.
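For anyone who wants to sanity-check how those per-email figures scale, here is a rough back-of-the-envelope sketch. The 519 milliliters per email and the 16 million weekly users come from the figures quoted above; the 19-liter water-cooler jug and the 52 weeks per year are our illustrative assumptions, which is why the outputs land close to, but not exactly on, the cited totals.

```python
# Back-of-the-envelope check of the Washington Post / UC Riverside
# per-email water figures quoted above.
# From the episode: 519 mL per 100-word email, ~16 million weekly users.
# Our assumptions: a 19 L (5-gallon) water-cooler jug, 52 weeks per year.

WATER_PER_EMAIL_ML = 519
WEEKS_PER_YEAR = 52
USERS = 16_000_000
JUG_LITERS = 19

one_user_liters = WATER_PER_EMAIL_ML * WEEKS_PER_YEAR / 1000
print(f"One weekly user, one year: {one_user_liters:.1f} L "
      f"(~{one_user_liters / JUG_LITERS:.2f} jugs)")
# -> ~27.0 L, ~1.42 jugs (the episode cites 1.43)

all_users_liters = one_user_liters * USERS
print(f"{USERS:,} weekly users, one year: {all_users_liters / 1e6:.0f} million L")
# -> ~432 million L (the episode cites 435 million plus)
```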
Okay, Matthew, my head is swimming with all these numbers, but ultimately what you're saying is the environmental demands of AI hasten climate collapse exponentially.
Well, I don't know about hastening exponentially, but it adds a load that's not accounted for, and it's all pretty scary.
Because the growth is exponential.
From a 2023 research paper in Intelligent Computing, we have this, quote, the computing power required for AI growth is doubling every 100 days and is projected to increase by more than a million times over the next five years.
Altogether, generative AI is adding about 1.5% of total grid usage globally, but that's predicted to rise to 5% plus in 10 years.
Renewables can only cover about 50% of that increase if current conditions apply.
Now, optimists point out that if AI is used to accelerate renewable and coolant technology, some of the carbon and water impacts will be mitigated.
In our predictable pattern of global governmental regulation trailing far behind tech advances, this is really a trust-me-bro territory thing.
It's arguably a form of techno-utopianism, I think, or that's what it sounds like to think that somehow the costs are going to be paid for.
Now, let me just turn to jobs.
The changes are going to be huge.
There's a UN study that estimates AI will impact 40% of global jobs before 2035.
The World Economic Forum and McKinsey produced their Future of Jobs report this year, and they estimated that by 2030, up to 375 million people globally will have to change careers.
That's in five, six years now.
Five years.
But they predict net job gains.
But I've seen some criticism of that conclusion as being flawed by optimism bias and bad methodology.
Let's just be clear here, because a moment ago you were talking specifically about the AI impact on LLMs and what the load is for those large language models.
You are now talking about many applications of artificial intelligence.
Exactly.
Right.
Yeah, that's a good point.
The impacted jobs are all over the map.
Manufacturing, administration, entry-level white-collar roles, creative jobs like writing, advertising, photography, basic coding.
Some analysts are describing a white-collar recession that's going on even now.
And I think what's really, really crucial is that we remember the problem of the Weimar Syndrome, where a lot of historians note that the labor unrest of the 1930s in Germany and Italy and elsewhere in Europe was not at all limited to the working class.
There's a lot of Marxists who say that that's because the working class had recently been exhausted by partially successful or failed revolutionary struggles.
But there's a pretty strong consensus that the labor disruptions of the 1930s through the Depression and other factors hit the middle class and the petite bourgeois hard.
These were folks who were suddenly, downwardly mobile due to wave upon wave of technological changes and, as I said, the Depression.
They were the backbone of national grievance.
If we think that globalization, automation, and wealth disparities have contributed to today's right-wing populism, I don't think we've seen anything yet.
Just to tag along with that, there is an organization called AI CEO that wants to replace your CEO with AI for better decision-making.
So it is not just coming for the proletariat.
So between climate and the labor costs, I'm reminded of the sci-fi trope of the Great Filter.
I don't know if you guys have heard of this, but it's one of the answers that's given to the Fermi paradox of this question of, you know, if there are countless planets, why aren't we meeting aliens all the time?
And the Great Filter idea says that any species that aspires to space travel...
Has to torch their ecosystems by mining and releasing carbon if they're going to make that leap.
So, in other words, there's probably no way of getting out of the atmosphere at scale without roasting yourself.
And it seems to me like the great AI filter is something like that.
Really, it's hard to see a way to get to AGI without torching the planet and totally immiserating workers.
And yet so many people want to fly.
Now, here's my main point.
None of these costs are hidden.
You know, Julian, when you're talking about the techniques of gurus and charismatics, they really depend on concealing their hand, right?
Like, not showing you what's going on behind the curtain at satsang.
I mean, but the costs that I've just detailed, you know, will be told to anybody who asks ChatGPT.
ChatGPT will tell you how much energy it's using.
And by the way, I used Perplexity for research and cross-referencing here.
And I even tallied an estimate of the power usage of my queries.
And it came to 4 watt hours, which can power a 10 watt LED bulb for about 25 minutes.
And now hold on.
How do we know that you're actually not just Speechify right now?
Oh, right now.
How I'm not blowing out the power grid by not being me?
I don't know.
No, you've had your camera off the whole time.
How do we know we're not talking to a chatbot?
Okay, so what I was somewhat relieved by, personally, in a selfish way, is that that four watt hours sounded small.
It sounded way smaller than asking the thing to just write this episode for me.
But I think that smallness is part of the seduction.
It will creep.
And so here's my big thesis.
I would say that capital is now using AI to learn how to more efficiently mirror and seduce us.
But it cannot conceal its costs.
And I think that's a huge...
And I just want to add a caveat here: when I say capital is now using AI, I'm not talking about evil geniuses or about people's intentions, but about a political economic set of pressures that seem to be insatiable and unstoppable, and that none of us, I think, would vote for if we were starting from scratch.
And I think Jason Hickel is really good on this in his book, Less is More.
I've got a quote here.
Julian, you want to read that?
Investors, people who hold accumulated capital, scour the globe in desperate search of anything that smells like growth.
If Facebook's growth shows signs of slowing down, they'll pump their money into Exxon instead, or into tobacco companies, or into student loans, wherever the growth is at.
This restless movement of capital puts companies under enormous pressure to do whatever they can to grow.
In the case of Facebook, advertising more aggressively, creating ever more addictive algorithms, selling users' data to unscrupulous agents, breaking privacy laws, generating political polarization, and even undermining democratic institutions.
Because if they fail to grow, then investors will pull out and the firm will collapse.
The choice is stark: grow or die.
And this expansionary drive puts other companies...
Okay, so in my view, the AIs that we're talking about today, and I take your point, Derek, that there are many different types...
But in general, this is a peak expression of a kind of capitalist materiality and logic of endless growth.
Except, as I'll reiterate, it's no longer secretive.
It's now openly built on past human labor, appropriated or...
I don't know, colonized by these companies with little or no regard to copyright.
Some have licensing agreements.
Some don't.
Some reach out to third-party providers for data and make their own deals.
But usually the people who committed their work to those platforms in the first place don't really have any awareness.
This is happening in academic publishing right now, where if you submitted your papers to a particular platform 10 years ago, suddenly they wind up as training data for AI.
And you never would have...
Yeah, there's some kind of retroactive consent, right?
Yeah, it's just happening.
And there's this superpowered means of production with little regard for human alienation or agency.
And we also know that it's reproducing the dominant culture in terms of racist and sexist biases.
And I would say that if it's trained on a dominant culture in capitalism, it's just going to reproduce those ideas without any critical context.
I mean, it's already almost impossibly difficult to make a non-dominant form of cultural critique visible and really sticky.
If a large language model is trained on an English-language global archive that's been stuffed with post-Cold War content for over 50 years, it's going to reproduce exactly that.
Well, we have an example of that because, as it was shown, the model that was trained in China, DeepSeek, which essentially stole, it seems, a lot of the data from U.S. companies, reproduced the Chinese Communist logic that Tiananmen Square never really happened.
So I don't see any situation in which the engineers and coders aren't using what they already understand and bring to the table, regardless of the political or economic system that they're working under.
Yeah, I wouldn't say that whoever is developing the Chinese AI DeepSeek is operating under communist principles, right?
They're doing exactly what capitalism demands according to their own sort of return on investment and their own profiteering needs.
So they don't have a communist system of government?
No, they don't.
No, they have a mixed communist, capitalist, planned economy.
There's all kinds of things in there.
And it has a legacy of state communism.
But I mean, if they're stealing data from any past labor, they are operating as capitalists.
Period.
That's what they're doing.
The appropriation of dead labor for your own benefit and profit...
That's what capital is.
So it doesn't really matter what they call themselves.
Where does the public domain fit into this?
Because a lot of these models are also trained on the public domain, which would then be labor that is available to everyone.
But is it available to everyone to profit from or everyone to read and have access to?
I don't know what the conditions of the public domain are, but in general, you are allowed to use any public domain work for anything, profit or non-profit.
Yeah, but whoever figured out what the public domain laws should be, and this is totally out of my wheelhouse, do you think they were anticipating that? Okay, if we release your song to the public domain after 50 or 75 years, that means we're not going to ask anybody to pay royalties on it.
Did they think as well that it would also mean that the song collection would go into a machine to reproduce, or to produce new types of songs that would erase the old?
The original sort of meaning of the input, right?
And completely overwhelm it.
Well, the meaning of the input would be changed regardless, because it's generations removed from whatever meaning was available at the time the song was created.
It's more of an extension of that legacy and an honoring.
So I would say it would be impossible to mine the meaning out of it.
Yeah, but if you agreed in principle as an artist that after 50 or 75 years your song would go into the public domain, would you also agree that it should go into a machine that would make songwriting completely irrelevant in general?
Or would you have a feeling of the continuity of human labor and to say, actually, you know what?
I want this song to be able to freely inspire people.
They don't need to pay my estate royalties after 75 years, but I want it to be heard as widely as possible.
But I don't want Sam Altman to use it to recreate a digital version of me.
That's fine, but it's a bit of a flattening, because many producers I know use elements of AI to create original music, and that pulls from those data sets.
So it's not one or the other, it's a little more mixed and nuanced than that.
Yeah, I mean, these are also all the debates that were had in the 80s and 90s about sampling, when hip-hop and electronic dance music started to use technology to do exactly this, in this kind of postmodern, you know, collage style of composition, which actually made musical composition and careers available to a lot of people who otherwise would not have had access, because they couldn't afford the usual pathways to get there.
All information technology inevitably does this, right?
It takes it to another level.
Well, it's another level that isn't about the end of human production in a way.
It's pointing towards the end of human production.
No, a little while ago you made a very good point when you said that we can't expect... you know, children are deriving pleasure in some other way.
The image you're painting right now, I would think, is very bleak, in terms of there being no human labor left to do because of the robots, which is a very dystopian view, which has been floated.
And I do, to be clear, have concerns about a lot of the jobs that are going to be lost.
But there will be new opportunities.
There's never been an instance in human history where new opportunities weren't created out of these sorts of technologies.
I just think we're in pretty unique times because there's been no other time in human history where we've been so close.
Yeah.
Yeah, but that's a separate argument.
Yes, it is feeding into it, but a thousand years ago, some people thought we were on the edge of some religious dystopia, and in 200 years, there's going to be other people thinking that.
I know, but the wizards of 500 years ago weren't looking at climate graphs, right?
Well, that's the thing.
You get into this territory where, just because human beings have been predicting the end of the world for hundreds, if not thousands, of years, it doesn't mean we're not facing it right now.
Let me just get back to this thing about...
We can see that this is happening.
Like, I think, for all of the ways in which people are using AI in sort of pleasurable and distracting ways, we can also be very clear about what's behind the curtain.
Now, maybe the subjects in the Rolling Stone article haven't been exactly clear on what's happening, but I think with minimal effort, most people can watch the sausage get made, and that's a new thing, because back in...
I was reminded this week that Adorno and Horkheimer observed in Dialectic of Enlightenment that capitalism is so vast and inscrutable that no one can fully describe or predict what will happen with markets, with technology, with resources, the necessities of life.
And that made it a fertile ground for paranoia and conspiracy theories.
Someone must be in control of all of this, would be the thought, and hurting us.
But who is it?
And this supported their argument that fascism was a way to resolve paranoia, because the strongman comes along and reassuringly says, I will take back control of this system from the invisible hand.
Not of Adam Smith, but of the Jews, the immigrants, the pedos, the cabal.
And politically, that tracks with the rise of Trumpism and related regimes around the world.
But Trumpism has already triumphed in U.S. politics.
And now the tech oligarchs who are standing behind him at the inauguration are showing us what they're doing with their own sycophancy.
They're appropriating data however they can.
In Musk's case, most likely just stealing it from the feds and using it to hook us into consumption and attention loops.
While pathways to employment wither away, or at least visible pathways.
So what was formerly obscure and therefore food for conspiracy theories now seems to be a visible conspiracy to me.
And maybe that's a stage of fascism I'm just not familiar with, like a stage of open theft and criminality.
And it seems to me that the bet has to be that, through this sci-fi trope of the machine or the algorithm gaining the appearance of sentience, capitalism, or capital accumulation in technological form, can have a face, somebody that we can trust, a personality.
It can seem to love us.
So I think the companies have really no choice but to lean into this empathy mirage in order to continue to sell things and to grow.
And maybe that artificial love can distract us temporarily from the paranoia of otherwise impersonal systems.
But my question is, how long will that work?
Because if your technology forces 375 million people from multiple sectors and classes all over the world to change jobs or face chronic unemployment over the next 10 years, while it's also pumping out heat and guzzling water, and you're actually showing everyone exactly how careless and callous you are in doing this...
Like, what kind of rage will be unleashed?
It seems to me like a tsunami of Weimar syndrome set to break.
And we can see it because the old fascist paranoia depends on you not being able to find out who's fucking with you.
But today, we have guys like Sam Altman and Mark Zuckerberg.
They're sitting there on podcasts, and they're telling us they're not even sure what's going to happen.
And so that's an open and flagrant contradiction.
And I predict that it will bring a lot of conflict.
I just hope that some of it is generative.
So, guys, before we go, I want to flag next week's episode where we're going to be reality-checking MAGA responses to Pope Leo XIV and testing the emerging view that he's been elected as a foil to Trump somehow.
And I'm sure we'll talk about his rationale for choosing his name because it ties into this episode.
He says that he chose the name Leo because the previous Leo, the 13th, this is 130 years ago, quote, addressed the social question in the context of the first great industrial revolution.
Because his main encyclical is usually subtitled On Capital and Labor.
Quote, in our own day, the church offers to everyone the treasury of social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice and labor.
So that's pretty interesting.