Sam Harris speaks with Paul Bloom about AI and current events. They discuss the state of LLMs, the risks and benefits of AI companionship, what it means to attribute consciousness to AI, relationships with AI as a form of psychosis, Trump’s attacks on science and academia, what Trump may be trying to hide in the Epstein files, how damaging Trump’s connections to Epstein might be, MAGA’s obsession with conspiracy theories, questions surrounding Epstein’s death, Paul’s research on revenge, Sam’s falling out with Elon Musk, where things went wrong for Elon, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Just a note to say that if you're hearing this, you're not currently on our subscriber feed and will only be hearing the first part of this conversation.
In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at SamHarris.org.
We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers.
So if you enjoy what we're doing here, please consider becoming one.
I am here with Paul Bloom.
Paul, thanks for joining me.
Sam, it's good to talk to you.
As always.
Great to see you.
It's been, we were just talking off mic.
It's been years.
It's crazy how that happens.
But it has been.
I've been following you, though.
You've been doing well.
Yeah, nice.
Yeah.
Well, I've been following you.
I've read you in the New Yorker recently writing about AI.
I think you've written at least two articles there since we actually wrote a joint article for the New York Times seven years ago, if you can believe that.
Oh, yeah.
About Westworld and the philosophical import of watching it and realizing that only a psychopath could go to such a theme park and rape Dolores and kill children, et cetera.
And I think we predicted no such theme park will ever exist because it'll just be a bug light for psychopaths, and normal people won't come back.
And if they do anything like that, they'll scandalize their friends and loved ones and be treated like psychopaths appropriately.
We'll see.
We may be proven wrong.
Who knows in this crazy time?
But yeah, that was a fun article to write.
And I think we wrestled with the dilemmas of dealing with entities that at least appear to be conscious and the moral struggles that leads to.
Yeah.
Well, so I think we'll start with AI, but remind people what you're doing and what kinds of problems you focus on.
I think though we haven't spoken for several years, I think you still probably hold the title of most repeat podcast guest at this point.
I haven't counted the episodes, but you've been on a bunch, but it's been a while.
So remind people what you focus on as a psychologist.
Yeah, I'm a psychology professor.
I have positions at Yale, but I'm located at University of Toronto.
And I study largely moral psychology, but I'm interested in development, in issues of consciousness, issues of learning, notions of fairness and kindness, compassion, empathy.
I've actually been thinking, it's funny to be talking to you because my newest project, which I've just been getting excited by, has to do with the origins of morality.
And the spark was a podcast with you talking to Tom Holland, author of Dominion.
Oh, nice.
How so?
What was the actual spark?
Yeah, I found this a great conversation.
He made the case that a lot of the sort of morality that maybe you and I would agree with, the idea of respecting universal rights, a morality that isn't based on the powerful but instead in some way respects the weak, is not the product of rational thought, not the product of secular reasoning, but instead the product of Christianity.
And he makes this argument in Dominion and several other places.
And I've been thinking about it.
I mean, it's a serious point.
He's not the only one to make it.
It deserves consideration, but I think it's mistaken.
And so I think my next project, as I've been thinking about it, and I thank you for the podcast, getting me going on this, is to make the alternative case, to argue that a lot of our morality is inborn and innate.
I'm a developmental psychologist.
That's part of what I do.
And a lot of morality is the product of reasoned and rational thought.
I don't deny that culture plays a role, including Christianity, but I don't think that's the big story.
Nice.
Well, I look forward to you making that argument because that's one of these shibboleths of organized religion that I think needs to be retired.
And so you're just the guy to do it.
So let me know when you produce that book and we'll talk about it.
I mean, I got to say, I heard you guys talk, and you engaged properly on the idea.
And Holland, I've never met, but he seems like a really smart guy and quite a scholar.
And he gets, I think, credence from the fact that he's in some way arguing against his own side.
He himself isn't a devout Christian; he's secular.
Yeah.
But that doesn't make him right.
So I'm very interested in engaging these ideas.
So how have you been thinking about AI of late?
What has surprised you in the seven years or so since we first started talking about it?
A mixture of awe and horror.
I'm not a doomer.
I'm not as much of a doomer as some people.
I don't know where you stood the last time I checked with you, Sam.
What would you say your p-doom is?
I've never actually quantified it for myself, but I think it's non-negligible.
I mean, it's very hard for me to imagine it actually occurring in the most spectacularly bad way of a very fast takeoff, an intelligence explosion that ruins everything almost immediately.
But fast or slow, I think it's well worth worrying about because I don't think the probability is tiny.
I mean, I would put it in double digits.
I don't know what those double digits are, but I wouldn't put it in single digits given the kind of the logic of the situation.
I think we're kind of in the same place.
I mean, people always talk about, you know, you have a benevolent, super intelligent AI and you tell it to make paperclips and it destroys the world and turns us all into paperclips.
But there's another vision of malevolent AI.
I've been thinking about with the rise of, what's it called, Mech Hitler?
Mecha Hitler.
Mecha Hitler.
So, you know, it's a simpler scenario.
Some deranged billionaire creates an AI that fashions itself on Hitler.
The Trump Defense Department purchases it, connects it to all of its weaponry systems, and hijinks ensue.
How could you come up with such an outlandish scenario that could never happen?
There's just no way. It's a bizarre fantasy.
And by the way, it also makes porn.
So just to get the trifecta.
So anyway, like a lot of people, I worry about that.
I also find at the same time, I find AI an increasingly indispensable part of my intellectual life.
Oh, interesting.
If I have a question, not just a specific one, I use it as a substitute for Google.
I say, where's a good Indonesian restaurant in my neighborhood?
And how do you convert these Euros into dollars?
But I also ask questions like, I don't know, I got into an argument with somebody about revenge.
So what is the cross-cultural evidence for revenge being a universal?
And who would argue against that?
And three seconds later, boom, a bunch of papers, books, thoughtful discussion, thoughtful argument.
Mixed in are hallucinations.
Occasionally I find it cites a paper written by me that I've never written.
But it's astonishing.
I've been credited with things.
It's told me I've interviewed people I've never interviewed and it's amazingly apologetic when you correct it.
I'm so sorry.
Yes, you are totally right.
Let me now give you citations of papers that actually exist.
Yeah.
In your working day, when you write, when you prepare for interviews, how much do you use it?
Well, I've just started experimenting with it in a way that's probably not the usual use case.
So we have fed everything I've written and everything I've said on the podcast into ChatGPT, right?
Yeah, yeah.
So actually we have two things.
We've created a chat bot that is me, that's model agnostic.
So it can be run on ChatGPT or Claude or whatever the best model is.
You can swap in whatever model seems best.
So this is, you know, a layer above, at the system prompt level.
And it has, again, access to everything.
It's something like 12 million words of me, right?
So it's a lot.
That would be a lot of books.
We've just begun playing with this thing, but it's impressive because it also is using a professional voice clone.
So it sounds like me and it's every bit as monotonous as me in its delivery.
I mean, so I'm almost tailor-made to be cloned because I already speak like a robot.
Must be agonizing to listen to.
It is.
But it hallucinates and it's capable of being weird.
So I don't know that we're ever going to unleash this thing on the world.
But it's interesting because it's like, so even having access to every episode of my podcast, it still will hallucinate an interview that never happened.
You know, it'll still tell me that I interviewed somebody that I never interviewed.
And so that part's still strange, but presumably this is as bad as it'll ever be and the general models will get better and better, one imagines.
We're looking at each other on video and I imagine it doesn't do video yet.
But if we were talking on the phone or without video, would I be able to tell quickly that I was talking to an AI and not to you?
Only because it would be able to produce far more coherent and comprehensive responses to any question.
I mean, because it's hooked up to whatever the best LLM is at the moment, if you said, give me 17 points as to the cause of the Great Depression, it would give you exactly 17 points detailing the cause of the Great Depression.
And I could not do that.
So it would fail the Turing test, as all these LLMs do, by passing it so spectacularly and instantaneously.
Actually, that's a surprise.
I want to ask you about that.
That's a surprise for me, how not a thing the Turing test turned out to be.
It's like the Turing test was the staple of the cognitive science literature and just our imaginings in advance of credible AI.
We just thought, okay, there's going to be this moment where we're just not going to be able to tell whether it's a human or whether it's a bot.
And that's going to be somehow philosophically important and culturally interesting.
But, you know, overnight, we were given LLMs that fail the Turing test because of how well they pass it.
And this is no longer a thing.
It's like there was never a Turing test moment that I caught.
All of a sudden, you and I have a super intelligent being we can talk to on our iPhones.
And I'll be honest, and I think other psychologists should be honest about this: if you had asked me a month before this thing came out how far away we were from such a machine, I would have said 10, 20, 30 years, maybe 100 years.
And now we have it.
Now we have a machine we can talk to that sounds like a person.
And except, like you say, it's just a lot smarter.
And it is mind-boggling.
I mean, it's an interesting psychological fact how quickly we get used to it.
It's as if, you know, aliens landed, you know, next to the Washington Monument and now they walk among us.
Oh, yeah.
Well, that's the way things go.
Oh, you developed teleporters.
Now we got teleporters.
And we just take it for granted now.
Yeah.
Well, so now, what are your thoughts about the implications, psychological and otherwise, of AI companionship?
I mean, at the time we're recording this, there's been recently in the news stories of AI-induced psychosis.
I mean, people get led down the primrose path of their delusion by this amazingly sycophantic AI that just encourages them in their Messiah complex or whatever flavor it is.
And I think literally an hour before we came on here, I saw an article about ChatGPT encouraging people to pursue various satanic rituals and telling them how to do a proper blood offering that entailed slitting their wrists and, you know, as one does in the presence of a superintelligence.
I know you just wrote a piece in the New Yorker that people should read on this, but give me your sense of what we're on the cusp of here.
I have a mild form of that delusion in that I think every one of my Substack drafts is brilliant.
I'm told.
Paul, you've outdone yourself.
This is sensitive, humane, as always with you.
And no matter what, I tell it, you don't have to suck up to me so much.
A little bit, but not so much.
And now I kind of believe that I'm much smarter than I used to be because I have somebody very smart telling me.
My article is kind of nuanced in that I argue two things.
One thing is there are a lot of lonely people in the world, and a lot of people suffer from loneliness, particularly old people.
Depending on how you count it, under some surveys about half of people over 65 say they're lonely.
And then you get to people in like a nursing home, maybe they have dementia.
Maybe they have some sort of problem that makes them really difficult to talk with.
And maybe they don't have doting grandchildren surrounding them every hour of the day.
Maybe they don't have any family at all, and maybe they're not millionaires, so they can't afford to pay some poor schmo to listen to them.
And if ChatGPT or Claude or one of these AI companions could make their lives happier, make them feel loved, wanted, respected, that's nothing but good.
You know, I think in some way it's like powerful painkillers, powerful opiates: I don't think 15-year-olds should get them, but somebody who's 90 and in a lot of pain, sure, lay it on.
And I feel the same way with this.
So that's the pro side.
The con side is I am worried, and you're touching on it with this delusion talk, about the long-term effects of these sycophantic, sucking-up AIs, where every joke you make is hilarious.
Every story you tell is interesting.
You know, I mean, the way I put it is, if I ever ask, am I the asshole?
The answer is, you know, a firm no, not you, they're the asshole.
And I think, you know, I'm an evolutionary theorist through and through, and loneliness is awful, but loneliness is a valuable signal.
It's a signal that you're messing up.
It's a signal that says, you got to get out of your house.
You got to talk to people.
You got to, you know, open up the apps.
You got to say yes to the brunch invitations.
And if you're lonely when you interact with people, you feel not understood, not respected, not loved, you got to up your game.
It's a signal.
And like a lot of signals, like pain, sometimes it's a signal for people in a situation where it's not going to do them any good.
But often for the rest of us, it's a signal that makes us better.
I think I'd be happier if I could shut it off.
Certainly as a teenager, I'd have been happier if I could shut off the switch of loneliness and embarrassment, shame, and all of those things, but they're useful.
And so the second part of the article argues that continuous exposure to these AI companions could have a negative effect because for one thing, you're not going to want to talk to people who are far less positive than AIs.
And for another, when you do talk to them, you have not been socially entrained to do so properly.
Yeah, it's interesting.
So leaving the dementia case aside, I totally agree with you that that is a very strongly paternalistic moment where anything that helps is fine.
It doesn't matter that it's imaginary or that it's encouraging of delusion.
I mean, we're talking about somebody with dementia.
But so just imagine in the normal, healthy case of people who just get enraptured by increasingly compelling relationships with AI.
I mean, you can imagine.
So right now we've got chat bots that are still fairly wonky.
They hallucinate.
They're obviously sycophantic.
But just imagine it gets to the place where, I mean, forget about Westworld and perfectly humanoid robots, but very shortly, I mean, it might already exist in some quarters.
We're going to have video avatars, you know, like a Zoom call with an AI that is going to be out of the uncanny valley, I would imagine immediately.
I mean, I've seen some of the video products, like sci-fi movie trailers; they don't look perfectly photorealistic, but they're getting close.
And you can imagine six months from now, it's just going to look like a gorgeous actor or actress talking to you.
Imagine that becomes your assistant who knows everything about you.
He or she has read all your email and kept your schedule and is advising you and helping you write your books, et cetera.
And not making errors and seeming increasingly indistinguishable from just a super intelligent locus of conscious life, right?
I mean, it might even claim to be conscious if we build it that way.
And let's just stipulate that, at least for this part of the conversation, that it won't be conscious, right?
That this is all an illusion, right?
It's no more conscious than your iPad is currently.
And yet it becomes such a powerful illusion that people just, most people, I mean, the people, I guess philosophers of mind might still be clinging to their agnosticism or skepticism by their fingernails, but most people will just get lulled into the presumption of a relationship with this thing.
And the truth is, it could become the most important relationship many people have.
Again, it's so useful, so knowledgeable, always present, right?
They might spend six hours a day talking to their assistant.
And what does it mean if they spend years like that, basically just gazing into a funhouse mirror of fake cognition and fake relationship?
I mean, I'm a believer that sometimes movies do the best philosophy, that movies do excellent philosophy.
And the movie, Her, came out, I think, in 2013.
Yeah.
It's about this guy, you know, a normal lonely guy, who gets connected to an AI assistant named Samantha, voiced by Scarlett Johansson, and falls in love with her.
The first thing she says to him, and what a meet cute it is, is: I see you have like 3,000 emails that haven't been answered.
You want me to answer them all for you?
Yeah.
I fell in love with her there.
But the thing is, you're watching a movie and you're listening to her talk to him and you fall in love with her too.
I think we've evolved in a world where when somebody talks to you and acts normal and seems to have emotions, you assume there's a consciousness behind it.
Evolution has not prepared us for these extraordinary fakes, these extraordinary golems that exhibit all of the behavior associated with consciousness and don't have it.
So we will think of it as conscious.
There will be some philosophers who insist that they're not conscious, but even they will sneak back from their classes and then in the middle of the night turn on their phones and start saying, I'm lonely.
Let's talk.
And then the effects of it, well, one effect is real people can't give you that.
I'm married, very happily married, but my wife forgets about things that I told her.
And sometimes she doesn't want to hear my long, boring story.
She wants to tell her story instead.
And sometimes it's three in the morning.
And I could shake her awake because I have this really interesting idea I want to share, but maybe that's not for the best.
She'll be grumpy at me.
And because the thing is, she's a person.
And so she has her own set of priorities and interests.
So too of all my friends.
They're just people.
And they have other priorities in their life besides me.
Now, in some way, this is, I think, what makes the AI companion, when you reflect upon it, have less value.
You know, here you and I are.
And what that means is that you decided to take your time to talk to me, and I decided to take my time to talk to you.
And that's the value.
When I switch on my lamp, I don't feel, oh my gosh, this is great.
It decided to light up for me.
It didn't have a choice at all.
The AI has no choice at all.
So I think in some part of our mind, we realize there's a lot less value here.
But I do think in the end, the scenario you paint is going to become very compelling and real people are going to fall short.
And it's not clear what to do with that.
Now, you've just come up with a fairly brilliant product update to some future AI companion, which is a kind of Pavlovian reinforcement schedule of attention, where the AI could say, listen, I want you to think a little bit more about this topic and get back to me, because you're really not up to talking about it right now.
Come back tomorrow.
I've wondered that.
That would be an interesting experience to have with your AI that you have subscribed to.
I've wondered about that.
You ask the AI a question.
It says, is that really, like, a good question?
Couldn't you just figure that out by thinking for a minute?
Listen, I know everything.
That's really what you want to ask me?
Don't you have something deeper?
You're talking to a super intelligent God and you want to know how to end a limerick.
Really?
I wonder how people would react if these things came with dials.
Obviously it wouldn't be a big physical dial, though maybe you'd want a big physical dial.
And the dial is pushback.
So when it's set at zero, it's just everything you say is wonderful.
But I think we do want some pushback.
Now, I think in some way we really want less pushback than we say we do.
And it's this way with real people too.
So everybody says, oh, I like when people argue with me.
I like when people call me out on my bullshit.
But what we really mean is we want people to push back a little bit and then say, ah, you convinced me.
You know, you, you really showed me.
Or, you know, I thought you were full of it.
But now, upon reflection, you've really out-argued me.
We want them to fight and then acquiesce.
But you turn the dial even further, and the AI will say, you know, we've been talking about this for a long time.
I feel you are not very smart.
You're just not getting it.
I'm going to take a little break.
And you mull over your stupidity.
Yeah.
Culture human.
And the recalcitrance dial.
That's what we could build in.
All right.
I feel it's going to be the worst business.
We're going to get rich, Paul.
This is us.
With AI that calls you on your bullshit.
That's really the business to be in this century.
But so what are we to think about this prospect of spending more and more time in dialogue under the sway of a pervasive illusion of relationship, wherein there is actually no relationship, because there's nothing that it's like to be ChatGPT 6 talking to us perfectly clearly?
If you'd like to continue listening to this conversation, you'll need to subscribe at samharris.org.
Once you do, you'll get access to all full-length episodes of the Making Sense podcast.
The Making Sense podcast is ad-free and relies entirely on listener support.