Sam Harris speaks with Paul Bloom about empathy, meditation studies, morality, AI, Westworld, Donald Trump, free will, rationality, conspiracy thinking, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Just a note to say that if you're hearing this, you are not currently on our subscriber feed and will only be hearing the first part of this conversation.
In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at SamHarris.org.
There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content.
We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers.
So if you enjoy what we're doing here, please consider becoming one.
I have Paul Bloom on the line.
Paul, thanks for coming back on the podcast.
Sam, thanks for having me back.
You are now officially my... well, I have, I think, only two return guests, but you have just edged out David Deutsch, who has two appearances.
So yours is the only third appearance on this podcast.
So that says something.
It's not exactly like a 20th appearance on The Tonight Show, but it is a measure of how good a guest you are.
I'm touched.
Maybe, you know, maybe a decade from now, who knows, we could be doing our 20th anniversary show.
Well, after we did our second show, people just emailed me saying, just have Paul on the podcast all the time.
You don't need any other guests.
So you are a popular guest.
Well, we had a great discussion.
I think we have a little bit of what makes for a good discussion, which is that you and I agree on a lot.
We have a lot of common ground, but there's enough tension and enough things to rub against that we get some good discussion going.
We will see if we can steer ourselves in the direction of controversy, perhaps.
But you have just released a book, which we talked about to a significant degree, I think, in your first appearance here.
And we would be remiss not to talk about it some, so we'll start with that.
But people should just know that if they find what we're about to say about empathy intriguing, our first podcast has a full hour or more on it, and it is an incredibly interesting and consequential issue, which we will be giving short shrift here because we've already done it.
But the proper intro to this topic is that you have just released a book entitled Against Empathy, which, as I think I told you at the time, is a fantastic title.
You seem to steer yourself out of a full collision with the outrage of your colleagues in your subtitle.
You have as a subtitle, The Case for Rational Compassion.
So you're not against compassion, obviously.
Tell us about your position on empathy and how it's different from compassion.
So the distinction is super important, because if you just hear that I'm against empathy, it'd be fair enough to assume I'm some sort of monster, some sort of person arguing for pure selfishness or, you know, an entire lack of warmth or caring for others.
And that's not what I mean by empathy.
And it's actually not what psychologists and philosophers mean by empathy either.
What I'm against is putting yourself in other people's shoes, feeling their pain, feeling their suffering.
And I'm not even against this in general.
I think empathy is a wonderful source of pleasure.
It's central to sex.
It's central to sports.
It's central to the pleasure we get from literature and movies and all sorts of fictional entertainments.
But what I argue is in the moral realm, when it comes to being good people, it steers us dangerously astray.
It's a moral train wreck.
And the reason why is that it zooms us in on individuals like a spotlight.
And in fact, the fans of empathy describe it as a spotlight.
But because of that, it's very biased.
I'll be more empathic towards somebody who shares my skin color than somebody of a different skin color, towards somebody I know versus a stranger.
It's difficult to be empathic at all to somebody who you view as disgusting or unattractive or dangerous or opposed to you.
And in fact, there's a lot of neuroscience studies we could get into that get at this not only through self-report, which is kind of unreliable, but actually looking at the correlates of empathy in the brain.
One of my favorite studies tested male soccer fans in Europe.
They watch somebody who's been described as a fan of the same team receive electric shocks.
And it turns out they feel empathy.
In fact, the same parts of their brain that would be active if they themselves were being shocked, light up when they see this other person being shocked.
So that's great.
But then in another condition, they observe somebody who's described as not supporting the same team.
And there, empathy shuts down.
And in fact, what you get is kind of a blast of pleasure circuitry when they watch the other person being shocked.
And so, empathy is biased and narrow and parochial, and I think leads us astray in a million ways, much of which we discussed the last time we talked about this.
Compassion is a bit different.
So, my argument is that what we should replace empathy with for decision-making is cold-blooded reasoning of a more or less utilitarian sort, where you judge costs and benefits.
You ask yourself, what can I do to make the world a better place?
What could I do to increase happiness, to reduce suffering?
And maybe you could do that in a utilitarian way, or you could do it in terms of Kantian moral principles, but however you do it, it's an act of reason.
And that's the rational part of my subtitle. What's missing in that, as everybody from David Hume on down has pointed out, is some sort of motivation, some sort of kick in the pants.
And that's where I think compassion comes in.
So many people blur empathy and compassion together, and I don't actually care how people use the terminology.
But what's important is they're really different.
So you can feel empathy.
I see you suffer and I feel your pain and I zoom in on that.
But you could also feel compassion, which is you care for somebody, you love them, you want them to thrive, you want them to be happy.
But you don't feel their pain.
And some really cool experiments on this, for instance, and this is going to connect to one of your deep interests in meditation, were done by Tania Singer, who's a German neuroscientist, and Matthieu Ricard, who's a Buddhist monk and the so-called happiest man alive.
And they did these studies where they trained people to feel empathy, to experience the suffering of others.
And then they train another group to feel compassion.
And the way they do it is through loving-kindness meditation, where you care about others, but you don't feel their pain.
Now, it turns out this activates entirely different parts of the brain.
There's always some overlap, but there's distinct parts of the brain.
But more to the point, they have different effects.
So the empathy training makes people suffer.
It makes people selfish.
It leads to burnout.
While the compassion training is pleasurable.
People enjoy it, they enjoy the feeling of kindness towards other people, and it makes them nicer.
And recent studies, like very recent studies, by the psychologist David DeSteno at Northeastern, back this up by finding that meditation training actually increases people's kindness.
And the explanation that they give, and it's an open question why it does so, is that it ignites compassion but shuts down empathy circuitry.
That is, when you deal with suffering, you can deal with it better because you don't feel it.
So this is one way I'd make the distinction between empathy and compassion.
Yeah.
I think we probably raised this last time, but it's difficult to exaggerate how fully our moral intuitions can misfire when guided by empathy as opposed to some kind of rational understanding of what will positively affect the world.
The research done by Paul Slovic on moral illusions is fascinating here.
When you show someone a picture of a single little girl who's in need, they are maximally motivated to help, but if you show them a picture of the same little girl and her brother, their altruistic motive to help is reliably reduced.
And if you show them 10 kids, it's reduced further.
And then if you give them statistics about hundreds of thousands of kids in need of the same aid, it drops off a cliff.
And that is clearly a bug, not a feature.
And that, I think, relates to this issue of empathy as opposed to what is a higher cognitive act of just assessing where the needs are greatest in the world.
One could argue that we are not evolutionarily well-designed to do that.
We aren't.
I mean, I remember you cited the Slovic findings.
I think it was in The Moral Landscape, where you say something to the effect that there's never been a psychological finding that so blatantly shows a moral error.
Whatever your moral philosophy is, you shouldn't think that one life is worth more than eight, let alone worth more than a hundred.
Especially when the eight contain the one life you're concerned about.
Exactly.
It's a moral disaster.
And I mean, the cool thing is that upon reflection, we could realize this.
So I'm not one of these psychologists who go on about how stupid we are, because I think every demonstration of human stupidity or irrationality contains within it a demonstration of our intelligence, because we know it's irrational.
We could point it out and say, God, that's silly.
I mean, and we have a lot of them.
My book cites a lot of research demonstrating the sort of phenomena you're talking about.
But it's an old observation.
I mean, Adam Smith, about 300 years ago, gave the example of an educated man of Europe hearing that the country of China was destroyed.
At a time when they would have never known somebody from China.
And Smith says, basically your average European man would say, well, that's a shame.
And he'd go on his day.
But if he was to learn that tomorrow he would lose his little finger, he'd freak out.
He wouldn't sleep at all at night.
Why am I losing my fingers?
Will it be painful?
How will it affect my life?
And he uses this example to show that our feelings are skewed in bizarre ways.
But then he goes on to point out that we can step back and recognize that the death of thousands is a far greater tragedy than the loss of our finger.
And it's this dualism, this duality that fascinates me between what our gut tells us and what our minds tell us.
I believe he also goes on to say that any man who would weigh the loss of his finger over the lives of thousands or millions in some distant country, we would consider a moral monster.
Yes.
He says something like, human nature shudders at the thought.
Right.
It's one of the great passages in all of literature, really.
I think I quote the whole thing in The Moral Landscape.
So, just a few points to pick up on what you just said about the neuroimaging research done on empathy versus compassion.
It's something that people don't tend to know about the meditative side of this, but compassion as a response to suffering, from a meditative first-person view, and certainly from the view of Buddhist psychology, is a highly positive emotion.
It's not a negative emotion.
You're not diminished by the feeling of compassion.
The feeling of compassion is really exquisitely pleasurable.
It is what love feels like in the presence of suffering.
The Buddhists have various modes of what is called loving-kindness, and loving-kindness is the generic feeling of wishing others happiness.
And you can actually form this wish with an intensity that is really psychologically overwhelming, which is to say that it just drowns out every other attitude you would have toward friends or strangers or even enemies, right?
You can just get this humming, even directed at a person who has done you harm or who is just kind of objectively evil.
You wish this person were no longer suffering in all the ways that they are and will be, being the kind of evil person they are, and you wish you could improve them.
And so Buddhist meditators acquire these states of mind, and it's the antithesis of merely being made to suffer by witnessing the suffering of others.
It's the antithesis of being made depressed when you are in the presence of a depressed person, say.
And so the fact that empathy and compassion are used for the most part as synonyms in our culture is deeply confusing about what normative human psychology promises, and just what is on the menu as far as conscious attitudes one can take toward the suffering of other people.
I think that's right.
I think I'm now sort of getting into a debate in the journal Trends in Cognitive Sciences with an excellent neuroscientist who disagrees with me.
And there's all sorts of interesting points to go back and forth.
But at one point he complains about the terminology and he says, compassion isn't opposed to empathy, it's a type of empathy.
To which my response is, who cares?
I don't care what one calls it.
I'm totally comfortable calling them different types of empathy, in which case I'm against one type of empathy and in favor of another, but the distinction itself is absolutely critical, and it's so often missed, not only in the scientific field, but also in everyday life.
I published an article on empathy in the Boston Review, and I got a wonderful letter, which I quote in my book with permission of the writer, from this woman who worked as a first responder after 9/11.
And after doing this for about a week, she couldn't take it anymore.
She was too oppressed by what she felt.
While her husband happily and cheerfully continued his work.
And it didn't seem to harm him at all.
And she was like, what is going on?
Is something wrong with him?
Is something wrong with me?
And I think we make sense of this by saying that there are at least two processes that can lead to kindness and good behavior.
And one of them, empathy, has some serious problems.
And if we could nurture compassion, we not only can make the world a better place, but we can enjoy ourselves while doing it.
To be clear, you also differentiate two versions of empathy, because there is the cognitive empathy of simply understanding another person's experience.
And then there's the emotional contagion version, which we're talking about, which is you are permeable to their suffering in a way that makes you suffer also.
That's right.
The cognitive empathy is kind of a different bag, and it's very interesting.
And we might turn to this later if we talk about Trump, but it's an understanding of what goes on in the minds of other people.
And sometimes we call this mind reading or theory of mind or social intelligence.
And to me, it's neither good nor bad.
It's a tool.
If you, Sam, want to make the world a better place and help people, help your family, help others, you can't do it unless you understand what people want, what affects people, what people's interests are, what they believe.
Any good person, any good policymaker needs to have high cognitive empathy.
On the other hand, suppose you wanted to bully and humiliate people, to seduce them against their will, to con them, to torture them.
Here, too, high cognitive empathy will help.
If you want to make me miserable, it really helps to know how I work and how my mind works.
So cognitive empathy is a form of intelligence and, like any sort of intelligence, can be used in different ways.
It's morally neutral, so to say that someone is highly empathic in that way is to simply say that they can take another person's point of view, but that, you know, can be used for good or evil.
That's right.
The worst people in the world have high cognitive empathy.
It's how they're able to do so much damage.
Right.
I wanted to step back to something you said about meditative practice and Buddhism, because there were two things you said, and one is easy, really, to get behind, which is the pleasure that comes through this sort of practice and doing good, in loving people, in caring about people. But one thing I struggle with, and I don't know whether we have different views on this, is the blurring of distinctions that comes through Buddhism and its meditative practice.
So, there's a joke I like.
It's my only Buddhism joke.
Have you heard about the Buddhist vacuum cleaner?
It comes with no attachments.
And so, one of the criticisms of Buddhist practice, and to some extent a criticism of some of my positions, is that there's some partiality we do want to have.
Not only do I love my children more than I love you.
But I think I'm right to love my children more than I love you.
Okay, we're going to end the podcast right now.
One of the requirements of my podcast guests is that they love me as much as their own children.
I love you second.
It's my two children, then you, then my wife.
Okay, that's the appropriate ranking.
Is that good?
Especially for a third-time guest.
Yes.
I think I'm agnostic as to whether one or the other answer is normative here, or whether there are equivalent norms which are just mutually incompatible, but you could create worlds that are equally good by each recipe.
But I share your skepticism, or at least it's not intuitively obvious to me that if you could love everyone equally, that would be better than having some gradations of moral concern.
When we extend the circle of our moral concern in the way that Peter Singer talks about, the world does get better and better.
We want to overcome our selfishness, our egocentricity, our clannishness, our tribalism, our nationalism, all of those things, all of those boundaries we erect where we care more about what's inside than outside the boundary.
Those all seem, at least they tend to be pathological, and they tend to be sources of conflict, and they tend to explain the inequities in our world that are just on their face unfair and in many cases just unconscionable.
But whether you want to just level all of those distinctions and love all Homo sapiens equivalently, that I don't know.
And I'm not actually skeptical that it is a state of mind that's achievable.
I've met enough long-term meditators, and I've had enough experience in meditation and with psychedelics and just changing the dials on conscious states, to believe that it's possible to actually obviate all of those distinctions and to just feel that love is nothing more than a state of virtually limitless aspiration for the happiness of other conscious creatures, and that it need not be any more preferential or directed than that.
When you're talking about a monk who has come out of a cave doing nothing but compassion meditation for a decade, you're talking about somebody who, in most cases, has no kids, doesn't have to function in the world the way we have to function.
Certainly, civilization doesn't depend on people like that forming institutions and running our world.
And so, let's just grant that it's possible to change your attitude in such a way that you really just feel equivalent love for everybody, and that there's no obvious cost to you for doing that.
I don't know what the cost would be to the species or to society if everyone was like that.
And intuitively I feel like it makes sense for me to be more concerned and therefore much more responsible for and to my kids than for yours.
But at a greater level of abstraction, when I talk about how I want society to be run, I don't want to try to legislate my selfishness.
Yeah, that's right.
I have to understand that at the level of laws and institutions, fairness is a value that more often than not conserves everyone's interests better than successfully gaming a corrupt system.
Yeah, so I'm nodding.
I mean, I want to zoom in on the last thing you said because it was astonishing to me, but for most of what you're saying, I'm nodding in agreement.
The world would be much better if our moral circle was expanded.
And certainly, the world would be much better if we cared a little bit more for people outside of our group, and correspondingly, relatively less for people inside of our group.
It's not that we don't care for our own enough.
The problem is we don't care for others enough.
And I love your distinction as well, which is the way I kind of think about it now: yeah, I love my children more than I love your children, but I understand, stepping back, that a just society should treat them the same.
Right.
So if I have a summer job opening, I understand my university regulations say I can't hire my sons.
And you know, I actually think that's a good rule.
I love my sons, I'd like to hire them, it'd be a good job for them and everything, but I could step back and say, yeah, we shouldn't be allowed to let our own personal preferences, our own emotional family ties distort systems that should be just and fair.
The part of what you said, which I just got to zoom in on, is do you really think it's possible?
Put aside somebody who has no prior attachments at all, some monk living in a cave.
Have you met, or do you think you ever will meet, people who have had children and raised them who would treat the death of their child no differently than the death of a stranger's child?
Yeah, I do actually, and I'm not sure what I think of it.
I can tell you these are extraordinarily happy people, so what you get from them is not a perceived deficit of compassion or love or engagement with the welfare of other people, but you get a kind of obliteration of preference.
The problem in their case is a surfeit of compassion and love and engagement, so that they don't honor the kinds of normal, you know, family ties or preferences that we consider normative and that we would be personally scandalized to not honor ourselves.
These are the norms of preference which seem good to us, which we would feel we have a duty to enforce in our own lives, and which we would be wracked by guilt over if we noticed a lapse in honoring.
These are people who have just blown past all of that, because they have used their attention in such an unconventional and in some ways impersonal way, but it's an impersonal way that becomes highly personal, or at least highly intimate, in their relations with other people.
So for instance, I studied with one teacher in India, a man named Poonjaji. He actually wasn't Buddhist.
He was Hindu, but he was not teaching anything especially Hindu.
He was talking very much in the tradition of, if people are aware of these terms and they'll get them from my book, Waking Up, the tradition of Advaita Vedanta, the non-dual teachings of Vedanta, which are nominally Hindu.
They're really just Indian.
There's nothing about gods or goddesses or any of the garish religiosity you see in Hinduism.
He was a really... I mean, there was a lot that he taught that I disagreed with, or at least there were some crucial bits that he taught that I disagreed with, and again, you can find that in Waking Up if you're interested, but he was a really shockingly charismatic and wise person to be in the presence of.
He was really somebody who could just bowl you over with his presence. If I were not as scrupulous as I am about attributing causality here, I might think what 90% of the people who spent any significant time around this guy thought, which is that he had magic powers.
This is a highly unusual experience of being in a person's presence.
Part of what made him so powerful was that actually, ironically, he had extraordinarily high empathy of the unproductive kind, but it was kind of anchored to nothing in his mind.
I mean, so for instance, if someone would have a powerful experience in his presence and, you know, start to cry, you know, tears would just pour out of his eyes.
You know, he would just immediately start crying with the person.
And when somebody would laugh, he would laugh, you know, twice as hard.
It was like he was an amplifier of the states of consciousness of the people around him, in a way that was really thrilling.
And again, there's a feedback mechanism here where people would just have a bigger experience because of the ways in which he was mirroring their experience.
And there was no sense at all that this was an act.
I mean, he would have to have been the greatest actor on earth for this to be brought off.
But, yeah, I forget the details of the story, but the story about how he behaved when his own son died would shock you with its apparent aloofness, right?
I mean, this is a person for whom a central part of his teaching was that death is not a problem, and he's not hanging on to his own life or the lives of those he loves with any attachment at all.
And he was advertising the benefits of this attitude all the time because he was the happiest man you ever met.
But I think when push came to shove and he had to react to the death of his son, he wouldn't react the way you or I would, or the way you or I would want to react, given how we view these attachments.
That's an extraordinary story.
I mean, you have a lot of stories like that in Waking Up, of people like that, and I haven't encountered many such people.
I met Matthieu Ricard once and it was a profoundly moving experience for me.
And I'm not, I'm not easily impressed.
I'm sort of, I tend to be cynical about people.
I tend to be really cynical about people who claim to have certain abilities and the like.
But I simply had a short meeting when we went out for tea and we just talked.
And there's something about people who have attained a certain mental capacity or set of capacities that you can tell by being with them that they have it.
Their bodies afford it.
They just, they just give it off from a mile away.
It's analogous to charisma, which some people have, apparently.
Bill Clinton is supposed to be able to walk into a large room, and people just gather around him.
Their eyes will be drawn towards him, and whatever it is that someone like Matthieu Ricard has is extraordinary in a different way, which is he, in some literal sense, exudes peace and compassion.
Having said that, some of it freaks me out, and some of it morally troubles me.
I mean, we talked about the bonds of family, but I can't imagine any such people having bonds of friendship.
I would imagine you get a lot of email, Sam.
I imagine you get a lot of email asking you for favors.
So, when I email you and say, hey, you know, I have a book coming out, are you in the mood for a podcast? Because we're friends, you respond to me differently than if I were a total stranger.
Suppose you didn't.
Suppose you treated everything on its merits, with no bonds, no connectedness.
It'd be hard to see you as a good person.
But you don't see Matthieu that way.
No.
If you knew more about the details of his life, you might find that it's not aligned with the way you parcel your concern.
For instance, the example you just gave, he might be less preferential toward friends or not.
I actually know Matthieu.
I don't often see him, but I've spent a fair amount of time with him.
I mean, he's what I would call a mensch.
He's just like the most decent guy you're going to meet all year.
He's just a wonderful person.
But he's a... I studied with his teacher, Khyentse Rinpoche, who was a very famous lama and who many Tibetan lamas thought was one of the greatest meditation masters of his generation.
He died, unfortunately, about 20 years ago, but maybe it's more than that.
I now notice as I get older, whenever I estimate how much time has passed, I'm off by 50% at least.
Yeah.
Someone should study that problem.
Self-deception, I think, has something to do with it.
So anyway, Khyentse Rinpoche was just this 800-pound gorilla of meditation.
He'd spent more than 20 years practicing in solitude, and Matthieu was his closest attendant for years and years.
I think, just to give you a kind of rank ordering of what's possible here, Matthieu certainly wouldn't put himself anywhere near any of his teachers on the hierarchy of what's possible in terms of transforming your moment-to-moment conscious experience, and therefore the likelihood of how you show up for others.
Matthieu's great because, as you know, he was a scientist before he became a monk.
He was a molecular biologist.
And the work he's done in collaborating with neuroscientists who do neuroimaging work on meditation has been great.
And he's a real meditator, so he can honestly talk about what he's doing inside the scanner, and that's fantastic.
But again, even in his case, he's made a very strange life decision, certainly from your point of view.
I mean, he's decided to be a monk and to not have kids, to not have a career in science, to not... It's in some ways an accident that you even know about him because he could just be, and for the longest time, he was just sitting in a tiny little monk cell in Kathmandu serving his guru.
That's right.
And when I met him, he was spending six months of each year in total solitude, which again boggles my mind, because if I spend a half hour by myself, I start to want to check my email and get anxious.
So it is impressive.
And I accept your point, which is I need to sort of work to become more open minded about what the world would be like if certain things which I hold dear were taken away.
There's a story I like of why economics got called the dismal science.
And it's because the term was given by Carlyle.
And Carlyle was enraged at the economists who were dismissing an institution that Carlyle took very seriously.
And economists said, this is an immoral institution.
And Carlyle says, you have no sense of feeling.
You have no sense of tradition.
And he was talking about slavery.
And so, you know, he was blasting the economists for being so cold-blooded they couldn't appreciate the value and importance of slavery.
And sometimes, when I feel my own emotional pulls towards certain things, I feel confident that whatever pulls I have along, say, racial lines are immoral, but I'm less certain about family lines or friendship lines. I think I need to be reminded, we all need to be reminded, to step back and ask: what will future generations say?
What can we say when we're at our best selves?
It's going to take more than that for me to give up the idea that I should love my children more than I love your children, but it is worth thinking about.
And it's interesting to consider moral emergencies and how people respond in them and how we would judge their response.
So just imagine if, you know, you had a burning building and our children were in there and I could run in to save them. Say I'm on site and I can run in and save whoever I can save, but because I know my child's in there, my priority is to get my child, and who could blame me for that, right?
So I run in there and I see your child, who I can quickly save, but I need to look for my child.
So I just run past your child and go looking for mine.
And at the end of the day, I save no one, say, or I only save mine.
It really, really was a zero-sum contest between yours and mine.
You know, if you could watch that play out, if you had a video of what I did in that house, right?
And you saw me run past your child and just go looking for mine.
I think it's just hard to know where to find the norm there.
A certain degree of searching and a certain degree of disinterest with respect to the fate of your child begins to look totally pathological.
It looks just morbidly selfish.
But some bias seems only natural, and we might view me strangely if I showed no bias at all.
Again, I don't know what the right answer is there.
We're living as though almost a total detachment from other people's fates, apart from the fates of our own family, is normal and healthy.
And when push comes to shove, that is clearly revealed to not be healthy.
Right, it's plain that the extremes are untenable.
Imagine you weren't looking for your child, but your child's favorite teddy bear.
Right.
Well, then you're kind of a monster, you know, searching around for that while my child burns to death.
I mean, to make matters worse, Peter Singer has famously, and I think very convincingly, pointed out that the example you're giving is a sort of weird science fiction example, and we might reassure ourselves and say, well, that'll never happen.
But Singer points out, we're stuck in this dilemma every day of our lives.
Yeah.
As we devote resources... you know, I, like a lot of parents, spend a lot of money on my kids, including things they don't need, things that make their lives better but aren't necessary, and things that are just fun, expensive toys and vacations and so on, while other children die.
And Singer points out that I really am in that burning building.
I am in that burning building, buying my son an Xbox while kids from Africa die in the corner.
And it's difficult to confront this, and I think people get very upset when Peter Singer brings it up, but it is a moral dilemma that we are continually living with and continually struggling with.
And I don't know what the right answer is, but I do have a sense that the way we're doing it every day is the wrong way.
We are not devoting enough attention to those in need.
We're devoting too much attention to those we love.
Yeah.
Well, I had Singer on the podcast, and I also had Will McCaskill, who very much argues in the same line.
And I don't have a good answer.
I think one thing I did as a result of my conversation with Will was I realized that I needed to kind of automate this insight So Will is very involved in the effect of altruism community, and he arguably, I think, started the movement.
And there are websites like givewell.org that rate charities, and they've quantified that to save an individual human life costs now $3,500.
I mean, that's the amount of money you have to allocate where you can say, as a matter of likely math, you have saved one human life.
And the calculation there is with reference to the work of the Against Malaria Foundation.
They put up these insecticide-treated bed nets, and malaria deaths have come down by 50%.
It's still close to a million people dying every year, but very recently it was 2 million people a year dying, not from all mosquito-borne illness, just from malaria actually.
So, in response to my conversation with Will, I just decided, well, I'm still going to buy the Xbox.
I know that I can't conform my life and the fun I have with my kids so fully to the logic of this triage, right?
So that I strip all the fun out of life and just give everything to the Against Malaria Foundation.
But I decided that the first $3,500 that comes into the podcast every month will just by definition go to the Against Malaria Foundation.
I don't have to think about it.
It just happens every month.
I would have to decide to stop it from happening.
I think we should do more things like that. I mean, what Will does, there's actually a giving pledge, where people decide to give, I think it's at least 10% of their income, to charity and to these most effective charities each year.
For any kind of change you want to see in the world that you want to be effective, automating it and taking it out of the cognitive overhead of having to be re-inspired to do it each day or each year or each period, that's an important thing to do.
That's why I think the greatest changes in human well-being and in human morality will come not from each of us individually refining our ethical code to the point where we are bypassing every moral illusion, right?
So that every time Paul Slovic shows us a picture of a little girl, we have the exact right level of concern, and when we see eight kids, we have, you know, we have eightfold more, or whatever it would be.
But from changing our laws and institutions and tax codes and everything else so that more good is getting done without us having to be saints in the meantime.
I think that's right.
I think that this comes up a lot in discussions of empathy.
So, you know, I talk about the failings of empathy in our personal lives, particularly, say, giving to charity or deciding how to treat other people.
And a perfectly good response I sometimes get is, well, OK, I'm a high empathy person.
What am I going to do about it?
And, you know, one answer concerns activities like meditative practice.
But, you know, you could be skeptical over how well that works for many people.
I mean, I think your answer is best, which is in a good society, and actually as good individuals, we're smart enough to develop procedures, mechanisms that take things out of our hands.
And this applies at every level.
The political theorist Jon Elster points out that that's what a constitution is.
A constitution is a bunch of people saying, look, we are irrational people.
And sometime in the future, we're going to be tempted to make dumb, irrational choices.
So let's set up something to stop us.
And let's set up something to override our base instincts.
We can change this stopping mechanism, but let's make it difficult to change so it takes years and years.
So, no matter how much Americans might want to re-elect a popular president for a third term, they can't.
If all the white Americans decide they want to reinstate the institution of slavery, they can't.
And laws work that way, constitutions work that way, I think good diets work that way.
And charitable giving could work that way, in that you have automatic withdrawal or whatever.
So you, in an enlightened moment, you say, this is the kind of person I want to be.
And you don't wait for your gut feelings all the time.
I think overriding other disruptive sentiments works the same way.
Like, suppose I have to choose somebody to be a graduate student or something like that.
And I know full well that there are all sorts of biases having to do with physical attractiveness, with race, with gender.
And suppose I believe upon contemplation that it shouldn't matter.
It shouldn't matter how good looking the person is, shouldn't matter whether they were from the same country as me.
Well, one thing I could try to do is say, okay, I'm going to try very hard not to be biased.
We're horrible at that.
We overcorrect, we undercorrect, we justify.
So what we do when we're at our best is develop some systems like, for instance, you don't look at the pictures.
You do some sort of blind reviewing so your biases can't come into play.
Now, it's harder to see how this is done when it comes to broader policy decisions, but people are working on it.
Paul Slovic, actually, who we've referenced a few times, talks about this a lot.
So right now, for instance, governments' decisions over where to send aid or where to go to war are made basically on gut feelings, based on sad stories and photographs of children washed ashore, and so on, and it just makes the world worse.
And people like Slovic wonder, can we set up some fairly neutral triggering procedures that say, in America, when a situation gets this bad, according to some numbers and some objective judgments, it's a national emergency, we send in money.
Overseas, if this many people die under such and such circumstances, we initiate some sort of investigative procedure.
It sounds cold and bureaucratic, but I think cold and bureaucratic is much better than hot-blooded and arbitrary.
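To make that idea concrete, here is a minimal sketch of what such a pre-committed triggering rule might look like; the thresholds, categories, and function name below are invented purely for illustration and are not anything Slovic or anyone else has actually proposed.

```python
# Purely illustrative sketch of a rules-based aid trigger, with made-up
# thresholds: the point is that the response is fixed in advance by the
# numbers, not by how vivid the particular story happens to be.

def triage_response(deaths: int, displaced: int, domestic: bool) -> str:
    """Return a pre-committed response level for a humanitarian situation."""
    if domestic and (deaths >= 100 or displaced >= 10_000):
        return "declare national emergency and release relief funds"
    if deaths >= 1_000 or displaced >= 100_000:
        return "initiate formal investigation and aid assessment"
    if deaths >= 100:
        return "flag for monitoring and diplomatic review"
    return "no automatic action"

# Example: the same rule fires whether or not a photograph went viral.
print(triage_response(deaths=2_500, displaced=40_000, domestic=False))
```

The only point of writing it down this way is that the decision gets made by criteria chosen in an enlightened moment, rather than by whichever sad story is in front of us at the time.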
There was something you said when referencing the soccer study, in-group empathy and out-group schadenfreude, I guess.
Yes.
And this reminded me of a question we got on Twitter.
Someone was asking about the relationship between empathy and identity politics.
I guess based on the research you just cited, there's a pretty straightforward connection there.
Do you have any thoughts on that?
There's a profound connection.
We're very empathic creatures, but it always works out that the empathy tends to focus on those within our group and not on the out-group.
I got into a good discussion once with Simon Baron-Cohen, the psychologist, who's very pro-empathy.
We were talking at the time of the war in Gaza, and he said, if only the Palestinians and Israelis had more empathy, this wouldn't happen.
The Israelis would realize the suffering of the Palestinians, and vice versa, and there'd be peace.
And my feeling here is that it's exactly the opposite, that that conflict in particular suffered from an abundance of empathy.
The Israelis at the time felt huge empathy for the suffering of teenagers who were kidnapped, of their families.
The Palestinians felt tremendous empathy for their countrymen who were imprisoned and tortured.
There was abundant empathy, and there's always abundant empathy at the core of any conflict.
And the reason why it drives conflict is I feel tremendous empathy for the American who is tortured and captured.
And as a rule, it's very hard for me to feel empathy for the Syrian or for the Iraqi and so on.
And we can now bring it down a little bit, to the aftermath of the 2016 election.
I think Clinton voters are exquisitely good at empathy towards other Clinton voters and Trump voters for Trump voters.
Having empathy for your political enemies is difficult.
And I think actually, for the most part, so hard that it's not worth attaining.
We should try for other things.
I think we certainly want the other form of empathy.
I mean, we want to be able to understand why people decided what they decided, and we don't want to be just imagining motives that don't exist or weighting them in ways that are totally unrealistic.
We inevitably will say something about politics.
People would expect that of us.
By law, there could be no discussion of over 30 minutes that doesn't mention Trump.
I'm going to steer you toward Trump, but before we go there, as you may or may not know, I've been fairly obsessed with artificial intelligence over the last, I don't know, 18 months or so, and we solicited some questions from Twitter, and many people asked about this.
Have you gone down this rabbit hole at all?
Have you thought about AI much?
And there was one question I saw here, which was, given your research on empathy, how should we program an AI with respect to it?
So I actually hadn't taken the AI worries seriously, and, I'll be honest, I dismissed them as somewhat crackpot until I listened to you talk about it.
I think it was a TED Talk.
Yeah, yeah.
So thank you, that got me worried about yet something else.
No, good.
And I found it fairly persuasive that there's an issue here we should be devoting a lot more thought to.
The question of putting empathy into machines is, I think, in some ways morally fraught.
Because if I'm right that empathy leads to capricious and arbitrary decisions, then if we put empathy into computers or robots, we end up with capricious and arbitrary computers and robots.
I think when people think about putting empathy into machines, they often think about it from a marketing point of view.
Such that, you know, a household robot or even an interface on a Mac computer that is somewhat empathic will be more pleasurable to interact with, more humanoid, more human-like, and we'll get more pleasure dealing with it.
And that might be the case.
I've actually heard a contrary view from my friend David Pizarro, who points out that when dealing with a lot of interactions, we actually don't want empathy.
We want a sort of cold-blooded interaction that we don't have to become emotionally invested in.
We want professionalism.
For our superintelligent AI, I think we want professionalism more than we want emotional contagion.
If you're anxious and consulting your robot doctor, you don't want that anxiety mirrored back to you, right?
You want as stately a physician as you ever met in the living world now embodied in this machine.
Yeah, so I'm very happy if I have a home blood pressure cuff, which just gives me the numbers and doesn't say, oh man, I feel terrible for you.
This must be very upsetting.
Whoa!
Yeah, yeah, dude, dude, I'm holding back here.
You know, the machine starts to let little graphical tears trickle down the interface.
I'm sure people involved in marketing these devices think that they're appealing.
I think that David is right, and we're going to discover that for a lot of interfaces, we just want an affect-free, emotion-free interaction.
And I think we find, as I find, for instance, with interfaces where you have to call the airport or something, when it reassures me that it's worried about me and so on, I find it cloying and annoying and intrusive.
I just want the data.
I want to save my empathy for real people.
Yeah, but I think the question goes to what will be normative on the side of the AI.
So do we want AI... I guess let's leave consciousness aside for the moment.
That's right.
But do we want an AI that actually has more than just factual knowledge, that could emulate our emotional experience? Could that give it capacities that we want it to have, so as to better conserve our interests?
So here's my take on it.
I think we want AI with compassion.
I think we particularly want AI with compassion towards us.
I'm not sure whether this came from you or somebody else, but somebody gave the following scenario for how the world will end.
The world's going to end when someone programs a powerful computer that interfaces with other things to get rid of spam on email.
And then the computer will promptly destroy the world as a suitable way to do this.
We want machines to have a guard against doing that, where they say, well, human life is valuable.
Human flourishing and animal flourishing is valuable.
So, if I want AI that is involved in making significant decisions, I want it to have compassion.
I don't, however, want it to have empathy.
I think empathy makes us, among other things, racist.
The last thing in the world we need is racist AI.
There's been some concern that we already have racist AI.
Oh, yes.
Have you heard this?
Yes, I have.
Go ahead.
Remind me.
If I recall correctly, there are algorithms that decide on the paroling of prisoners and/or, you know, whether people get mortgages.
And there is some evidence, I could be making a bit of a hash of this, but there was some evidence in one or both of these categories that the AI was taking racial characteristics into its calculation, and that hadn't been programmed in; it was just an emergent property of its finding that, of all the data available, this data was relevant.
In the case of prisoners, the recidivism rate: if it's just a fact that black parolees recidivate more, re-offend more, I don't know in fact that it is, but let's just say that it is, and an AI notices that, well then of course, if the AI is predicting whether a person is likely to violate their parole, it is going to take race into account if it's actually descriptively true of the data that it's a relevant variable.
And so I think there was at least one story I saw where you had people scandalized by racist AI.
When I was young and very nerdy, more nerdy than I am now, I like gobbled up all science fiction.
And Isaac Asimov had a tremendous influence on me.
And he had all of his work on robots.
And he had these three laws of robotics.
And, you know, if you think about it from a more sophisticated view, the laws of robotics weren't particularly morally coherent.
Like one of them is that you should never harm a human or, through inaction, allow a human to come to harm.
But does that mean the robot's going to run around trying to save people's lives all the time because we're continually not allowing people to come to harm?
But the spirit of the endeavor is right. In fact, I think in some way, as robots become more powerful, you could imagine it becoming compulsory to wire up these machines with some morality.
This comes up with driving cars. Sorry, with the automated, computer-driven cars, where, you know, are they going to be utilitarian?
Are they going to have principles?
And there's a lot of good debates on that, but they have to be something.
They have to have some consistent moral principles that take into account human life and human flourishing.
And the last thing you want to stick in there is something that says, well, if someone is adorable, care for them more.
Always count a single life as more than 100 lives.
There's no justification for taking the sort of stupidities of empathy that we're often stuck with and putting them into the machines that we create.
That's one thing I love about this moment of having to think about super intelligent AIs.
It's amazingly clarifying of moral priorities.
And all these people who, until yesterday, said, well, you know, who's to say what's true in the moral sphere?
Once you force them to weigh in on how we should program the algorithm for self-driving cars, they immediately see that, okay, you have to solve these problems one way or another.
These cars will be driving.
Do you want them to be racist cars?
Do you want them to preferentially drive over old people as opposed to children?
What makes sense?
And to say that there is no norm there to be followed just means you're going to be designing one by accident, right?
If you make a car that is totally unbiased with respect to the lives it saves, well, then you've made this Buddhist car, right?
You've made the Matthieu Ricard car, say.
That may be the right answer, but you've taken a position just by default.
And the moment you begin to design away from that kind of pure equality, you are forced to make moral decisions.
And I think it's pretty clear that we have trolley problems that we have to solve.
And, at a minimum, we have to admit that killing one person is better than killing five, and we have to design our cars to have that preference.
When you put morality in the hands of the engineers, you see that you can't take refuge in any kind of moral relativism.
You actually have to answer these questions for yourself.
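As a purely illustrative sketch of that point, and not anyone's actual system, here is roughly what a minimal casualty-minimizing rule might look like; every name and number in it is hypothetical.

```python
# A deliberately minimal sketch of the point that a self-driving car must
# encode *some* moral weighting. Here every life counts equally; changing
# the weights is itself a moral decision.

from typing import Dict, List

def choose_maneuver(options: List[Dict]) -> Dict:
    """Pick the maneuver with the lowest expected number of deaths.

    Each option is a dict like:
        {"name": "swerve", "p_crash": 0.9, "lives_at_risk": 1}
    """
    def expected_deaths(option: Dict) -> float:
        return option["p_crash"] * option["lives_at_risk"]

    return min(options, key=expected_deaths)

# Killing one is judged better than killing five under this weighting.
options = [
    {"name": "stay_course", "p_crash": 1.0, "lives_at_risk": 5},
    {"name": "swerve", "p_crash": 1.0, "lives_at_risk": 1},
]
print(choose_maneuver(options)["name"])  # -> "swerve"
```

Even this "unbiased" version takes a position: every life is weighted the same, and changing those weights, say to favor the car's own occupant, is itself a moral decision someone has to make.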
I envision this future where you walk into a car dealership and you order one of these cars and you're sitting back and you're paying for it, and then you're asked what kind of setting do you want.
Do you want racist, Buddhist, radical feminist, religious fundamentalist?
I don't know if you've heard this research, but they ask people what the car should do on the question of how biased it should be toward saving the driver over the pedestrian. So if it's a choice between avoiding a pedestrian and killing the driver, or killing the pedestrian, how should the car decide?
Most people say, in the abstract, it should just be unbiased.
It should be indifferent between those two.
But when you ask people, would you buy a car that was indifferent between the driver's life and the pedestrian's, they say no.
They want a car that's going to protect their lives.
So it's hard to adhere to the thing you think is the right answer, it seems.
And there I actually don't know how you solve that problem.
I think probably the best solution is to not advertise how you've solved it.
That's interesting, yeah.
I think if you make it totally transparent, it will be a barrier to the adoption of technology that will be, on balance, immensely life-saving for everyone, you know, drivers and pedestrians included.
We now have tens of thousands of people every year reliably being killed by cars.
We could bring that down by a factor of 10 or 100. And then the deaths that would remain would still be these tragedies that we would have to think long and hard about, whether the algorithm performed the way we want it to.
But still, we have to adopt this technology as quickly as is feasible.
So I think transparency here could be a bad idea.
I think it's true.
I mean, I find that I know people who have insisted they would never go into a self-driving car.
And I find this bizarre because the alternative is far more dangerous.
But you're right, and I think there's also this fear of new technology where there'll be a reluctance to use them.
Apparently there was a reluctance for a long time to use elevators that didn't have an elevator operator.
So they had to have some schnook stand there so people would feel calm enough to get in.
That's hilarious.
But I agree with the general point, which is that there's no opting out of moral choices.
Failing to make a moral choice over, say, giving to charity or what your car should do is itself a moral choice.
Yeah.
And driven by a moral philosophy.
I also just can't resist adding, and I think this is from the Very Bad Wizards group, but you can imagine a car that had a certain morality and then you got into it and it automatically drove you to like Oxfam and refused to let you move until you gave them a lot of money.
So you don't want too moral a car.
You want a car sort of just moral enough to do your bidding, but not much more.
Have you been watching any of these shows or films that deal with AI, like Ex Machina or Westworld or Humans?
I've been watching all of these shows that deal with AI.
And they all, Ex Machina and Westworld both, deal with the struggle we have when something looks human enough and acts human enough that it becomes irresistible to treat it as a person.
And there, philosophers and psychologists and lay people might split.
They might say, look, if it looks like a person and talks like a person, then it has a consciousness like a person.
Dan Dennett would most likely say that.
And it's interesting, different movies and different TV shows, and I actually think movies and TV are often instruments of some very good philosophy, go different ways on this.
So Ex Machina, I hate to spoil it, so listeners should turn off the sound for the next 60 seconds if they don't want to hear it, but there's a robot there that you feel tremendous empathy and caring for.
The main character trusts her.
And then she cold-bloodedly betrays him, locks him in a room to starve to death while she goes on her way.
And it becomes entirely clear that all of this was simply a mechanism that she used to win over his love.
While for Westworld, it's more the opposite, where we as viewers are supposed to see the hosts, Dolores and the others, as people, and the guests who forget about this, who brutalize them, are the monsters.
Yeah, it's very interesting.
I think all of these films and shows are worth watching.
I mean, they're all a little uneven from my point of view.
There are moments where you think, this isn't the greatest film or the greatest television show you've ever seen, but they all have their moments where they, as you said, they're really doing some very powerful philosophy by forcing you to have this vicarious experience of being in the presence of something that is passing the Turing test in a totally compelling way, and not the way that Turing originally set it up.
I mean, we're talking about robots that are no longer in the uncanny valley and looking weird.
They are looking like humans.
They're as human as human, and they are, in certain of these cases, much smarter than people.
And this reveals a few things to me that are probably not surprising, but again, to experience it vicariously, just hour by hour watching these things, is different than just knowing it in the abstract.
That's right.
The best movies and TV shows and books often take a philosophical thought experiment and make it vivid in such a way that you can really appreciate it.
And I think that sentient, realistic humanoid AI is a perfect example of these shows confronting us with, this is a possibility, how will we react?
I think it tells us how we will react.
I think once something looks like a human and talks like a human and demonstrates intelligence that is at least at human level... well, for reasons I gave somewhere on this podcast and elsewhere when I've talked about AI, I think human-level AI is a mirage.
I think the moment we have anything like human-level AI, we will have superhuman AI.
We're not going to make our AI that passes the Turing test less good at math than your phone is.
It'll be a superhuman calculator.
It'll be superhuman in every way that it does anything that narrow AI does now.
So once this all gets knit together in a humanoid form that passes the Turing test and shows general intelligence and looks as good as we look, which is to say it looks as much like a locus of consciousness as we do, then I think a few things will happen very quickly.
One is that we will lose sight of whether it's even philosophically or scientifically interesting to wonder whether this thing is conscious.
I think some people like me, you know, who are convinced that the hard problem of consciousness is real, might hold on to it for a while, but every intuition we have of something being conscious, every intuition we have that other people are conscious, will be driven hard in the presence of these artifacts.
And it will be true to say that we won't know whether they're conscious unless we understand how consciousness emerges from the physical world.
But we will follow Dan Dennett in feeling that it's no longer an interesting question because we find we actually can't stay interested in it in the presence of machines that are functioning at least as well, if not better, than we are.
And they'll almost certainly be designed to talk about their experience in ways that suggest that they're having an experience.
And so that's one part.
We will grant them consciousness by default, even though we may have no deep reason to believe that they're conscious.
And the other thing that is brought up by Westworld to a unique degree, and I guess by Humans also, is that many of the ways people imagine using robots of this sort are ways we imagine, at least, that we wouldn't use other human beings, on the assumption that the robots aren't conscious, right?
That they're just computers that really can't suffer.
But I think this is the other side of this coin.
Once we helplessly attribute states of consciousness to these machines, it will be damaging to our own sense of ourselves to treat them badly.
We're going to be in the presence of digital slaves and just how well do you need to treat your slaves?
And what does it mean to have a super humanly intelligent slave?
That just becomes a safety problem.
How do you maintain a master-servant relationship to something that's smarter than you are and getting smarter all the time?
But part of what Westworld brings up is that you are degrading your own consciousness by letting yourself act out all of your baser impulses on robots on the assumption that they can't suffer, because the acting out is part of the problem.
It actually diminishes your own moral worth whether or not these robots are conscious.
Right, so you have these two things in tension.
One is that when it starts to look like a person and talk like a person, it'll be irresistible to see it as conscious.
You know, you could walk around and you could talk to me and doubt that I'm conscious and we can doubt that about other people, but it's an intellectual exercise.
It's irresistible to treat other people as having feelings, emotions, consciousness, and it'll be irresistible to treat these machines that way as well.
And then we want to use them.
And Westworld is a particularly dramatic example of this, where characters are meant to be raped and assaulted and shot.
And it's supposed to be, you know, fun and games.
But the reality of it is these two things are in tension.
Anybody who were to assault the character Dolores, the young woman who's a robot, would be seen as morally indistinguishable from someone who would assault any person.
And so we are at risk, for the first time in human civilization, of in some sense building machines that it is then morally repugnant to use for the very purposes they were constructed for.
Yeah.
It would be like genetically engineering a race of people, but wiring up their brains so that they're utterly subservient and enjoy performing at our will.
Well, that's kind of gross.
And I think we're very quickly going to reach a point where we'll see the same thing with our machines.
And then what I would imagine, and this goes back to building machines without empathy or perhaps without compassion, is that there may be a business in building machines that aren't that smart to do these things.
I'd rather have my floor vacuumed by a Roomba than by somebody who has an IQ of 140 but is wired up to be a slave.
I think the humanoid component here is the main variable.
If it looks like a Roomba, you know, it actually doesn't matter how smart it is, you won't feel that you're enslaving a conscious creature.
What if it could talk?
It comes down to the interface.
Insofar as you humanize the interface, you drive the intuitions that now you're in relationship to a person.
But if you make it look like a Roomba and sound like a Roomba, it doesn't really matter what its capacities are, as long as it still seems mechanical.
I mean, the interesting wrinkle there, of course, is that, ethically speaking, what really should matter is what's true on the side of the Roomba, right?
So if the Roomba can suffer, if you've built a mechanical slave that you can't possibly empathize with because it doesn't have any of the user interface components that would allow you to do it, but it's actually having an experience of the world that is vastly deeper and richer and more poignant than your own, right?
Well, then you have an instance of what's now a term of jargon in the AI community.
I think this is probably due to Nick Bostrom's book, but maybe he got it from somewhere else.
The term is mind crime.
You're creating minds that can suffer.
Whether in simulation or in individual robots, you know, this would be an unimaginably bad thing to do.
I mean, you would be on par with Yahweh, you know, creating a hell and populating it.
If there's more evil to be found in the universe than that, I don't know where to look for it.
But that's something we're in danger of doing insofar as we're rolling the dice with some form of information processing being the basis of consciousness.
If consciousness is just some version of information processing, well then, if we begin to do that well enough, it won't matter whether we can tell from the outside.
We may just create it inside something we can't feel compassion for.
That's right.
So there are two points.
One point is your moral one, which is whether or not we know it.
We may be doing terrible moral acts.
We may be constructing conscious creatures and then tormenting them.
Or alternatively, we may be creating machines that do our bidding and have no consciousness at all.
It's no worse to assault the robot in Westworld than it is to bang a hammer against your toaster.
But, so that's the moral question.
But it still could diminish you as a person to treat her like a toaster.
Yes.
Given what she looks like.
And that's, I mean, so raping Dolores on some level turns you into a rapist whether or not she's more like a woman or more like a toaster.
Yes, so this treatment of robots is akin to a claim, and I forget who the philosopher was, that animals have no moral status at all.
However, you shouldn't torment animals because it'll make you a bad person with regard to other people, and people count.
And it's true.
I mean, one wonders.
After all, we do all sorts of killing and harming of virtual characters in video games, and that doesn't seem to transfer.
It hasn't made us worse people.
If there is an effect on increasing our violence towards real humans, it hasn't shown up in any of the homicide statistics, and the studies are a mess.
But I would agree with you that there's a world of difference between sitting at my Xbox and shooting, you know, aliens, as opposed to the real physical feeling, say, of strangling someone who's indistinguishable from a person.
And that's the second point, which is, even if they aren't conscious, even if, as a matter of fact, from a God's eye view, they're just things, it will seem to us as if they're conscious.
And then the act of tormenting these seemingly conscious beings will either be repugnant to us, or, if it isn't, it will lead us to be worse moral beings.
So those are the dilemmas we're going to run into, probably within our lifetimes.
Yeah.
Actually, there's somebody coming on the podcast in probably a month who can answer this question, but I don't know which is closer, realistically.
Machine intelligence that passes the Turing test, or the robot interface, you know, robot faces that are no longer uncanny to us.
Yeah.
I don't know which will be built first, but it is interesting to consider that the association here is a strange one.
If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org.
Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with other subscriber-only content, including bonus episodes, AMAs, and the conversations I've been having on the Waking Up app.