All Episodes
Aug. 29, 2016 - Making Sense - Sam Harris
55:28
#44 — Being Good and Doing Good

Sam Harris speaks with Oxford philosopher William MacAskill about effective altruism, moral illusions, existential risk, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.


Welcome to the Making Sense Podcast.
This is Sam Harris.
Just a note to say that if you're hearing this, you are not currently on our subscriber feed and will only be hearing the first part of this conversation.
In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at SamHarris.org.
There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content.
We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers.
So if you enjoy what we're doing here, please consider becoming one.
Today I'm speaking to William MacAskill.
Will is an associate professor in philosophy at Lincoln College, Oxford.
He was educated at Cambridge, Princeton, and Oxford.
He may, in fact, be the youngest tenured professor of philosophy in the world, and he's one of the primary voices in a movement in philanthropy known as effective altruism, a movement which he started with a friend.
And he's the co-founder of three non-profits based on effective altruist principles, Giving What We Can, 80,000 Hours, and the Center for Effective Altruism.
He's also the author of a book, which I just started reading, which is really good, and the title is Doing Good Better, Effective Altruism and a Radical New Way to Make a Difference.
And there is no question that Will is making a difference.
If you don't have two hours to spend on our whole conversation, which I absolutely loved, please listen to the last few minutes of this podcast so that you at least know the tangible effect the conversation had on me.
And now I give you Will MacAskill.
Well, I'm here with Will MacAskill.
Will, thanks for coming on the podcast.
Thanks for having me.
I first heard about you when you did your appearance on Tim Ferriss' podcast.
And that was a great interview, by the way.
And I'm now in the habit, sorry Tim, of poaching your podcast guests.
This is the third time I've done this.
I did it with Jocko Willink, the Navy SEAL, and Eric Weinstein, the mathematician VC.
Those were both great conversations.
And now I have Will here.
One thing I do is I try not to recapitulate the interview that was done with Tim, so we will not cover much of the ground you did there.
So I recommend that interview because that was fascinating, and you have a fascinating bio, much of which we will ignore because you described it with Tim.
But briefly, just tell me what it is you're doing in the world and how you come to be thinking about the things you think about.
Yeah, so I wear a couple of hats.
I'm an Associate Professor of Philosophy at the University of Oxford, with a focus on ethics and political philosophy and a little bit of overlap with economics, and I'm also the CEO of the Centre for Effective Altruism, which is a non-profit.
It's designed to develop and promote the idea of effective altruism, which is the use of your time and money to do as much good as you possibly can, using evidence, careful reasoning, and high-quality thought to ensure that when you try to do good, you actually do as much good as possible, whether that's through your charity, your career, or what you buy, and to help you choose the causes where you can have the biggest impact.
And put that way, it seems like a purely commonsensical approach to doing good in the world.
I think as we get into this conversation, people who are not familiar with your work or the effective altruism movement will be surprised to learn just how edgy certain of your positions are, which is why this will be a fascinating conversation.
So I should say, up front though, you have a book entitled Doing Good Better, which I have only started, I regret to say, but it's a very well-written and very interesting book, which I recommend people read.
It covers many things we, again, probably won't cover in this conversation.
But tell me about the play pump.
You start your book with this story, and it really encapsulates much of what is wrong and much of what is potentially right with philanthropy.
Yeah.
So, the Play Pump was developed in the late 1990s.
And it was an idea that really caught the attention of people around the world, especially the philanthropic and development communities.
The Play Pump was built in South Africa, and the idea behind it was that it was a way of providing clean water to poor villages across sub-Saharan Africa and South Africa that didn't currently have clean water.
It was a combination invention: a children's merry-go-round.
So children would push this thing, which looked just like a merry-go-round, but the force from the children pushing it would pump clean water up to a reservoir that would provide the clean water for the community.
So it looked like a win-win.
The children of the village would get their first playground amenity, and the people of the village would get clean water.
It really took off for that reason.
The media loved to pun on the idea; they said pumping water is child's play, it's the magic roundabout.
It got a huge amount of funding.
The First Lady at the time, Laura Bush, as part of the Clinton Global Initiative, gave it $17 million in funding to roll this out across sub-Saharan Africa.
It won the World Bank Development Marketplace Award for being such an innovative invention.
Jay-Z and Beyoncé promoted it.
Really, it was the thing within development for a while.
And when I first heard about it, I thought, wow, what an amazing idea this is.
Great that you can do two things at once: making children happy, but then also providing water.
It just seemed such a good example, and everyone behind it, of course, was very well-intentioned.
Yeah.
Well, I should say, reading that section of your book, which again is just the first few pages, the effect on the reader is really perfect, because you find yourself on the wrong side of this particular phenomenon: you just think, oh my God, that is the greatest idea ever.
Yeah.
Right.
Yeah.
This is a merry-go-round for kids that has the effect of doing all of this annoying labor that was otherwise done by women, you know, pumping these hand pumps.
So now continue to the depressing conclusion.
Yeah.
As you might expect, there's a twist in the story, which is simply that, in reality, the play pump was a terrible idea from the start.
So unlike a normal merry-go-round, which spins freely once you push it, in order to pump the clean water, you need constant torque.
So actually pushing this thing would be very tiring for the kids.
I mean, there were other problems too.
Sometimes they'd fall off and break limbs.
Sometimes the children would vomit from the spinning.
But the main problem was that they would just simply get very tired, they wouldn't want to play on this thing all day.
But the community still needed this water, and so it was normally left up to the elderly women of the village to push this brightly coloured play pump round and round for all hours of the day, a task they found very undignified and demeaning.
And then secondly, it just wasn't even very good as a pump.
And often it was replacing the very boring but very functional Zimbabwe hand pumps, which, when you actually asked, the communities preferred; they would pump more water with less effort, at a third of the price.
There were a number of other problems too.
They would often break down.
There had initially been an idea that maintenance would be paid for with billboards on these reservoirs, but none of the advertising companies actually wanted to pay for it.
And so these things were often left in disarray, and no maintenance would happen to them either.
And so this all came to light in a couple of investigations, and thankfully, in what's actually a very admirable and rare case, the people who were funding this, especially the Case Foundation, acknowledged that this had been a big mistake, and then said, yep, we just made a mistake, we're no longer going to keep funding this.
What about the man who had invented or was pushing the idea of the pump?
Yeah, so the people who were pushing it, PlayPumps International and Trevor Field behind it, continued to go ahead with it.
They didn't accept the criticism.
This is perhaps a phenomenon you're very familiar with.
And so actually, the organization does still continue in a vastly diminished capacity today.
They're still producing play pumps sponsored by companies like Colgate-Palmolive and Ford Motors.
What is unusual in the world of doing good is that this actually was investigated.
Criticisms came to light, and people were willing to back out.
But the lesson from this is just that good intentions aren't good enough.
What seems like a really good idea, something that just seems like, yeah, this is amazing, a revolutionary new idea,
can actually just not be very effective at all.
It can even do more harm than good.
What we need to do, if we want to really make a big difference, is do the boring work of actually investigating how much does this thing cost?
How many people's lives are going to be affected by how much?
What's the evidence behind this?
And there are many other things that we could be spending our money on that are much less sexy than the play pump, but do vast amounts of good.
And that's why it's absolutely crucial, if we really do want to use our time and money to make a difference, that we think about what are the different ways we could be spending this time and money?
What's the evidence behind the different programs we could be doing?
And what's the one that's going to do the very most good?
It seems to me there are at least three elements to what you're doing that are very powerful, and the first is the common sense component, which really is not so common as we know, which is just to actually study the results of one's actions, and in the spirit of science, see what works, and then stop doing what doesn't work.
But the other element is you are committed.
I know you're personally committed, and to some degree, I guess you can just tell me how much the EA community is also committed explicitly to essentially giving until it hurts.
I mean, giving what many people would view as a heroic amount of one's wealth to the poorest people in the world or to the gravest problems in the world.
And we'll talk about Peter Singer in a moment because you've certainly been inspired by him in that regard.
And the third component is to no longer be taken in by certain moral illusions, where the thing that is sexiest or most disturbing often isn't the gravest need or doesn't represent the gravest need.
And to cut through that in a very unsentimental way.
And this is where people's moral intuitions are going to be pushed around.
So let's start with the second piece, because I think the first is uncontroversial.
We want to know what is actually effective.
But how far down the path with Peter Singer do you go? Because I've watched a few of your talks at this point, and I've heard you say things that more or less align you perfectly with Singer's level of commitment, where he more or less argues, and I don't think he has ever recanted this, that you should give every free cent to help the neediest people on earth.
It's morally indefensible to have anything like what we would consider a luxury when you're looking at the zero-sum trade-off between spending a dollar on ice cream or whatever it is and saving yet another life.
So just tell me how much you've been inspired by Singer and where you may differ with his take.
So I think there's just two framings which are both accurate.
So the first is the obligation frame, just how much are we required to give.
And Peter Singer argues that we have an obligation to give basically everything we can, and argues for this by saying, well, imagine you were just walking past a child drowning in a shallow pond
and you failed to rescue that child.
Now that would be morally abominable.
What's the difference between that and spending a few thousand dollars on luxury items when that money could have been spent to save a life in a poor country?
If you tried to justify not saving the child in the shallow pond because you were going to ruin your nice suit that cost you a thousand pounds,
that would just be, you know, no moral justification at all, nor would it be a justification if it was ten thousand pounds.
And so for that reason, he argues, yeah, we have this obligation.
There's another framing as well, which we call excited altruism, for which I use this story: imagine you're walking past a burning building and you run in, you see there's a child there, you kick the door down and, you know, you save that child.
On this framing, the thought's just, wouldn't that be amazing?
Wouldn't you feel like that was a really special moment in your life?
You'd feel like this hero.
And imagine if you did that several times in a year.
You save one child from a burning building, another time you take a bullet for someone, a third time you save someone from drowning.
You'd think your life was really pretty special, and you'd feel pretty good about yourself.
And the truth of the matter is actually, yeah, we can do that every single year.
We can do much more than that hero who runs into the burning building and saves that child's life just by deciding to use our money in a different way.
And so there's these two framings, obligation and opportunity.
And I actually just think both are true.
Many people in the effective altruism community don't actually agree with the obligation framing.
They think they're doing what they do because it's part of their values, but there's no sense in which they're obligated to do it.
I actually agree with Singer's arguments.
I think that certainly if you can help other people to a very significant extent, such as by saving a life, while not sacrificing anything of moral significance, then you're required to do it.
I mean, in my own case, I just think the level at which I'm at least approximately maxing out on how much good I can do is nowhere close to the level at which I'd think, wow, this is a big sacrifice for me.
And so, it's perhaps a big sacrifice in financial terms.
As an academic, I'll be on a good middle-class income, and I'm planning to give away most of my income over the course of my life.
So in financial terms, it looks like a big sacrifice.
But in terms of my personal well-being, I don't think it's like that at all.
I don't think money is actually a big factor.
And it's not just my own personal happiness; if you look at the literature on well-being, this is also the case: beyond even quite a low level of income, about $35,000 per year,
the relationship between money and happiness is very small, indeed.
On some ways of measuring it, it's non-existent, in fact.
And then, on the other hand, being part of this community of people who are really trying to make the world better is just very rewarding, actually.
It just has these positive effects in terms of my own well-being.
So the kind of answer is just that, yeah, in theory I agree that, yeah, this is a moral requirement and so on.
But then in practice, it's actually just not really much of a sacrifice for me, I don't think.
Let's linger there, because I have heard you say, in response to challenges of the sort that Singer often receives, well, if you can live comfortably and do good, well, then that's great.
That's a bonus.
There's nothing wrong with living comfortably.
And now you have just claimed that you're living comfortably, but in fact, by most people's view, you're not, I think. So spell it out: what is actually your commitment to giving money away at this point?
Yeah, so in terms of giving money: in 2009 I made a commitment to give away everything above £20,000 per year.
Inflation and purchasing power parity adjusted to Oxford 2009.
Right.
So now that's about £24,000, with current exchange rate that's something like $33,000.
And just to then give everything above that.
And not to wait.
You do that every year.
Not to wait.
Yeah, I do it every year.
And then with my time as well, I just try and spend as much time as I can.
And do you actually think that would or will scale with vastly increased economic opportunity if you get dragged into a startup next week where now you're making millions of dollars at some point?
Do you aspire to keep it where you've set it now?
Yeah, absolutely.
I mean, I think, you know, the amount of money I've been earning over the last year is much greater than when I was a postdoc or PhD student.
And in fact, that's just been a plus.
I'm happier that I'm able to give away more.
I mean, the one worry, the biggest worry I have with my commitment is just the value of my time.
So there are certain ways you can spend money in order to save time: eating out rather than making food for yourself, being more willing to get a taxi places rather than the bus.
Yeah, that's interesting.
That means I think I'd be making a mistake.
So I have a kind of balancing act.
One is just that, because I'm using my time to build up the Centre for Effective Altruism and so on, to promote these ideas, I want to ensure that I have as much time as possible to do that.
But then at the same time, I don't want to say, oh, well, people should be giving, or it's really good for people to be giving their money effectively, but I don't do that because my time's so valuable.
That would just seem kind of hypocritical.
So I also just want to demonstrate that, yeah, you can do this, and it's actually just a really good life.
It's not nearly as much of a sacrifice as it might seem.
Let's linger there for a moment, because I think that if you are not following Singer all the way... the implication of his argument is really that there should be some kind of equilibrium state where, at a certain point, you are more or less indistinguishable from the people you're helping, because you've helped them so much.
So if you are living a comfortable life at all, I mean, a comfortable life by Western standards,
you are still, from Singer's view, culpably standing next to the shallow pond, watching yet another child drown.
And so I'm wondering how you draw that line.
And obviously, I mean, needless to say, there's no judgment in this, because the scheme you have just sketched out already qualifies you for sainthood in most people's worldview at this point.
So how do you think about that?
Yeah, so why don't I give even more?
So I think even if you endorse pure utilitarianism, you just should maximize the amount of good you can do.
I think that just for practical reasons, that doesn't mean that you should keep donating until you're living on $2 per day, not if you're in a rich country.
Because of the opportunities you have to spend... well, let's just focus solely on money.
There are lots of other ways of doing good.
But if you just say, okay, I'm going to live a normal life, you know, keep going in my own job and just donate as much as I can until I'm earning very little.
Firstly, I think, you know, that's going to damage your ability to earn more later.
It means there's a risk of you burning out, which is, I think, very significant.
If you're going to, you know, wear a hair shirt for three years and then completely give up on morality altogether, that's much worse than donating a more moderate amount, but for the rest of your life.
And then also in terms of, yeah, your productivity in your work as well, it's just actually really important to ensure that you've got the right balance in how much you're donating, so that you can do it positively.
And then finally, in terms of the influence you have on other people, I think if you're able to act as a role model, something that people actually really aspire towards, where they think, yeah, this is an amazing way to live a life, and look, these people are able to donate a very significant amount and still have a really great life.
That's much more powerful, because it might actually mean that many other people go and do the same thing.
And if just one other person does the same thing as you do, you've doubled your life's impact.
It's like a very big part of the equation.
Whereas if you're walking around utterly miserable, just so you can donate that last cent to fight global poverty, you might seem a little bit like an anti-hero.
And I think that's a very important consideration.
So I actually think that when it comes to the practical implication of Singer's ideas, it doesn't lead you to donate everything above $2 per day.
Instead, the optimal amount is actually quite a higher level, which is maximizing the amount of good you'll do over the course of your lifetime, bearing in mind the ability to, say, get promotions or change career, earn more, the value of your time, ensuring you're productive and ensuring you're a good role model to other people as well.
And so I actually think that, yeah, the point at which I try to maximize my own impact is far away from the line at which I'd think, this is really, you know, a big hardship for me.
And I think that's true at least for many people.
Yeah, this is really a fascinating area, and it's going to get more fascinating because it just becomes strange the closer you look at it.
Now, I'm totally convinced by your opportunity framing, and while I had heard it put that way before, your emphasis on that is very attractive and very compelling.
And so, just to remind our listeners, by dint of having the resources you have, and if you're listening to this podcast in any developed country, you almost by definition have vast resources relative to the poorest people on earth.
And this puts you in a position to quite literally save the child from the burning building any moment you decide to write a check. I mean, what is it that actually, in your view, is sufficient to save a life using the best charity?
The best guess from GiveWell, donating to the Against Malaria Foundation, is $3,400, statistically speaking, on average, to save a life.
They're keen to emphasize that that's just an estimate.
A lot of assumptions go into that and so on, but they're very careful, very skeptical.
It's the best estimate that I know of.
So that opportunity is always there, and I guess one of the challenges, from a philanthropic point of view and from the point of view of maximizing one's own psychological well-being, is to make that opportunity as salient as possible.
Because obviously writing a check doesn't feel like rushing across the street and grabbing the child out of the burning building.
And then being rewarded by all the thanks you'll get from your neighbors.
But if you could fully internalize the ethical significance of the act, something like that reward is available to us.
At least that's what you're arguing.
I'm convinced that is a good way of seeing it, and so therefore taking those opportunities more and more and making them more emotionally real seems like a very important project for all of us who have so much.
The other side, the Singerian obligation side, I think is fraught with other issues, and so I just want to explore those a little bit.
The problem we're dealing with here is that we are beset by many different forms of moral illusion, where we effortlessly care about things that are, in the scheme of things, not all that important, and can't be goaded to care about things that are objectively and subjectively, when you actually connect with the lives of the people suffering these conditions, the most important problems on earth.
And the classic example is, you know, a girl falls down a well. It's one girl, it's one life, and what you see is wall-to-wall coverage on every channel for 24 hours tracing the rescue, successful or otherwise, of this little girl.
And yet a genocide could be raging in Sudan, and not only can't we be moved to care, we so reliably can't be moved to care that the news organizations just can't bear to cover it.
I mean, they give us a little bit of it just because it's their obligation, but it's five minutes, and they know that's a losing proposition for them anyway.
So that's the situation we're in, and that seems like a bug, not a feature of our ethical hardware.
For me, it exposes an interesting paradox here, because the most disturbing things are not reliably the most harmful in the world, and the most harmful things are not reliably the most disturbing.
And you can talk about this from the positive or negative sides.
We can talk about the goods we don't do, and we can talk about the harms people cause.
And so, this is an example from my first book, The End of Faith.
To find out that your grandfather flew a few bombing missions over Dresden in the war is one thing.
To find out that he killed a woman and her children with a shovel is another.
Now, he undoubtedly would have killed more women and children flying that bombing mission.
But given the difference between killing from 30,000 feet by dropping bombs and killing up close and personal, and this is where the paradox comes in, we recognize that it would take a different kind of person with a very different set of internal motives, intentions, and global properties of his mind and emotional life to do the latter versus the former.
So a completely ordinary person like ourselves could be, by dint of circumstance, detached enough from the consequences of his actions so as to drop the bombs from the plane.
It takes a proper psychopath or somebody who was pushed into psychopathy by his experience to kill people in that way with a shovel.
And to flip this back to philanthropy, it is a very different person who throws out the appeal from UNICEF, casually ignoring the fact that he has forgone yet another opportunity to save a life.
That person is very different from the person who would pass a child drowning in a shallow pond because he doesn't want to get his shoes wet.
And so the utilitarian equation between life and life, which Singer's obligation story rests on, doesn't acknowledge the fact that it really would require a different person to ignore suffering that was that salient, or to perpetrate, in the case of creating harms, suffering that's that salient, and yet we're being asked to view them as equivalent for the purpose of parsing the ethics.
So I think there's an important distinction between assessing acts and assessing a person, assessing a person's character.
And I think normally when we go about doing moral reasoning, most of the time we're talking about people's characters.
So is this a good person in general?
Can I trust them to do good things in the future?
Is it the sort of person I want to associate with?
Whereas moral philosophers are often talking about acts.
And so I think Singer as well would agree that it's in some sense a much worse person who, like, intentionally kills someone than who just walks past a drowning child.
I entirely agree with that.
In part because the idea in our society is that killing people intentionally is a far greater moral wrong than merely failing to save someone.
Right.
Although, let me just... the difference between an act of commission and omission, I think, brings in a different variable here.
I mean, I agree that that is a difference that we find morally salient.
But what I'm talking about here is that in both cases, on the side of not doing good, you are declining to help someone.
And in both cases, in war, you are knowingly killing people; they're just very different circumstances.
So there's different levels of salience.
And so I agree; I would also just be very troubled by someone who wasn't moved in some way by the more salient causes of human suffering.
But when we think about moral progress, I think it's absolutely crucial to pay particular attention to those causes of suffering that are very mechanized or have the salience stripped away from them.
I mean, if you look at the orders that were given to SS guards, in terms of descriptions of how to treat Jews in the Holocaust, every step had been taken to remove their humanity, to turn it into completely banal evil.
And it's through that almost mechanization of suffering that I think humanity has committed some of the worst wrongs in its history.
And I think that that's also going to be true today.
So when you look at the practice of factory farming, or if you look at the way we incarcerate people.
You know, if we saw a country, as happens, that was regularly flogging or inflicting corporal punishment on its criminals, we'd think that's absolutely barbaric as a practice.
And yet, putting someone in a prison cell for several years is a worse harm to them.
I think it's considerably worse, the punishment we're inflicting on them, but it doesn't give us that same emotional resonance.
And insofar as there's this track record throughout human history of people doing absolutely abominable acts, not realizing they were morally problematic at all, even taking them as common sense, precisely because the way the harm was caused had been stripped of its salience, had been made sterile, as with the case of the SS guards,
that should give us pause when there's some case of, you know, extreme harm that has this property of being made sterile.
It should make us worry, are we in that situation again?
Are we just thinking, oh yeah, this is common sense, a normal part of practice, but only because of the way that things have been framed?
And the really powerful thought, I think, from Singer's arguments for thinking about extreme poverty is, well, maybe we're in that situation now with respect to us in the West compared to the global poor.
So if we look back and think of Louis XVI or something, or imagine some monarch who's incredibly wealthy with his people starving all around him,
we think, God, that's absolutely horrific.
But it doesn't seem so different from the way that we are at the moment.
Everyone in a rich country like the US or UK is... basically, most of the population are in the richest 10% of the world's population, even once you've taken into account the fact that money goes further overseas.
I imagine most of the listeners of this podcast are in the top few percent.
If you're earning above $55,000 per year, you're in the richest 1% of the world's population.
And this is a very unusual state to be in.
It's only in the last 200 years that we've seen such a radical divergence between our rich countries and the poorest countries in the world.
So it's not something that our moral intuitions, I think, have really caught up to.
But in the spirit of asking, well, what are the ways in which we could be acting that would seem radically wrong from the perspective of future generations, but that we take for common sense?
I think Singer's definitely put his finger on a possible candidate, which is the fact that we have what, by historical standards and global standards, is immense wealth, immense luxury, and it's currently common sense, or normal, just to use that on yourself rather than to think of it as, in some sense, resources that really belong to all of humanity.
Okay.
Well, we're going to keep digging in this particular hole because this is where we are going to reach moral gold.
So you did a debate with Giles Fraser for Intelligence Squared, and I was amused to see that I had actually debated him as well.
He liked you much more than he liked me, I think, probably because you weren't telling him his religion was bullshit.
I can imagine.
Giles is a priest.
But I thought he raised a very interesting point in your debate, and your answer was also interesting, so I'll just take you back to that moment.
Again, we're in a burning house with a child who can be saved, but in this house, there is a Picasso on the wall in another room, and you really only have a moment to get out of there with one of these precious objects intact.
And Giles suggested that, on your view, the Picasso is worth so much that you really should save it, because you can sell it and turn it into thousands and thousands of bed nets that will save presumably thousands of lives.
I'm not sure that's actually the conversion from bed net to life, but in any case, we're talking about a multimillion dollar painting, let's say a $50 million painting.
And your child, or the child, is just one life.
And so he put that to you, I think expecting, perhaps not, but expecting that that is a knockdown argument against your position, and you just bit the bullet there.
So perhaps you want to respond again to that thought experiment.
Yeah.
So the first thing to caveat is that Giles is asking this as a philosophical thought experiment,
where you strip away extraneous factors like, what's the media going to think of this?
And perhaps also, are you going to be able to live with yourself afterwards as a matter of human psychology and so on?
So stripping away those things to just have the pure thought experiment of, well, you can save this child or you can save this Picasso and sell it to buy bed nets.
And the question I asked him was, well, suppose there are two burning buildings: in one there's a single child, and in the other burning building there are, let's say, a hundred children that you could save.
And the only way you can save those hundred children is by taking this Picasso off the wall and using it to prop open the door of this other building such that a hundred children can go through.
What ought you to do?
And I think in that case it's very clear you ought to save the 100 children, even if you're using this painting as a means to doing so.
The fact that you're using that as a means doesn't seem relevant.
And the reason I actually quite like this thought experiment is it really shows what a morally unintuitive world we're in.
That actually the situation we're in right now is that there's a burning building, it's just that it's behind you rather than in front of you.
And that there are those hundred children whose lives you can potentially save, who are behind you and not salient to you.
And so, I think, like, what Giles was wanting to say was that, oh, isn't this very uncompassionate?
But actually, I think this is just far more sophisticated compassion.
The understanding that there are people on the other side of the world whose lives are just as important, who are just as in need as someone who's right in front of you, shows a much more sophisticated, a much more genuine form of compassion than simply being moved by whatever happens to be most salient to you at the time.
And so, yeah, it's a weird conclusion, but the weirdness comes from how weird the world is, how morally unintuitive the world we happen to find ourselves in is, which is that, yeah, morally speaking, you save the painting, and therefore save hundreds of lives.
Having said that, of course, I wouldn't blame someone.
And I think, again, as we talked about in terms of natural human psychology, it's perfectly natural to save the person who's right in front of you.
So I wouldn't blame anyone for doing that.
But if we're talking about moral philosophy and what the morally correct choice is, then I think you just have to save the hundred.
Also, you wouldn't blame them, and the decision to save the Picasso strikes us as so strange.
Those are two, I think, two sides of the same coin.
I mean, you wouldn't blame them because you're acknowledging how counterintuitive it is to save the Picasso and not the child.
And so you're really putting the onus on the world, on the situation, on all the contingent facts of our biology and our circumstance that causes us to not be starkly consequentialist in this situation.
So, my concern there is that that's not... I mean, one of the reasons why I don't tend to call myself a consequentialist, even though I am one, is because, for me, consequentialism, or historically consequentialism, has so often been associated with just looking at the numbers of bodies on both sides of the balance, and that's how you understand the consequences of an action
and judge its moral merits, but there's more to it than that.
And with Giles, you just acknowledged that there was more to it than that, but you weren't inclined to put those consequences also into the picture.
So like the question of whether you could live with yourself, right?
And I think the spectrum of effects certainly includes whether you could live with yourself.
And, to come back to the moral paradox here, it also includes what kind of person you would have to be in order to grab the Picasso and not the child, given all of the contingent facts we just acknowledged, given how weird the world is, or how not optimized the world is to produce a person who could just
algorithmically always save a hundred lives over one every time, no matter how the decision is couched.
We can even make the situation more perverse.
So, you know, I have two daughters.
I don't have a Picasso on the wall, but certainly I have something that could be sold that could buy enough bed nets to save more than two lives in Africa, right?
So the house burns down tonight.
I have a choice to grab my daughters or grab this thing, whatever it is, And I reason thusly, having been convinced by you in this podcast, that saving at least three lives in Africa is better than saving my two daughters.
And it's only a contingent property of my own biology and its attendant selfishness and my you know, the drive toward kin selection and all the rest that has caused me to even be biased toward my daughter's lives over the lives of faceless children in Africa.
So I'm convinced, and I grab the valuable thing and sell it and donate it to GiveWell, and it's used for bed nets.
And so I think even you, given everything you believe about the ethical imperative of the Singer-style argument, would be horrified, and rightly so, that I was capable of doing that.
That horror, or at least that distrust of my psychology, I think summarizes an intuition we have about all of the other consequences that are in play here that we haven't thought much about.
And again, this is a very complicated area, because I see that... I don't know if you know about my analogy with the moral landscape, where we can have various peaks of well-being.
I could imagine an alternate peak on the moral landscape, where we are such beings as to really care about all lives generically, as much as we care about our own children's lives.
So I feel I love my children, but I actually also feel the same love for people in Africa I've never met, and it's just as available to me, and therefore my disposition not to privilege my children is not a sign of any lack of connection to them, it's just I'm the Dalai Lama squared.
I'm the Bodhisattva of compassion, and I've got that connection everywhere.
So I grabbed the Picasso, and I can feel good about saving more lives in Africa.
But my concern is that, let's just acknowledge that is another peak on the moral landscape.
Between where we are now, where we love our children more than those we haven't met and would view it as an act of pathological altruism to let them burn and just grab the Picasso based on our knowledge that we could do more good with it, and becoming
the Dalai Lama-style altruist, there may be this uncanny or unhappy valley between these two peaks that we would have to traverse, where there would be something sociopathic about an ability to run this calculus and be motivated by it.
Yeah, so there's tons to say here.
So one thing, yeah, I want to say is like, I also don't describe myself as a consequentialist.
I think the correct thing to do, in a moral decision, is to have a kind of variety of moral lenses.
My PhD was on this topic.
Have a variety of moral lenses and take the action that's the best kind of compromise between the competing moral views that seem most plausible.
Some of which are consequentialist, others might be non-consequentialist.
And I just emphasize consequences because everyone should agree that consequences matter.
I know they're very neglected, in terms of the impact that we can have in the world that we're in.
And so I think the case where you've got special obligations, where it's a family member or a daughter or a child or a friend, is very different from the question when it's just strangers.
I think it's at least a reasonable moral view to think:
I just do have this special obligation to my friends.
It's certainly a very deep part of common sense to think, I do have a special obligation to my child, and if my child were right in front of me and I could save either my one child or ten strangers, I should save my child.
And so I think that's quite a different case.
Maybe we should just plant a flag there, because I think that's interesting.
I don't know what special obligations actually consist in, apart from some argument that, one, we're hardwired that way, and it's hard to get over this hardwiring.
But two, we are better off for honoring those obligations.
It does resolve to consequences in some way.
If our world would be much better if we ignored those hardwired obligations or a sense of obligation, then I think there's an argument for ignoring them.
We can table that.
But then the second question, which is very important, taking us back to the Picasso, and this is a way in which moral philosophy can often be very confusing to people who don't do it, is that moral philosophers do engage in these thought experiments where they say, oh well, put aside all of these other considerations that are normally relevant.
And then they expect you to still have a reliable intuition, even in this very, very strange, fantastical case.
And so I do think in the case of when you are bringing back factors like, am I actually psychologically capable of doing this?
How am I going to think about myself when I hear this child's screams late at night every day?
Of course that's incredibly relevant.
And then similarly, I think, again, the kind of philosopher tends to focus on what acts are the best.
Whereas in terms of the life decisions we make, the biggest decisions tend to be more like: what sort of person should I be?
What are my big projects?
And as for cultivating in yourself the empathy to be the sort of person who won't, in this situation, simply do the calculations and just go and save the Picasso,
I think that might well be right, that you're just going to be a person who will do more good over the long run if you don't do that.
That's why it's kind of a subtle case where, again, you want to distinguish between what's the best kind of character to have.
And the best character to have is plausibly one that means you do the wrong thing a number of times.
That's very interesting.
I don't want to interrupt you if you had much more you wanted to say there because every one of these points is so interesting.
This has been my gripe with certain caricatures of utilitarianism or consequentialism, which is the idea that if you can sacrifice one to save five, well then you should: you know, you go into your doctor's office for a checkup and he realizes he's got five other patients who need your organs, so they just grab you and steal your organs, and now you are dead.
But if you just look at the larger consequence of living in a world where at any moment any of us could be sacrificed by society to save five others, none of us want to live in that world, and I think for a good reason.
And so you have to grapple with a much larger spectrum of effects when you're going to talk about consequences.
So you just acknowledged one here, which is that to be the kind of person you want to be, who is really going to do good in the world and be tuned up appropriately to have the right social connections to other human beings, you may in fact want to be the kind of person who
privileges love of one's friends and family over a more generic loving-kindness to all human beings, because if you can't feel those bonds with friends and family, that has moved enough of the moving pieces in your psychology so that you're not the kind of person who is going to care about the suffering of others as much as you might.
Yeah, so another example of this is that a number of people I know, often as a consequence of this mindset, with respect to their dietary choices just kind of acknowledge that animal suffering is really bad and that non-human animals have real moral status that we should respect, and they won't eat some very bad forms of meat, like factory-farmed chicken, on those grounds.
For something like beef or lamb, the animals, I think, just have reasonable lives.
Not amazing lives, but lives that are definitely worth living, better than if they'd never existed.
You actually think that's true of factory farmed beef, or you're now talking about grass-fed, pasture-raised beef?
I mean, it's harder to factory farm a cow in the way that you can a chicken.
You can't feed them nearly as badly as you can a chicken, but as for which animals have lives that are worth living or not, it's a really hard question.
But there are at least some people who will justify eating that sort of meat because, you know, what you're doing by buying that meat is increasing the demand for a certain type of meat, which then means that more animals of that type come into existence.
And those animals have good lives.
So the question is, well, who's it bad for then?
It's not bad for the cow you're eating because that cow's already dead.
It's not bad for the cow that you bring into existence because it wouldn't have existed otherwise.
But then I can't imagine, in my own case, psychologically believing both that animal suffering is incredibly important and you should care a lot about animals, and then also just eating their flesh.
I don't think it's a psychological possibility for me.
So you're a vegetarian?
So I'm vegetarian.
I mean, another case, first given to me by Derek Parfit, is, you know, your grandmother, who you love very much and have a very close relationship with.
When she dies, you just throw her in the garbage.
It just raises the question: well, who's that bad for?
You know, it's not bad for her because she's no longer with us.
But again, it just seems like there's this, as a matter of human psychology, doing that is very inconsistent with what seems like genuine regard for that person.
I think that's worth acknowledging.
So, it's easy to cash that intuition out, again, in the form of consequences, in my view, which is, yes, it's not bad for your grandmother because she's presumably not there to experience anything, but the sense that there's something sacred about
a human life, or the sense that one's love of a person needs to be honored by an appropriate framing of their death, that is good for everyone else who's yet living, right?
And if we just chucked our loved ones in the trash, that would have implications for how we feel about them, and how we feel about them is the thing that causes us to recoil from treating them that way once they've died.
And these kinds of seemingly impractical bouts of philosophizing are going to become more and more pressing when we can really alter our emotional lives and moral intuitions to a degree that is, you know, very granular.
So just imagine you had a pill that allowed you to just no longer be sad, right?
So like this is the perfectly targeted antidepressant that has no other downside, no other symptoms.
You know, pharmacologically we may in fact get that lucky, maybe not, but Imagine a pill that just, if you're grief stricken, you take this pill, and you are no longer grief stricken, and you can take it in any dose you want.
Now, say your child has died, or your mother has died, and you're in grief; then the question is, how long do you want to be sad for?
What is normative?
Now, would it be normative, 30 seconds after your child has died, right,
in fact, while you may still be in the presence of the body, to just pop that pill in a sufficient dose to be restored to perfect equanimity?
I think most of us would feel that that is some version of chucking your grandmother in the trash.
It doesn't honor the connection.
I mean, what does love mean if you don't shed a tear when the person you love most has died?
But finding what is normative there is really a challenge.
I don't know that there's any principle we can invoke or that we're ever going to find that is going to convince us that we have found the right point.
But whatever feels comfortable at the end of the day there, I think is encapsulating all of the consequences that we feel or imagine we would feel in those circumstances.
And a loss of connection to other people is certainly a consequence that we're worried about.
Yeah, absolutely.
And also, just if you never felt sadness at injury or death to your child, I'm sure humans, as a matter of fact, would therefore take fewer steps to protect their children and so on.
There'd be a whole host of, I think, really bad consequences from that, quite plausibly.
And yeah, in terms of the general framing, I agree with you in terms of frustrations at consequentialism, where they create these cases where all sorts of real-world effects are just abstracted away.
I mean, this is true for the question of how much to give as well, where in these debates it's normally imagined that there is just this superhuman person who could give away all of their income, down to the lowest level, and then not have any other areas of their life affected negatively.
But that's just a fiction.
I mean, I think if someone thinks, well, I should give 10% because if I give more than that I'm likely to burn out, I'm like, yep, you should be really attentive to your own psychology.
And that's like a really important consideration.
Whereas that's the sort of thing that, in arguments about consequentialism, for some reason, the critics and sometimes the proponents tend to miss; they tend to have very simplified views of the consequences.
Again, the problem comes back to the Singer side of this conversation, which is, if you're only giving 10%, then you are still standing next to the shallow pond.
It is one of those slippery-slope conditions, because once you acknowledge the juxtaposition of your casual indulgence of your wants, any one of which is totally dispensable,
you could sacrifice it and your life would be no worse off, alongside the immediate need of someone who's intolerably deprived by dint of pure bad luck.
I mean, your privilege is a matter of luck entirely, and so are all of the variables that produced it, no matter how self-made you think you are. You didn't pick your genes.
You didn't pick the society into which you were born.
You can't account for the fact that you were born in a place that is not now Syria fractured by the worst civil war in memory.
Elon Musk is as self-made as anyone.
He can take absolutely no responsibility for his intelligence, his drive, the fact that he could make it to America and America was stable and he did it in a time when there was immense resources to help him do all the stuff that he's doing.
So, again, you're still at the pond, and it feels like the conversation you would want to have with the person who says, well, if I give any more than 10%, it's going to kind of screw me up, and I'm not going to give much, and I'm not going to be happy, I'm not going to be effective.
It sounds like there's still more Peter Singer-style therapy to do with that person, which is, listen, come on, 12%, 14%, that's really moving you into a zone of discomfort, and there is no stopping point short of, well, listen, I could make more money
if you would let me get on an airplane now and fly to the meeting I'm now going to miss because I don't have enough money for a ticket, and then you begin to invoke some of the, I think you call them, earning-to-give principles, which we can talk about.
But unless you're going to bring in other concerns there where you can be more effective at helping more drowning children in shallow ponds, you don't have an argument against Singer.
Yeah, I think so.
There's a distinction in consequentialist ethics between what gets called actualism and possibilism.
And so supposing you have three options.
You can stay home and study.
You can stay home and watch TV.
If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org.
Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with other subscriber-only content, including bonus episodes and AMAs and the conversations I've been having on the Waking Up app.