Sam Harris speaks with David Edmonds about moral philosophy and effective altruism. They discuss Edmonds's book Death in a Shallow Pond, Peter Singer's famous drowning child thought experiment, arguments for and against thought experiments, "trolleyology," consequentialism, the origins of the Effective Altruism movement, the controversial strategy of "earning to give," Derek Parfit's influence on contemporary ethics, the backlash against effective altruists, Angus Deaton's critique of the efficacy of foreign aid, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Just a note to say that if you're hearing this, you're not currently on our subscriber feed, and you'll only be hearing the first part of this conversation.
In order to access full episodes of the Making Sense podcast, you'll need to subscribe at samharris.org.
We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers.
So if you enjoy what we're doing here, please consider becoming one.
I'm here with David Edmonds.
David, thanks for joining me again.
Thanks for having me back.
So, David, you have a new book, which I really enjoyed.
It's titled Death in a Shallow Pond: A Philosopher, a Drowning Child, and Strangers in Need.
And you've written a kind of short bio of the philosopher Peter Singer, who's also been on the podcast several times, and of the effective altruism movement that he has spawned, along with Will MacAskill and Toby Ord, who've also been on the podcast several times.
But it's also a great history of moral philosophy in the analytic tradition.
So I just want to track through the book, really, because I think the virtues of effective altruism, as well as the concerns surrounding it, are still worth talking about.
And I think just the core concerns of moral philosophy and how we think about doing good in the world are really of eternal interest because it's not at all clear that we think about these things rationally or effectively or normatively in any other way.
But before we jump in, remind people what you do.
Because we've spoken before and you have your own podcast, but where can people find your work generally, and what are you tending to focus on these days?
Gosh, well, I had a double life as a BBC journalist and a philosopher.
I've given up the BBC bit, so it's all philosophy from now on.
Nice.
I've got a podcast called Philosophy Bites, which I make with a colleague, a friend called Nigel Warburton.
And yeah, I now write philosophy books.
And I'm linked to a center in Oxford called the Uehiro Institute, which is a center dedicated to the study of practical ethics, applied ethics.
So yeah, those are the various strings to my bow.
So why did you write this book?
And why did you take the angle you took here?
Oh, gosh.
I mean, there's some prosaic explanations for why I wrote the book.
I'd just written this biography of a guy called Derek Parfit, who, as it happens, Peter Singer says is the only genius he ever met.
And so I was thinking, I had such fun writing that book, and he was such an extraordinary character.
I thought maybe I'll have a go at writing another biography.
And Peter Singer is probably the most famous philosopher alive today.
So I wrote to Peter and said, how about I write your biography?
And he said, no, thank you.
So then I thought I'd write a book about the history of consequentialism, which interests me.
And that would be a book that covered Bentham and Mill and Sidgwick and all the way up to Parfit and Singer.
And then I was sort of daunted by the prospect of that.
That was an enormous task.
And then I thought, what I'll do is I'll cover those subjects just through one thought experiment.
I'd written a book about 15 years ago called Would You Kill the Fat Man, which was a very similar kind of book.
It was, again, a biography of probably the most famous thought experiment in moral philosophy, which is the trolley problem.
And Peter Singer's thought experiment, which we're going to talk about, I hope, is probably the second most famous thought experiment in moral philosophy, but I would say much more influential than the trolley problem.
So anyway, that's what got me into the subject.
Yeah.
Yeah.
Well, I think we should start with the thought experiment, which, as I've spoken to Peter and other philosophers on this topic before, will be familiar to some people, but I think we can't assume everyone has heard of it.
So we should describe the thought experiment.
But before we do, perhaps we can discuss thought experiments themselves for a minute or two, because even the act of entertaining them is somewhat controversial.
What's the argument against thought experiments?
Give me the for and against of what we're about to do here.
Well, thought experiments covers an enormous range of subjects.
So there are thought experiments in every area of philosophy.
There are thought experiments in the philosophy of mind.
There are thought experiments in the philosophy of language.
There are thought experiments in epistemology.
And there are thought experiments in moral philosophy.
And the objections to thought experiments tend to be directed particularly at thought experiments in the moral realm, I would say.
So, for example, in the area of consciousness, there's a very famous thought experiment called the Chinese Room.
There's another famous thought experiment when people argue about physicalism and whether everything is physical.
And that's a thought experiment called What Mary Didn't Know.
And on the whole, those are not, I mean, they are contentious and they're very heavily debated, but they don't arouse the kind of suspicion, I think, that many moral thought experiments arouse.
And the reason that moral thought experiments arouse suspicion, well, there are many reasons, but one is people just say that our moral intuitions are not built for weird and often wacky scenarios.
They're built for normal life, real life.
And the problem with thought experiments is that they are often very strange, very artificial.
And so we shouldn't trust our intuitions.
And I would say that would probably be the main objection to them.
I mean, the response to that is there's a very good reason why they are artificial.
The whole point about a thought experiment is you're trying to separate all the extraneous circumstances and factors that might be getting in the way of our thinking.
And you're trying to kind of focus, in particular, on one area of a problem.
So you might have a thought experiment where there are two scenarios, which are different, except for the fact that one has a particular factor that the other doesn't have.
And the point is to try and work out whether that factor is making a difference or not.
And often you can only do that if you create a very artificial world, because the real world is not like that.
The real world is just full of music and noise and complications.
And so the thought experiment is designed to simplify and clarify and try and get at the nub of a problem.
Yeah, I mean, it's a kind of conceptual and even emotional surgery that's being performed.
I mean, you change specific variables and you look at the difference in response.
And again, as you said, this is often, it can often seem highly artificial or unlikely because you're looking for the pure case.
You're looking for the corner condition that really does elucidate the moral terrain.
Right.
I think we should describe both thought experiments here.
Because I think there's an analogy between a common response to the trolley problem and what's happening in the shallow pond as well.
So before we dive into the shallow pond, I guess pun intended, describe the trolley problem case and how it's used.
Well, the main trolley problem case goes like this.
You are to imagine that there is a runaway train.
It's careering down a track.
There are five people tied to the track.
And in the simple case, you are on the side of the track and there's a switch and you can flick the switch and you can divert the train down a spur.
And unfortunately, on that spur, one person is tied to the track.
So the question is, should you turn the switch and divert the train away from the five to kill the one?
And that was invented by a woman called Philippa Foot in 1967.
She was writing about abortion at the time, and it was in an article about abortion.
And then 20 years later, an American philosopher called Judith Jarvis Thomson comes up with another example.
So this one goes like this.
You are to imagine that the train is, again, out of control.
It's heading down the track.
There are five people who once again are tied to the track.
This time, there's a different way of saving them.
You are standing on a footbridge.
You're standing next to, in the original article, it was a fat man.
Now for modern sensibilities, it's a man with a heavy rucksack.
So you're standing next to a man with a heavy rucksack.
You can push the man with the heavy rucksack over the footbridge.
And because that rucksack is so heavy, or in the original case, because the man is so fat, he will stop the train and so save the five people.
And he would be killed in the process.
And the puzzle that Judith Jarvis Thomson asks us to grapple with is that she seems to think that in the first case, you should turn the train to save the five and kill the one.
But in the second case, you shouldn't push the fat man or you shouldn't push the man with a heavy rucksack to save the five at the cost of the one.
And that was her intuition.
And it's been tested all around the world.
It's been tested on men.
It's been tested on women.
It's been tested on the highly educated, on the less educated.
It's been tested in different countries.
And on the whole, almost everybody thinks that in the first case, it is right to turn the train.
And in the second case, it's wrong to push the fat man or the man with a heavy rucksack.
And so the puzzle in this thought experiment is to explain why, because in both cases, you are saving five lives at the cost of one.
Yeah.
And to be clear, the dissociation here is really extreme.
It's something like 95% for and against in both cases, but the groups flip, right?
So in the case where you just have to flip a switch, which is this kind of anodyne gesture, where you're touching something mechanical that diverts the train onto the other track, killing the one and saving the five, 95% of people think you should do that.
And when you're pushing the man from the footbridge, fat or otherwise, something like 95% think you shouldn't do that because that would be a murder.
And, you know, I often have thought that there's a kind of a lack of homology between these two cases because at least in my imagination, people are burning some fuel, trying to work out whether pushing the fat man really will stop the train, right?
There's kind of an intuitive physics that seems implausible there.
But leaving that aside, I think the big difference, which accounts for the difference in behavioral or experimental result, is that when people imagine pushing a person to his death, there's this up close and personal, very affect-driven image of actually touching the person and being the true proximate cause of his death.
Whereas in the case of flipping the switch, there's this mechanical intermediary and you're not having to get close to the person who's going to die, much less touch him.
And that seems to be an enormous difference.
And this is often put forward as a kind of an embarrassment to consequentialism because the consequences on the surface seem the same.
We're just talking about body count.
There's a net four lives that are saved.
So they should be, on a consequentialist analysis, the same case.
But I've always felt that this, I'm sure we'll cycle back to this topic a few times because I think it's important to get this right.
I've always felt that this is just a specious version, or at least an incomplete and unimaginative version of consequentialism or what consequentialism could be and should be, which is to have a fuller accounting of all the consequences.
So if in fact it is just fundamentally different experientially for a person to push someone to his death than to flip a switch, and if it's different to live in a society where people behave that way versus the other, well, then that's part of the set of consequences that we have to add to the balance.
And I think it is, I mean, I think it is obviously different.
And that's what's being teased out in the experiment.
So I now recognize that we should probably define consequentialism in order to continue this conversation.
So anyway, I just lob that back to you, and perhaps respond, but also give us a précis on consequentialism.
Well, consequentialism is the theory that what matters purely are the consequences.
So in these two trolley cases, as you say, the consequences of flipping the switch and pushing the man with the heavy rucksack are the same if you accept the hypothetical example, which is that one person dies and five people are saved.
So if you're a pure consequentialist, it looks like there's no difference between these two cases.
So there are dozens of these trolley cases in philosophy.
There are dozens of scenarios which involve runaway trains.
There are tractors.
There are all sorts of things going on in these trolley cases.
And it's been given a jokey title, which is trolleyology.
The study of these trolley cases is trolleyology.
And they've studied precisely the thing that you bring up.
So the question is, is the difference really just a sort of emotional difference about pushing the fat man as opposed to turning the switch?
So they've tested that and they've come up with a very ingenious way of testing it.
So what they do is they ask people the following scenario.
Imagine that the man with the heavy rucksack is on the footbridge, but this time you're standing next to a switch.
And if you turn the switch, the man with the heavy rucksack will fall through a trapdoor and will plummet to the ground and once again will stop the runaway train from killing the five people.
Now, if you are totally right about this…
Yeah, I mean, I see where this is going.
I'd forgotten all these iterations here.
And I think that it definitely dissects out the up close and personal, touchy-feely part of it.
But what it doesn't change is the fact that the man himself is being manipulated, right?
So you're not manipulating the train, you're manipulating the man, and the man is becoming the instrument, you know, his murder is the instrument rather than the effect of flipping the switch.
I think that does seem somehow a crucial difference.
Right.
So, but that's not a consequentialist difference, right?
So the worry is…
Well, it is, I would just say it is. I mean, just imagine these two cases: in one universe, you flip the switch, as 95% of people think you should, and you feel that, while it was not pleasant to do, your conscience is totally clear.
In another universe, you flip the switch to the trapdoor and watch this man fall to his death and stop the train, and you can scarcely live with yourself because of the psychological toxicity of having had that experience.
That's part of it, whatever the reason. I mean, we can talk more about the reasons why there is a difference there, and I'm happy to hear all your thoughts on that matter.
But if there just is in fact a difference, albeit maybe only in 95% of people, that's part of the consequences.
And you can imagine the ripples of those consequences spreading to any society that would make policy of a sort that would enshrine one behavior as normative versus the other, right?
I mean, this is what, I mean, there are all kinds of strange examples that are hurled at consequentialism that seem to be defeaters of it, which always seem to me to be specious.
I mean, one you actually deal with in the book, which is perhaps the most common one, which is the doctor who recognizes he's got five patients who need organ donations and he's got a perfectly healthy person in his waiting room just waiting for a checkup and he decides to euthanize this person and distribute his organs to the waiting five, saving a net four lives.
That seems on this narrow focus on body count to be acceptable on a consequentialist analysis.
But of course, it's not because you have to look at the consequences of what it would be like to live in a society where trust has so totally eroded because we know at any time, even by the doctor who purports to have our well-being at heart, we could be casually murdered for the benefit of others.
I mean, no one would want to live in that society.
It'd be a society of just continuous terror and for good reason.
So anyway, that's just my pitch that I've yet to hear, I mean, perhaps you can produce one in this conversation, but I've yet to hear a real argument against consequentialism that takes all consequences, all things considered into account.
Right.
So in your hospital case where somebody's bopped on the head and their two kidneys and their two lungs and their heart are used to save five patients.
So you're obviously right that if that, as it were, got out, then that would be terrifying for everybody.
You would never go and visit Aunty Doris in the hospital, because you'd think, well, there's a risk that when I go and visit Aunty Doris, the same thing is going to happen to me.
Of course, what the philosopher does is they then create a hypothetical example that…
Just a one-off case, yeah.
It's a one-off and nobody finds out about it and the person has got no friends and blah, blah, blah.
But again, the response to that is, well, we can't really imagine that.
You know, our intuitions can't really cope with that kind of cocooned example.
We're imagining that this news is going to leak out.
In the trolley case, I think it's much more complicated.
It is true that people would find it more difficult to live with themselves by pushing the fat man or by dropping the fat man through the trapdoor.
But the question is why?
And I think the explanation is that people have one very powerful non-consequentialist intuition.
And it goes something like this.
Although they don't articulate it and they're very puzzled by this thought experiment, if you put the following to them, they think, yes, this explains my intuition.
So imagine that you push the large man from the footbridge and the large man is wearing a rubber suit.
And instead of dying, he bounces off the track and he runs away.
So what's your reaction to that case?
Your reaction to that case is that's not good, because the whole point of pushing him over was so that he would get in the way of the train and save the five lives.
Now imagine that in the first case, the train is going along and it's going to kill the five people and you flick the switch and it goes down the spur.
Now imagine that the person on the spur is able to extricate themselves from their ropes and able to run away.
How would you feel about that?
Well, you'd feel absolutely delighted.
And why do you feel delighted?
Because the five haven't been killed and you haven't had to kill the one.
So the difference between the two cases, and this comes back to the doctrine of double effect that goes all the way back to Thomas Aquinas, is, as you hinted at earlier, that in the fat man case, you are using the fat man as a means to an end.
And that's not the case with the spur case.
Another way of putting that is you intend to kill the fat man when you push him over the footbridge.
You want to kill him.
Well, you need him to get in the way.
You don't intend to kill the person on the spur.
Yeah, it's interesting.
Well, I'm not sure I totally buy that.
That all turns on there being an important difference between acting in a way where it seems there's a 100% chance of killing a person, but it still being true to say that you don't intend to kill the person.
I think the key distinction is the distinction between intending and merely foreseeing.
So it's the distinction in the Geneva Convention between attacking a munitions factory…
It's a collateral damage issue, right?
Exactly.
So attacking the munitions factory, knowing that 100 civilians will die, but this munitions factory is so important to the enemy's war effort that the attack on the munitions factory is justified, even though you know that 100 people, 100 civilians will die.
It's the difference between that and intentionally targeting those 100 civilians.
So, yeah, I think I misspoke a moment ago.
I do clearly see that distinction.
I guess it's the, let me see what's bothering me about this.
Well, perhaps it'll come out just in further discussion here around the other thought experiments.
Well, let's talk about the shallow pond and kind of fill in more of this picture.
And I think we'll cycle back on whether consequentialism has any real retort.
Because you said a moment ago that this was a non-consequentialist intuition.
And my deep bias here, I'll be happy to be disabused of it, but my deep bias is that when you drill down on any strongly held intuition that pushes our morality around and we can't shake it, it is either, at bottom, some intuition about consequences, about what it would mean to live in a world where this kind of rule was repeated, so it's kind of a rule consequentialism rather than an act consequentialism per se, or we just have to bite the bullet and admit that, okay, this is an illusion, some kind of moral illusion, right?
So, I mean, there's so many things that we could care about as we're about to see and magically don't care about.
And it's inscrutable that, even when they're pointed out, we don't feel differently.
I mean, the one that always comes to mind for me is, you know, if we just changed our driving laws slightly, I mean, just to slightly inconvenience ourselves, and made the speed limit 10 miles an hour lower on every street in the nation.
I mean, speaking of America here, where we have 40,000 traffic deaths a year, reliably, and I don't know how many people are maimed, but 40,000 people are killed outright based on how badly we drive.
If we just reduce the speed limit by, let's say, 10 miles an hour, we would save thousands of lives.
I think there's no question of that.
And the only real consequence, I mean, I'm sure maybe a few people would be inconvenienced in a way that might prove fatal, but that would be massively offset by the number of lives saved.
The real consequence would be that it would be less fun to drive, right?
Or we could actually, I mean, even to make it more inscrutable still, we could put governors on all of our cars so that any car, from a Ferrari on down, could never exceed the speed limit, right?
You could drive however you wanted, but you could just never drive faster than the speed limit.
That's technologically feasible.
No one would want that, no matter how many lives it would save, because it would be less fun to drive somehow.
We want to somehow carve out the possibility of driving faster than the speed limit, at least sometimes.
And yet when you talk about that body count, nobody moves from that point to the obvious conclusion that we're all moral monsters for so callously imperiling the lives of everyone, including our own, really.
I mean, there's no identifiable victim in advance.
That's part of the problem, I think.
But I mean, there's thousands of people, 40,000 people are guaranteed to die this year in America based on the status quo.
How is this acceptable and how are we not monstrously unethical for accepting it?
And somehow the sense that there's even a moral problem here evaporates before I even can get to the end of the sentence.
Yeah, so 40,000 is a lot of people.
I think there were 58,000 killed in the whole of the Vietnam War, right?
So that's a big figure.
And oddly, in much of London now, they've reduced the speed limit to 20 miles an hour.
If you'd like to continue listening to this conversation, you'll need to subscribe at samharris.org.
Once you do, you'll get access to all full-length episodes of the Making Sense podcast.
The Making Sense podcast is ad-free and relies entirely on listener support.