March 1, 2017 - Making Sense - Sam Harris
24:19
#66 — Living with Robots

Sam Harris speaks with Kate Darling about the ethical concerns surrounding our increasing use of robots and other autonomous systems. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.


Welcome to the Making Sense Podcast.
This is Sam Harris.
Just a note to say that if you're hearing this, you are not currently on our subscriber feed and will only be hearing the first part of this conversation.
In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at SamHarris.org.
There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content.
We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers.
So if you enjoy what we're doing here, please consider becoming one.
For today's podcast, I bring you Kate Darling.
What a great name.
Kate is a researcher at the MIT Media Lab and a fellow at the Harvard Berkman Center, and she focuses on the way technology is influencing society, specifically robot technology.
But her background is in law and in the social sciences, and she's one of the few people paying attention to this.
And this is, along with AI, going to become increasingly interesting to us as we integrate more and more autonomous systems into our lives.
I really enjoyed speaking with Kate.
We get into some edgy territory.
As I think I said at some point, the phrase "child-size sex robots" was not one that I was ever planning to say on the podcast, much less consider its implications.
But we live in a strange world, and it appears to be getting stranger.
So to help us all figure that out, I now bring you Kate Darling.
I am here with Kate Darling.
Kate, thanks for coming on the podcast.
I'm delighted to be here.
It's great to be able to do this.
I'm continually amazed that we can do this, given the technology.
But I first learned of you, I think, in a New Yorker article on robot ethics.
And this is your area of focus and expertise.
And this is an area that almost doesn't exist.
You're one of the few people focusing on this.
So perhaps just take a moment to say how you got into this.
Yeah.
Robot ethics is kind of a new field, and it sounds really science-fictiony and strange.
But I have a legal and social sciences background, and at some point about five and a half years ago I started working at the Media Lab at MIT, where there's a bunch of roboticists.
I made friends with them because I've always loved robots.
We started talking, and we realized that I was coming at the technology with some questions they hadn't quite encountered before.
And we realized there were some questions worth exploring together: when you bring people who really understand how the technology works together with people who come at this from a policy or social sciences or societal mindset, that can be interesting to explore.
Tell people what the Media Lab is.
It seems strangely named, but everything that comes out of it is incredibly cool and super diverse.
What's going on over there at MIT?
Yeah, it's a little hard to explain.
To me, the Media Lab is this building where they just stick a bunch of people from all sorts of different fields, usually interdisciplinary, or as they call it, anti-disciplinary.
And they give them a ton of money and then cool stuff happens.
That's basically what it is.
So there's everything from like economists to roboticists to people who are curing blindness in mice to artists and designers.
It's really a mishmash of all sorts of very interesting people working in fields that don't really fit into the traditional categories of academia that we have right now.
And so now your main interest with robots is in how our relating to them could, and perhaps inevitably will, change the way we relate to other human beings.
Yeah, absolutely.
I'm totally fascinated by the way that we treat robots like they're alive, even though we know that they're not, and the implications that that might have for our behavior.
I must say I'm kind of late to acquire this interest.
Obviously I've seen robots in science fiction for as long as I've seen science fiction, but it wasn't until watching Westworld, literally a couple of months ago, that I realized that the coming changes in our society based on whatever robots we develop are going to be far more interesting and ethically pressing than I realized.
And this has actually nothing to do with what I thought was the central question, which is, will these robots be conscious?
That is obviously a hugely important question, and a lot turns ethically on whether we build robot slaves that are conscious and can suffer.
But even short of that, we have some really interesting things that will happen once we build robots that escape what's now called the Uncanny Valley.
I'll probably have you talk about what the Uncanny Valley is.
And I think, even based on some of your work, you don't even have to get all the way out of the Uncanny Valley, or even into it, for there to be some ethical issues around how we treat robots, which we have no reason to believe are conscious.
In fact, you know, we have every reason to believe that they're not conscious.
So perhaps before we get to the edgy considerations of Westworld, maybe you can say a little bit about the fact that your work shows that people have their ethics pushed around even by relating to robots that are just these bubbly cartoon characters that nobody thinks are alive or conscious in any sense.
Yeah, we are so good at anthropomorphizing things.
And it's not restricted to robots.
I mean, we've always had kind of a tendency to name our cars and become emotionally attached to our stuffed animals, and to imagine that they're these social beings rather than just objects.
But robots are super interesting because they combine physicality and movement in a way that we will automatically project intent onto.
So I think it's just so interesting to see people treat even the simplest robots like they're alive and like they have agency, even if it's totally clear to them that it's just a machine that they're looking at.
So, you know, long before you get to any sort of complex humanoid Westworld type robot, people are naming their Roombas.
People feel bad for the Roomba when it gets stuck somewhere just because it's kind of moving around on its own in a way that we project onto.
And I think it goes further than just being primed by science fiction and pop culture to want to personify robots.
Obviously we've all seen a lot of sci-fi and Star Wars, and we probably have this inclination to name robots and personify them because of that.
But I think there's also this biological piece to it that's even deeper and really fascinating to me.
One of the things that we've noticed is that people will have empathy for robots, or at least some of our work indicates that people will empathize with robots and be really uncomfortable when they're asked to destroy a robot or do something mean to it, which is fascinating.
Does this pose any ethical concern?
Because obviously it's kind of an artificial situation to hand people a robot that is cute and then tell them to mistreat it. But there are robots being used in therapy now; isn't there a baby seal robot that you're giving to people with Alzheimer's or autism?
Is contact with these surrogates for affection, does that pose any ethical concerns?
Or is that just, if it works on any level, it's intrinsically good, in your view?
I think it depends.
I think there is something unethical about it, but probably not in the way that most people intuitively assume. I think intuitively it's a little bit creepy when you first hear that we're using these baby seal robots with dementia patients and giving them the sense of nurturing this thing that isn't alive.
That seems a little bit wrong to people at first blush.
Honestly, if you look at what these robots are intended to replace, which is animal therapy, it's interesting to see that they can have a similar effect, and no one complains about animal therapy for dementia patients.
It's something that we often can't use because of hygienic or safety or other reasons, but we can use robots because people will consistently treat them sort of like animals and not like devices.
And I also think that, you know, for the ethics there, it's important to look at some of the alternatives that we're using.
So with the baby seal, if we can use that as an alternative to medication for calming distressed people, I'm really not so sure that that's really an unethical use of robots.
I actually think it's kind of awesome.
Yeah.
So one of the things that does concern me, though, is that this is such an engaging, or in other words manipulative, technology. And we're seeing a lot of these robots being developed for vulnerable parts of the population, like the elderly or children.
A lot of kids' toys have increasing amounts of this kind of manipulative robotics in them.
I do wonder whether the companies that are making the robots might be able to use that in ways that aren't necessarily in the public interest, like getting people to buy products and services, or manipulating people into revealing more personal data than they would otherwise want to enter into a database.
Things like that concern me, but those are more people doing things to other people rather than, you know, something intrinsically wrong about treating robots like they're alive.
So has there been anything like that?
Have any companies with toy robots or eldercare robots done anything that seems to push the bounds of propriety there in terms of introducing messaging that you wouldn't want in that kind of situation?
Yeah, I don't know of any examples of people trying to manipulate the elderly as of now, but we do have examples from the porn industry of very manipulative chatbots that try to get you to sign up for services.
And this was happening decades ago, right?
So we do have a history of companies trying to use technology in advertising. Or take the in-app purchases we see on iPads, where there have been consumer protection cases because kids were buying a bunch of things, and now companies have had to implement all of these safeties that require parental override in order to purchase stuff.
We know that companies serve their own interests, and any technology that is engaging in the way that robots already are in their very primitive forms, and will increasingly be, might pose a consumer protection risk.
Or you could even think of governments using robots, which are increasingly entering our homes and very intimate areas of our lives, to collect more data about people and essentially spy on them.
So there's this basic fact: any system that seems to behave autonomously doesn't have to be humanoid, doesn't even have to have a lifelike shape, doesn't have to draw on biology at all.
As you said, it could be something like a Roomba.
If it's sufficiently autonomous, it begins to kindle our sense that we are in relationship to another, which we can find cute or menacing or whatever we feel about it.
It pushes our intuitions in the direction of: this thing is a being in its own right.
I believe you have a story about how a landmine-defusing robot that was insectile, spider-like, could no longer be used, or at least one person in the military overseeing the project felt it could no longer be used, because it was getting its legs blown off.
And this was thought to be disturbing, even though, again, we're talking about a robot that isn't even close to being the sort of thing that you would think people would attribute consciousness to.
Yeah.
And then, of course, with design, you can really start influencing that, right?
So whether people think it's cute or menacing, or whether people treat it as a social actor. Because there's this whole spectrum: you have a simple robot like the Roomba, and then you have a social robot that's specifically designed to mimic all of these cues that you subconsciously associate with states of mind.
So we're seeing more and more robots being developed that specifically try to get you to treat them like living things, like the baby seal.
Are there more robots in our society than most of us realize?
What is here now and what do you know about that's immediately on the horizon?
Well, I think what's sort of happening right now is we've had robots for a long time, but robots have been mostly kind of in factories and manufacturing lines and assembly lines and kind of behind the scenes.
Now we're gradually seeing robots creep into all of these new areas.
So the military, or hospitals, where we have surgical robots, or transportation systems, with autonomous vehicles.
And we have these new household assistants.
A lot of people now have Alexa or Google Home or other systems in their homes.
And so I think we're just seeing an increase of robots coming into areas of our lives where we're actually going to be interacting with them in all sorts of different fields and areas.
So what's the boundary between, or is there a boundary between these different classes of robots?
I don't think there's any clear line to distinguish these robots, also in terms of the effect that they have on people.
Depending on how a factory robot is designed, people will become emotionally attached to that as well.
That's happened.
And we also, I mean, by the way, we don't even have a universal definition of what a robot is.
Some of the robots I picture, like those on an assembly line, are either fixed in place, where we're just talking about arms that are constantly moving and picking things up, or they're moving on tracks, but they're not roving around in 360 degrees of freedom.
I trust there are other robots that do that in industry as well.
Yeah.
But one question is: is the inside of a dishwasher a robot?
Is that movement autonomous enough?
It's basically doing what the factory robots are doing, but we call those robots.
We don't call the dishwasher a robot.
There's just this continuum of machines with greater and greater independence from human control and greater complexity of their routines, and there's no clear stopping point.
Let's come back to this concept of the uncanny valley, which I've spoken about on the podcast before.
What is the uncanny valley, and what are the prospects that we will get out of it anytime soon?
Yeah, the Uncanny Valley is a somewhat controversial concept: the idea that you can design something lifelike, but as soon as you get too close, and for the Uncanny Valley it's specifically humanoid, so as soon as you get too close to something that looks like a human but doesn't quite match one, it suddenly becomes really creepy.
So people will like the thing more the more lifelike it gets, and then once it gets too close, the likability drops.
It's like zombies; something that's human but not quite human really creeps us out.
And it doesn't go back up again until you can absolutely perfectly mimic a human.
I like to think about it less in terms of the Uncanny Valley and more in terms of expectation management, I guess.
So I think that if we see something that looks human, we expect it to act like a human.
And if it's not quite up to that standard, I think it disappoints what we were expecting from it.
And that's why we don't like it.
And that's a principle that I see in robot design a lot.
So a lot of the really compelling social robots that we develop nowadays are not designed to look like something that you're intimately familiar with.
Like, I have this robot cat at home that Hasbro makes, and it's the creepiest thing.
Because it's clearly not a real cat, even though it tries to look like one.
So it's very unlovable in a way.
But I also have this baby dinosaur robot that is much more compelling.
I've never actually interacted with a two-week-old Camarasaurus before, so it's much easier to suspend my disbelief and actually imagine that this is how a dinosaur would behave.
So, yeah, it's interesting to think about the whole Westworld concept: before we could even get there, we would really need to have robots that are so similar to humans that we wouldn't be able to tell the difference.
What is the state of the art in terms of humanoid robots at this point?
I mean, I've never actually been in the presence of any advanced robot technology that's attempting to be humanoid.
There are some Japanese androids that are pretty interesting.
To me, they're not out of the Uncanny Valley yet, but there's also some conversation about whether the Uncanny Valley is cultural or not, and I think some research on that, which I don't think is very conclusive.
It might be that in some cultures, like Japanese culture, people are more accepting of robots that look like humans but aren't quite there, because people say there's this religious background to it: the Shinto belief that objects can have souls makes people more accepting of robotic technology in general.
Whereas in Western society, we're more creeped out by this idea that a machine could resemble a living thing.
But yeah, I'm not really sure.
And you should check out the androids that Ishiguro in Japan is making, because they're pretty cool.
He made one that looks like himself, which is interesting to think about, in terms of his own motivations and psychology behind that.
But it is a pretty cool robot.
I think, you know, just from a photograph you might not be able to tell the difference.
Probably in interacting with it you would.
So do you think we will get to a Westworld-level lifelikeness long before we get to the AI necessary to power those kinds of robots?
Do you have any intuitions about how long it will take to climb out of the Uncanny Valley?
That's a good question.
Honestly, I'm not as interested in how we completely replicate humans, because I see so many interesting design things happening now where that's not necessary.
Robotic technology is very primitive at this point; I mean, robots can barely operate a fork.
But we can already create characters that people will treat as though they're alive.
And while it's not quite Westworld level, if we move away from this idea that we have to create humanoid robots, and we create, say, a blob, we have a century of animation expertise to draw on in creating these compelling characters that people can relate to and that move in a really expressive way.
And I think that's much more interesting.
I think much sooner than Westworld, we can get to a place where we are creating robots that people will consistently treat like living things, even if we know that they're machines.
I guess my fixation on Westworld is born of the intuition that something fundamentally different happens once we can no longer tell the difference between a robot and a person.
And maybe I'm wrong about that.
Maybe this change and all of its ethical implications comes sooner when, as you say, we have a blob that people just find compelling enough to treat it as though it were alive.
It just seems to me that Westworld is predicated on the expectation that people will want to use robots in ways that would truly be unethical if these robots were sentient, but because, by assumption or in fact, they will not be sentient, this becomes a domain of creative play analogous to what happens in video games.
If you're playing a first-person shooter video game, you are not being unethical in shooting the bad guys.
And the more realistic the game becomes, the more fun it is to play. While some people have worried about the implications of playing violent video games, all the data I'm aware of suggests they're really not bad for us, and crime has only gone down in the meantime. So it seems to me there's no reason to worry that, as gaming becomes more and more realistic, even with virtual reality, it's going to derange us ethically.
But watching Westworld made me feel that robots are different.
Having something in physical space that is human-like to the point where it is indistinguishable from a human, even though you know it's not, it seems to me that will begin to compromise our ethics if we mistreat these artifacts.
We'll not only feel differently about ourselves and about other people who mistreat them, we will be right to feel differently because we will actually be changing ourselves.
You'd have to be more callous than, in fact, most people are to rape or torture a robot that is indistinguishable from a person, because all of your intuitions of being in the presence of personhood, of being in relationship, will be played upon by that robot, even though you know that it's been manufactured and, let's say, you've been assured it can't possibly be conscious.
So, the takeaway message from watching Westworld for me is that Westworld is essentially impossible.
We would just be creating a theme park for psychopaths and rendering ourselves more and more sociopathic if we tried to normalize that behavior.
And I think what you're suggesting is that long before we ever get to something like Westworld, we will have, and may even have now, robots such that, if you were to mistreat them callously, you would in fact be callous. You'd have to be callous in order to do that, you're not going to feel good about doing it if you're a normal person, and people won't feel good watching you do it if they're normal.
Is that what you're saying?
Yeah, I mean we already have some indication that people's empathy does correlate with how they're willing to treat a robot, which is just cringe.
If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org.
Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with other subscriber-only content, including bonus episodes, AMAs, and the conversations I've been having on the Waking Up app.