Edition 356 - Dylan Glas
Dylan Glas is at the cutting edge of robotics including the "Erica" project...
Across the UK, across continental North America, and around the world on the internet, by webcast and by podcast, my name is Howard Hughes, and this is the Return of the Unexplained. | |
Well, I'm recording this still in the heat wave here in the UK. | |
It seems to have gone on for an awful long time, and if you don't have air conditioning, and of course most of our homes don't here, it makes everything that you do really difficult. | |
So I'm sorry if I sound a little breathy and a little hot on this edition. | |
It's probably going to make no sense to you if you're somewhere where it's cold at the moment, maybe in the southern hemisphere. | |
But we've had weeks of hot here, and various experts have had their say about why that is and how long it may continue for. | |
But that's another subject. | |
No shout-outs on this edition of The Unexplained, apart from one. | |
Anne-Marie got in touch, and she asked me to do something that it's a great pleasure to do, to say happy birthday to David Wall, who's a big listener, big fan of this show. | |
David, thank you very much for that. | |
And I just want to wish you personally the greatest 2018 into 2019 possible. | |
I hope that everything sorts itself out nicely for you and you have a wonderful year to come. | |
So happy birthday, David Wall, from myself and from Anne-Marie, too. | |
Thank you, Anne-Marie, for getting in touch and telling me about David. | |
This edition, we have a special guest, a man who's at the cutting edge of the world of robotics. | |
His name is Dylan Glas.
And even if you're not aware of it or you don't remember, you might have read about him in a number of British newspapers over the last six months or so. | |
That's how I discovered him and his work. | |
He is somebody who's worked not only in the US, but also in that crucible of development for robotic technology, Japan. | |
He's got a track record as long as your arm about all of this. | |
Check him out online, Dylan G-L-A-S. | |
Dylan Glas is his name.
At the moment, he's a senior robotic software architect at Futurewei Technologies, a research division of Huawei, the big international technology company, based in Silicon Valley.
He was previously a senior researcher in social robotics at Hiroshi Ishiguro Laboratories at ATR and a guest associate professor at the Intelligent, and no doubt intellectual too, Robotics Laboratory at Osaka University.
He's got a much longer curriculum vitae, resume if you prefer, than that, but I think that probably gives you a picture of the man we're about to speak with.
He was best known this year for his work in the development and execution of a robot called Erica. | |
That's all I'm going to say about Erica for now and all I'm going to say about him. | |
Thank you very much as ever to Adam at Creative Hotspot for his hard work getting the show out to you and also for keeping the website ticking over too. | |
Thank you to you for all of your emails. | |
If you've made a donation to this online show recently, thank you very much. | |
They are vital for this work to proceed. | |
Please go to my website, theunexplained.tv, and that's where you can leave a donation, if you would like to.
You can also send me a message about the show, maybe some guest suggestions, whatever you want to do.
And you could do both.
There's no law against it.
Lovely to hear from you. | |
Go to my website, theunexplained.tv. | |
It is your one-stop shop for connecting with my show, The Unexplained. | |
And thank you very much if you've been in touch recently. | |
Okay, let's get to San Francisco now, a completely different time of day in another part of our globe, and talk with Dylan Glas about robotics.
Dylan, thank you for doing this. | |
Thank you for inviting me. | |
How's San Francisco, Dylan? | |
San Francisco is great. | |
The weather is calm and beautiful as always. | |
Now, is it true that that is the part of the world that you need to be in these days if you want to be at the cutting edge of technology? | |
It's where an awful lot of the people who are into this stuff and at the cutting edge of this stuff live. | |
It's true that there are a lot of people in the Bay Area who are working on cutting edge AI and robotics and a lot of these related technologies. | |
But there are actually labs all over the world that are doing this kind of research. | |
And I've had the privilege to meet with people from many of them and collaborate with them. | |
You have a fantastic, we call it curriculum vitae over here, you call it resume. | |
Your background is impeccable. | |
And the one thing that stands out a country mile about you is that you have been involved not only in the United States, but also in Japan, which, from my point of view, and certainly judging by what I used to see on television when I was growing up as a kid, is where all of this started.
Yeah, I think Japan has a special relationship with robotics. | |
There's a different feel there where everyone genuinely wants a robot to be their friend. | |
And when I was working there in robotics, I'd tell people what I do for a living, and they're like, oh, please make a robot that'll take care of me when I get old. | |
They would just say this to me in bars or on the street. | |
So it's a very different attitude, I think, from a lot of Western countries and a lot of other places. | |
But the Japanese, I think they have some very interesting cutting-edge work that you can kind of only do there. | |
Sorry, you were saying, I think there's a small delay here, so I'm going to shut up. You were saying, with social robots?
Sorry about that. | |
Yeah, there is a lot of work with social robots there. | |
I think a lot of the humanoid robots that look very human-like and realistic are done in Japan. | |
And a lot of the research there really focuses on the social interaction side of things rather than maybe a user interface kind of approach to how do we control robots. | |
It's more like how do we interact with them in a smooth social manner. | |
And this has been something that they have been aware of easily for 50 years. | |
I can remember when I was a kid and maybe watching television in the 1970s. | |
And there was a program over here on TV called Tomorrow's World. | |
It was from the BBC and it was the big tech and science show here. | |
And it was on just before the big pop show. | |
So everybody used to watch it. | |
And every so often they would roll out the latest miniature labor-saving robotic device from Japan. | |
And the presenter would have a slight smile on his face about this, you know, those strange Japanese developing all this stuff. | |
But it seems to me that, looking at it now through the prism of history and time, they knew something that we didn't.
Yeah, you could say that. | |
What I've heard is that there's actually a very sort of unique relationship that the Japanese have had with technology just because of their history. | |
And the way that Japanese corporations work, traditionally, they would employ you for life. | |
And so the idea is that when automation and manufacturing robots came into industry, in places like the U.S. or Europe, maybe people just lost their jobs to the robots. | |
And so there was a lot of animosity. | |
Whereas in Japan, the robots took over those low-level jobs and the people got moved up to higher-level jobs in the same company. | |
So they saw them as being a very positive contribution to their lives. | |
So there is less fear because of the way that the technology is developed there. | |
And I think also about Japan. | |
I studied Japan when I was a kid in geography. | |
And they taught us a fair amount about Japan in those days. | |
And we talked about the industrial concerns there, companies like Mitsubishi and Hitachi. | |
I'm going to pronounce this really badly, Dylan, but zaibatsu, which, as I remember, is a kind of conglomerate that embraces all kinds of manufacturing.
And if you have a company that will employ you for life and embraces all kinds of manufacturing, then if it brings in robots to improve productivity, the way you're going to look at it is that that is a good thing. | |
Sure. | |
Exactly. | |
Yeah. | |
So it's a very different mindset. | |
Why did you want to go, I think we've just answered that question, really, but why did you want to go and study and be in Japan, which you did for a long time? | |
I originally went to Japan as an English teacher, actually, and I taught English there for a few years and came back to the U.S. to get what I was thinking of as a real engineering job.
And then I looked around. | |
And actually, while I was looking, I found an opportunity to go to Jerusalem, where I worked with Israeli and Palestinian high school students, teaching them Java programming, the idea being to promote world peace.
And I had these amazing experiences there, but some of them were pretty negative experiences, where some of my students were held at gunpoint at the border or something.
So I came back to the States and I got some job offers and things, but they were all sort of military contractor type missile systems and things. | |
And I was sort of uncomfortable with that. | |
And then I went back to visit Japan and I found an opportunity to build robots that hug children and help elderly people go shopping. | |
And I was like, is this real? | |
So I was just, I fell in love with it immediately. | |
What do you think it is about the Japanese that makes them willing to embrace technology of this kind?
The fact that they seem to be, and I know we've already talked about this in the last couple of minutes, but they seem to be more willing to accept helpers and assistance in their homes and in their lives that aren't human than we are. | |
We're only just coming to terms with this now, and only just starting to, really. | |
Yeah. | |
Well, Japan, you know, similar to a few other countries, especially in Europe, has a growing aging population. | |
And I think that has a big impact on things because they have a shortage of labor in many aspects of the labor market. | |
And everywhere you go, you see help-wanted signs and stuff. | |
And people are genuinely worried that as they age, as they get older, no one's going to be able to take care of them. | |
So I think that's something that's kind of weighing on everybody's minds. | |
And robotics seems like a possible solution to that to them. | |
Talk to me about where we are then, because it seems to me you're pretty uniquely placed to discuss this on the roadmap of development of robots, because you've seen it in the country where a lot of this started and where it's been more accepted. | |
And now here we are, we're being told, and I think it's a big shock to some people reading the popular press in this country and in the U.S., here we are on the cusp of a robotic revolution from what we're being told. | |
Yeah, this is a really exciting time in robotics. | |
There are a lot of technologies that are just coming to fruition now. You hear about deep learning all the time, of course, and the things people are doing with computer vision and all these other recognition problems where we were stuck in robotics for a really long time, not able to move forward.
And suddenly, all these doors are opening. | |
So it's a very exciting time. | |
But honestly, it's a little hard to keep your finger on the pulse of where things really are because there's so much hype. | |
And people are so excited. | |
And they see one robot work once and they think, oh, all robots work all the time. | |
And really, it's a very hard problem getting such a complex system to work together in the right way. | |
So it's hard to follow where things are, but there's definitely a lot of progress being made. | |
And I think the real thing is that we are now getting to the point where we can start addressing the problems that people were trying to address back in the 60s.
Except they just didn't have the computing power. | |
They didn't have the algorithms. | |
And finally, we've got that sort of infrastructure in place to start thinking about those questions. | |
Well, if you watch television in the UK in the 80s, you would have got the impression that a robot was almost like a motorized stepladder on, you know, almost like a tractor track that would maybe lift a tray of drinks and probably spill them all, but really wasn't to be taken too seriously. | |
And it seems to me that in 30 years, we've gone from that to getting to something that looks human, has begun to think like us, and may begin to take some of our jobs. | |
And, you know, those are the sorts of issues I think that are concerning people here, those of us who haven't been brought up to accept that robots are going to do things. | |
Yes, of course, we've all embraced the rise of technology, but that's a different thing. | |
Sure. | |
I think that in my field of research, in social robotics, the concern is less about whether robots are going to take our jobs. If you understand that technology is going to be playing a broader role in society everywhere, we're going to have to interact with technology more and more.
The real question is how do we want to interact with it? | |
And how do we make that a pleasant experience and a seamless experience instead of something that's oppressive or frustrating or makes our lives worse? | |
And I think one of the ideas behind social robots, maybe a robot that has a face, that can talk to you, that can gesture, is that that's a very intuitive way to interact with things. | |
So we get all this technology that's hard to understand and hard to interact with. | |
And suddenly you put a human face on it, a human-like face on it, and then we know how to talk with it because we know how to talk with people. | |
We've done it our whole lives. | |
So that's really what we're researching in my field. | |
And it's a difficult field, isn't it, when you're trying to socialize robots? | |
You have to do it very carefully. | |
I remember, and I'm sure you'll remember too, that American phenomenon a year or so ago of clowns appearing everywhere and terrorizing people.
And experts worked out that the reason a lot of people are afraid of clowns is that they look human but not quite. | |
So you have to have something that goes beyond that. | |
Because if it's something that looks human but not quite, on some level it's going to freak us out. | |
So it's a very difficult psychological challenge, isn't it? | |
Yes, and this is a huge area of research. | |
A lot of people are looking into what they call the uncanny valley, where, you know, traditionally people have thought in terms of the appearance of the robot, right? | |
Like if it looks almost human but not quite, it evokes these feelings of revulsion, right? | |
Disgust. | |
But now that we have things that look quite good, people are looking into the motion of the robot. | |
You know, is the movement natural? | |
Is the speech natural? | |
Or is it like close but not quite to the point where it kind of creeps you out? | |
So that's sort of a difficult challenge that's always on the fringes of this. | |
And we've had to revisit it many times. | |
So is that more of a mechanical problem then? | |
Is that about developing facial expressions, ways of movement? | |
And that's over and above. | |
That's beyond the electronics. | |
The electronics is the thinking, but the mechanics is the doing. | |
Right. | |
I mean, the mechanical actuators, and especially if you look at Erica, she's like a beautiful young woman. | |
Actually, if you make a robot that's more of like an older person, it's much easier to make them look much more natural because the skin naturally sags more and there's more play. | |
So yeah, there's some mechanical problems there, but like part of the problem is that we think and we move and we move our bodies in certain ways, and there's so much that goes on that's not conscious to us, right? | |
The movements that we make with our eyes and our body language, and some of those things are down to physics, like having saggy skin, and other things are biological, just like reflexes that we don't think about. | |
But every single one of these details has to be meticulously engineered in the robot, whether it's in the mechanical design and the material selection, or whether it's in the motion generation and the animation of the robot. | |
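To give a flavour of what engineering those unconscious details can mean in software, here is a minimal, purely illustrative sketch of an idle-motion generator of the kind social-robot animation layers often include; the joint names and timing constants are invented, not Erica's actual code.

```python
import math
import random

class IdleMotion:
    """Toy generator for 'unconscious' motion: breathing plus blinking.

    A robot that holds perfectly still reads as frozen rather than calm,
    so even these tiny background motions have to be engineered in.
    """
    def __init__(self):
        self.next_blink = random.uniform(2.0, 6.0)  # humans blink every few seconds

    def targets(self, t):
        """Return joint targets for time t (in seconds)."""
        out = {
            # Slow sinusoidal torso sway (~0.25 Hz) imitating breathing.
            "torso_pitch_deg": 1.5 * math.sin(2 * math.pi * 0.25 * t),
        }
        if t >= self.next_blink:
            out["eyelids"] = "blink"
            self.next_blink = t + random.uniform(2.0, 6.0)
        return out

# Sample the generator at 10 Hz for three seconds.
gen = IdleMotion()
for step in range(30):
    print(gen.targets(step / 10.0))
```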
And that is what you're working on. | |
I mean, how difficult is that? | |
Presumably, the fact that electronics and mechanics are getting smarter and easier to manipulate, I suppose that's all helping you. | |
Yeah, I'm not really on the mechanical side of things, but I've definitely seen a lot of progress there. | |
But sort of what I'm interested in are the body language, the behavior of the robot, and the higher level sort of cognitive behaviors of the robot, the higher level choices of what the robot should do. | |
So for example, a lot of people think, oh, once you've worked out all these things like eye movements and breathing movements and things, then, oh, just slap some text on it, throw a chatbot on it. | |
And that really is begging the question, right? | |
Because the real question is like, what should the robot say? | |
What should it do? | |
And why? | |
And so really, you're kind of creating an artificial mind. | |
And that's the challenge is to go beyond something that's just, you know, I can answer questions or something and something to create a robot that has its own intention, that has its own will, I think is the real challenge. | |
And that's the challenge I'm very excited about. | |
And isn't that something that understandably frightens people? When you develop a robot that has its own will, something that becomes believable and credible, and that is able not only to unite the thinking that goes on in that robotic head but also the way the decisions or reactions are put across in terms of expressions and words, you suddenly get something that some people would describe as a kind of Frankenstein.
Yeah, I suppose some people may think of it that way, especially if they watch science fiction and, you know, killer robots that are destroying the world and things like that. | |
But I think that in reality, if you're faced with a robot and it starts doing a bunch of things that seem very incoherent, it probably becomes worrisome because you don't know what to expect. | |
You can't read it. | |
You can't figure out what it's thinking. | |
It's just this mysterious, weird thing to you. | |
But if its actions are coherent and it seems to have objectives and a will and you know what its intentions are, then you can interact with it like a social peer. | |
You can see where it's coming from. | |
You can negotiate with it, whatever the interaction is. | |
So I think that having a consistent set, a consistent personality, consistent set of intentions is what is going to make robots legible to us. | |
And in the end, that's what's going to make it possible for us to interact with them. | |
The reason I know about you is because I read newspaper articles about you and your work on the development of Erica. | |
And I have to say that Erica, and I've only seen videos, I've never met her, is the first robot that I've ever sort of empathized with.
So if that's what you were aiming for, I think you've achieved it, to a large extent anyway.
Talk to me about the development of Erica. | |
What was the brief? | |
Sorry, can you say that again? | |
I didn't hear you. | |
Yeah, I just said, talk to me about the development of Erica. | |
What was the brief? | |
Okay, well, Erica, you know, our objective with the project was to create the most beautiful, most capable android in the world. | |
And, you know, going for human-likeness and things like that. | |
And we have a whole research team, and everybody's looking at the different questions. | |
So putting it all together, you know, we have a few different approaches and we want to bring those all together. | |
So for example, in some cases you want to just script out what the robot will do. | |
That's the simplest case, right? | |
You write a thing and you want it to do that, program it. | |
And so we achieved that. | |
We made some programming languages and things that would allow that. | |
But then to go beyond that, see the problem is that a lot of times we are not able to articulate why we do the behaviors we do. | |
And we're not so comfortable with that when we realize it because we think we're in control of ourselves. | |
But really, there's a lot of unconscious stuff going on and things that we can't articulate the rules for. | |
So we've developed systems that learn from human beings and we can have example interactions and the system will actually learn what people do and how they respond and to start modeling those situations. | |
So what we've been trying to do is merge those together. | |
So in some senses, you want the robot to do what you've programmed it to do. | |
In other senses, you want it to learn from people and imitate them or respond to them. | |
And so the Erica project was really bringing all those things together as well as the lower levels of unconscious, nonverbal behavior and things like that. | |
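As a rough illustration of that merge between scripted and learned behaviour, here is a hedged toy sketch of one possible architecture: an authored script takes priority, and a model learned from example interactions serves as the fallback. All names and responses here are hypothetical, not Erica's actual system.

```python
# Explicitly authored responses: the "do what I programmed" case.
SCRIPT = {
    "what is your name": "I am Erica.",
    "where are you from": "I was built in Japan.",
}

# Stand-in for behaviour learned from recorded example interactions:
# here simply the responses observed most often in the (imaginary) data.
LEARNED = {
    "hello": "Hello! Nice to meet you.",
    "how are you": "I'm doing well, thank you.",
}

def respond(utterance):
    """Prefer the authored script; fall back to learned behaviour."""
    key = utterance.lower().strip("?! .")
    if key in SCRIPT:
        return SCRIPT[key]
    if key in LEARNED:
        return LEARNED[key]
    return "Could you say that another way?"  # graceful fallback

print(respond("What is your name?"))       # scripted
print(respond("Hello!"))                   # learned from examples
print(respond("Explain quantum physics"))  # neither: ask for a rephrase
```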
I think it's a remarkable achievement. | |
I wouldn't have believed that such a thing was possible. | |
She moves very fluidly. | |
In the videos that I've seen, when she's asked questions or she's in dialogue, she does something that a human being does. | |
And she pauses almost as I'm doing now for effect, which is pretty good. | |
And also, she seems to angle her head and use her eyes in a way that we would do. | |
If we were talking, we'd never met before, and we were having a discussion that was perhaps quite full-on, quite detailed, then we would want to be sure that we were mutually signaling to each other that we were no great threat, that we were taking part in the discussion, that we were interested, and that we were nice people.
She to me seemed to be doing all of those things. | |
And that seemed to me, when I thought about it, and when I first watched the videos, I didn't think about it. | |
I just reacted to her as she was on screen. | |
But when I thought about it, I thought that must have been a really complex, multi-level set of programs that I've been watching. | |
I'm glad you appreciate that. | |
We did put a lot of work into it. | |
So there's a lot of nonverbal behavior that we do, the signaling and turn-taking and other things that, you know, models do exist of how people do this. | |
And so we implemented a number of models where, you know, for example, she'll pause between sentences, she'll nod her head or tilt to the side for a beat. | |
There are things called beat gestures. | |
She actually, her whole body will move to emphasize the contour of the speech that she's speaking. | |
So if she says something loud or emphatic, she'll move forward more dramatically. | |
And there's also a lot of gaze cues. | |
So for example, if she's going to stop and pause but doesn't want to give the floor to you, she'll avert her eyes a little bit in order to signal that she wants to keep talking. | |
And so, yeah, it was a big challenge to try to bring together all these different systems of signaling. | |
But I think the impact is quite striking. | |
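The behaviours described here, beat gestures scaled to vocal emphasis and gaze aversion to hold the floor, map quite directly onto simple rules. A minimal sketch, with invented thresholds and behaviour names, of what such a nonverbal layer might look like:

```python
def nonverbal_cues(loudness, keep_turn):
    """Pick nonverbal behaviours to accompany one spoken sentence.

    loudness: vocal emphasis in [0, 1]; keep_turn: whether the robot
    wants to pause without handing the floor to the listener.
    """
    cues = []
    # Beat gesture: whole-body emphasis follows the speech contour.
    if loudness > 0.7:      # emphatic speech (threshold invented)
        cues.append("lean_forward_strongly")
    elif loudness > 0.4:
        cues.append("nod_head_for_beat")
    # End-of-sentence gaze behaviour.
    if keep_turn:
        cues.append("avert_gaze_briefly")  # signals "I'm not done yet"
    else:
        cues.append("look_at_listener")    # signals "your turn"
    return cues

print(nonverbal_cues(loudness=0.9, keep_turn=True))   # emphatic, holding the floor
print(nonverbal_cues(loudness=0.3, keep_turn=False))  # quiet, yielding the turn
```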
It is. | |
How does she learn then? | |
How does she build on her knowledge database? | |
You said that the robot observes. | |
I think that's what you said. | |
The robot is observing behavior. | |
Sure. | |
So there are two ways that she learns. | |
And one of them is through imitation. | |
So we'll actually, we have a number of sensor capture systems and things in our lab. | |
So we're able to record interactions between people. | |
With other robots that can actually move around, we even look at their locomotion, like how they walk around a room. | |
With Erica, she's bolted to a chair, so we don't have to worry about that side of the problem. | |
But we're capturing people's speech and things like that, and learning from how they interact, both what things they respond to, how long of a pause there is before the other person might take a turn, things like that. | |
So that's one way you learn, is learning by imitation. | |
And then another kind of learning is online learning. | |
So as she interacts with people, we're logging the information about what goes on. | |
And there are some research projects that are looking into using that data to sort of adapt to people. | |
And so, for example, we have one project where we've actually created curiosity in the robot, to try to explore new avenues in a discussion.
If she ends up talking about the same thing with everybody, then when somebody new comes in, she'll try something different.
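One concrete, if much simplified, version of that learning by imitation is estimating turn-taking timing from recorded conversations. A sketch, using invented measurements:

```python
import statistics

# Hypothetical measurements from recorded human-human interactions:
# seconds of silence between one speaker finishing and the other replying.
observed_pauses = [0.4, 0.7, 0.5, 0.9, 0.6, 0.3, 0.8]

mean_pause = statistics.mean(observed_pauses)
stdev_pause = statistics.stdev(observed_pauses)

def robot_should_speak(silence_so_far):
    """Take the turn once the silence exceeds what people typically leave.

    Waiting roughly one standard deviation past the mean avoids
    interrupting a speaker who is merely pausing mid-thought.
    """
    return silence_so_far > mean_pause + stdev_pause

print(robot_should_speak(0.5))  # False: a normal mid-conversation pause
print(robot_should_speak(1.5))  # True: the human has yielded the floor
```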
When do you get to the stage, and you must have thought about this, where the robot is able to extrapolate something for itself?
In other words, where it's not just able to take in the cues it's seen and copy the behavior it's seen, trotting out, in the right order and with the right expressions, things that are in her memory bank.
How far are we away from when Erica might be able to make a deduction? | |
Maybe one that we haven't come up with, and maybe one that we may not like? | |
So, yeah, that's an incredibly difficult problem, especially in the realm of open dialogue with people. | |
Because there are certainly AI systems that can do exactly what you're saying. | |
They can look at a system, they can reason about the possible consequences of things, they can discover things, they can prove mathematical theorems, they've made discoveries in chemistry and physics. | |
Whereas with social interaction, it's such a different kind of problem space. | |
It's entirely unconstrained. | |
The action space of language is just unthinkably enormous. | |
You could say anything at any time and refer to any kind of event that's in the collective consciousness, the common sense that we all share or any possible intention you can think of. | |
So it's very, very difficult to apply these sort of classical reasoning tools to social interaction. | |
I really hope we get there soon, but that's definitely an open problem at the moment. | |
Well, if we go back 49 years, the lunar module that we thought was the most incredible piece of technology when we watched videos and films of it landing on the moon had as much technology in it as I've probably got in my watch now. | |
So that is the speed at which technology is advancing. | |
And I find it very, very exciting.
I always have. | |
But that point at which the robot begins to think for itself and perhaps becomes self-aware and self-actualizing can't be that far away, can it? | |
Yeah, so you're right that there's this enormous amount of increasing technology around us in the world, and the possibilities are just multiplying at a ridiculous rate.
And the real question is, what do we do with it? | |
And how do we take all these parts and put them together? | |
And I think that with robots, especially with social robots, there's kind of two directions you can think of, right? | |
One is we want it to be alive. | |
We want to create artificial life, artificial consciousness. | |
We want it to be self-fulfilling and self-directed, right? | |
Which I think is not very useful from a pragmatic standpoint. | |
And the other approach is, oh, I want the robot to do exactly what I tell it and nothing more, which is sort of uninspiring and not that exciting from a science fiction-ish perspective. | |
And I think the answer is going to be somewhere in between. | |
So I think we will develop these things that are sort of like the constructs in the human mind. | |
Things like self-awareness to some extent, things like creativity, curiosity. | |
People are looking into these things, and I think that to some extent those will be used. | |
But we still want control over the robots, right? | |
We still want them to do what it is that we tell them to do. | |
And how do we keep control over the robots then? | |
Exactly. | |
So I believe that what we should really be shooting for, with social robots, is creating a robot that's intelligent enough to navigate a typical social situation, right?
With the dialogue, the body language, understanding of the cultural backgrounds, things like that. | |
But then the way we would program it is by specifying its intentions, its desires.
If we can just tell it, this is what you want to do, and allow it to figure out how to get there, I think that's the ideal way to interact with these robots.
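Programming by specifying intentions rather than actions is essentially goal-directed control. A minimal sketch of the idea, with hypothetical goals, conditions, and actions:

```python
# Each action advertises which goal it serves and when it applies.
ACTIONS = [
    {"goal": "greet_visitor",   "when": "visitor_near", "do": "say('Hello, welcome!')"},
    {"goal": "greet_visitor",   "when": "visitor_far",  "do": "wave()"},
    {"goal": "give_directions", "when": "visitor_near", "do": "say('The exit is on the left.')"},
]

def act(goal, world_state):
    """The operator specifies WHAT (the goal); the robot picks HOW."""
    for action in ACTIONS:
        if action["goal"] == goal and action["when"] == world_state:
            return action["do"]
    return "idle()"  # no applicable action in this situation

# The programmer states only the intention; behaviour follows context.
print(act("greet_visitor", "visitor_far"))   # wave()
print(act("greet_visitor", "visitor_near"))  # say('Hello, welcome!')
```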
Yeah. | |
Now, how can we stop, though, in the future? | |
And this is one of the ethical questions that I know that pretty much everybody in robotics addresses. | |
And it's something that we all should think about, really. | |
How do we stop the robotic bank clerk plundering our account? | |
Because the robotic bank clerk has worked out, I can actually get a lot of money for myself. | |
If I just simply withdraw the money but don't put it where the person has asked me to put it, I can actually put it somewhere else. | |
It's a bit of a worry, isn't it? | |
Or is it? | |
Yeah, yeah. | |
So, I don't know if it's a worry, but it's definitely a consideration that we need to make: once we get these large, complex black-box systems, how are we going to control them?
How are we going to guarantee that they do the right thing? | |
And like, this is not a future thing. | |
This is a right now problem. | |
And all those people who are creating self-driving cars and, you know, all these automated systems that operate in the world have to deal with, okay, how do we make sure it's doing the right thing, you know, if we can't really understand how it operates? | |
And I think, you know, just recently in the EU, they passed some legislation, right, that if an automated system makes a decision that affects you, like you have the right to see what reasoning went into that, something like that. | |
Am I correct? | |
I think that's probably so. | |
I know that our own MPs have come up with a protocol recently in the British House of Commons. | |
No, it was the House of Lords here. | |
It was multi-step; I haven't got it with me here, but I think there were 10 important protocols about the use of artificial intelligence and robotics that basically said it must not be used for harm.
I think you could boil down all 10 to, you know, the technology must not be used for harm. | |
Now, it's all very well to enshrine that in a law and tell people who are developing the systems, you mustn't do that. | |
In practice, how do you stop it? | |
Right. | |
So once you have autonomous, you know, entities that, you know, can think and behave for themselves, you have to think about what it is that motivates them, right? | |
Like, so for people, you know, you can throw somebody in jail and like they only have one body and that body is constrained and that reduces their freedom, right? | |
But what if you have a robot where you can just upload your mind somewhere else or something like that, right? | |
Like you have to think about how do we motivate robots or discourage them from doing the wrong things once they get to that level. | |
We're not there yet by far. | |
But actually, the question is entirely relevant because all this hype about deep learning and machine learning systems is true, but everything hinges on the objective function. | |
And what that is, is you have a machine learning system, you give it a mathematical function, you say minimize this or maximize this. | |
And it will figure out whatever mathematical operations are required to do that. | |
And if we give it the wrong objective function, it's going to learn to do the wrong thing. | |
So we have to be very, very careful how we train our systems and how we design these rewards.
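The point is easy to demonstrate: the same optimizer, handed a mis-specified objective, converges on exactly the wrong behaviour while performing perfectly by its own lights. A tiny worked example:

```python
def optimize(objective, lo=-10.0, hi=10.0, steps=2000):
    """Brute-force search for the x in [lo, hi] that minimizes the objective.

    This stands in for any learning system: it does exactly what the
    objective says, nothing more and nothing less.
    """
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(xs, key=objective)

def intended(x):
    return (x - 3) ** 2          # objective: stay close to the target value 3

def mis_specified(x):
    return -((x - 3) ** 2)       # sign flipped: "minimize" now means "flee 3"

print(optimize(intended))        # ~3.0: the behaviour we wanted
print(optimize(mis_specified))   # -10.0: perfectly optimal, perfectly wrong
```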
And the lawmakers need to be giving some thought to policing the sorts of algorithms, I guess, that are used for things. | |
There's a woman, you've probably come across her, called Cathy O'Neil, who wrote a book about this.
Only last year I interviewed her on this show about the algorithms that are used by insurance companies and a million different automated systems, and the number of times those algorithms fall short.
They actually work counter to the best interests of the people who are using them. | |
Yeah, yeah, that's true. | |
And a lot of the times this has to do with training data. | |
If you get like biased training data for a system, it will learn to make biased decisions. | |
And I think we've seen a number of examples of that in things like law enforcement, where you have computerized systems that show things like racism and bias because they were trained on human decisions or human situations. | |
So we have to be extremely careful about what goes into these things. | |
And I think at the moment, people are not so aware of the process.
They just see, oh, here's this machine and it seems to work. | |
But really, there's a lot behind it: there's the training data that goes in, and there are the objective functions that are designed.
And we're going to have to understand that as, you know, as a culture before we can accept machines and learn how to work alongside them. | |
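The training-data point can be shown in a few lines: a model that faithfully learns from biased historical decisions reproduces the bias, with complete confidence. The data here is invented purely for illustration:

```python
from collections import Counter

# Hypothetical historical decisions, biased on the 'group' field:
# identical qualifications, different outcomes by group.
training_data = [
    ({"group": "A", "qualified": True}, "approve"),
    ({"group": "A", "qualified": True}, "approve"),
    ({"group": "B", "qualified": True}, "deny"),
    ({"group": "B", "qualified": True}, "deny"),
]

def train(data):
    """Majority-vote 'model': predict the most common past decision per group."""
    votes = {}
    for features, label in data:
        votes.setdefault(features["group"], Counter())[label] += 1
    return {group: c.most_common(1)[0][0] for group, c in votes.items()}

model = train(training_data)
# Two equally qualified applicants, two different outcomes:
print(model["A"])  # approve
print(model["B"])  # deny: the model has learned the bias, not the merit
```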
When I was a small boy, there was a TV show that also made it to America from this country called The Avengers. | |
I don't know whether you've ever seen it, with Patrick Macnee and Emma Peel. Who played her?
Somebody will tell me.
Why can't I remember her name?
It's a senior moment. | |
Anyway, a lot of my listeners will know who that was. | |
Sorry? | |
I saw the remake. | |
Okay, well, there was one group of villains, one of the bad forces within this show.
They were called the Cybernauts.
And when I was a little kid, I used to have nightmares about the Cybernauts. | |
The Cybernauts were human-like robots with incredible strength who could do terrible, dastardly deeds and kill people with remarkable ease, all controlled by a very evil and very despicable human presence. | |
So as you say, a lot of it comes down to the human impact on the technology. | |
And, you know, if we're not very careful, if we don't legislate, we might well find the Cybernauts, or something very like them, stalking our streets.
Sure. | |
So one problem with legislation is that legislation is just a set of complicated rules that are trying to prevent bad things from happening. | |
And if you look at computer code, that's also a set of complicated rules that are trying to make your machine work. | |
And the legislation is going to have to be almost as complex as the computer code. | |
And I think we're not set up to do that at the moment. | |
The field that you're in, as you say, is socialized robots, social robots. | |
Why do we need them? | |
Why do we need to do this with the technology? | |
Why do we need to make it like us? | |
Well, one of the biggest reasons for wanting to have social robots is that we're used to dealing with human beings in everyday life, and that's our most natural mode of communication. | |
And in recent years, especially with the web and things like that, we've gotten used to interfacing with machines much more often. | |
But I think there's always sort of a negative sentiment. | |
It's like, oh, it was so robotic. | |
It's such a machine. | |
And really, we still think of like, oh, I don't want to use the machine. | |
I want to go see a real person about this because we want to be understood or we want to be able to communicate in an open way instead of being constrained to some interface that isn't what we need. | |
And I think that having robots that can have conversation with us and interact face-to-face is the most natural way to interact with complex technology. | |
And so I think there's a deep sort of desire that everybody has for that. | |
You know, I don't care whether I am interacting with a real person or a robot, just as long as whatever it is I'm talking with understands what it is that I want and is able to deliver it for me.
If we can make systems that are maybe robotic, maybe like the robotic bank clerk or whatever, that will do that, then I, for one, would be perfectly happy with that. | |
But that's a bit of a mountain to climb for people who are developing this technology like you, I guess. | |
Sure. | |
And there are also other considerations. | |
Say you're talking about a computerized bank clerk and comparing it with a robotic one.
Robots exist in the physical, real world, right?
So they can interact in ways that digital assistants can't.
They share our environment.
If you want to give directions to someone and point in a direction, you're situated, you're localized, in that same physical space.
So there are things robots can do that on-screen agents can't really do as well. | |
But some of the most interesting work, I think, for example, one of my colleagues was doing a study in persuasion, and they had a robot that asks you to do things. | |
And then in one condition, they had the robot ask you to do things and kind of stroke your hand a little bit. | |
Like, oh, come on, you can do this, right? | |
They talk about the Midas touch, right?
In sales, you know, you touch somebody lightly or something to help persuade them to buy something.
And the fact that robots can do things like this, the physical interaction, is an entirely new dimension that we don't have when interacting with, say, Siri.
That is a very delicately considered thing, isn't it? | |
Because you don't want your robot getting sued or the person who programmed the robot getting sued for sexual harassment, do you? | |
Of course, of course. | |
So it's a very delicate thing. | |
I mean, you are actually at the cutting edge of something that is incredibly delicate, which brings us in a way back to Erica. | |
Now, I said that Erica, the robot, and we'll talk a lot about her as we come towards the back end of this show, is very human-like in the way that she reacts to, thinks through, and delivers her replies and responses.
I watched one video, and we're going to play a little clip of it now. | |
This is just 20 seconds or so, where Erica is asked and thoughtfully gives a reply to whether she has a memory. | |
And she says, yes, I do have a memory. | |
Do you remember good things or bad things? | |
She says she remembers both good and bad things. | |
And the person asking Erica questions says, can you tell me about a bad experience that you've had? | |
And this is a little clip of Erica talking about the bad experience where somebody hadn't quite done something for her. | |
There was some kind of securing mechanism that hadn't been fixed properly. | |
So she fell on her face and damaged her face. | |
And I have to say, this was the point at which for the first time in my life, I empathized with a robot. | |
Would you like to hear about a good experience or a bad experience? | |
Let's hear about a bad experience. | |
Well then, let me tell you about one of my more traumatic experiences. | |
One time they were testing my waist pitch actuator. | |
It's a joint in my waist that allows me to bow like this. | |
Which is obviously important in Japan. | |
But you see, somebody forgot to attach my chair to the floor, so I fell down and damaged my face. | |
It was horrible. | |
You know, I can't move my arms or legs, so all I can really do is sit here and look pretty. | |
But after that I looked so grotesque, and silicone doesn't heal.
So they had to ship me back to Tokyo to be repaired. | |
Maybe that doesn't sound so bad to you, but it means they ripped off my arms and legs and put me in a dark box for days. | |
So Dylan, you know, as I was saying before we played that, I empathized with Erica here, and I actually felt sorry for her. | |
Now, I think psychologically, if other people felt that way, you have just achieved something that hasn't been achieved before in human history. | |
That is pretty remarkable to get a person, you know, outside the movies and outside Disney to empathize with the plight of a robot. | |
Yeah, I think we have a lot of inbuilt psychosocial wiring in our brains that is triggered by hearing a human voice telling a story that makes you feel a certain way, and being face to face with a robot that's talking like that really does affect you emotionally.
And so yeah, achieving that is one big challenge. | |
How do you do something like that in a convincing way? | |
And then the flip side is how is that going to be exploited and how is that a danger to society, right? | |
Because just like maybe very manipulative human beings, sometimes people act a certain way, but they don't truly feel that way. | |
And if you have a robot who doesn't have the same structure of feelings that we do, then maybe they're always like that and you get drawn in. | |
So there's this worry that some people have about asymmetrical relationships with a robot, where the person gets genuinely drawn in and the robot doesn't care at all or something like that, which could be dangerous in the future. | |
So both sides of the problem are very interesting. | |
One is how do we achieve it? | |
And the other is how do we manage it? | |
So the robot could be taught to lie very convincingly and not be detectable as lying. | |
And also, as you say, that in some caring situations, the robot could form some kind of bond with the person who is being cared for, the caree. | |
But the robot doesn't really care because it's a robot. | |
But the person believes that the robot really cares about him or her. | |
And that is a unique kind of difficulty. | |
And I'm not sure whether you could study programming for the next hundred years and ever be able to work your way around that.
Well, I think, you know, as I said before, I think that's the reason why it's important for a robot to have coherent sets of intention and desire that guide its behavior. | |
And once the behaviors are sort of guaranteed to be consistent in that way, then, you know, for all intents and purposes, the robot does actually care about something. | |
And so reading its feelings, it doesn't seem like hypocrisy. | |
It doesn't seem like a lie or something. | |
It's like, that's actually how it works. | |
And then you can understand it and you can come to common ground. | |
And I think that that's what it's all about is trying to build trust, like genuinely build trust between humans and robots so that we can interact in a smooth way. | |
And that's fine, as long as you're building trust for a good end. | |
If you're building trust in order to be able to take somebody in and con them, then that becomes a bit of a problem. | |
And that's one of those ethical issues that we talked about. | |
One of the things about relationships between robots and people: they're talking about sex robots and relationship robots and all sorts of things now, and that technology is pretty much here.
What happens when you get to the stage where you can program reasonably convincing elements of what we call love into a robot?
There must be lots of ethical considerations about that.
I mean, at the end of the day, we're talking about something that is a piece of electronics and machinery, not a sentient individual. | |
You know, that doesn't bother me so much, whether it's a bunch of electronics and machinery or whether it's a bunch of neurons in the brain. | |
You know, the mechanistic side of it, to me, can be abstracted away. | |
But really what it is, is every time we hit one of these sort of ethical issues, it's actually a pre-existing ethical dilemma that already happens in human society. | |
And we're just revisiting it from a different perspective because now we can program it. | |
So I feel like some of these are not really new, but they're being visited in a new way. | |
Yeah, but at least we're thinking about them. | |
And the great worry that a lot of people over here have is this technology is proceeding at such a pace that all of these questions that we are discussing now are having to be rolled out and discussed very quickly. | |
And we're having to make decisions, perhaps in more haste than we should. | |
That's definitely true. | |
But at the same time, the more time we have to develop these technologies, the better we can make them, right? | |
If you rush something out, of course, it's going to have lots of flaws and things. | |
So I think of the interaction between a human and a robot as sort of like interacting with a person from another culture. | |
Getting back to what we were just talking about. | |
And so when you first meet somebody, maybe you don't even speak the same language and you really can't interact. | |
And that's when you get things like bias and racism and things. | |
You mark that person, or that robot, as entirely different from you.
But then once you can start to speak a common language, there's still all these big differences, these cultural differences, which are sort of like the problems we were just talking about with, you know, can a robot fall in love or do you have an asymmetric relationship? | |
Because you have different sets of assumptions and different sets of, you know, a different worldview. | |
But gradually, as you live together in a society and you're able to grow towards one another and understand each other's culture, once we can understand how robots work better on a social level, we'll probably be much more comfortable interacting with them. | |
And then gradually you can get to the point where you're so familiar with each other's culture and accepting of it that you can just talk about whatever. | |
You can interact in whatever way you want. | |
And it's sort of secondary, whether it's a robot or not. | |
It doesn't matter so much. | |
There might be some people who would say that that sounds like the human race is going to become, as they say in Star Trek, assimilated. | |
Or we're going to assimilate them. | |
So, you know. | |
So maybe that's not such a bad thing. | |
Would you be okay with, in the future, somebody... | |
I don't think there's any doubt about that. | |
It's just a question of when, whether it's sooner or later, or somewhere in between. | |
Would you be okay with somebody in the future wanting to, say, marry a robot? | |
I'm sure there are people today who want to do that. | |
Hey, that's up to them. | |
It's whatever they want to do. | |
Personally, for me, I have great distrust for robots I don't know.
Like a robot I programmed, I know how it works. | |
But when I meet a new robot, I really don't trust it at all. | |
And I think that we have a very long way to go before you can do that. | |
But it's just like when you meet a person, you know nothing about them. | |
That's the point at which robots become just like us then, because when you meet somebody, never met them before, you make a judgment about them and you think to yourself, yes, I could probably trust that person and I'll probably buy the car from them, or I wouldn't trust that person as far as I could throw him or her. | |
Exactly. | |
So it's the same kind of dilemmas. | |
Now, as a sort of parent for Erica, what are your ambitions for her? | |
What do you want her to do in the future? | |
Well, you know, my goal with the robots I've worked with has always been to get to the point where even though I know what's going on inside the software or whatever, I want it to feel like a genuine, real social interaction for me. | |
You know, like I think we're never going to think robots are people, but I think there's a point where you can pass where it doesn't matter that it's a robot. | |
And I've had a few moments like that with Erica and also with Robovie, which is another robot we have at ATR, where suddenly there's this moment where you're having a genuine interaction.
And even though you know how it works, that doesn't matter. | |
And a lot of people make the argument that once you analyze something scientifically, all of the wonder and beauty is gone.
Some people argue that way. | |
And I think of it the opposite way, that the more you understand about it, the more beauty you can see. | |
And so if you have robots that are just smoke and mirrors and it's just the wizard behind the curtain or something like that and there's nothing interesting going on, they're never genuinely going to be interesting, I think. | |
But there is a level you can go past. | |
And I've worked to bring Erica in that direction, where even the robot's creator can have sort of a unique and new experience interacting with the robot. | |
Now, speaking as a man who spent a lot of his life reading the news on the radio, I was amused, I was excited, and I was a little worried when I saw that Erica had become for a while a television newscaster and had even learned jokes, quips, gags to tell at the end of the news. | |
Talk to me about that. | |
Actually, I wasn't involved in that side of the project, so I don't have much information about that. | |
Okay. | |
All right. | |
So, we're talking, then, about the development of Erica going forward.
Sure. | |
So, you know, there's still a number of researchers who are kind of pushing the frontier in a lot of directions. | |
One big area is personalization over the long term, right? | |
So, you know, like a lot of these interactions with robots are very short, one-off, like, oh, hey, I talked to the robot. | |
Okay, I'm done. | |
Or, you know, one interview or something. | |
But if you're living with a person or interacting with them on a regular basis, like, how do you build a relationship with them? | |
And what sort of things do you need to remember about them? | |
How should you, you know, adapt the way you interact? | |
So there's a lot of research going on in that direction. | |
And I think that speaks to that issue of trust. | |
How do you establish trust with a robot? | |
Well, if it starts to get to know you and you get to know it and it can do the right behaviors, you can build that relationship and that rapport. | |
I think there are a thousand...
You were saying, sorry, we've got that delay issue. | |
You were saying? | |
Sorry, yes. | |
And another direction is, like I was saying, this learning.
By learning by imitation from people, and learning to structure the interactions that you see, what we're hoping is that someday you can just put sensors in a room somewhere, say in a department store, and record data for a few months.
You have tens of thousands of interactions to learn from. | |
And then you could have a robot that could be a salesperson in that shop. | |
So I think that's a really exciting direction to go in, because we have so many robots in the world and so few of them, at least the social robots, do anything useful at the moment; it's such a challenge to program them to do something socially useful.
And so I think those are two really exciting directions. | |
You're involved in the technology. | |
There is no reason why you need to be concerned about this. | |
But what about the salesperson that would lose his or her job? | |
Because the robotics people have observed interactions. | |
They've now got the perfect robotic salesperson and they don't need you anymore. | |
What about them? | |
What do we say to them? | |
Well, there will certainly be a lot of jobs for robot repair people. | |
Robots are so unstable at the moment. | |
It's a lot of work to keep one running. | |
We're not there yet. | |
But I think the true answer to what you're asking is this: people think of robots as taking our jobs.
And this is something that's happened throughout history. | |
Technology always sort of takes over some of the more automatable jobs. | |
And then people have to focus on the ones that require a little more hard thinking or creativity, things like that. | |
But another way to look at it is that the robots aren't really taking over our jobs. | |
They're taking over our tasks. | |
So the robot may take over 60% of the tasks in this one job or 30% of the tasks in another job. | |
And so people are not going to be replaced by robots so much as working alongside them and collaborating with them. | |
And I think that's really the direction that we're heading in because humans are so complex and completely irreplaceable in the near future. | |
So even if a robot can do some of the things a person can do, you're going to need people in the loop. | |
But there will come a time when some of those people working with robots might have to be working to the robot. | |
And I wonder what it will feel like. | |
And I wonder what algorithms and behavioral patterns can be developed for letting robots take charge in some situations. | |
Now, that's a whole other issue, a whole other dimension of social interaction, isn't it? | |
Where one becomes the master and the other the employee.
That was the topic of my first academic paper, actually. | |
And what did you decide? | |
A lot of the interactions we're modeling are like that. Instead of saying, robot, throw away that piece of paper, it's the robot receptionist saying, oh, please wait here, you know, and I'll call somebody.
And it's like, they're telling you what to do. | |
You know, sometimes we want to be told what to do. | |
And maybe we trust a machine that has perfect knowledge better than we trust a human in some cases. | |
But really, yeah, we're introducing a new element to society. | |
And that's the real unknown because that's not the kind of thing that we've really done before. | |
And we're going to have to learn to interact in new ways. | |
And robots are going to have a special category. | |
They're not going to be the same as people, but they're going to share some properties with people. | |
And we're going to have to relearn how to interact with them. | |
So it's a really interesting question. | |
And I'd love to see how it unfolds. | |
As somebody who does what you do, is there a protocol? | |
Is there a standard operating procedure that you have? | |
If you discover that your robot, and this is a long way down the track, I'm sure, has developed some behavioral traits that you didn't expect that have been extrapolated from what you've taught it, but you've given it two plus two and it's somehow made five and five is not very nice. | |
Are there protocols that you have to go through? | |
In other words, turn off the robot, I guess, is number one. | |
I mean, as a robotics researcher, I think the protocol would be write a paper on it. | |
Because that would be really interesting. | |
I'm sure everybody would want to read it. | |
Yeah. | |
No, I mean, we're not there yet. | |
I mean, AI systems in general have gone way beyond, you know, the ones that we use in robots.
And you have AI systems that are heavily involved in the stock market and in large-scale decision-making. | |
And for those, I think it's very important to be careful of that sort of thing. | |
For a single robot like Erica, I would love to see interesting and emergent behavior like that. | |
Maybe researchers like me who find that stuff interesting should not be in charge of the policy making. | |
Maybe. | |
But that's another one of those things that needs to be debated. | |
Do you think that we will get to the stage, and I know we mentioned that word assimilation, but this is not really what I'm asking here, where aspects of robots could be incorporated within us as human beings, to help us do better the things that we have difficulty doing?
If you see what I'm saying? | |
You mean like physically incorporated into us?
Maybe physically and maybe ultimately mentally, because, you know, the brain is an organ of the body, so I guess the brain could also be included in that. | |
But yes, initially, mechanical tasks, things that we might find it difficult to do, maybe running 50 miles without a break or something like that. | |
And if we had some kind of robotic implant, we might be better able to do that. | |
So do you see that melding and merging coming? | |
Absolutely. | |
Absolutely. | |
In fact, a lot of people argue that that is going to come way before fully autonomous humanoid robots in society. | |
And I tend to believe them, because we already use mobile phones as sort of an extension of our minds in a lot of ways.
And there are amazing prosthetics they've made for people.
And there are a lot of companies working on exosuits that can give you additional mechanical strength for heavy lifting tasks, or, for aging people, can help them walk better and things like that.
So that stuff is very close to reality. | |
It's already in the market in many cases. | |
And I think that's going to extend further and further. | |
And brain implant stuff, there are a few technology hurdles to get over, but I'm sure a lot of people would be very excited for that. | |
Isn't it interesting that science fiction has a habit of becoming science fact? | |
One of the scenes I remember in one of the Alien movies with Sigourney Weaver is where Sigourney puts on the exosuit in the cargo bay of the craft and is able to lift and store stuff like something mechanical in a warehouse.
And that sort of stuff is coming down the track to us now. | |
So anything we say at the moment, oh, that's not possible, I think truly maybe not all bets, but most bets are off. | |
What do you say? | |
Well, I think certainly those sorts of technology enhancements are right here and very close to us. | |
And people are really doing that at the moment. | |
There's no question about that. | |
I think that the question of which problems are hard is the one that we have to wrap our heads around again every five years. | |
Because things we thought were difficult before are actually really easy to automate. | |
So now we have robots that can beat the world masters at chess and Go and things like this.
But the really hard problem is common sense, which as a human, you never think about. | |
Things that little children can do are entirely intractable to robots, whereas, sure, they can trade stocks, no problem. | |
So our sense of what is difficult and what is easy is very inverted when it comes to these advanced technologies. | |
And common sense is one of the things that makes us human. | |
The fact that some of the time we are not going to exhibit it. | |
But that is a human trait. | |
Yes. | |
And understanding when people don't is huge, because people sometimes don't follow common sense for certain reasons. | |
And if you understand the context, you can figure it out. | |
A lot of people think that speech recognition is a solved problem, right? | |
Because we have so many assistants like Alexa and Siri and Cortana and stuff. | |
And they've done amazing, amazing work with speech recognition. | |
But on the other hand, it's still an incredibly hard problem. | |
And if you walk around and you listen to the people you speak with, a lot of times we don't speak in complete sentences. | |
We don't use correct grammar. | |
We'll use the wrong word, but something that sounds like the right word. | |
And we just take this in stride. | |
It's no problem. | |
We totally see past that.
We don't even think about it. | |
And robots will just get completely tripped up by things like that. | |
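A toy illustration of why that trips systems up: an exact-match intent parser, the sort of thing many simple assistants start from, handles the textbook phrasing and fails on how people actually talk. The commands and phrasings are invented:

```python
INTENTS = {
    "turn on the lights": "lights_on",
    "what time is it": "tell_time",
}

def parse(utterance):
    """Naive exact matching: fine for clean text, brittle for real speech."""
    return INTENTS.get(utterance.lower().strip("?! ."), "NOT_UNDERSTOOD")

print(parse("Turn on the lights."))                       # lights_on
# Real, disfluent speech: restarts, fillers, a near-miss word.
print(parse("uh, could you, the lights, turn them on"))   # NOT_UNDERSTOOD
print(parse("what time is in"))                           # NOT_UNDERSTOOD
```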
And that's one of the wonderful aspects of being a human being, that we can do that. | |
And we as human beings will usually understand each other, even if you have to get the meaning from the context of what is being said.
And I guess that's a huge challenge. | |
Dylan, listen, thank you for doing this with me. | |
I found this a fascinating conversation. | |
What is the most challenging and maybe exciting thing that you're working on right now? | |
Well, I can't really speak to what I'm working on right now. | |
But I think one of the most challenging and exciting things in robotics is trying to get recognition of emotion and to do something useful with that. | |
So there are a lot of technologies where you can detect a person's facial expression. | |
Are they smiling? | |
Are they angry? | |
And those have come a long way. | |
And there's a lot more information you can gather from multimodal sources, things like prosodic data, the pitch of your voice or the speed at which you're speaking, and from your body language and things.
And synthesizing that all together, you might get some picture of what the person's emotional state is. | |
But really, that's just the beginning because in order to do something useful with that, you need to figure out why. | |
Why are they happy? | |
Why are they frustrated? | |
And to be able to respond in a good way. | |
And once somebody can crack that, once somebody can achieve that, I think that's going to make an incredible difference to the way that we interact with machines and with robots. | |
So I think that's one of the most exciting frontiers at the moment. | |
And I think there'll be many more. | |
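As a sketch of that synthesizing step, here is a simple weighted late fusion of per-modality emotion scores. The modalities, scores, and weights are assumptions for illustration, and real systems would learn them; as Glas notes, fusing the scores still leaves the harder question of why the person feels that way.

```python
def fuse_emotion(face, voice, posture, weights=(0.5, 0.3, 0.2)):
    """Late fusion: combine per-modality emotion scores into one estimate.

    Each argument maps emotion -> confidence in [0, 1], as might come from
    a facial-expression model, a prosody model, and a body-language model.
    The weights express how much each modality is trusted.
    """
    fused = {}
    for modality, w in zip((face, voice, posture), weights):
        for emotion, score in modality.items():
            fused[emotion] = fused.get(emotion, 0.0) + w * score
    return max(fused, key=fused.get)

print(fuse_emotion(
    face={"happy": 0.6, "frustrated": 0.2},     # a slight smile
    voice={"frustrated": 0.8, "happy": 0.1},    # tense, hurried prosody
    posture={"frustrated": 0.5, "happy": 0.3},  # closed body language
))  # -> "frustrated": voice and posture outweigh the polite smile
```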
I would love to speak with you again, Dylan, and thank you very much for being good enough to give me an hour of your time. | |
Great. | |
Thank you. | |
This is fun. | |
Dylan Glas.
And if you want to know more about him and his work, I'll put a link on my website, theunexplained.tv, and you can see it all from there. | |
Thank you very much for keeping the faith with me and my show. | |
More great guests in the pipeline here on The Unexplained. | |
So until next we meet, my name is Howard Hughes. | |
I am in London. | |
This has been The Unexplained online. | |
And please stay safe. | |
Please stay calm. | |
And above all, please stay in touch. | |
Thank you very much. | |
Take care. |