Episode 303 Scott Adams: Talking About Robots and AI Because it Must be 4:20 Somewhere
Hey everybody, come on in here.
It's a very special afternoon version of Coffee with Scott Adams.
This time without coffee.
But if you'd like to enjoy a beverage, get in here fast, because it's time for a sip of water with Scott Adams.
Let's call that the simultaneous sip.
Join me. Good morning, China.
Good morning.
Lindy, Colleen, and Molly, and Auntie, and Mike, and all the rest of you.
All right. Now, one of the things I think is literally one of the funniest things in the world is that scientists and the smartest people in the world can't yet figure out how to create artificial intelligence.
And I think I know the problem.
The reason you can't produce artificial intelligence is that there's no such thing as intelligence.
At least not what we imagine it is.
So in other words, we're trying to reproduce something that doesn't exist in the first place.
You can't reproduce something that's never been produced.
And that's what AI people are trying to do.
In other words, they're trying to reproduce human intelligence.
But if I've taught you nothing, human intelligence is largely an illusion in a practical sense.
We are largely not using intelligence, but imagining that we are.
So we're actually literally imagining our own intelligence.
And so the reason you can't build it into a robot is that the thing you're trying to reproduce has never been produced.
You can't copy something that doesn't exist.
Literally. That's the problem.
And I want to talk about that a little bit with a little thought experiment in which I ask you...
What does it mean to be alive and intelligent?
Now, there is a definition of what it means to be alive in scientific terms.
But I propose that that will not be a good enough definition in a world of robots and computer life.
That we'll need to update what it means to be alive.
And we're also going to have to update what it means to be intelligent.
And here's a little mental experiment.
And the question is this. What would you have to add to a robot to be satisfied that it was alive and intelligent?
Not just a robot anymore, in the sense of being an object, but crossing over into being a life form.
What would you have to add to it to get to that?
On the other hand, what would you have to take from a human to make it no longer alive?
And if you work both ways, you might find out something surprising.
So take, for example, this human.
What could I subtract from that human and still say, well, he's still intelligent and he's still alive?
So let's see what we could remove from this person.
We could remove all of his skills, because if he suddenly forgot how to do stuff, you'd still say, well, you know, he doesn't know how to play baseball, but he's still a living, intelligent creature.
Now, I'm not talking about removing the skill of walking and talking, but you can get rid of most of the skills.
You can get rid of most of the body, but not all of it.
And you'd still have you, right?
If this person were a brain in a jar, it would still be the person.
It just could do less stuff.
You're not really your actions because you could change your actions and you'd still be the same person.
You could say that you're your DNA. But that's something that the robot can essentially copy.
Because the DNA is like the source code for the robot.
So that part is not special, right?
It's really just the mechanism that allows you to do what you're doing.
It doesn't really define you as intelligence.
I don't think it's really your thoughts.
Because no matter what you're thinking, you're still alive and you're still intelligent.
It doesn't matter specifically what you're thinking, but there has to be some kind of thought process going on.
So I'll get to that in a moment.
Obviously there has to be something like thinking, but I'll get to that.
You probably need something like your senses.
I don't think memories are necessarily important.
Because if I erased all of your memories up till today, you would still be alive and you would still be intelligent.
So I don't think that's the important part.
I think it comes down to this.
I think the thing that makes you human, somebody said emotions.
We all have very different emotions, but it's kind of a feedback loop with our senses and our brain and stuff.
It's an emergent sort of thing.
I don't think emotions define you, do they?
If you could, let's say, let's take somebody like a psychopath.
A psychopath would not have much in the way of emotions.
Not in a normal sense, but would still be alive and would still be intelligent.
So I'm going to make a case for the most important thing that makes you human is this.
And here's my little rule that I want to run by you.
So my rule that I wrote today is anything that can develop preferences through experience is both alive and intelligent.
So anything that can develop its own preferences through living and experiencing in ways that are not predictable to other people is effectively alive and effectively intelligent.
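The rule can be put in code. What follows is a toy sketch of an entity whose preferences emerge from lived experience rather than from programming; the `Entity` class and its methods are invented here purely for illustration.

```python
# A toy sketch of the rule above: an entity that develops its own
# preferences through experience. All names are invented for illustration.
from collections import defaultdict

class Entity:
    def __init__(self):
        # Nothing is preferred at the start; preferences are not built in.
        self.preferences = defaultdict(float)   # option -> learned liking

    def experience(self, option, reward):
        # Preferences accumulate from what the entity actually lives through.
        self.preferences[option] += reward

    def preferred(self):
        # The entity's current favorite, which no outside observer
        # could have predicted before it had these experiences.
        return max(self.preferences, key=self.preferences.get)

e = Entity()
e.experience("coffee", 1.0)
e.experience("water", 0.2)
e.experience("coffee", 0.5)
print(e.preferred())   # → coffee
```

The point of the sketch is that the final preference is a function of the entity's particular history, not of its initial code.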
Now here's the trouble.
If a robot starts developing preferences, we should be concerned because those preferences might not be in our best interest as humans.
So you would need to give us some kind of guiding principle.
So you might say, you can develop any preferences you want about how to act within reason, but they all have to be compatible with some larger guiding principle.
And I would suggest that the best guiding principle for our AI of the future is that it maximizes human reproduction.
In other words, it does whatever is best for human reproduction.
Which is slightly different from don't hurt anybody.
If you have the rule of don't hurt anybody, you get into more ambiguous situations because there are lots of cases where you have to hurt somebody to help somebody else.
But the way human beings figure that out is they usually choose what's best for human reproduction.
If they have a tough choice and somebody's got to die and somebody's got to live, they're going to choose the one that's best for human reproduction.
Let me give you an example. If you had a choice, let's say you're a bus driver.
You're a bus driver and your bus is filled with senior citizens that are 80 years old.
So everybody in your bus is 80 years old.
And suddenly a young mother with a baby carriage goes in front of the bus.
If you're a human being, you instantly make the decision that's best for human reproduction.
Meaning you'll kill everybody in the bus to save the baby and the young mother.
You'll go right off the road.
And the reason you'll do that, and you'll do it instantly, is because the people in the back of the bus can't have babies.
Now you don't process it that way, but the fact is that there's a reason that women and children go first, right, in disasters.
They have a higher value to reproduction.
So you could easily build that into your robots and your AI as their guiding principle.
When the robots get into ambiguous situations where somebody might have to die, or at least where you're putting somebody at risk of dying while somebody else might be saved, the robot says: this one is more likely to reproduce and have healthy offspring; this one is likely past the age of reproduction.
Now I'm not talking about eugenics here.
The robot would treat everybody as equally valuable, except for are they likely to reproduce or not.
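As a concrete sketch of that tiebreak rule: everyone is treated as equally valuable, and the only score consulted in an ambiguous situation is likelihood of reproducing. The `Person` class and the `reproductive_likelihood` field are hypothetical, invented here for illustration; they come from no real robotics framework.

```python
# A minimal sketch of the tiebreak rule described above. The Person
# class and its reproductive_likelihood score are hypothetical,
# invented for illustration only.
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    reproductive_likelihood: float  # 0.0 (past reproduction) .. 1.0 (likely to reproduce)

def choose_to_protect(candidates):
    """All people are equally valuable; the score is used only as a
    tiebreak when someone must be put at risk either way."""
    return max(candidates, key=lambda p: p.reproductive_likelihood)

# The bus example: a young mother versus an 80-year-old passenger.
mother = Person("young mother", 0.9)
senior = Person("80-year-old passenger", 0.0)
print(choose_to_protect([mother, senior]).name)  # → young mother
```

Note that the function encodes exactly and only the stated rule; whether that rule is a good one is the question the rest of the monologue debates.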
That's it. And I would argue that that would be very close to the way humans operate. I would even go so far as to say that most of our human decisions are some kind of subconscious expression of our reproductive impulse.
You know, lots of smart people have said that, so that's not new.
But the fact that you dress a certain way, that you make money, that you show off, that you do things to protect your ego and your health, all of that directly or indirectly goes toward reproduction.
So if you said to the robots, here's the deal, robots, you robots don't have egos.
As a robot, nobody gave you an ego.
So you have no reason to protect yourself.
But you need an organizing principle around which to make all of your ambiguous decisions.
And that principle will be whatever's best for the health and reproduction, mostly the reproductive health of the human species.
Somebody says, pure Darwin.
That is evil. I'm not sure why that's evil.
It might be. But I'd like to hear the argument.
Alright, so for those of you joining late, let me summarize.
In my opinion, the difference between a human and a robot is going to shrink over time.
And that lots of things we think that make us alive and make us intelligent are actually kind of optional.
You can lose an arm, you could even, you know, what happens if you edit your DNA? Because we'll be able to do that, right?
I think we already can. Suppose you're born and then somebody edits your DNA. Are you a different person?
So I say an entity which can develop its own preferences in an unpredictable way based on its experience in the environment is alive and intelligent and that we would actually see it that way.
Suppose you saw a robot that you could tell was learning, and specifically that what it was learning was what works and what doesn't, and what keeps people alive and what doesn't.
Now let's say that the robot is also connected to the internet, so that it learns from all of the other robots' experiences.
So every robot is going to learn what every other robot is learning as soon as they're learning it, if they're networked.
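That networked-learning idea can be sketched in a few lines: every robot publishes what it learns to one shared store, so every other robot sees the lesson immediately. The class and method names here are invented for illustration; this is not any real robot networking API.

```python
# A toy sketch of networked learning as described above: one shared
# store of lessons, visible to every robot the moment it is written.
# All names are invented for illustration.
class SharedMemory:
    def __init__(self):
        self.lessons = {}              # action -> learned outcome

    def publish(self, action, outcome):
        self.lessons[action] = outcome

class Robot:
    def __init__(self, network):
        self.network = network         # every robot holds the same SharedMemory

    def learn(self, action, outcome):
        # What one robot experiences, all robots now know.
        self.network.publish(action, outcome)

    def knows(self, action):
        return self.network.lessons.get(action)

net = SharedMemory()
a, b = Robot(net), Robot(net)
a.learn("touch stove", "doesn't work")
print(b.knows("touch stove"))          # → doesn't work
```

Robot `b` never touched the stove, but it knows the outcome the instant robot `a` does, which is the "networked" part of the claim.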
Does the robot run over this gay...
I'm sorry, that was a terrible question.
I don't even want to repeat it, but it was funny.
It's funny that you would ask the question.
It's not a funny question. Why are you ignoring the sentience?
I think sentience is largely an illusion.
It just has to do with the way we're wired, and I think this thing called consciousness is little more than being able to predict what's going to happen and then measuring that against what actually does happen. That's all it is.
It's the difference between what you think is going to happen and what's happening; that gets fed through your senses and processed in your brain, and what you feel while that's happening is your sentience.
Because remember, I'm talking about living, and within the concept of living, a plant is alive but it's not sentient.
If you were to watch a robot developing preferences, you would say it was intelligent.
Yes, that is Christina.
She's practicing her Christmas songs.
All right. Boy, she's really good.
Alright, that's all I have for today.
I just wanted to talk about robots for a while, and now I've got more stuff to do.
I've got to decorate the house with Christmas stuff.