Sam Harris introduces John Brockman's new anthology, "Possible Minds: 25 Ways of Looking at AI," in conversation with three of its authors: George Dyson, Alison Gopnik, and Stuart Russell. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Just a note to say that if you're hearing this, you are not currently on our subscriber feed and will only be hearing the first part of this conversation.
In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at SamHarris.org.
There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content.
We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers.
So if you enjoy what we're doing here, please consider becoming one.
Welcome to the Making Sense Podcast.
This is Sam Harris.
Okay, a few things to announce here.
I have an event in Los Angeles on July 11th.
If you're a supporter of the podcast, you should have already received an email.
This is actually the first event for the app.
It's the first Waking Up event.
It is at the Wiltern on July 11th.
And it is with a great Tibetan Lama by the name of Mingyur Rinpoche.
And Mingyur is a fascinating guy.
He's the youngest son of the greatest Dzogchen master I ever studied with, Tulku Urgyen Rinpoche.
And I wrote about him in my book Waking Up, so that name might be familiar to some of you.
I studied with him in Nepal about 30 years ago.
And I've never met Mingyur, and he's about, I don't know, seven years younger than me.
I was in my 20s when I was in Nepal, and he was a teenager.
And he was on retreat for much of that time.
He did his first three-year retreat when he was, I think, 13.
And he was always described as the superstar of the family.
I studied with two of his brothers, Chökyi Nyima Rinpoche and Tsoknyi Rinpoche.
But I've never met Mingyur, and I'm really looking forward to it.
He has a very interesting story because at some point he started teaching and started running monasteries.
I believe he has three monasteries he's running, as well as a foundation.
But then in 2011, when he was 36, he just disappeared from his monastery in India and spent the next four and a half years wandering around India as a mendicant yogi, living in caves and on the streets, and encountering all kinds of hardships.
I believe he got very sick and almost died.
Anyway, he's written a book about this, titled In Love with the World, which I haven't read yet, but I will obviously read it before our event.
And we will discuss the book and the nature of mind and the practice of meditation and take your questions.
And again, that will be happening at the Wiltern in Los Angeles on July 11th.
And you can find more information on my website at samharris.org/events.
And tickets are selling quickly there, so if you care about that event, I wouldn't wait.
And the audio will eventually be released on the podcast.
Okay.
The Waking Up app.
There have been a few changes.
We've added Annika's meditations for children, which are great.
And there are some metta meditations coming from me as well.
Also, we'll soon be giving you the ability to sit in groups, where you can organize a virtual group with your friends or colleagues and sit together, either in silence or listening to a guided meditation.
And very soon there will be a web-based version of the course.
You can get more information about all that at wakingup.com.
So this podcast is the result of three interviews, and it is organized around a new book from my agent, John Brockman, who edited it.
And the book is titled Possible Minds: 25 Ways of Looking at AI.
And you may have heard me mention John on the podcast before.
He's not just a book agent, though between him and his wife Katinka Matson and their son Max Brockman, they have a near monopoly on scientific non-fiction.
It's really quite impressive.
Many of the authors you know and admire, Steve Pinker, Richard Dawkins, Dan Dennett, and really most other people in that vein you could name, and many who have been on this podcast, are represented by them.
But John is also a great connector of people and ideas.
He seems to have met every interesting person in both the literary and art worlds since around 1960.
And he's run the website edge.org for many years, which released its annual question for 20 years, and got many interesting people to write essays for that.
And there have been many books published on the basis of those essays.
He's also put together some great meetings and small conferences.
So he's really facilitated dialogue to an unusual degree and at a very high level.
And he's written his own books, The Third Culture and By the Late John Brockman.
But this new book is another one of his anthologies, and it's organized around a modern response to Norbert Wiener's book, The Human Use of Human Beings.
Wiener was a mathematical prodigy and the father of cybernetics, and a contemporary of Alan Turing and John von Neumann and Claude Shannon and many of the people who were doing foundational work on computation.
And Wiener's thoughts on artificial intelligence anticipate many of our modern concerns.
Now, I didn't wind up contributing to this book.
I had to sit this one out, but I will be speaking with three of the authors who did.
The first is George Dyson.
George is a historian of technology, and he's the author of Darwin Among the Machines and Turing's Cathedral.
My second interview is with Alison Gopnik.
Alison is a developmental psychologist at UC Berkeley.
She's a leader in the field of children's learning and development, and her books include The Philosophical Baby.
And finally, I'll be speaking with Stuart Russell, who's been on the podcast before.
Stuart is a professor of computer science and engineering at UC Berkeley.
And he's also the author of the most widely used textbook on AI, titled Artificial Intelligence: A Modern Approach.
This is a deep look at the current state and near and perhaps distant future of AI.
And now, without further delay, I bring you George Dyson.
I am here with George Dyson.
George, thanks for coming on the podcast.
Thank you.
Happy to be here.
So, the occasion for this conversation is the publication of our friend and mutual agent's book, Possible Minds: 25 Ways of Looking at AI, and this was edited by the great John Brockman.
I am not in this book.
I could not get my act together when John came calling, so unfortunately I'm not in this very beautiful and erudite book.
Previously you wrote Turing's Cathedral, so you've been thinking about computation for quite some time.
How would you summarize your intellectual history and what you've focused on?
Well, my interest, yeah, goes back much farther than that.
I mean, Turing's Cathedral is a recent book.
So 25 years ago, I was writing a book called Darwin Among the Machines, at a time when there actually were no publishers publishing, you know, any general literature about computers, except Addison-Wesley, so they published it, thanks to John.
The thing to remember about John, John and Katinka, it's a family business, and Katinka's father was a literary agent.
And John's father, I think, was in the flower merchant business.
So they have this very great combination of flowers have to be sold the same day and books have to last forever.
It sort of works really well together.
Yeah.
And your background is, you also have a family background that's relevant here, because your father is Freeman Dyson, who many people will be aware is a famous physicist.
He got inducted into the Manhattan Project right at the beginning as well, right?
He was at the Institute for Advanced Study.
Correct my sequencing here.
First of all, the important thing in my background is not so much my father, but my mother.
My mother was a mathematical logician.
She worked very closely with Kurt Gödel and knew Alan Turing's work in logic very well, and that's the world the computers came out of.
My father and my mother both came to America at the same time, in 1948, so the Manhattan Project was long over.
My father had nothing to do with it.
He was working for the conventional bombing campaign for the Royal Air Force during the war, but not the Manhattan Project.
So you have deep roots in the related physics and logic and mathematics of information, which has given us this now century or near century of computation and has transformed everything.
And it's a fascinating intellectual history because the history of computing is intimately connected with the history of war, specifically, you know, code breaking and bomb design.
And you did cover this in Turing's Cathedral.
You're often described as a historian of technology.
Is that correct?
Does that label fit well with you?
That's true, yes.
I'm more a historian of people, of the people who build the technologies.
But somehow the label is historian of technology.
I'm not a historian of science.
That's also, I don't know why that's always, you know, it's just sort of a pigeonhole they put you into.
So, maybe we can walk through this topic by talking about some of the people.
There are some fascinating characters here, and the nominal inspiration for this conversation, for John's book, was his discovery or rediscovery of Norbert Wiener's book, The Human Use of Human Beings.
But there were two, there were different paths through the history of thinking about information and computation and the prospect of building intelligent machines.
And Wiener represented one of them, but there was another branch that became more influential, which was due to Alan Turing and John von Neumann.
Maybe, I guess, who should we start with?
Probably Alan Turing at the outset here.
How do you think of Alan Turing's contribution to the advent of the computer?
Well, it was very profound.
Norbert Wiener was working, you know, in a similar way at almost the same time, so they all sort of came out of this together, their sort of philosophical grandfather was Leibniz, the German computer scientist and philosopher.
So they all sort of were disciples of Leibniz and then, you know, executed that in different ways.
Von Neumann and Wiener worked quite closely together at one time.
Turing and Wiener never really did work together, but they were very aware of each other's work. The young Alan Turing, which people also forget, came to America in 1936.
So he was actually in New Jersey when his great paper on computation was published.
So he was there in the same building with von Neumann.
Von Neumann offered him a job, which Turing didn't take.
He preferred to go back to England.
Yeah, so that's... I don't know how to think about that.
So just bring your father into the picture here, and perhaps your mother if she knew all these guys as well.
Did they know von Neumann and Turing and Claude Shannon and Wiener?
What of these figures do you have some family lore around?
Yes and no.
They knew, you know, they both knew Johnny von Neumann quite well because he was sort of in circulation.
My father had met Norbert Wiener, but never worked with him, didn't really know him.
And neither of them actually met Alan Turing.
But of course, my father came from Cambridge, where Turing had been sort of a fixture.
What my father said was that when he read Turing's paper when it came out, he thought, like many people, that this was interesting logic, but it would have no great effect on the real world.
I think my mother was probably maybe a little more prescient that, you know, logic really would change the world.
Von Neumann is perhaps the most colorful character here.
I mean, there seems to be an absolute convergence of opinion that, regardless of the fact that he may not have made the greatest contributions in the history of science, he seems to have just bowled everyone over and left a lasting impression that he was the smartest person they had ever met.
Does that ring true in the family as well?
Or have estimations of von Neumann's intelligence been exaggerated?
No, I don't think that's exaggerated at all.
I mean, he was impressively sharp and smart, extremely good memory, you know, phenomenal calculation skills, sort of everything.
Plus he had this, you know, his real genius was not entrepreneurship, but just being able to put everything together.
His father was an investment banker, so he had no shyness about just asking for money.
That was sort of, in some ways, almost his most important contribution, was he was the guy who could get the money to do these things that other people simply dreamed of.
But he got them done, and he hired the right people, sort of like the orchestra conductor who gets the best violin players and puts them all together.
Yeah, and these stories are, I think I've referenced them occasionally on the podcast, but it is astounding to just read this record, because you have really the greatest physicists and mathematicians of the time all gossiping, essentially, about this one figure.
Certainly, Edward Teller was of this opinion, and I think there's a quote from him somewhere which says that, you know, if we ever evolve into a master race of superintelligent humans, we'll recognize that von Neumann was the prefiguring example, like this is how we will appear when we are fundamentally different from what we are now.
And Wigner and other physicists seem to concur.
The stories about him are these two measures of intelligence, both memory and processing speed.
You grab both of those knobs and turn them up to 11, and that just seems to be the impression you make on everyone, that you're just a different sort of mind.
Yeah, it's sort of, in other ways, it's a great tragedy, because he was doing really good work in, you know, pure mathematics and logic and game theory, quantum mechanics, and those kinds of things, and then got completely distracted by the weapons and the computers.
Never, never really got back to any real science, and then died young, like Alan Turing, the very same thing.
So we sort of lost these two brilliant minds, who not only died young but sort of professionally died very early because they got sucked into the war and never came back.
Yeah, there was an ethical split there because Norbert Wiener, who was, again, part of this conversation, fairly early, I think it was 1947, published a piece in The Atlantic more or less vowing never to let his intellectual property have any point of contact with military efforts.
And so at the time, it was all very fraught, seeing that physics and mathematics was the engine of destruction, however ethically purposed.
You know, obviously there's a place to stand where the Manhattan Project looks like a very good thing, you know, that we won the race to fission before the Nazis could get there, but it's an ethically complicated time, certainly.
Yes.
And that's where, you know, Norbert Wiener worked very intensively and effectively for the military in both World War I and World War II. He was at the proving ground in World War I, and in World War II he worked on anti-aircraft defense.
And what people forget was that it was pretty far along at Los Alamos when we knew, when we learned that the Germans were not actually building nuclear weapons.
And at that point, people like Norbert Wiener wanted nothing more to do with it.
And particularly, Norbert Wiener wanted nothing to do with the hydrogen bomb.
There was no military justification for a hydrogen bomb.
The only use of those weapons, still today, is, you know, genocide against civilians.
They have no military use.
Do you recall the history on the German side?
I know there is a story about Heisenberg's involvement in the German bomb effort, but I can't remember whether the rumors of his having intentionally slowed that effort are, in fact, true.
Well, that's a whole other subject.
Stay away from?
Not getting into that, and I'm not the expert on that, but what little I do know is that it became known at Los Alamos later in the project that there really was no German threat, and yet the decision was made to keep working on it.
There were a few people, there's one whose name I don't remember, you know, one or two physicists who actually quit work when they learned that the German program was not a real threat, but most people chose to keep working on it.
That was a very moral decision.
Yeah, but how do you view it?
Do you view it as a straightforward good one way or the other, or how would you have navigated that?
Very complicated.
Very, very complex.
I mean, of those people you were talking about, the Martians, the sort of extraterrestrial Hungarians, they all kept working on the weapons, except Leo Szilard, who was actually at Chicago.
He'd been sort of excommunicated from Los Alamos.
Groves wanted to have him put in jail.
And he circulated a petition, I think it was signed by 67 physicists from Chicago, to not use the weapon against the civilians of Japan, to at least give a demonstration against an unpopulated target.
And that petition never even reached the president.
It was sort of embargoed.
I've never understood why a demonstration wasn't a more obvious option.
I mean, was the fear that it wouldn't work and... Yes, because they didn't know.
And they had only very few weapons at that time.
They had two or three.
So there was a lot going on there, but that's, again, a story that's still to be figured out.
And I think the people like von Neumann carried a lot of that to the grave with them.
But, you know, Edward Teller's answer to the Szilard petition was, you know, I'd love to sign your petition, but I think his exact words were, the things we are working on are so terrible that no amount of fiddling with politics will save our souls.
That's pretty much an exact quote.
Yeah, so Teller was another one of these Hungarian Martians along with von Neumann, and the two of them really inspired the continued progress past a fission weapon and on to a fusion one.
And computation was an absolutely necessary condition of that progress.
So the story of the birth of the computer is largely, or at least the growth of our power in building computers, is largely the story of the imperative that we felt to build the H-bomb.
Right.
And what's weird is that we're sort of stuck with it.
For 60 years, we've been stuck with this computational architecture that was developed for this very particular problem to do numerical hydrodynamics to solve this hydrogen bomb question.
The question was, they knew the Russians were working on it.
Because von Neumann had worked intimately with Klaus Fuchs, who turned out to be a Russian spy.
So they knew the Russians, sort of knew everything they did.
But the question was, was it possible?
And you needed computers to figure that out.
And they got the computer working.
And then, you know, now, 67 years later, our computers are still exact copies of that particular machine they built.
To do that job.
None of those people would... I think they would find it incomprehensible if they came back today and saw that, you know, we hadn't really made any architectural improvements.
Is this a controversial position at all in computer circles, or is this acknowledged that having the von Neumann architecture, as I think it is still called, we got stuck in this legacy paradigm which is by no means necessarily the best for building computers?
Yeah, no, they knew it wasn't, but I mean, already, even by the time Alan Turing came to Princeton, he was working on completely different kinds of computation.
He was already sort of bored with the Turing machine.
He was interested in much more interesting sort of non-deterministic machines.
And the same with von Neumann.
Long before that project was finished, he was thinking about other things.
What's interesting about von Neumann is he only has one patent, and the one patent he took out was for a completely non-von Neumann computer that IBM bought from him for $50,000.
Another strange story that hasn't quite, I think, been figured out.
Presumably, that was when $50,000 really meant something.
It was an enormous amount of money.
I mean, just a huge amount of money.
So, yeah, they all wanted to build different kinds of computers, and if they had lived, I think they would have.
In your contribution to this book, you talk about the prospect of analog versus digital computing.
Make that intelligible to the non-computer scientist.
Yes, so there are really two very different kinds of computers.
It goes back to Turing in a mathematical sense.
There are continuous functions that vary continuously, which is how we perceive time, or the frequency of sound, or those sorts of things.
And then there are discrete functions, the ones and zeros and bits, that took over the world.
And Alan Turing gave this very brilliant proof of what you could do with a purely digital machine.
But both Alan Turing and von Neumann were, towards the end of their lives, obsessed with the fact that nature doesn't do this, except in our genetic systems.
We use digital coding there because digital coding, as Shannon showed us, is so good at error correction.
But, you know, continuous functions in analog computing are better for control.
All control systems in nature, all nervous systems, the human brain, the brain of a fruit fly, the brain of a mouse, those are all analog computers, not digital.
There's no digital code in the brain.
And von Neumann, you know, wrote a whole book about that that people have misunderstood.
I guess you could say that whether or not a neuron fires is a digital signal, but then the analog component is downstream of that, just the different synaptic weights and... Right, but there's no code.
There's no code with a logical meaning.
You know, the complexity is not in the code, it's in the topology and the connections of the network.
Everybody knew that.
You take apart a brain, you don't find any sort of digital code.
I mean, now we're sort of obsessed with this idea of algorithms, which is what Alan Turing gave us, but there are no algorithms in a nervous system or a brain.
That's a much, much, much sort of higher level function that comes later.
Well, so you introduced another personality here and a concept, so let's just do a potted bio on Claude Shannon and this notion that digitizing information was somehow of value with respect to error correction.
Yes, Claude Shannon's great contribution was sort of modern information theory, and you can make a very good case that he actually took those ideas from Norbert Wiener, who was explaining them to him during the war.
But it was Shannon who published the great manifesto on that, proving that you can sort of communicate with reliable accuracy, given any arbitrary amount of noise, by using digital coding.
And none of our computers would work without that.
Basically, your computer is a communication device and has to communicate these hugely complicated states from one fraction of a microsecond to the next, billions of times a second.
And the fact that we do that perfectly is due to Shannon's theory and his model of how you can do that in an accurate way.
Is there a way to make that intuitively understandable, why that would be so?
I mean, what I picture is like cogs in a gear, where you're either all in one slot or you're all out of it, and so any looseness of fit keeps reverting back, you fall back into the well of the gear or you slip out of it, whereas something that's truly continuous, that is to say analog, admits of errors that are undetectable because you're just kind of sliding along a more continuous, smoother surface.
Do you have a better?
Yeah, that's a good, that's a very good way to explain it.
Now it has this fatal flaw, there's always a price for everything, and so you can get this perfect digital accuracy where you can make sure that, out of billions of bits, every bit is in the right place and your software will work.
But the fatal flaw is that if for some reason a bit isn't in the right place, then the whole machine grinds to a halt, whereas the analog machine will keep going.
It's much more robust against failure.
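To make that trade-off concrete, here is a minimal sketch, my illustration rather than anything from the conversation, of why re-thresholding a signal at every stage keeps a digital value exact while an analog value drifts:

```python
import random

def noisy_stage(value, noise=0.05, digital=True):
    """Pass a value through one noisy copying stage; re-threshold it if digital."""
    value += random.uniform(-noise, noise)
    if digital:
        value = 1.0 if value >= 0.5 else 0.0  # snap back into the "slot" of the gear
    return value

analog_level, digital_bit = 0.7, 1.0
for _ in range(1000):  # a thousand copying stages
    analog_level = noisy_stage(analog_level, digital=False)
    digital_bit = noisy_stage(digital_bit, digital=True)

# The analog value has typically wandered far from 0.7; the digital bit is still
# exactly 1, as long as the per-stage noise stays below the 0.5 threshold margin.
print(f"analog after 1000 stages:  {analog_level:.3f}")
print(f"digital after 1000 stages: {digital_bit:.0f}")
```

The same sketch also shows the fatal flaw George describes: if the noise ever exceeds that threshold margin, the digital bit flips outright instead of degrading gracefully.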
So, are you in touch with people who are pursuing this other line of building intelligent machines now?
What does analog computation look like circa 2019?
Well, it's coming at us from two directions.
There's bottom up and there's sort of top down.
And the bottom up is actually extremely interesting.
And, you know, I'm professionally not a computer scientist.
I'm a historian.
So I look at the past, but occasionally I get dragged into meetings, like one a couple of years ago that was actually held at Intel.
They'll have a meeting like that, and they like the voice of a historian there, so I get to go.
And there, this was an entire meeting of people working on building analog chips from the bottom up, using the same technology we use to build digital computers, but to build completely different kinds of chips that actually do analog processing on them.
And that's extremely exciting.
I think it's going to change the world the same way the microprocessor changed the world.
We're sort of at the stage we were at when we had the first 4-bit calculator you could buy, and then suddenly, you know, somebody figured out how to play a game with it, and the whole thing happened.
So that's from the bottom up.
Some of these chips are going to do very interesting things like voice recognition, smell, things like that.
Of course, the big driver, you know, sort of killer app is drones, which is sort of the equivalent of the hydrogen bomb.
That's what's driving this stuff and self-driving cars and cell phones.
And then from the top down is a whole other thing, and that's the part where I think we're sort of missing something.
If you look at the Internet as a whole, or the whole computational ecosystem, particularly on the commercial side, an enormous amount of the interesting computing we're doing now is back to analog computing, where we're computing with continuous functions.
It's pulse frequency coded.
Something like, you know, Facebook or YouTube doesn't care about the file that somebody clicks on; they don't care what the code is.
They just care about the frequency.
The meaning is in the frequency and in what it's connected to, very much the same way a brain or a nervous system works.
So if you look at these large companies, Facebook or Google or something, actually, you know, they're large analog computers.
Digital is not being replaced, but another layer is growing on top of it.
The same way that after World War II we had all these analog vacuum tubes and the oddballs like Alan Turing and von Neumann and even Norbert Wiener figured out how to use the analog components to build digital computers.
And that was the digital revolution.
But now we're sort of right in the midst of another revolution, where we are taking all this digital hardware and using it to build analog systems, but somehow people don't want to talk about that.
Analog is still sort of seen as this archaic thing, and I believe differently.
In what sense is an analog system supervening on the digital infrastructure?
Are there other examples that can make it more vivid for people?
Yes.
I mean, analog is much better.
Like nature uses analog for control systems.
So you take an example, like, you know, an obvious one would be Google Maps with live traffic.
So you have all these cars driving around, and people have their digital cell phones in the cars.
And you sort of have this deal with Google, where Google will tell you what the traffic is doing and the optimum path, if you tell Google where you are and how fast you're moving.
And that becomes an analog computer, or an analog system, where there is no digital model of, you know, all the traffic in San Francisco.
The actual system is its own model.
And that's sort of a von Neumann definition of an organism or a complex system, that it constitutes its own simplest behavioral description.
Trying to formally describe what's going on makes it more complicated, not less.
There's no way to simplify that whole system except the system itself.
And, you know, Facebook is very much the same way.
You could build a digital model, maybe, of, you know, social life in a high school, but if you try to do social life at anything large, it just collapses under its own complexity.
So you just give everybody a copy of Facebook, which is a reasonably simple piece of code that lives on their mobile device.
And suddenly you have a full-scale model of the actual thing itself.
So the social graph is the social graph.
And that huge transition, I think, is at the root of some of the unease people are feeling about some of these particular companies.
It used to be that Google was someplace where you would go to look something up, and now it really effectively is becoming what people think.
And the big fear with something like Facebook is that it becomes what your friends are.
That can be good or bad, but just in an observational sense, it's something that's happening.
So what most concerns you about how technology is evolving at this point?
Well, I wear different hats there, you know.
I mean, the other huge part of my life was spent as a boat builder, and I'm still right here in the middle of a kayak-building workshop and want nothing to do with computer manufacturing.
That's really why I started studying them and writing about them, because I was not against them, but quite suspicious.
The big thing about artificial intelligence is that the threat is not that machines become more intelligent, but that people become less intelligent.
So I've spent a lot of time out in the wild with no computers at all, lived in a treehouse for three years, and you can lose that sort of natural intelligence, I think, as a species reasonably quickly if we're not careful.
So that's what worries me.
I mean, obviously the machines are clearly taking over.
If you look at just the span of my life, from when von Neumann built that one computer to where we are now, it's almost biological growth of this technology.
So, you know, as a member of living things, it's something to be concerned about.
Do you know David Krakauer from the Santa Fe Institute?
Yes, I don't know him, but I've met him and talked to him.
Yeah, because he has a rap on this very point where he distinguishes between, I think his phrasing is, cognitively competitive and cognitively cooperative technology.
So there are forms of technology that compete with our intelligence on some level.
And insofar as we outsource our cognition to them, we get less and less competent.
And then there are other forms of technology where we actually become better even in the absence of the technology.
And, unfortunately, the only example of the latter that I can remember, the one he used on the podcast, was the abacus. Apparently, if you learn how to use an abacus well, you internalize it, and you can do calculations you couldn't otherwise do in your head, even in the absence of the physical abacus.
Whereas if you're relying on a pocket calculator or your phone for arithmetic, or you're relying on GPS, you're eroding whatever ability you had in those areas.
So if we get our act together and all of this begins to move in a better direction or something like an optimal direction, what does that look like to you?
If I told you 50 years from now we arrived at something just far better than any of us were expecting with respect to this marriage of increasingly powerful technology with some regime that conserves our deepest values.
How do you imagine that looking?
Well, yeah, it's certainly possible.
And I guess that's where I would be slightly optimistic, in that my knowledge of human culture goes way back.
We grew up, as a species, and I'm speaking of just all of humanity, spending most of our history among animals who were bigger and more powerful than we were, and among things that we completely didn't understand.
And we sort of made up our, not religions, but just views of the world in which we couldn't control everything.
We had to live with it, and I think in a strange way we're kind of returning to that childhood of the species in a way that we're building these systems that we no longer have any control over, and we in fact no longer even have any real understanding of.
So we're sort of, in some ways, back to that world that we were originally quite comfortable with, where we're in the power of things that we don't understand, sort of megafauna.
And I think that could be a good thing, it could be a bad thing, I don't know, but it doesn't surprise me.
And I'm just, personally, I'm interested.
Like, if you take, you know, to get back to why we're here, which is John's book, almost everyone in that book is talking about domesticated artificial intelligence.
I mean, they're talking about commercial systems, products that you can buy, things like that.
Just personally, I'm sort of a naturalist, and I'm interested in wild AI, what evolves completely in the wild, out of human control completely.
And that's a very interesting part of the whole sphere that doesn't get looked at that much.
The focus now is so much on marketable, captive AI, self-driving cars, things like that.
But it's the wild stuff that, to me, that's... I'm not afraid of bad AI, but I'm very afraid of good AI.
The kind of AI where some ethics board decides what's good and what's bad.
I don't think that's what's going to be really important.
But don't you see the possibility that... so what we're talking about here is increasingly powerful, increasingly competent AI.
But the worry of those of us who are concerned about the prospect of building what's now called AGI, artificial general intelligence, that proves bad, is just based on the assumption that there are many more ways to build AGI that is not ultimately aligned with our interests than there are ways to build it perfectly aligned with our interests.
Which is to say, we could build the megafauna that tramples us perhaps more easily than we could build the megafauna that lives side by side with us in a durably benign way.
You don't share that concern?
No, I think it's extremely foolish and misguided to think that we can. I mean, sort of by definition, real AI is something you won't have any control over.
And that's again why I think there's this enormous mistake of thinking it's all based on algorithms.
I mean, real AI won't be based on algorithms.
And so there's this misconception that goes back to when they built those first computers that needed programmers to run them.
So this view is that, well, the programmers are in control, but if you have non-algorithmic computing, there is no program.
By definition, you don't control it.
And to expect control is absolutely foolish.
I think it's much better to be realistic and assume that you won't have control.
Well, so then why isn't your bias here one of the true counsel of fear, which says we shouldn't be building machines more powerful than we are?
Well, we probably shouldn't, but we are.
I mean, the fact is, we've done it.
It's not something that we're thinking about.
It's something we've been doing for a long time, and it's probably not going to stop.
And then the point is to be realistic about, and maybe optimistic that, you know, humans have not been the best at controlling the world, and something else could well be better.
But this illusion that we are going to program artificial intelligence is, I think, provably wrong.
I mean, Alan Turing would have proved that wrong.
You know, that was how he got into the whole thing at the beginning, proving this statement called the Entscheidungsproblem: is there any systematic way to look at a string of code and predict what it's going to do?
You can't.
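What Turing settled there is usually called the Entscheidungsproblem, and its "predict what code will do" form is the halting problem. A minimal sketch of the classic contradiction, written in Python purely as an illustration rather than anything from the conversation:

```python
def halts(program, argument):
    """Hypothetical oracle: returns True if program(argument) would eventually stop."""
    raise NotImplementedError("Turing's argument shows no such general procedure exists.")

def contrary(program):
    """Do the opposite of whatever the oracle predicts about program run on itself."""
    if halts(program, program):
        while True:  # predicted to halt, so loop forever instead
            pass
    return           # predicted to run forever, so halt immediately

# Asking halts(contrary, contrary) is contradictory either way:
# if the oracle says "halts", contrary loops forever; if it says "loops", it halts.
# So no systematic procedure can inspect arbitrary code and predict its behavior.
```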
And it baffles me that people don't sort of see that. Somehow we've been so brainwashed by this.
The digital revolution was so successful, it's amazing how it has sort of clouded everyone's thinking.
If you talk to biologists, of course, they know that very well.
People who actually work with brains of frogs or mice, they know it's not digital.
Why people think more intelligent things would be digital is, again, sort of baffling.
How did that sort of take over the world, that thought?
Yeah, so it does seem, though, that if you think the development of truly intelligent machines is synonymous with machines that we not only can't control but, on some level, can't form a reasonable expectation of what they will be inclined to do...
There's the assumption that there's some way to launch this process that is either provably benign in advance, or... So I'm looking at the book now, and the person there who I think has thought the most about this is Stuart Russell, and he's trying to think of a way in which AI can be developed where its master value is to continually understand, in a deeper and more accurate way, what we want, right?
And what we want can obviously change, and it can change in dialogue with this now super-intelligent machine, but its value system is in some way durably anchored to our own, because its concern is to get our situation the way we want it.
Right, but all the most terrible things that have ever happened in the world happened because somebody wanted them.
I mean, there's no safety in that.
I admire Stuart Russell, but we disagree on this sort of provably good AI.
Yes, but I guess at least what you're doing there is collapsing it down to one fear rather than the other.
I mean, the fear that provably benign AI or provably obedient AI could be used by bad people toward bad ends, that's obviously a fear.
But the greater fear that many of us worry about is that developing AGI in the first place can't be provably benign, and we will find ourselves in relationship to something far more powerful than ourselves that doesn't really care about our well-being in the end.
Right.
And that's, again, sort of the world we used to live in, and I think we can make ourselves reasonably comfortable there.
But, you know, the classic religious view was that there are humans, and there's God, and there's nothing but angels in between. That can change.
Nothing but angels and devils in between, no.
Right.
So, the last thing Norbert Wiener published, well, it was actually published after he died, but there's a line in there which I think gets it right: that the world of the future will be an ever more demanding struggle against the limitations of our own intelligence.
It's not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.
Those are the two sort of paths that so many people want.
So, oh, the cars are going to drive us around and be our slaves.
It's probably not going to happen that way.
On that dire note...
It's not a dire note.
I mean, it could be a good thing.
We've been the sort of chief species for a long time, and it could be time for something else.
But at least be realistic about it.
Don't have this sort of childish view that everything's going to be obedient to us.
That hasn't worked.
And I think it did a lot of harm to the world that we had that view.
But again, any real artificial intelligence would immediately be intelligent enough not to reveal its existence to us.
That would be the first smart thing it would do: not reveal itself.
The fact that AI has not revealed itself, to me, is zero evidence that it doesn't exist.
I would take it the other way.
If it existed, I would expect it not to reveal itself.
Unless it's so much more powerful than we are that it perceives no cost and reveals itself by merely steamrolling over us.
Well, there would be a cost.
I think it's sort of faith is better than proof.
So you can see where I'm going with that, but it's not necessarily malevolent.
It's just as likely to be benevolent as malevolent.
Okay, so I have a few bonus questions for you, George.
These can be short form.
If you had one piece of advice for someone who wants to succeed in your field, and you can describe that field however you like, what would it be?
Okay, well, I'm a historian, or whatever I became, or a boat builder, and the advice in all those fields is just, I mean, find something and become obsessed with it.
I became obsessed with the kayaks that the Russians adopted when they came to Alaska.
And then I became obsessed with how computing really happened.
And if you are obsessed with one little thing like that, you immediately become, you know, you can very quickly know more than anybody else.
And that helps to be successful.
What, if anything, do you wish you'd done differently in your 20s, 30s or 40s?
Uh, that's, I mean, you can't replay that tape.
Well, I can be very clear about that.
I wish in my twenties I had gone to the Aleutian Islands earlier, while more of the old-time kayak builders were still alive, and interviewed and learned from them.
And then very much the same in my 30s.
I mean, in all these projects I did go find the surviving Project Orion people, the technicians and physicists, and interviewed them, but I should have done that earlier.
And the same with computing, you know, in my 40s, I could have interviewed a lot more people who really were there at that important time.
I sort of caught them, but almost too late, and I wish I had done that sooner.
Ten years from now, what do you think you'll regret doing too much of or too little of at this point in your life?
Probably regret not, you know, not getting out more up the coast again, which is what I'm trying to do.
It's what I'm working very diligently at, but I keep getting distracted.
You've got to get off the podcast and get into the kayak.
Yeah, well, podcast, you know, we could be doing this from Orca Lab.
They have a good internet connection.
I mean, that's the beautiful thing is that you can do this.
And the other thing I would say, as an aside, is that I grew up, from the time I was a young teenager, in Canada, where the country was united by radio.
I mean, in Canada, people didn't get newspapers, but everybody listened to one radio channel.
And so in a way, podcasts are, again, back to that past where we're all listening to the radio again.
And I think it's a great thing.
What negative experience, one you would not wish to repeat, has most profoundly changed you for the better?
I very nearly choked to death.
I mean, literally, that's the only time I've had a true near-death experience, seeing the tunnel of light and reliving my whole life, and not only thinking about my daughter and other profound things, but thinking how stupid this was, you know, this guy who had, like, kayaked to Alaska six times with no life jacket, dying in a restaurant on Columbus Avenue in New York.
Wow.
And John Brockman saved my life.
He ran out and came back with an off-duty New York City fireman, you know, who literally saved my life.
Wow.
I'm so glad I asked that question.
I had no idea of that story.
So, yeah, learn the Heimlich maneuver.
Dr. Heimlich really did something great for the world.
Fascinating.
We may have touched this in a way, but maybe there's another side to this.
What most worries you about our collective future?
Yeah, kind of what I said, that we lose all these skills and intelligences that we've built up over such a long period of time.
The ability to, you know, survive in the wilderness and understand animals and respect them.
I think that's a very sad thing that we're losing that, of course, and losing the wildlife itself.
If you could solve just one mystery as a scientist or historian or journalist, however you want to come at it, what would it be?
One mystery?
Well, one of them would be the one we just talked about, you know, cetacean communication, what's really going on with these whales communicating in the ocean.
That's something I think we could solve, but we're not looking at it in the right way.
If you could resurrect just one person from history and put them in our world today and give them the benefit of a modern education, who would you bring back?
The problem there is that most of the people in history I'm interested in sort of had extremely good educations.
You're talking about John von Neumann and Alan Turing, yeah, you're right.
Yeah, and Leibniz, I mean, he was very well, yeah.
The character in the project I've been working on lately, who was kind of awful but fascinating, was Peter the Great.
He was so obsessed with science and things like that, so I think to have brought him, you know, if he could come back, it might be a very dangerous thing, but he sort of wanted to learn so much and was, again, preoccupied by all these terrible things and disasters that were going on at the time.
What are you doing on Peter the Great?
Writing this very strange book where it kind of starts with him and Leibniz.
They go to the hot springs together and they basically stop drinking alcohol for a week.
And Leibniz convinces him.
He wants him to support building digital computers, but he's not interested.
So the computer thing failed.
But what Leibniz did convince him was to launch a voyage to America.
So that's how the Russians came to Alaska.
It all starts in these hot springs, where they, you know, they can't drink for a week, so they're just drinking mineral water and talking.
There is a great biography on Peter the Great, isn't there?
Is there one that you recommend?
Several.
I wouldn't know which one to recommend, but he's, again, that's why he's Peter the Great, because he's been well studied.
His relationship with Leibniz fascinates me. There's just a lot there we don't know, but it's kind of amazing how this sort of obscure mathematician becomes very close to this great leader of a huge part of the world.
Okay, last question, the Jurassic Park question.
If we are ever in a position to recreate the T-Rex, should we do it?
I would say yes, but this comes up as a much more real question with the woolly mammoth and these other animals, like the Steller's sea cow.
That's another one we could maybe resurrect.
So yeah, I've had these arguments with Stewart Brand and George Church, who are realistic about whether we could do it.
So I would say yes, don't expect it to work, but certainly worth trying.
What are their biases?
Do Stewart and George say we should or shouldn't do this?
Yeah, if you haven't talked to them, definitely that would be a great program to go through that debate.
The question more is, if you can recreate the animal, does that recreate the species?
One of the things they're working on, I think, is trying to build a park in Kamchatka or somewhere over there in Siberia, so that if you did recreate the woolly mammoth, they would have an environment to go live in. So to me, that's actually the payoff: the payoff to recreating the woolly mammoth is that it would force us to create a better environment. Same as we should bring, well, the buffalo are coming back, and we should bring the antelope back.
It's sort of the American cattle industry that wrecked the great central heart of America, which could easily come back into the grasslands it once was.
Hmm. Well, listen, George, it's been fascinating.
Thank you for your contribution to this book, and thanks for coming on the podcast.
Thank you.
It's a very interesting book.
There are short chapters, which makes it very easy to read.
Yeah, it's a sign of the times, but a welcome one.
I am here with Alison Gopnik.
Alison, thank you for coming on the podcast.
Glad to be here.
So, the occasion of our conversation is the release of John Brockman's book, Possible Minds: 25 Ways of Looking at AI.
And I'm sure there'll be other topics we might want to touch, but as this is our jumping off point, first give me your background.
How would you summarize your intellectual interests at this point?
Well, I began my career as a philosopher and I'm still half appointed in philosophy at Berkeley.
But for 30 years or so, more than that, I guess now, I've been looking at young children's development and learning to really answer some of these big philosophical questions.
Specifically, the thing that I'm most interested in is how do we come to have an accurate view of the world around us when the information we get from the world seems to be so concrete and particular and so detached from the reality of the world around us?
And that's a problem that people in philosophy of science raise.
It's a problem that people in machine learning raise.
And I think it's a problem that you can explore particularly well by looking at young kids who, after all, are the people who we know in the universe who are best at solving that particular problem.
And for the past 20 years or so, I've been doing that in the context of thinking about computational models of how that kind of learning about the world is possible for anybody, whether it's a scientist or an artificial computer or a computational system or, again, the best example we have, which is young children.
Right.
We'll get into the difference between how children learn and how our machines do, or at least our current machines do.
But just a little more on your background.
So you did your PhD in philosophy or in psychology?
I actually did my first degree, my BA, in honors philosophy.
And then I went to Oxford, actually wanting to do both philosophy and psychology.
I worked with Jerome Bruner in psychology and I spent a lot of time with the people in philosophy.
And my joke about this is that after a year or two in Oxford, I realized that there was one of two communities that I could spend the rest of my life with.
One community was of completely disinterested seekers after truth who wanted to find out about the way the world really was more than anything else.
And the other community was somewhat spoiled, narcissistic, egocentric creatures who needed to be taken care of by women all the time.
And since the first community was the babies and the second community was the philosophers, I thought I'd be better off spending the rest of my life hanging out with the babies.
Right.
That's a little unfair to the philosophers.
But it does make the general point, which is that I think a lot of these big philosophical questions can be really well answered by looking at a group that is in some ways very neglected, namely babies and young children.
Yeah, yeah.
So I did my PhD, in the end, in experimental psychology with Jerome Bruner, and then I was in Toronto for a little while and then came to Berkeley, where, as I say, I'm in the psychology department but also affiliated in philosophy, and I've done a lot of collaborations with people doing computational modeling at the same time.
So I really think of myself as being a cognitive scientist in the sense that cognitive science puts together ideas about computation, ideas about psychology, and ideas about philosophy.
Yeah, well, if you're familiar with me at all, you'll understand that I don't respect the boundaries between these disciplines really at all.
I just think that it's just interesting how someone comes to a specific question.
But, you know, whether you're doing cognitive science or neuroscience or psychology or philosophy of mind can change from sentence to sentence; it just really depends on which building on a university campus you're standing in.
Well, I think I've tried, you know, I've tried and I think to some extent succeeded in actually doing that in my entire career.
So I publish in philosophy books and collaborate with philosophers.
I had a wonderful project where we had half philosophers who were looking at causality, people like Clark Glymour and James Woodward and Chris Hitchcock, and then half developmental psychologists and computational cognitive scientists.
So people like me, like Josh Tenenbaum at MIT, like Tom Griffiths.
And that was an incredibly powerful and successful interaction.
And the truth is, I think one of my side interests is David Hume.
And if you look at people like David Hume or Berkeley or Descartes or the great philosophers of the past, they certainly wouldn't have seen boundaries between the philosophy that they were doing and psychology and empirical science.
Let's start with the AI question and then get into children and other areas of common interest.
So perhaps you want to summarize how you contributed to this volume and your angle of attack on this really resurgent interest in artificial intelligence.
There was this period where it kind of all went to sleep, and I remember being blindsided by it, just thinking, well, AI hadn't really panned out, and then all of a sudden AI was everywhere.
How have you come to this question?
Well, as I say, we've been doing work looking at computational modeling and cognitive science for a long time, and I think that's right.
For a long time, even though there was really interesting theoretical work going on about how we could represent the kinds of knowledge that we have as human beings computationally, it didn't translate very well into, you know, actual systems that could actually go out and do things more effectively.
And then what happened, interestingly, in this new AI spring wasn't really that there was some great new, you know, killer app, new idea about how the mind worked.
Instead, what happened was that some ideas that had been around for a long time, since the 80s basically, these ideas about neural networks, and in some ways, you know, much older ideas about associative networks, suddenly became very practical when you had a whole lot of data, the way you do with the internet, and when you also had a whole lot of compute power, with good old Moore's law running through its cycles.
So you could actually take a giant data set of all the images that had been put on the net, for example.
Or you could take the giant data sets of all the translations of French and English on the net, and you could use that to actually design a translation program.
Or you could have something like AlphaZero that could just play millions and millions and millions of games of chess against itself, and then use that data set to figure out how to play chess.
So the real change was not so much a kind of conceptual change about how we thought about the mind.
It was this change in the capacities of computers.
And I think, to the surprise of everybody, including the people who had designed the systems in the first place, it turned out that those ideas really could scale.
And the big problem with computational cognitive science has always been not so much finding good computational models for the mind, although that's a problem, but finding ones that could do more than just solve toy problems.
Ones that could deal with the complexity of real-world kinds of knowledge.
And I think it was surprising and kind of wonderful that these learning systems could actually turn out to work at a broad scale.
And the other thing that of course was interesting was that, not just in the history of AI, but in the history of philosophy, there's been this constant kind of ping-ponging back and forth between two ways to solve this big problem of knowledge, this big problem of how we can ever understand the world around us.
And a way I like to put it is, here's the problem.
We seem to have all this abstract, very structured knowledge of the world around us.
We seem to know a lot about the world and we can use that knowledge to make predictions and change the world.
And yet, it looks as if all that reaches us from the world are these patterns of photons at the back of our eyes and disturbances of air at our ears.
And the question is always, how could you resolve that conundrum?
And one way, going back to Plato and Aristotle, has been to say, well, a whole lot of it is built in in the first place.
We don't actually have to learn that abstract structure.
It's just there.
Maybe it evolved.
Maybe if you're Plato, it was in a past life.
And then the other approach going all the way back to Aristotle has been to say, well, if you just have enough data, if you just had enough stuff to learn, then you could develop this kind of abstract knowledge of the world.
And again, going back to Plato and Aristotle, we kind of ping pong back and forth between those two approaches to trying to solve the problem.
And sort of good old-fashioned AI said, as Roger Schank famously put it, well, if we just had like a summer's worth of interns, we'll figure out all of our knowledge about the world.
We'll write it all down and we'll program it into a computer.
And that turned out not to be a very successful project.
And then the alternative, the kind of neural net idea was, well, if we just have enough data and we have some learning mechanisms, then the learning mechanisms will just be able to pull out the information from the data.
And that's kind of where we are now.
That's the latest iteration in this back and forth between building in knowledge and learning the knowledge from the data.
Yeah, so what you've done there is you've sketched two different approaches to generating intelligence.
One, I guess, could be considered top-down and the other bottom-up.
And what AI has done of late, the great gains we see in image recognition and many other things, is born of a process that really is aptly described as bottom-up, where you take in an immense amount of data and do what is essentially statistical pattern recognition on it.
And some of this can be entirely blind and black-boxed, such that the humans who have written these programs don't even necessarily know how the machines are doing it, and yet given enough processing power and enough data, we're now getting results that are human-level and beyond for specific tasks.
But of course, you make this point in your piece that we know this is not how humans learn, that there is some structure, undoubtedly given to us by evolution, that allows us to generalize on the basis of comparatively small amounts of data.
And so this makes what we do non-analogous to what our machines are doing.
And I guess, I mean, now both top-down and bottom-up approaches are being combined in AI.
I guess one question I have for you is, is the difference between the way our machines learn and the way human brains learn just of temporary interest to us now?
I mean, can you imagine us kind of blowing past this moment and building machines that we know are developing their intelligence in a way that is totally unlike the way we do it biologically, and yet it becomes successful on all fronts without our building any analogous process into them, and we just lose sight of the fact that it was ever interesting to compare the ways we do it?
I mean, there turns out to be an effective way to do it in a brute-force, let's say bottom-up, way on every front that will matter to us.
Or do you think that there are some problems for which it will be impossible to generate true artificial intelligence unless we have a deeper theory about how biological systems do it?
Well, I think we already can see that.
So one of the interesting things is that there's this really striking revival of interest, among people in AI, in cognitive development, for example.
And it's because we're starting to come up against the limits of this kind of pattern recognition, this technique of doing a lot of statistical inference from big data sets.
So there are lots of examples, for instance, even if you're thinking about things like image recognition, where, you know, if you have something that looks like a German Shepherd, it'll recognize it as a German Shepherd.
But if you just have something that, to a human, just looks like a mass that has the same textural superficial features as the German Shepherd, it will also recognize it as a German Shepherd.
You know, if it sees a car that's suspended in the air and flooded, it will report, this is a car parked by the side of the road and so forth.
And there's a zillion examples that are like that.
In fact, there's a whole kind of area of these adversarial examples where you can show that the machine is not actually making the right decision.
And it's because it's only paying attention to the sort of superficial features.
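To make the adversarial-example idea concrete, here is a minimal sketch, purely illustrative and not drawn from any system discussed in this conversation, of how a tiny, structured change to an input can flip a toy linear classifier's decision even though the input looks essentially unchanged:

```python
# Toy adversarial example: a tiny per-pixel change, aligned with the model's
# weights, flips a linear classifier's decision. Illustrative setup only.
import numpy as np

rng = np.random.default_rng(0)

# A "trained" toy linear classifier: label = sign(w . x + b)
w = rng.normal(size=784)          # weights over a 28x28 "image" (assumed)
b = 0.0
x = rng.normal(size=784)          # an input the classifier currently labels

score = w @ x + b
original_label = np.sign(score)

# Nudge every pixel by just enough, in the direction of the weight signs,
# to push the score across the decision boundary.
epsilon = 1.1 * abs(score) / np.abs(w).sum()
x_adv = x - original_label * epsilon * np.sign(w)

print("original label:   ", original_label)
print("adversarial label:", np.sign(w @ x_adv + b))   # flipped
print("per-pixel change: ", round(epsilon, 4))        # tiny relative to pixel scale
```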
And in particular, the machines are very bad at making generalizations.
So even if you taught AlphaZero how to play chess and then said, all right, we're going to just change the rules a little bit.
Now the rooks are going to be able to move diagonally, and you're going to want to capture the queen instead of the king.
That kind of difference, which would be really easy to adjust to for a human who had learned chess, leads for the more recent AI systems to this problem they call catastrophic forgetting, which is having to relearn everything all over again when you get a new data set.
So of course, in principle there's no reason why we couldn't have an intelligence that operated completely differently from the way that, say, human children learn.
But human children are a demonstration case of the capacities of an intelligence, presumably in some sense a computational intelligence, because that's the best way we have of understanding how human brains work.
And that's the best example we have of a system that actually really works to be intelligent.
And nothing that we have now is really even in the ballpark of being able to do the same kinds of things that that system can do.
So in principle, it might be that we would figure out some totally different way of being intelligent.
But at the moment, the best case we have is, you know, a four-year-old human child.
And we're very, very, very far from being able to simulate that.
You know, I think part of it is that if people had just labeled the new techniques statistical inference from large data sets, instead of calling it artificial intelligence, we would be having a very different kind of conversation, even though statistical inference from large data sets turns out to be an incredibly powerful tool, more powerful than we might have thought.
We should remind people how alarmingly powerful it is in narrow cases.
I mean, we take something like AlphaZero.
What happened there was fairly startling, because you have an algorithm that is fairly generic in that it can be taught to play both a game like Go and a game like Chess, and presumably other games as well.
You know, we have this history of developing better and better chess engines, and finally the human grandmaster ability was conquered, I forget when that was, 1997 or so, when Garry Kasparov lost, famously.
And ever since, there's just been this incremental growth in the power of these machines, and what AlphaZero did was, again, create a far more general algorithm which, over the course of four hours, taught itself to be better than any chess engine ever.
So, I mean, you're taking the totality of human knowledge about this ancient game, all of the engineering talent that went into making chess engines better and better over decades, and here we found an algorithm which, turned loose on the problem, beat every machine and every person in human history, essentially.
When you extrapolate that kind of process to anything else we could conceivably care about, the recognition of emotion in a human face and voice, say, and coming at this not in an AGI way where we've cracked the code of what intelligence is and built it from the bottom up, but in a piecemeal way where we take the hundred most interesting cognitive problems and find brute-force methods to crack them, it's amazing to consider how quickly a solution can appear.
And this is the point I've always made about so-called human-level intelligence: for any ability where we actually do find an AI solution, even a narrow one as in the case of chess or arithmetic, once that solution is found, you're never talking about human-level intelligence.
It's always superhuman.
So the moment we get anything like a system that can behave or learn like a four-year-old child, it won't be at human level even for a second, because you'd have to deliberately degrade all of the other abilities you could cobble together to support it.
You wouldn't make it worse than your iPhone as a calculator, right?
So it's already going to be superhuman.
Yeah.
I think there's a question, though, about exactly what different kinds of problems require and how you solve those problems.
And I think an idea that is pretty clearly there in the computer science and neuroscience is that there are trade-offs between different kinds of properties of a solution, trade-offs that aren't just there because we happen to be biological humans, but are built into the very nature of trying to solve the problem.
And in some ways, the most striking thing about the progress of AI all through has been what people sometimes call Moravec's paradox, which is that the things that really impress us as humans are the things that we're not very good at, like doing arithmetic or playing chess.
So I think of these sometimes as being like the corridas of nerd machismo.
These are things where you have to have a particular kind of ability that most people don't have and then really train it up to do really well.
It turns out those are things that computers are going to be able to do.
On the other hand, an example I give is my grandson, who's three, who plays something that we call Addy Chess.
His name is Atticus.
So how do you play Addy Chess?
Well, the way you play Addy Chess is you take all the pieces off the board and then you throw them in the wastebasket.
And then you pick them up out of the wastebasket and you put them more or less in the same places they were in before.
And then you take them all off and throw them in the wastebasket again.
And it turns out that Addy Chess is actually a lot harder than Grandmaster chess, because Addy Chess means actually manipulating objects in the real physical world, so that you have to figure out, wherever that piece lands in the wastebasket and whatever orientation it's in, how to pick it up and perform the motor actions that are necessary to get it on the board.
And that turns out to be incredibly difficult.
If you, you know, go and see any robotics lab, they have to put big walls around the robots to keep them from destroying each other, even trying to do incredibly simple tasks like picking up objects off of a tray.
And there's another thing about Addy Chess that makes it really different from what even very, very powerful artificial intelligence can do, which is that, as you said, what these new systems can do is take what people sometimes call an objective function.
You can say to them, look, this is what I want you to do.
Given this set of input, I want you to produce this set of output.
Given this set of moves, I want you to get the highest score or I want you to win at this game.
And if you specify that, it turns out that these neural net learning mechanisms are actually remarkably good at solving those problems without a lot of additional information, except just here's a million examples of the input and here's a million examples of the output.
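As a minimal sketch of what specifying an objective function looks like in practice, here is a toy example, assumed for illustration rather than taken from this conversation, in which the designer supplies example inputs, the desired outputs, and a loss to minimize, and the learner just adjusts its parameters to reduce that loss:

```python
# Minimal "objective function" sketch: given example inputs and the outputs we
# want, gradient descent adjusts parameters to reduce a specified loss.
# Toy target (assumed): reproduce y = 2x + 1 from noisy examples.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=1000)             # a pile of example inputs
y = 2 * x + 1 + rng.normal(0, 0.05, 1000)     # the outputs we want reproduced

w, b = 0.0, 0.0                               # the model's adjustable parameters
lr = 0.1
for _ in range(500):
    pred = w * x + b
    err = pred - y
    # The objective: mean squared error between produced and desired output.
    w -= lr * np.mean(2 * err * x)            # nudge parameters downhill
    b -= lr * np.mean(2 * err)

print(f"learned w = {w:.2f}, b = {b:.2f}")    # close to 2 and 1
```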
But of course, what human beings are doing all the time is going out and making their own objectives.
They're going out and creating new objectives, creating new ideas, creating new goals.
Goals that are not the goals that anyone has created before, even if they might look kind of silly, like playing Addy Chess.
And in some way that we really don't understand at all, there's a kind of progress in those goals: we're capable of setting ourselves goals that are better than the goals that we had before.
But again, that's not even kind of in the ballpark.
It's not like, oh, if we just made the machines more powerful, then they would be able to do those things too, that they would be able to go out and physically manipulate the world and set novel objectives.
That's kind of not even in the same category.
And as I say, I think an interesting idea is that there might really be trade-offs between some of the kinds of things that humans are really good at, like, for instance, dealing with very complicated, high-dimensional spaces of solutions and having to think of an incredibly wide range of possibilities, versus, say, being able to do something really quickly and efficiently when it's well-specified.
And I think there are reasons to think those things trade off.
You might think, well, okay, if you could do the thing that's really well-specified and just do that better and better, then you're going to be able to solve the more complicated problem and the less well-defined problem.
And I think there are actually reasons to believe that that's not true, that there are real trade-offs between the kinds of things you need to do to solve those two kinds of problems.
Yeah, well, so the paradox you point to is interesting, and it's a key to how people's expectations will be violated when automation begins to replace human labor to a much greater degree than it has.
People tend to expect that menial or lower-skilled jobs will be automated first and that famously high-cognition jobs will be the last to be automated away.
But as you point out, many of the things that we find amazing that human beings can do are easier to automate than the things that any, or virtually any, human being can do.
Which is to say, it's easier to play grandmaster-level chess than it is to walk across the room, if you're a computer.
So your oncologist and your local mathematician are likely to lose their jobs to AI before your plumber will, because it's a harder task to move physically into a space, manipulate objects, and make decisions across tasks of that sort.
So there's a lot that's counterintuitive here.
I guess my sense, however, is that, I mean, one, you're not at all skeptical, are you, that intelligence is substrate-independent, ultimately?
That we could find some way of instantiating human-like intelligence in a non-biological system.
Is there something potentially magical about having a computer made of meat, from your point of view, or not?
Well, I think the answer is that we don't really know, right?
So again, we have a kind of, you know, species of one, or maybe a couple of examples, of systems that can really do this.
And the ones that we know about are indeed biological.
Now, it's rather striking, and I think maybe not appreciated enough, that this idea that really comes with Turing, the idea of thinking about a human mind as being a computational system, has just been an incredibly productive idea, one that's ended up enabling us to make really, really good predictions about many, many things that human beings do.
And we don't have another idea that's as good at making predictions or providing explanations for intelligence as that idea.
Now, again, maybe it'll turn out that there is something that we're missing that is contributing something important about biology.
But I think at the moment, the kind of computational theory of the mind is the best one that's on the table.
It's the one that's been most successful just in empirical scientific terms.
So, for instance, when we're looking at young children, if we say, are they doing something like Bayesian inference of structured causal systems, that's a computational idea.
We can actually say, OK, well, if they're doing that, then if we give them this kind of problem, they should solve it this way.
And sure enough, it turns out that over and over again, that's what they do, kind of independently of knowing very much about what exactly is going on in their brains when they're doing that.
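Here is a toy sketch in the spirit of that idea, a made-up "which block activates the machine" example rather than Gopnik's actual experimental model, showing how Bayes' rule can converge on a causal hypothesis from a handful of observations:

```python
# Toy Bayesian inference over causal hypotheses: which block(s) make a
# machine activate? Made-up example, not Gopnik's experimental model.

# Prior probability over the candidate causal structures.
posterior = {
    "A causes it": 1 / 3,
    "B causes it": 1 / 3,
    "A and B both cause it": 1 / 3,
}

def likelihood(hyp, placed, activated, noise=0.05):
    """Probability of the observed outcome under a hypothesis, with a little noise."""
    if hyp == "A causes it":
        should_activate = "A" in placed
    elif hyp == "B causes it":
        should_activate = "B" in placed
    else:
        should_activate = "A" in placed or "B" in placed
    p_activate = 1 - noise if should_activate else noise
    return p_activate if activated else 1 - p_activate

# Observations: (blocks placed on the machine, did it activate?)
data = [({"A"}, True), ({"B"}, False), ({"A", "B"}, True)]

for placed, activated in data:
    # Bayes' rule: multiply by the likelihood, then renormalize.
    for h in posterior:
        posterior[h] *= likelihood(h, placed, activated)
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}

for h, p in posterior.items():
    print(f"{h}: {p:.2f}")   # "A causes it" should now dominate
```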
Again, it could be that this gap between the kinds of problems that we can solve computationally now and the kinds of problems that every four-year-old is solving has something to do with having a biological substrate, but I don't think that's the most likely hypothesis given the information that we have now.
I think actually one of the interesting things is the problem is not so much trying to figure out what our representations and rules are, what's going on in our head, what the computations look like.
The problem is what people in computer science call a search problem.
So the problem is really, given all the possible things we could believe about the world, or given all the possible solutions we could have to a problem, or given all the possible things that we could do in the world, how is it that we end up converging?
How is it that we end up picking ones that are, as it were, the right ones, rather than all the other ones that we could consider?
And I think that's, at the moment, the really deep, serious problem.
So we kind of know how a computational system could be instantiated in a brain.
We have ideas about how neurons could be configured so they could do computations.
We kind of figured that part out.
But the part about how we take all these possibilities and end up narrowing in on ones that are relatively good, relatively true, relatively effective, I think that's really the next deep problem.
And looking at how kids solve that problem, because we know that they do solve it, could help us make progress.
Another name for this is common sense.
What computers are famously bad at is, as you say, narrowing the search space of solutions to rule out the obviously ridiculous and detrimental ones.
This is where all the cartoons of AI apocalypse come in, the idea that you're going to design a computer to remove the possibility of spam, and an easy way to do that is just to kill all the people who would send spam, right?
Obviously, this is nobody's actual fear.
It just points out that unless you build the common sense into these machines, they're not going to have it for free, no matter how competent they get at solving specific problems.
But see, in a way it's even worse than that.
Because one thing you might say is, well, okay, we have some idea about what our everyday common sense is like, we have these principles.
So if we could just specify those things enough, if we could take our everyday ideas about the mind, for example, or our everyday ideas about how the physical world works, and build those into the computer, that would help.
And it is true that the systems that we have now don't even have that.
But the interesting thing about people is that we can actually discover new kinds of common sense.
So we can actually go out in the world and say, you know that thing that we thought about how the physical world worked?
It's not true, actually.
We can have action at a distance, or even worse, you know, it turns out that actually space and time can be translated into one another, which is certainly not anything that anyone intuitively thinks about how physics works.
Or for that matter, we can say, you know, that thing that we thought we knew about morality?
It turns out that no, actually.
When we think about it more carefully, something like gay marriage is not something that should be perceived as being immoral, even though lots and lots of people for a long time had thought that that was true.
So we have this ability to go out into the world and both see the world in new ways and actually change the world, invent new environments, invent new niches, invent new worlds.
And then figure out how to thrive in those new worlds and look around the space of possibilities and create yet other worlds and repeat.
So even if we could build in sort of what in 2019 is everybody's understanding about the world, or build in the understandings about the world that we had in the Pleistocene, that still wouldn't capture this ability that we have to search the space, to consider new possibilities, to think about new things that aren't there.
Let me give you some examples.
For instance, the sort of thing that people are concerned about, I think legitimately, that AI could potentially do is this: you could give the kind of systems that we have now examples of all of the verdicts of guilty and innocent that have gone on in a court over a long period of time, and then give them a new example and say, OK, how would this case be judged?
Will it be judged innocent or will it be judged guilty?
And the systems that we have now could probably do a pretty decent job of doing that.
And certainly it's easy to imagine an extension of the systems we have now that could solve that kind of problem.
But, of course, what we can do is to say, you know what, all that law, that's really not right.
That isn't really capturing what we want.
That's not enabling people to thrive.
Now, we should think of a different way of thinking about making these kinds of judgments.
And that's exactly the sort of thing that the current systems can't do; again, it's not just that if you gave them more data, they would be able to do it.
They're not really even conceptually in the ballpark of being able to do that.
And that's probably a good thing.
Now, you know, I think it's important to say that, and I think you're going to talk to Stuart Russell who will make this point, you know, these systems don't have to have anything like human level general intelligence to be really dangerous.
Electricity is really dangerous.
I was just talking to someone who made a really interesting point, which is about how we came to have circuit breakers.
It turns out the insurance companies actually started insisting that people have circuit breakers on their electrical systems because houses were being set on fire.
So electricity, which we now think of as this completely benign thing, where we flip a switch and electricity comes out and none of us is sitting there thinking, oh my God, is our house about to burn down, only became that way through a very long, complicated process of regulation and legislation and work to get it to be other than a really, really dangerous thing.
And I think that's absolutely true, not about, you know, some theoretical artificial general intelligence, but about the AI that we have now, that it's a really powerful force.
And like any powerful technology, we have to figure out ways of regulating it and having it make sense.
But I don't think that's like a giant difference in kind from all the issues we've had about dealing with powerful technologies in the past.
Yeah, yeah.
I guess this issue of creativity and growth in intuitions is...
I guess my intuitions divide from many people's on this point, because creativity is often held out as something that's fundamentally different.
That our machines can't do this, and we routinely do this.
But in my view, creativity isn't especially creative, in the sense that it clearly proceeds on the basis of rules we already have, and nothing is fundamentally new, you know, down to the studs.
Nothing that's meaningful is.
I mean, you can create something that essentially looks like noise that is new.
Something that strikes us as insightful, meaningful, beautiful is functioning on the basis of properties that our minds already acknowledge as relevant and are already using.
And so take something like, again, a simple case of a mathematical intuition, one that was fairly hard and took thousands of years to emerge in someone's mind, but once you've got it, you've sort of got it, and it's really the same kind of thing you're doing anyway: a triangle's angles add up to 180 degrees on a flat plane, but curve the plane and they can add up to more or less than that.
It's strange that it took so long to see that, but the seeing of that doesn't strike me as fundamentally more mysterious than the fact that we can understand anything about triangles in the first place.
I mean, I think I would just set that on its head.
Again, this is one of the real advantages of studying young children: when you say, well, it's no more mysterious than understanding triangles in the first place, people have actually tried to figure out how it is that we can understand triangles.
How is it that children can understand basic things about how a number works?
Or in the work that I've done, how do children understand basic things about the causal structure of the world, for example?
And it turns out that even for very basic things that we take for granted, like understanding that you can believe something different from what I believe, for example, it's actually very hard to see exactly how it is that children are taking individual pieces and putting them together to come to realizations about, say, how other people's minds work.
And the problem is, if you're doing it backwards, once you know what the answer is, then you can say, oh, I see, this is how you could put that together from pieces that you have in the world or from data that you have.
But of course, if you're doing it prospectively, then there's an incredibly large number of different other ways that you could have put together the data.
And the puzzle is, how is it that you came upon the one that was both new and interesting and wasn't just random?
Now, again, I don't think there's any kind of, you know, giant reason why we couldn't solve that problem.
But I do think that even something as simple as, you know, children figuring out basic things about how the world around them and the people around them work turns out to be a very, very tricky problem to solve.
And one interesting thing, for example, that we found in our data, in our research, is that in many respects, children are actually better at coming to unlikely or new solutions than adults are.
So again, this is this kind of trade-off idea, where actually the more you know, in some ways, the more difficult it is for you to conceive of something new.
We use a lot of Bayesian ideas when we're trying to characterize what the children are doing.
And one way you could think about it is that your priors get to be more and more peaked as you know more and more, as you're more and more confident about certain kinds of knowledge.
And that's a good thing, right?
That's what lets you go out into the world and build things and make the world a better place.
But it gets to be harder and harder for you to conceive of new possibilities.
And one idea that I've been arguing for is that you could think about the very fact of childhood as being a solution to this kind of explore-exploit tension, this tension between being able to explore lots of different possibilities, even if they're maybe not very good, and having to narrow in on the possibilities that are really relevant to a particular problem.
And again, that's the sort of thing that humans, over the course of their life history and culture, seem to be pretty good at doing, in a way that we don't even really have a good start on thinking about how a computational system could do.
Now we're working on it.
I mean, we're hoping that we could get a computational system that could do that, and we sort of have some ideas, but that's a dimension that really differentiates what the current powerful AI systems can do from what every four-year-old can do.
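One minimal way to picture the peaked-priors and explore-exploit point is a hypothesis sampler whose "temperature" controls how willing it is to try low-probability options; this is an illustrative sketch with assumed numbers, not the specific model used in this research:

```python
# Toy explore/exploit sketch: hypotheses are sampled from a softmax whose
# temperature controls how often low-probability options get tried.
# High temperature ~ child-like broad exploration; low ~ adult-like exploitation.
import numpy as np

rng = np.random.default_rng(0)

# How plausible each of four hypotheses looks a priori (assumed numbers);
# suppose the third, low-scoring one happens to be the interesting new idea.
scores = np.array([5.0, 4.5, 1.0, 0.5])

def sample_fractions(temperature, n=10_000):
    p = np.exp(scores / temperature)
    p /= p.sum()                              # peaked prior when temperature is low
    draws = rng.choice(len(scores), size=n, p=p)
    return np.bincount(draws, minlength=len(scores)) / n

print("child-like, T=3.0:", sample_fractions(3.0))   # unlikely hypotheses still get tried
print("adult-like, T=0.3:", sample_fractions(0.3))   # almost never leaves the top choice
```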
Yeah, yeah.
No, I'm granting all of that.
I guess I'm just putting the line at a different point because, again, people often hold out creativity, and being able to form new goals and insights and intuitions, as though this were a uniquely human thing, such that it's very difficult to understand how a machine could ever do it.
You know, as you point out, just being able to walk across the room is fairly miraculous from the point of view of how hard it is to instantiate in a robot and to ride a bicycle and to do things that kids routinely learn to do very early.
Once we crack that, these fairly basic problems that evolution has solved for us, and really for even non-human animals in many cases, then we're talking about just incremental gains into something that is fundamentally beyond the human.
Because nobody draws the line there; nobody says, well, yes, you might be able to build a machine that could run across a room like a human child and balance something on its finger, but you're never going to get something that can produce the creative genius of an Olympic athlete or a professional basketball player.
That's where I think the intuitions flip.
Once you could build something that could move exactly like a person, then there is no limit.
There's no example of human agility that will be out of reach at that point.
I guess what I'm reacting to is that people seem to think different rules apply at the level of cognition and artistic creativity, say.
Well, I think it's just an interesting empirical question.
You know, we're collaborating now on a big project with a bunch of people who are doing things in computer vision, for example.
And that's another example where something that we think is very simple and straightforward, where we don't even feel as if it takes any effort to go out into the world and actually see the objects that are out there, turns out to be both extremely difficult and in some ways very mysterious, in that we can do it as well as we can.
Not only do we identify images, but we can recognize that there's an object that's closer to me, or an object that's further away from me, or that objects have texture, or that objects are really three-dimensional.
Those are all really, really challenging problems.
An interesting thought is that, at a very abstract level, it may be that we're solving some of those problems in the same way that enables us to solve some of these creativity problems.
So let me give you an example.
One of the things that kids very characteristically do is do experiments, except that when they do experiments, we call it getting into everything.
They explore.
They're not just sort of passively waiting for data to come to them.
They can have a problem and actually go out and get the data that's relevant to that problem.
Again, when they do that, we call it playing or getting into everything or making a mess.
And we sit there and nod our heads and try and keep them from killing themselves when they're doing it.
But that's a really powerful technique, a really powerful way of making progress, actually getting more information about what the structure of the world is like, and then using it to change what you think about the world, and then repeating by actually going out into the real world and getting data from the real world.
And that's something that kids are very good at doing.
That seems to play a big role in our ability to do things like move around the world or perform skilled actions.
And again, that's something that at least at the moment isn't very characteristic of the way the machines work.
Here's another nice example of something that we're actually working on at Berkeley.
So one of the things that we know about kids' motivation and affect is that they're insatiably curious.
They just want to get as much information as they can about the world around them.
And they're driven to go out and get information and especially get new information.
Which again is why just thinking about the way that we evolved isn't going to be enough to answer the problem.
One of the things that's true about lots of creatures, but especially human children, is that they're curiosity driven.
And in work that we've been doing with computer scientists at Berkeley, you can design an algorithm that, instead of, say, wanting to have a higher score, wants to have the predictions of its model be violated.
So actually, when it has a model and things turn out to be wrong, instead of being depressed, it goes out and says, huh, that's interesting.
Let me try that again.
Let me see what's going on with that little toy car that it's doing that strange thing.
And you can show that a system that's got that kind of motivation can solve problems that your typical, say, reinforcement learning system can't solve.
And what we're doing is actually comparing children and these curious AIs on the same problems, to see the ways that the children are being curious and how that's related to the ways that the AIs are being curious.
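A bare-bones sketch of the curiosity idea described here, treating the agent's own prediction error as the thing worth seeking out, is below; it is a schematic illustration only, with an assumed toy environment, not the Berkeley group's actual system:

```python
# Schematic curiosity-driven agent: it keeps a forward model of the world and
# treats its own prediction errors as the thing worth seeking out, so
# transitions it cannot yet predict are exactly the ones it goes back to.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 2

predicted_next = np.full((n_states, n_actions), -1)   # -1 means "no prediction yet"
expected_surprise = np.ones((n_states, n_actions))    # optimism: everything looks interesting

def true_step(state, action):
    # Toy environment (assumed): action 0 stays put, action 1 moves forward.
    return state if action == 0 else (state + 1) % n_states

state, visited = 0, {0}
for _ in range(60):
    # Pick the action currently expected to be most surprising (tiny random tie-break).
    action = int(np.argmax(expected_surprise[state] + 1e-6 * rng.random(n_actions)))
    next_state = true_step(state, action)
    surprised = predicted_next[state, action] != next_state
    # The "reward" is the prediction error; once a transition is predicted
    # correctly, it stops being interesting and the agent moves on.
    expected_surprise[state, action] = float(surprised)
    predicted_next[state, action] = next_state
    state = next_state
    visited.add(state)

print("states visited:", sorted(visited))   # curiosity alone pushes coverage of the space
```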
So I think you're absolutely right about the idea that the place where humans are going to turn out to be unique is the great geniuses or the great artists or the great athletes, that they're going to turn out to have some special sauce that the rest of us don't have, and that that's going to be the thing AI can't capture.
I think you're right that that's not really going to be true, that what those people are doing is an extension of the things that every two- and three-year-old is equipped to do.
But I also think that what the two- and three-year-olds are equipped to do is going to turn out to be very different from at least what the current batch of AI is capable of doing.
Yeah.
Well, I don't think anyone is going to argue there.
Well, so how do you think of consciousness in the context of this conversation?
For me, I'll just give you a look.
If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org.
Once you do, you'll get access to all full-length episodes of the Making Sense Podcast, along with other subscriber-only content, including bonus episodes, AMAs, and the conversations I've been having on the Waking Up app.