All Episodes
Sept. 20, 2018 - Making Sense - Sam Harris
49:31
#138 — The Edge of Humanity

Sam Harris speaks with Yuval Noah Harari about his new book “21 Lessons for the 21st Century.” They discuss the importance of meditation for his intellectual life, the primacy of stories, the need to revise our fundamental assumptions about human civilization, the threats to liberal democracy, a world without work, universal basic income, the virtues of nationalism, the implications of AI and automation, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.


Welcome to the Making Sense Podcast.
This is Sam Harris.
Just a note to say that if you're hearing this, you are not currently on our subscriber feed and will only be hearing the first part of this conversation.
In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at SamHarris.org.
There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content.
We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers.
So if you enjoy what we're doing here, please consider becoming one.
Today I am speaking with Yuval Noah Harari.
Yuval has a PhD in History from the University of Oxford, and he lectures at Hebrew University in Jerusalem, where he specializes in world history.
His books have been translated into over 50 languages, and these books are Sapiens, A Brief History of Humankind, Homo Deus, A Brief History of Tomorrow, and his new book, which we discuss today, is 21 Lessons for the 21st Century.
Yuval is rather like me in that he spends a lot of time worrying out loud.
He's also a long-term meditator.
I don't know if there's a connection there.
There was so much to talk about.
There is much more in the new book than we touched on, but we touched on a lot.
We actually started talking about the importance of meditation for his intellectual life.
We talk about the primacy of stories.
The need to revise our fundamental assumptions about human civilization and how it works.
The current threats to liberal democracy.
What a world without work might look like.
Universal basic income.
The virtues of nationalism.
Yuval had some surprising views on that.
The implications of AI and automation.
And several other topics.
So, without further delay, I bring you Yuval Noah Harari.
Thank you.
Thank you.
And thank you to Rivers Cuomo.
That's amazing.
It's my honor. Thank you.
So you've heard this from me before if you've been to an event or on a podcast, so it may get old to hear, but it really doesn't get old to say.
I can't tell you what an honor it is to put a date on the calendar and have you all show up.
I mean, it's just astonishing to me that this happened, so thank you.
And thank you to Yuval for coming out.
It's amazing to collaborate with you.
Thank you.
So, Yuval, you have these books that just steamroll over all other books, and I know because I write books.
So you wrote Sapiens, which is kind of about the deep history of... Yes.
A few fans.
Which is really about the history of humanity.
And then you wrote Homo Deus, which is about our far future.
And now you've written this book, 21 Lessons for the 21st Century, which is about the present. I can't be the only one in your publishing world who notices that now you have nothing left to write about.
So good luck with that career of yours.
So how do you describe what you do?
Because you're a historian.
One thing that you and I have in common is that we have a reckless disregard for the boundaries between disciplines.
You just touch so many things that are not straightforward history.
How do you think about your intellectual career at this point?
Well, my definition of history is that history is not the study of the past, it's the study of change, how things change.
And yes, most of the time you look at change in the past, but in the end, all the people who lived in the past are dead.
And they don't care what you write or say about them.
If the past has anything to teach us, it should be relevant to the future and to the present also.
But you touch biology and the implications of technology.
I follow the questions.
And the questions don't recognize these disciplinary boundaries.
And maybe the most important lesson that I've learned as a historian is that humans are animals, and if you don't take this very seriously into account, you can't understand history.
Of course, I'm not a biologist.
I also know that humans are a very special kind of animal.
If you only know biology, you will not understand things like the rise of Christianity, or the Reformation, or the Second World War.
So, you need to go beyond just the biological basis.
But if you ignore this, you can't really understand anything.
Yeah, the other thing we have in common, which gives you, to my eye, a very unique slant on all the topics you touch, is an interest in meditation, and a sense that our experiences in meditation have changed the way we think about problems in the world, and questions like what it means to live a good life, or even whether the question of the meaning of life is an intelligible one, or a valid one, or one that needs to be asked.
How do you view the influence of the contemplative life on your intellectual pursuits?
I couldn't have written any of my books, either Sapiens or Homo Deus or 21 Lessons, without the experience of meditation, partly because of just what I learned about the human mind.
From observing the mind, but also partly because you need a lot of focus in order to be able to summarize the whole of history into like 400 pages.
And meditation gives you this kind of ability to really focus.
My understanding of at least the meditation that I practice is that the number one question is what is reality?
What is really happening?
To be able to tell the difference between the stories that the mind keeps generating about the world, about myself, about everything, and the actual reality.
And this is what I try to do when I meditate, and this is also what I try to do when I write books, to help me and other people understand what is the difference between fiction and reality.
Yeah, and I want to get at that difference because you use these terms in slightly idiosyncratic ways.
So I think it's possible to be confused about how you use terms like story and fiction.
For instance, just the way you talk about the primacy of fiction, the primacy of story, the way in which our concepts that we think map onto reality don't really quite map onto reality, and yet they're nonetheless important.
That is, in a way that you don't often flag in your writing, a real meditator's eye view of what's happening here.
You're giving people the epiphany that certain things are made up, like the concept of money, the idea that we have dirty paper in our pocket that is worth something.
That is a convention that we've all agreed about.
But it is an idea.
It only works because we agree that it works.
But the way you use the words story and fiction rather often seems to denigrate these things a little bit more than I'm tempted to do when I talk about them.
I don't say that there is anything wrong with it.
Stories and fictions are a wonderful thing, especially if you want to get people to cooperate effectively.
You cannot have a global trade network unless you agree on money.
And you cannot have people playing football or baseball or basketball or any other game unless you get them to agree on rules that, quite obviously, we invented.
They did not come from heaven.
They did not come from physics or biology.
We invented them.
And there is nothing wrong with people agreeing, accepting, let's say for 90 minutes, the story of football, the rules of football, that if you score a goal, then this is the goal of the whole game, and so forth.
The problem begins only when people...
forget that this is only a convention, this is only something we invented, and they start confusing it with kind of, this is reality, this is the real thing.
And in football it can lead to people, to hooligans beating up each other or killing people because of this invented game.
And on a higher level, it can lead to, you know, to world wars and genocides in the name of fictional entities like gods and nations and currencies that we've created.
Now, there is nothing wrong with these creations as long as they serve us instead of us serving them.
But wouldn't you acknowledge that there's a distinction between good stories and bad stories?
Yeah, certainly.
The good stories are the ones that really serve us, that help people, that help other sentient beings live a better life.
I mean, it's as simple as that.
I mean, of course, in real life, it's much more complicated to know what will be helpful and what not and so forth.
But a good starting place is just to have this basic ability to tell the difference between fiction and reality, between our creations and what's really out there, especially when, for example, you need to change the story, or a story which was very adapted to one condition is less adapted to a new condition, which is, for example, what I think is happening now with the story of liberal democracy.
It was probably one of the best stories ever created by humanity, and it was very adapted to the conditions of the 20th century, but it is less and less adapted to the new realities of the 21st century.
And in order to kind of reinvent the system, we need to acknowledge that to some extent, it is based on stories we have invented.
Right, but so when you talk about something like human rights being a story or a fiction, that seems like a story or a fiction that shouldn't be on the table to be fundamentally revised, right?
Like that's where people begin to worry that to describe these things as stories or fictions is to suggest tacitly, though I don't think you do this explicitly, that all of this stuff is made up, and therefore it's all sort of on the same level, right?
And yet there's clearly a distinction you make in your book between dogmatism and the other efforts we make to justify our stories, right?
There are stories that are dogmatically asserted, and religion has more than its fair share of these, but there are political dogmas, there are tribal dogmas of all kinds, you know, nationalism can be anchored to dogma.
And the mode of asserting a dogma is to be doing so without feeling responsible to counter-arguments and demands for evidence and reasons why, whereas with something like human rights, we can tell an additional story about why we value this convention, right?
It doesn't have to be a magical story.
It doesn't have to be that we were all imbued by our creator with these things, but we can talk for a long time without saying it's just so to justify that convention.
Yeah, I mean, human rights is a particularly problematic and also interesting case.
First of all, because it's our story.
I mean, we are very happy with you discrediting the stories of all kinds of religious fundamentalists and all kinds of tribes somewhere and ancient people, but not our story.
Don't touch that.
It depends what you mean by we.
So I guess we, most of the people, I don't see anybody here.
It could be just empty chairs and recordings of laughter.
But I assume that the people here, most of them, this is our story.
The second thing is that we live in a moment when liberal democracy is under a severe attack.
And this was not so when I wrote Sapiens.
I felt much freer writing these things back in 2011, 2012.
And now it's much more problematic.
And yes, I find myself, one of the difficulties of living right now, as an intellectual, as a thinker, is that I'm kind of torn apart
by the imperative to explore the truth, to follow the truth wherever it leads me, and the political realities of the present moment, and the need to engage in very important political battles.
And this is one of the costs, I think, of what is happening now in the world, that it restricts our ability, our freedom, to truly go deep and explore the foundations of our system.
And I still feel the importance of doing it, of questioning even the foundations of liberal democracy and of human rights, simply because I think that, as we have defined them since the 18th century, they are not going to survive the tests of the 21st century.
And it's extremely unfortunate that we have to engage in this two-front battle, that at the same moment we have to defend these ideas from people who look at them from the perspective of nostalgic fantasies, who want to go back to before the 18th century.
And at the same time, we have to also go forward and think what it means, what the new scientific discoveries and technological developments of the 21st century really mean to the core ideas.
What do human rights mean when you are starting to have superhumans?
Do superhumans have superhuman rights?
What does the right to freedom mean when we have now technologies that simply undermine the very concept of freedom?
When we created this whole system, not we, somebody, back in the 18th and 19th century, we gave ourselves all kinds of philosophical discounts, of not really going deeply enough into some of the key questions, like, what do humans really need?
And we settled for answers like, just follow your heart.
Yeah.
And this was good enough.
This is Joseph Campbell.
I blame Joseph Campbell.
Follow your bliss.
No, but follow your heart.
The voter knows best.
The customer is always right.
Beauty is in the eyes of the beholder.
All these slogans, they were kind of covering up for not engaging more deeply with the question of what is really human freedom and what do humans really need.
And for the last 200 years it was good enough.
But now, to just follow your heart is becoming extremely dangerous and problematic when there are corporations and organizations and governments out there that, for the first time in history, can hack your heart.
And your heart might be now a government agent.
And you don't even know it.
So telling people in 2018, just follow your heart, is much, much more dangerous advice than in 1776.
Yeah.
So let's drill down on that circumstance.
So we have this claim that liberal democracy is, one, under threat, and two, might not even be worth maintaining as we currently conceive it, given the technological changes that are upon us or will be upon us.
It is worth maintaining, it's just becoming more and more difficult.
Presumably there are things about liberal democracy that are serious bugs and not features, in light of the fact that, as you say, it's all a matter of putting everything to a vote, and
we are all part of this massive psychological experiment where we're gaming ourselves with algorithms written by some people in this room, to not only confuse us with respect to what's in our best interest, but the very tool we would use to decide what's worth wanting is being hijacked.
It's one thing to be wrong about how to meet your goals.
It's another thing to have the wrong goals and not even know that.
It's hard to know where ground zero is for cognition and emotion if all of this is susceptible to outside influence, which ultimately we need to embrace, because there is a possibility of influencing ourselves in ways that open vistas of well-being and peaceful cooperation that we can't currently imagine, right?
Or we can't see how to get to.
So it's not like we actually want to go back to when there was no, quote, hacking of the human mind.
Every conversation is an attempted hack of somebody else's mind, right?
So we're just getting, it's getting more subtle now.
Yeah, you know, throughout history, other people and governments and churches and so forth, they all the time tried to hack you and to influence you and to manipulate you.
They just weren't very good at it, because humans are just so incredibly complicated.
And therefore, for most of history, this idea that I have an inner arena which is completely free from external manipulation.
Nobody out there can really understand what's happening within me.
How special you are.
And how special I am and what I really feel and how I really think and all that.
It was largely true.
And therefore, the belief in the autonomous self and in free will and so forth, it made practical sense.
Even if it wasn't true on the level of ultimate reality, on a practical level, it was good enough.
But however complicated the human entity is, we are now reaching a point when somebody out there can really hack it.
Now, they won't...
It can never be done perfectly.
We are so complicated, I'm under no illusion that any corporation or government or organization can completely understand me.
This is impossible.
But the yardstick or the threshold, the critical threshold, is not perfect understanding.
The threshold is just better than me.
The key inflection point in the history of humanity is the moment when an external system can reliably, on a large scale, understand people better than they understand themselves.
And this is not an impossible mission, because so many people don't really understand themselves very well.
No.
So just ask my wife.
It's the same with the whole idea of shifting authority from humans to algorithms.
So I trust the algorithm to recommend TV shows for me, and I trust the algorithm to tell me how to drive from Mountain View to this place this evening.
And eventually, I trust the algorithm to tell me what to study, and where to work, and whom to date, and whom to marry, and who to vote for.
And then people say, no, no, no, no.
That won't happen, because there will be all kinds of mistakes and glitches and bugs, and the algorithm will never know everything, and it can't do it.
And if the yardstick is that, to trust the algorithm, to give authority to the algorithm, it needs to make perfect decisions, then yes, it will never happen.
But that's not the yardstick.
The algorithm just needs to make better decisions than me.
about what to study and where to live and so forth.
And this is not so very difficult, because as humans, we often tend to make terrible mistakes, even in the most important decisions in life.
Yeah.
Yeah.
I promise this will be uplifting at some point.
So, let's linger on the problem of the precariousness of liberal democracy, and there's so many aspects to this.
Maybe just to add one thing to this precariousness, the idea that systems have to change, again, as a historian, this is obvious.
I mean, you couldn't really have a functioning liberal democracy in the Middle Ages because you didn't have the necessary technology.
Liberal democracy is not this eternal ideal that can be realized anytime, anyplace.
Take the Roman Empire in the 3rd century.
Take the Kingdom of France in the 12th century.
Let's have a liberal democracy there.
No, you don't have the technology, you don't have the infrastructure, you don't have what it takes.
It takes communication, it takes education, it takes a lot of things that you just don't have.
And it's not just a bug of liberal democracy.
It's true of any socio-economic or political system.
You could not build a communist regime in 16th century Russia.
I mean, you can't have communism without trains and electricity and radio and so forth, because in order to make all the decisions centrally, if the slogan is that each one works according to their ability and gets according to their need, the key problem there is really a problem of data processing.
How do I know?
what everybody is producing, how do I know what everybody needs, and how do I shift the resources, taking wheat from here and sending it there?
In 16th century Russia, when you don't have trains, when you don't have radio, you just can't do it.
So as technology changes, it's almost inevitable that the socioeconomic and political systems will change.
So we can't just hold on.
No, this must remain as it is.
The question is, how do we make sure that the changes are for the better and not for the worse?
Well, by that yardstick, now might be the moment to try communism in earnest.
We can do it now, right?
So you can all tweet that Yuval Noah Harari is in favor of communism.
I didn't say anything.
I mean, we had a moment in the sun that seemed, however delusionally, to be kind of outside of history.
You know, it's like the first moment in my life where I realized I was living in history was September 11, 2001.
But before that, it just seemed like People could write books with titles like The End of History, and we sort of knew how this was going to pan out, it seemed.
Liberal values were going to dominate the character of a global civilization, ultimately.
We were going to fuse our horizons with people of however disparate background.
Someone in a village in Ethiopia was eventually going to get some version of the democratic liberal notion of human rights and the primacy of rationality and the utility of science.
So religious fundamentalism was going to be held back and eventually pushed all the way back and irrational economic dogmas that had proved that they're merely harmful would be pushed back and we would find an increasingly orderly and amicable collaboration among more and more people.
And we would get to a place where war between nation states would be less and less likely, to the point where, by analogy, a war between states internal to a country, like the United States, a war between Texas and Oklahoma, just wouldn't make sense, right?
How is that possibly going to come about?
Wait and see.
Yeah, exactly.
But now we seem to be in a moment where much of what I just said we were taking for granted can't be taken for granted.
There's a rise of populism, there's a xenophobic strand to our politics that is just immensely popular, both in the US and in Western Europe.
And this anachronistic, nativist reaction, as you spell out in your most recent book, is being kindled by a totally understandable anxiety around technological change.
It's not the only source of xenophobia and populism, but there are many people who are sensing the prospect of their own irrelevance, given the dawn of this new technological age.
What are you most concerned about in this present context?
I think irrelevance is going to be a very big problem.
What already fuels much of what we see today with the rise of populism is the fear, and the justified fear, of irrelevance.
If in the 20th century the big struggle was against exploitation, then in the 21st century, for a lot of people around the world, the big struggle is likely to be against irrelevance, and this is a much, much more difficult struggle.
A century ago, you felt, at least if you were the common person, that there were always these elites that exploit me.
Now you increasingly feel, as a common person, that there are all these elites that just don't need me.
And that's much worse.
On many levels, both psychologically and politically, it's much worse to be irrelevant than to be exploited.
Spell that out.
Why is it worse?
First of all, because you're completely expendable.
If a century ago you mount a revolution against exploitation, then you know that when worse comes to worst, they can't shoot all of us.
Because they need us.
Who's going to work in the factories?
Who's going to serve in the armies if they get rid of us?
That's a motivational poster I'm going to get printed up.
I'm not sure what the graphic is, but they can't shoot all of us.
If you're irrelevant, that's not the case.
You're totally expendable.
And again, our vision of the future is often colored by the recent past.
The 19th and 20th century were the age of the masses, where the masses ruled, and even authoritarian regimes needed the masses.
So you had these mass political movements, like Nazism and like Communism, and even somebody like Hitler or like Stalin, they invested a lot of resources in building schools and hospitals and having vaccinations for children and sewage systems and teaching people to read and write.
Not because Hitler and Stalin were such nice guys, but because they knew perfectly well that if they wanted, for example, Germany to be a strong nation with a strong army and a strong economy, they needed millions of people, common people, to serve as soldiers in the army and as workers in the factories and in the offices.
So some people could be expendable and could be scapegoats, like the Jews, but on the whole, you couldn't do it to everybody.
You needed them.
But in the 21st century, there is a serious danger that more and more people will become irrelevant and therefore also expendable.
We already see it happening in the armies.
Whereas the leading armies of the 20th century relied on recruiting millions of common people to serve as common soldiers, today the most advanced armies rely on much smaller numbers of highly professional soldiers and, increasingly, on sophisticated and autonomous technology.
If the same thing happens in the civilian economy, then we might see a similar split in civilian society, where you have a relatively small, very capable professional elite relying on very sophisticated technology, and most people, just as they are already militarily irrelevant today, could become economically and politically irrelevant.
Now, that sounds like a real risk we're running, but the normal intuitions about what is scary about that don't hold up, given the right construal and expectations about human well-being.
I mean, so it's like we know what people are capable of doing when they're irrelevant, because aristocrats have done that for centuries.
I mean, there are people who have not had to work in every period of human history, and they had a fine old time, you know, shooting pheasant and inventing weird board games.
And then if you add to that some more sophisticated way of finding well-being, so if we taught people stoic philosophy and how to meditate and good sports, it's nowhere written that life is only meaningful if you are committed to something you would only do because someone's paying you to do it.
Definitely.
I mean, there is a worst-case and a best-case scenario.
In the best-case scenario, people are relieved of all the difficult, boring jobs that nobody really wants to do, but you do it because you need the money.
And you're relieved of that, and the enormous profits of the automation revolution are shared between everybody, and you can spend your time, your leisure time, on exploring yourself, developing yourself, doing art or meditating or playing sports or developing communities.
They're wonderful scenarios.
There are also some terrible scenarios that can be realized.
I mean, I don't think there is anything inevitable.
I mean, the technology, the technological revolution, which is just beginning right now, it can go in completely different directions.
Again, if you look back at the 20th century, then you see that with the same technology of trains and electricity and radio, you can build a communist dictatorship or a fascist regime or a liberal democracy.
The trains don't care.
They don't tell you what to do with them, and they can be used for anything you can use them for.
They don't object.
And it's the same with AI and biotechnology and all the current technological inventions.
We can use them to build, really, paradise or hell.
The one thing that is certain is that we are going to become far more powerful than ever before, far more powerful than we are now.
We are really going to acquire divine abilities of creation, in some sense even greater abilities than what was traditionally ascribed to most gods, from Zeus to Yahweh.
If you look, for instance, at the creation story in the Bible, the only things that Yahweh managed to create are organic entities.
And we are now on the verge of creating the first inorganic entities after four billion years of evolution.
So, in this sense, we are even on the verge of outperforming the biblical God in creation.
And we can do so many different things with that.
Some of them can be extremely good.
Some of them can be extremely bad.
This is why it's so important to have these kinds of conversations, because this is maybe the most important question that we are facing.
What to do with these powers?
Yeah.
What norms or stories or conventions or fictions, concepts, ideas, do you think stand in the way of us taking the right path here?
We've sort of alluded to it without naming it.
Let's say we could all agree that universal basic income was the near-term remedy for some explosion of automation and irrelevance.
You look skeptical about that.
Yeah, I have two difficulties with universal basic income, which is universal and basic.
Income is fine, but universal and basic, they are ill-defined.
Most people, when they speak about universal basic income, they actually have in mind national basic income.
They think in terms of, OK, we'll tax Google and Facebook in California and use that to pay unemployment benefits or to give free education to unemployed coal miners in Pennsylvania and unemployed taxi drivers in New York.
The real problem is not going to be in New York.
The real problem, the greatest problem, is going to be in Mexico, in Honduras, in Bangladesh.
And I don't see an American government taxing corporations in California and sending the money to Bangladesh to pay unemployment benefits there.
And this is really the automation revolution.
They're clapping to stop us from paying.
Those are the libertarians in the audience.
We've built, over the last few generations, a global economy and a global trade network, and the automation revolution is likely to unravel the global trade network and hit the weakest links the hardest.
So you will have enormous new wealth, enormous new wealth, created here in San Francisco and Silicon Valley, but you can have the economies of entire countries just collapse completely.
Because what they know how to do, nobody needs that anymore.
And we need a global solution for this.
So universal, if by universal you mean global, taking money from California and sending it to Bangladesh, then yes, this can work.
But if you mean national, it's not a real answer.
And the second problem is with basic.
How do you define what are the basic needs of human beings?
Now, in a scenario in which a significant proportion of people no longer have any jobs, and they depend on this universal basic income or universal basic services, whatever they get, they can't go beyond that.
This is the only thing they're going to get.
Then who defines what is their basic needs?
What is basic education?
Is it just literacy, or also coding, or everything up to PhD, or playing the violin?
Who decides?
And what is basic healthcare?
Is it just, I mean, if you're looking 50 years to the future, and you see genetic engineering of your children, and you see all kinds of treatments to extend life, is this the monopoly of a tiny elite?
Or is this part of the universal basic package?
And who decides?
So it's a first step.
The discussion we have now about universal basic income is an important first step.
But we need to go much more deeply into understanding what we actually mean by universal and by basic.
Right.
Well, so let's imagine that we begin to extend the circle, coincident with this rise in affluence.
And because on some level, if the technology is developed correctly, we're talking about pulling wealth out of the ether, right?
So with automation and artificial intelligence, the pie is getting bigger.
And then the question is how generously or wisely we will share it with the people who are becoming irrelevant because we don't need them for their labor anymore.
Let's just say we get better at that than we currently are.
But you can imagine that we'll be fast to realize that we need to take care of the people in our neighborhood, in San Francisco, and we will be slower to realize we need to take care of the people in Somalia.
But maybe these lessons will be hard-won.
We'll realize that if we don't take care of the people in Somalia, a refugee crisis unlike any we've ever seen will hit us in six months, right?
So there'll be some completely self-serving reason why we need to eradicate famine or some other largely economic problem elsewhere.
But presumably, we can be made to care more and more about everyone.
Again, if only out of self-interest.
What are the primary impediments to our doing that?
Human nature?
It is possible, it's just very difficult.
I think we need for a number of reasons to develop global identities, a global loyalty, a loyalty to the whole of humankind and to the whole of planet Earth.
So this is a story that becomes so captivating that it supersedes the other stories, the ones that say Team America.
Not abolishes them.
I don't think we need to abolish all nations and cultures and languages and just become this homogeneous gray goo all over the planet.
No, you can have several identities and loyalties at the same time.
People already do it now.
They had it throughout history.
I can be loyal to my family, to my neighborhood, to my profession, to my city, and to my nation at the same time.
And there are conflicts, say, between my loyalty to my business and my loyalty to my family.
So I have to think hard.
Sometimes I prefer the interests of the family.
Sometimes I prefer the interests of the business.
So, you know, that's life.
We have these difficulties in life.
It's not always easy.
So I'm not saying let's abolish all other identities.
And from now on, we are just citizens of the world.
But we can add this kind of layer of loyalty to the previous layers.
People have been talking about this for thousands of years, but now it really becomes a necessity, because we are now facing three global problems, the most important problems of humankind, and it should be obvious to everybody that they can only be solved on a global level, through global cooperation.
These are nuclear war, climate change, and technological disruption.
It would be obvious to anybody that you can't solve climate change on a national level.
You can't build a wall against rising temperatures or rising sea levels.
No country, even the United States or China, no country is ecologically independent.
There are no longer independent countries in the world, if you look at it from an ecological perspective.
Right.
Similarly, when it comes to technological disruptions, the potential dangers of artificial intelligence and biotechnology should be obvious to everybody.
You cannot regulate artificial intelligence on a national level.
If there is some technological development you're afraid of, like developing autonomous weapon systems, or doing genetic engineering on human babies, then if you want to regulate it, you need cooperation with other countries, because, like the ecology, science and technology are global.
They don't belong to any one country or any one government.
So if, for example, the United States bans genetic engineering on human beings, it won't prevent the Chinese or the Koreans or the Russians from doing it.
And then a few years down the line, if the Chinese are starting to produce superhumans by the thousands, the Americans wouldn't like to stay behind, so they will break their own ban.
The only way to prevent a very dangerous arms race in the fields of AI and biotechnology is through global cooperation.
Now, it's going to be very difficult, but I don't think it's impossible.
I actually gain a lot of hope from seeing the strength of nationalism.
Okay, so that's totally counterintuitive, because given everything you just said, there's only one noun that solves the problem, which is world government on some level.
We don't need a single emperor or government.
You can have good cooperation even without a single emperor.
Then we need some other tools by which to cooperate, because we live in a world that is politically fragmented into nation states, all of which have their domestic political concerns and their short time horizons.
So you're talking about global problems and long-term problems that can only be solved through global cooperation and long-term thinking.
Yeah.
And we have political systems that are insular and focused on time horizons that don't exceed four or, in the best case, six years.
Yeah.
And then we have the occasional semi-benevolent dictatorship that can play the game slightly differently.
So what is the solution if not just a fusing of political apparatus at some point in the future?
No, we certainly need to go beyond the national level to a level where we have real trust between different countries, of the kind you still see, for example, in the European Union.
If you take the example of having a ban on developing autonomous weapon systems, so if the Chinese and the Americans today try to sign an agreement banning killer robots, the big problem there is trust.
How do you really trust the other side to live up to the agreement?
AI is in this sense much worse than nuclear weapons, because it's very difficult to develop nuclear weapons in complete secrecy.
People are going to notice.
But with AI, there are all kinds of things you can do in secret, and the big question is, how can we trust them?
And at present, there is no way that the Chinese and the Americans, for example, are really going to be able to trust one another.
Even if they sign an agreement, every side will say, yes, we are good guys, we don't want to do it, but how can we really be sure that they are not doing it?
So we have to do it first.
But think about, for example, France and Germany.
Despite the terrible history of these two countries, a history much worse than that of the relations between China and the US, if today the Germans come to the French and say, "Trust us, we don't have some secret laboratory in the Bavarian Alps where we are developing killer robots in order to conquer France," the French will believe them.
And the French have good reason to believe them.
They're really trustworthy in this.
And if the French and Germans manage to reach this situation, I think it's not hopeless, also for the Chinese and the Americans.
So what explains that difference?
Because it is a shocking fact of history that you can take time slices that are, you know, 40 or 50 years apart, where you have the attempted rise of the Thousand-Year Reich, where Germany is the least trustworthy nation anyone could conceive of, the most power-hungry, the most militaristic.
You could say the same about Japan at that moment.
And then fast-forward a few decades and we have, I guess it's always vulnerable to some change, but a seemingly truly durable basis of trust.
As a historian, what accomplished that magic, and why is it hard to just reverse-engineer it with respect to Russia or China or any other seeming adversary?
It's just a lot of hard work.
In the case of the Germans, what you could say about them is...
If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org.
Once you do, you'll get access to all full-length episodes of the Making Sense Podcast, along with other subscriber-only content, including bonus episodes and AMAs and the conversations I've been having on the Waking Up app.