Sam Harris speaks with Nick Bostrom about ongoing progress in artificial intelligence. They discuss the twin concerns about the failure of alignment and the failure to make progress, why smart people don’t perceive the risk of superintelligent AI, the governance risk, path dependence and "knotty problems," the idea of a solved world, Keynes’s predictions about human productivity, the uncanny valley of utopia, the replacement of human labor and other activities, meaning and purpose, digital isolation and plugging into something like the Matrix, pure hedonism, the asymmetry between pleasure and pain, increasingly subtle distinctions in experience, artificial purpose, altering human values at the level of the brain, ethical changes in the absence of extreme suffering, our cosmic endowment, longtermism, problems with consequentialism, the ethical conundrum of dealing with small probabilities of large outcomes, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe. Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
Just a note to say that if you're hearing this, you're not currently on our subscriber feed, and will only be hearing the first part of this conversation.
In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at SamHarris.org.
There you'll also find our scholarship program, where we offer free accounts to anyone who can't afford one.
We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers.
So if you enjoy what we're doing here, please consider becoming one.
Today I'm speaking with Nick Bostrom.
Nick is a professor at the University of Oxford, where he is the founding director of the Future of Humanity Institute.
He is the author of many books.
Superintelligence is one of them that I've discussed on the podcast, which alerted many of us to the problem of AI alignment.
And his most recent book is Deep Utopia: Life and Meaning in a Solved World.
And that is the topic of today's conversation.
We discuss the twin concerns of alignment failure and also a possible failure to make progress on superintelligence.
The only thing worse than building computers that kill us is a failure to build computers that will help us solve our existential problems as they appear in the future.
We talk about why smart people don't perceive the risk of superintelligent AI.
The ongoing problem of governance, path dependence, and what Nick calls knotty problems.
The idea of a solved world.
John Maynard Keynes' predictions about human productivity.
The uncanny valley issue with the concept of utopia.
The replacement of human labor and other activities.
The problem of meaning and purpose.
Digital isolation and plugging into something like the Matrix.
Pure hedonism, the asymmetry between pleasure and pain, increasingly subtle distinctions in experience, artificial purpose, altering human values at the level of the brain, ethical changes in the absence of extreme suffering, what Nick calls our cosmic endowment, the ethical confusion around longtermism, possible problems with consequentialism, the ethical conundrum of dealing with small probabilities of large outcomes, and other topics.
Anyway, I think Nick has a fascinating mind.
As always, it was a great pleasure to speak with him.
And now I bring you Nick Bostrom.
I am here with Nick Bostrom.
Nick, thanks for joining me again.
Hey, good to be with you.
So, you wrote a very gloomy book about AI that got everyone worried some years ago.
This was Superintelligence, which we spoke about in the past.
But you have now written a book about all that could possibly go right in an AI future, and that book is Deep Utopia: Life and Meaning in a Solved World, which I want to talk about.
It's a fascinating book, but let's build a bridge between the two books that takes us through the present.
Perhaps you can just catch us up.
What are your thoughts about the current state of AI and is there anything that's happened since you published Superintelligence that has surprised you?
Well, a lot has happened.
I think one of the surprising things is just how anthropomorphic current-level AI systems are.
The idea that we have systems that can talk long before we have generally super-intelligent systems.
I mean, we've had these for years already.
That was not obvious 10 years ago.
Otherwise, I think things have unfolded pretty much in accordance with expectation, maybe slightly faster.
Were you surprised?
The most surprising thing from my point of view is that much of our talk about AI safety seemed to presuppose, even explicitly, that there would be a stage where the most advanced AIs would not be connected to the internet, right?
There's this so-called air-gapping of this black box from the internet, and then there'd be a lot of thought around whether it was safe to unleash it into the wild.
It seems that we have blown past that landmark and everything that's being developed is just de facto connected to more or less everything else.
Well, I mean, it's useful to connect it.
And so far, the systems that have been developed don't appear to pose this kind of existential risk that would occasion these more extreme security measures like air-gapping.
Now, whether we will implement those when the time comes for it, I guess, remains to be seen.
Yeah.
I mean, it just seems that the safety ethos doesn't quite include that step at the moment.
I mean, maybe they have yet more advanced systems that they wouldn't dream of connecting to anything, or maybe they'll reach that stage.
But it just seems that, far before we really understand the full capacities of a system, we're building them already connected to much of what we care about.
Yeah, I don't know exactly how it works during the training of the next generation models that are currently in development, whether there is any form of temporary air-gapping until sort of a given capability level can be assessed.
On the one hand, I guess you want to make sure that you also learn the kinds of capabilities that require internet access, the ability to look things up, for example.
But I guess in principle, you could train some of those by having a locally stored copy of the internet.
Right.
And so I think maybe it's something that you only want to implement when you think that the risks are actually high enough to warrant the inconvenience and the limitation of, at least during training and testing phases, doing it in an air-gapped way.
Have your concerns about the risks changed at all?
In particular, the risk of AI alignment, or the failure of AI alignment?
I mean, the macro picture, I think, remains more or less the same.
Obviously, there's a lot more granularity now.
We are 10 years farther down the track, and we can see in much more specificity what the leading-edge models look like, and what the field in general looks like.
But I'm not sure the P(doom) has changed that much.
I think maybe my emphasis has shifted a little bit from alignment failure, the narrowly defined problem of not solving the technical challenge of scalable alignment, to focus a little bit more on the other ways in which we might succumb to an existential catastrophe.
There is the governance challenge, broadly construed, and then the challenge of digital minds with moral status that we might also make a hash of.
And of course, also the failure, the potential risk of failing ever to develop superintelligence, which I think in itself would constitute plausibly an existential catastrophe.
Well, that's interesting.
So it really is a high-wire act that we have to develop it exactly as needed to be perfectly aligned with our well-being in an ongoing way.
And to develop it in an unaligned way would be catastrophic, but to not develop it in the first place could also be catastrophic given the challenges that we face that could only be solved by superintelligence.
Yeah, and we don't really know exactly how hard any of these problems are.
I mean, we've never had to solve them before.
So I think most of the uncertainty in how well things will go is less uncertainty about the degree to which we get our act together and rally, although there's some uncertainty about that, and more uncertainty in just the intrinsic difficulty of these challenges.
So there is a degree of fatalism there.
Like if the problems are easy, we'll probably solve them.
If they are really hard, we'll almost certainly fail.
And now maybe there will be sort of an intermediate difficulty level, in which case, in those scenarios, it might make a big difference to what degree we sort of do our best in the coming months and years.
Well, we'll revisit some of those problems when we talk about your new book, but what do you make of the fact that some very smart people, people who were quite close or even responsible for the fundamental breakthroughs into deep learning that have given us the progress we've seen of late, I'm thinking of someone like Geoffrey Hinton,
What do you make of the fact that certain of these people apparently did not see any problem with alignment or the lack thereof, and they just only came to perceive the risk that you wrote about 10 years ago?
In recent months, maybe it was a year ago that Hinton retired and started making worried noises about the possibility that we could build superintelligence in a way that would be catastrophic.
I gave a TED talk on this topic, very much inspired by your book, in 2016.
Many of us have been worried for a decade about this.
How do you explain the fact that someone like Hinton is just now having these thoughts?
Well, I mean, I'm more inclined to give credit to, I mean, it's particularly rare and difficult to change one's mind or update as one gets older, and particularly if one is very distinguished like Hinton is.
I think that's more the surprising thing, rather than the failure to update earlier.
There are a lot of people who haven't yet sort of really come to appreciate just how immense this leap into superintelligence will be and the risks associated with it.
The thing that I find mystifying about this topic is that there's some extraordinarily smart people who you can't accuse of not understanding the details, right?
It's not an intellectual problem, and yet they don't accept the perception of risk.
In many cases, they just don't even accept that it's a category of risk that can be thought about.
Yet, their counter-arguments are so unpersuasive, insofar as they even have counter-arguments, that it's just some kind of brute fact of a difference of intuition that is very hard to parse.
I'm thinking of somebody like David Deutsch, who you probably know, the physicist at Oxford.
Maybe he's revised his opinion.
It's been a couple of years since I've spoken to him about this, but I somehow doubt it.
The last time we spoke, he was not worried at all about this problem of alignment.
The analogy he drew at the time was that we all have this problem.
We have kids, we have teenagers, and the teenagers are not perfectly aligned with our view of reality, and yet we navigate that fine, and they grow up on the basis of our culture and our instruction and in continuous dialogue with us, and nothing gets too out of whack.
And, you know, everything he says about this that keeps him so sanguine is based explicitly on the claim that there's just simply no way we can be cognitively closed to what a superintelligence ultimately begins to realize and understand and want to actuate.
I mean, it's just a matter of us getting enough memory and enough, you know, processing time to have a dialogue with such a mind.
And then the conversation sort of peters out without there actually being a clear acknowledgement of the analogies to our relationship to lesser species.
When you're in relationship to a species that is much more competent, much more capable, much more intelligent, there's just the obvious possibility of that not working out well, as we have seen in the way we have run roughshod over the interests of every other species on earth.
The skeptics of this problem just seem to think that there's something about the fact that we are inventing this technology that guarantees that this relationship cannot go awry.
And I've yet to encounter a deeper argument for why that is, not merely guaranteed, but in any way even likely.
Well, I mean, let's hope they are right.
Have you ever spoken with Deutsch about this?
I don't believe I have.
I think I've only met him once, and that was, yeah, I don't recall whether this came up or not.
Long time ago.
So, but, I mean, do you share my mystification after colliding with many of these people?
Yeah, well, I guess I'm sort of numb to it by now.
I mean, you just take it for granted that that's the way things are.
Perhaps things seem the same from their perspective, like these people running around with their pants on fire, being very alarmed, and we have this long track record of technological progress being for the best, and sometimes ways to try to control things end up doing more harm than what they are protecting against.
But yeah, it does seem prima facie something that has a lot of potential to go wrong.
If you're introducing the equivalent of a new species that are far more intelligent than Homo sapiens, even if it were a biological species, that already would seem a little bit perilous, or at least worth being cautious about as we did it.
And here, it's maybe even more different and much more sudden.
Actually, I think that variable would sharpen up people's concerns.
If you just said, we're inventing a biological species that is more intelligent and capable than we are and setting it loose, it won't be in a zoo, it'll be out with us.
I think the kind of wetness of that invention would immediately alert people to the danger, or the possibility of danger.
There's something about the fact that it is a clear artifact, a non-biological artifact that we are creating that makes people think this is a tool, this isn't a relationship.
Yeah.
I mean, in fairness, the fact that it is sort of a non-biological artifact also potentially gives us a lot more levers of control in designing it.
Like you can sort of read off every single parameter and modify it and it's software and you have a lot of affordances with software that you couldn't have with like a biological creation.
Yeah.
So if we are lucky, it just means we have, like, this precision control as we engineer it, and no built-in... like, maybe you think a biological species might have sort of inbuilt predatory instincts and all of that, which need not be present in a digital mind.
Right.
But if we're talking about an autonomous intelligence that exceeds our own, there's something about the meanings of those terms, you know, autonomous, intelligent, and more of it than we have, that entails our inability to predict what it will ultimately do.
It can form instrumental goals that we haven't anticipated.
That falls out of the very concept of having an independent intelligence.
It puts you in relationship to another mind.
We can leave the topic of consciousness aside for the moment.
We were simply talking about intelligence, and there's just something about that that is, again, unless we have complete control and can pull the plug at any moment, which becomes harder to think about in the presence of something that is, as stipulated, more powerful and more intelligent than we are.
Again, I'm mystified that people simply don't acknowledge the possibility of the problem.
The argument never goes, oh yes, that's totally possible that we could have this catastrophic failure of relationship to this independent intelligence, but here's why I think it's unlikely.
That's not the argument.
The argument is, your merely worrying about this is a kind of perverse religious faith in something that isn't demonstrable at all.
Yeah, I mean, it's hard not to get religious in one way or another when confronting such immense prospects and the possibility of much greater beings and how that all fits in.
But I guess it's worth reflecting as well on what the alternative is.
So it's not as if the default course for humanity is just this kind of smooth highway with McDonald's stations interspersed every four kilometers.
It does look like things are a bit out of control already from a global perspective.
We're inventing different technologies without much plan, without much coordination.
And maybe we've just mostly been lucky so far that we haven't discovered one that is so destructive that it destroys everything.
Because, I mean, the technologies we have developed, we've put them to use, and if they are destructive, they've been used to cause destruction.
It's just so far, the worst destruction is kind of the destruction of one city at a time.
Yeah, this is your urn of invention argument that we spoke about last time.
Yeah, so there's that where you could focus it, say, on specific technological discoveries.
But in parallel with that, there is also this kind of out-of-control dynamics.
You could call it evolution or like kind of a global geopolitical game theoretic situation that is evolving.
And our sort of information system, the memetic drivers that have changed presumably quite a lot since we've developed the internet and social media.
And that is now driving human minds in various different directions.
And, you know, if we are lucky, they will make us like wiser and nicer, but there is no guarantee that they won't instead create more polarization or addiction or various other kinds of malfunctions in our collective mind.
And so that's kind of the default course, I think.
So yes, I mean, AI will also be dangerous, but the relevant standard is like, how much will it increase the dangers relative to just continuing to do what's currently being done?
Actually, there's a metaphor you use early in the book, the new book, Deep Utopia, which captures this wonderfully.
It's the metaphor of the bucking beast.
I just want to read those relevant sentences because they bring home the nature of this problem, which we tend not to think about in terms that are this vivid.
You say that, quote, Humanity is riding on the back of some chaotic beast of tremendous strength, which is bucking, twisting, charging, kicking, rearing.
The beast does not represent nature.
It represents the dynamics of the emergent behavior of our own civilization: the technology-mediated, culture-inflected, game-theoretic interactions between billions of individuals, groups, and institutions.
No one is in control.
We cling on as best we can for as long as we can, but at any point, perhaps if we poke the juggernaut the wrong way, or for no discernible reason at all, it might toss us into the dust with a quick shrug, or possibly maim or trample us to death.
Right?
That's the end of the quote.
So we have all these variables that we influence in one way or another through culture and through all of our individual actions.
And yet, on some basic level, no one is in control.
And there's just no... I mean, the system is increasingly chaotic, especially given all of our technological progress.
And yes, so into this picture comes the prospect of building more and more intelligent machines.
And again, it's this dual-sided risk.
There's the risk of building them in a way that contributes to the problem, but there's the risk of failing to build them and failing to solve the problems that might only be solvable in the presence of greater intelligence.
Yeah, so that certainly is one dimension of it, I think.
It would be kind of sad if we never even got to roll the dice with superintelligence, because we just destroyed ourselves before.
Even that would be particularly ignominious, it seems.
Yeah, well, maybe this is a place to talk about the concept of path dependence and what you call knotty problems in the book.
What do those two phrases mean?
Well, I mean, path dependence, I guess, means that the result depends sort of on how you got there, and that kind of the opportunities don't supervene just on some current state, but also on the history.
The history might make a difference, not just the current state. But the knotty problems, basically, there's like a class of problems that become automatically easier to solve as you get better technology.
And then there is another class of problems for which that is not necessarily the case and where the solution instead maybe requires improvements in coordination.
So for example, you know, maybe the problem of poverty is getting easier to solve the more efficient, productive technology we have.
You can grow more.
If you have tractors, it's easier to keep everybody fed than if you have more primitive technology.
So the problem of starvation just gets easier to solve over time, as long as we make technological progress.
But, say, the problem of war doesn't necessarily get easier just because we make technological progress.
In fact, in some ways, wars might become more destructive if we make technological progress.
Can you explain the analogy to the actual knots in string?
Well, so the idea is with knots that are just tangled in certain ways, if you just pull hard enough on the ends of that, it kind of straightens out.
But there might be other types of problems where if you kind of advance technologically equivalently to tugging on the ends of the string, you end up with this ineliminable knot.
And the more perfect the technology, the tighter that knot becomes.
So, say you have a kind of totalitarian system to start off with, maybe then the more perfect technology you have, the greater the ability of the dictator to maintain himself in power using advanced surveillance technology, or maybe like anti-aging technology, like whatever you could.
With perfect technology, maybe it becomes a knot that never goes away.
And so, in the ideal, if you want to end up with a kind of unknotted string, you might have to resolve some of these issues before you get technological maturity.
Yeah, which relates to the concept of path dependence.
So let's actually talk about the happy side of this equation, the notion of deep utopia and a solved world.
What do you mean by a solved world?
One characterized by two properties.
So one is it has attained technological maturity, or some good approximation thereof, meaning at least all the technologies we already can see are physically possible have been developed.
But then it has one more feature, which is that political and governance problems have been solved to whatever extent those kinds of problems can be solved.
So imagine some future civilization with really advanced technology, and it's a generally fair world that doesn't wage war, and where people don't oppress one another, and things are at least decent in terms of the political stuff.
So that would be one way of characterizing it.
But another is to think of it as a world in which there's a sense in which either all practical problems have already been solved, or, if there remain any practical problems, they are in any case better worked on by AIs and robots, so that in some sense there might not remain any practical problems for humans to work out.
The world is already solved.
And when we think about this, well, first it's interesting that there's this historical prediction from John Maynard Keynes, which was surprisingly accurate given the fact that it's a hundred years old.
What did Keynes predict?
He thought that productivity would increase four to eightfold over the coming hundred years from when he was writing it.
I think we are about 90 years since he wrote it now, and that seems to be on track.
So that was the first part of his prediction.
And then he thought that the result of this would be that we would have a kind of four-hour working week, a leisure society, that people would work much less because they could, you know, get enough of all that they had before and more, even whilst working much less.
If every hour of work was like eight times as productive, you could like work, you know, four times less and still have two times as much stuff.
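To spell out the arithmetic being gestured at here (a rough sketch of the trade-off, using the figures mentioned in the conversation rather than Keynes's exact numbers):

\[
\text{output} = \text{productivity per hour} \times \text{hours worked}, \qquad 8 \times \tfrac{1}{4} = 2
\]

That is, with eight times the hourly productivity, working a quarter of the hours still yields twice the total output.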
He got that part wrong, but that was... Yeah, so he got that mostly wrong, although we do work less.
Working hours have decreased, so there's more sort of... People take longer to enter the labor market, there's more education, they retire earlier, there's more sort of maternity and paternity leave and slightly shorter working hours, but nowhere near as much as he had anticipated.
I'm surprised, and perhaps this just attests to my lack of economic understanding, but I'm surprised that he wasn't an order of magnitude off or more in his prediction of productivity.
I mean, going out that far.
Given what he had to work with, you know, 90 years ago in terms of looking at the results of the Industrial Revolution, and given all that's happened in the intervening decades, it's surprising that his notion of where we would get as a civilization in terms of productivity was at all close.
Yeah, I mean, so those basic economic growth rates of productivity growth haven't really changed that much.
It's a little bit like Moore's law where it's had, you know, a relatively steady doubling pace for a good long time now.
And so I guess he just extrapolated that and that's how he got his prediction.
So there's this, I think you touch on it in the book, there's this strange distortion of our thinking when we think about the utopia or the prospect of solving all of our problems.
When you think about incremental improvements in our world, all of those seem, almost by definition, good, right?
I mean, we're talking about an improvement, right?
So you're telling me you're going to cure cancer?
Well, that's good.
But once you improve too much, right, if you cure cancer and heart disease and Alzheimer's and then even aging, and now we can live to be 2,000, all of a sudden people's intuitions become a little wobbly, and they feel that you've improved too much, and we almost have a kind of uncanny valley problem for a future of happiness.
It all seems too weird, and in some ways undesirable, and even unethical, right?
I don't know if you know the gerontologist Aubrey de Grey, who has made many arguments about the ending of aging. He ran into this whenever he would propose the idea of solving aging itself as, effectively, an engineering problem.
He was immediately met by opposition of the sort that I just described.
People find it to be unseemly and unethical to want to live forever, to want to live to be a thousand.
But then when he would break it down and he would say, well, okay, but let me just get this straight.
Do you think curing cancer would be a good idea?
And everyone, of course, would say yes.
What about heart disease?
And what about Alzheimer's?
And everyone will sign up a la carte for every one of those things on the menu, even if you present literally everything that constitutes aging from that menu.
They want them all piecemeal, but comprehensively it somehow seems indecent and uncanny.
So, I mean, do you have a sense of utopia being almost a hard thing to sell, were it achievable? That people still have strange ethical intuitions about it?
Yeah, so I don't really try to sell it, but more dive right into that counter-intuitiveness and awkwardness.
The sense of unease that comes if you really try to imagine what would happen if we made all these little steps of progress that everybody would agree are good individually, and then you think through what that would actually produce.
There is a sense in which, at least at first sight, a lot of people would recoil from that.
And so the book doesn't try to sugarcoat that, but let's really dive in and think just how potentially repulsive and counterintuitive that condition of a solved world would be, and not blink or look away, but stare straight into that.
And then, like, analyze what kinds of values you could actually have in this solved world.
And I mean, I think I'm ultimately optimistic that as it were on the other side of that, there is something very worthwhile, but it certainly would be, I think, in many important ways, very different from the current human condition.
And there is a sort of paradox there that we're so busy making all these steps of progress that we celebrate as we make them, but we rarely look at where this ends up, if things go well.
And when we do, we kind of recoil.
So, I mean, you could cure the individual diseases and then you cure aging, but also other little practical things, right?
You know, your black and white television, then you have a color television, then you have a remote control, you don't have to get up, then you have a virtual reality headset, and then you have a little thing that reads your brain, so you don't even have to select what you want to watch, it kind of directly just selects programming based on what maximally stimulates various circuits in your brain, and then, you know, maybe you don't even have that, you just have something that directly stimulates your brain, and then maybe it doesn't stimulate all the brain, but just the pleasure center of your brain.
And as you think through, as it were, these things taken to their optimum degree of refinement, it seems that it's not clear what's left at the end of that process that would still be worth having.
But let's make explicit some of the reasons why people begin to recoil.
I mean, you just took us all the way, essentially, into the Matrix, right?
And then we can talk about, I mean, you're talking about directly stimulating the brain so as to produce non-veridical but otherwise desirable experiences.
So we'll probably end somewhere near there.
On the way to all of that, there are other forms of dislocation.
I mean, just the fact that you're uncoupling work from the need to work in order to survive, right, in a solved world.
So, let's just talk about that first increment of progress where we achieve such productivity that work becomes voluntary, right, or we have to think of our lives as games or as works of art, where what we do each day has no implication for whether or not we have a sufficient economic purchase upon the variables of our own survival.
What's the problem there with unemployment or just purely voluntary employment or not having a culture that necessarily values human work because all that work is better accomplished by intelligent machines?
How do you see that?
Yeah, so we can take it in stages, as it were layers of the onion.
So the outermost and most superficial analysis would say, well, so we get machines that can do more stuff, so they automate some jobs.
But that just means we humans would do the other jobs that haven't been automated.
And we've seen transitions like this in the past.
Like 150 years ago, we were all farmers, basically, and now it's one or two percent.
And so similarly in the future, maybe people, you know, will be Reiki instructors or massage therapists or like other things we haven't even thought of.
And so, yes, there will be some challenges we need to, you know, maybe there will be some unemployment and we need, I don't know, unemployment insurance or retraining of people to... And that's kind of often where the discussion has started and ended so far in terms of considering the implications of this machine intelligence era.
I've noticed that the massage therapists always come out more or less the last people standing in any of these thought experiments.
But that might be a euphemism for some related professions.
I think the problem goes deeper than that, because it's not just the current jobs that could be automated, right?
But the new jobs that we could invent also could be automated, if you really imagine AI that is as fully generally capable as the human brain, and then presumably robot bodies to go along with that.
So all human jobs could be automated, with some exceptions that might be relatively minor, but are worth, I guess, mentioning in passing.
So there are services or products where we care not just about the functional attributes of what we're buying, but also about how it was produced.
So right now, some person might pay a premium for a trinket if it were made by hand, maybe by an indigenous craftsperson, as opposed to in a sweatshop somewhere in Indonesia.
So you might pay more for it, even if the trinket itself is functionally equivalent, because you care about the history and how it was produced.
So similarly, if future consumers have that kind of preference, it might create a niche for human labor, because only humans can make things made by humans.
Or maybe people just prefer to watch human athletes compete rather than like robots, even if the robots could run faster and box harder, et cetera.
So that's like the footnotes to that general claim that everything could be automated.
So that would be a more radical conception then of a leisure society where it's not just that we would retrain workers, but we would stop working altogether.
And in some sense, that's more radical, but it's still not that radical.
We already have various groups that don't work for a living.
We have children, so they are economically completely useless, but nevertheless often have very good lives.
They run around playing and inventing games and learning and having fun.
So even though they are not economically productive, their lives seem to be great.
You could look at retired people.
There, of course, the situation is confounded by health problems that become more common at older ages.
But if you take a retired person who is in perfect physical and mental health, you know, they often have great lives.
So they maybe travel the world, play with their grandkids, watch television, take their dog for a walk in the park, do all kinds of things.
They often have great lives.
And then there are people who are independently wealthy, who don't need to work for a living.
Some of those have great lives.
And so it's just maybe we would all be in more of these categories.
And that would undoubtedly require substantial cultural readjustment, like the whole education system presumably would need to change.
Rather than training kids to become productive workers who receive assignments and hand them in and do as they're told and sit at their desks, you could sort of focus education more on cultivating the art of conversation, appreciation for natural beauty, for literature, hobbies of different kinds, physical wellness.
So that would be a big readjustment.
Well, you've already described many of the impractical degrees that some of us have gotten, right?
I mean, I did my undergraduate degree in philosophy.
I forget what you did.
Did you do philosophy or were you in physics?
I did a bunch of things, yeah, physics and philosophy and AI and stuff.
But I mean, you've described much of the humanities there, so it's funny to think of the humanities as potentially the optimal education, I guess not the humanities as of circa 2024, given what's been happening on college campuses of late.
Some purified version of the humanities, like the Great Books program at St. John's, say, is just the optimal education for a future wherein more or less everyone is independently wealthy.
Yeah.
Or maybe one component of it.
I think there's, like, you know, music appreciation, dimensions of a great life that don't all consist of reading all the books, but definitely, like, that could be an element there.
But I think the problem goes, like, deeper than that.
So we can peel off another layer of the onion, which is that when we consider the affordances of technological maturity, we realize it's not just economic labor that could be automated, but a whole bunch of other activities as well.
So rich people today are often leading very busy lives.
They are, like, having various projects they are pursuing, et cetera, that they couldn't accomplish unless they actually put time and effort into them themselves.
But you can sort of think through of the types of activities that people might fill their leisure time with and think whether those would still make sense at technological maturity.
And I think for many of them, you can sort of cross them out, or at least put a question mark there.
You could still do them, but they would seem a bit pointless, because there would be an easier way to accomplish their aim.
So right now, some people are not just like shopping as an activity...
If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org.
Once you do, you'll get access to all full-length episodes of the Making Sense podcast.
The podcast is available to everyone through our scholarship program, so if you can't afford a subscription, please request a free account on the website.