Nick Bostrom warns that AI could outpace humanity, either solving global crises or sparking existential risks through recursive self-improvement, with alignment and global coordination as the critical unknowns. His simulation argument—that we are more likely simulated minds than original biological life—clashes with Joe Rogan's skepticism about gas-cloud brains and the lack of direct evidence, though Bostrom insists probabilistic reasoning favors simulated existence. Both agree humanity may be missing "crucial considerations" for long-term survival, such as overlooked AI dangers or ethical blind spots, leaving future decisions fraught with uncertainty despite our current biological tangibility. [Automatically generated summary]
All the problems we have—most of them could be solved if we were smarter, or if we had somebody on our side who was a lot smarter, with better technology and so forth.
Also, I think if we want to imagine some really grand future where humanity or our descendants one day go out and colonize the universe, I think that's likely to happen, if it's going to happen at all, after we have super intelligence that then develops the technology to make that possible.
You could imagine that we harness it, but then use it for bad purposes as we have a lot of other technologies through history.
So I think there are really two challenges we need to meet.
One is to make sure we can align it with human values and then make sure that we together do something better with it than fighting wars or oppressing one another.
Yeah, the idea that we're in a state of evolution—that, just as we look back at ancient hominids, we are eventually going to become something more advanced, or at least more complicated, than we are now.
But what I'm worried about is that biological life itself has so many limitations.
When we look at the evolution of technology—if you look at Moore's Law, or if you just look at new cell phones; they just released a new iPhone yesterday, and they're talking about all these incremental increases in the ability to take photographs, and wide-angle lenses, and night mode, and a new chip that works even faster.
These things—the word evolution is incorrect, but the innovation of technology is so much more rapid than anything we could ever even imagine biologically.
Like, imagine that instead of artificial intelligence in a chip or a computer, we had created a biological life form, but this biological life form was improving radically every year.
The iPhone didn't even exist before 2007—that's when it was invented.
If we had something that was 12 years old, but all of a sudden was infinitely faster and better and smarter and wiser than it was 12 years ago, the newest version of it, version X1, we would start going, whoa, whoa, whoa, hit the brakes on this thing, man.
How many more generations before this thing's way smarter than us?
How many more generations before this thing thinks that human beings are obsolete?
Well, you have people like Tyler Cowen, and even Peter Thiel sometimes, going on about the pace of innovation not really being what it needs to be.
I mean, maybe it was faster in, like, the 1890s, but compared to almost all of human history, it still seems like a period of unprecedented rapid progress right now.
Yeah, so I see it as not something that should be avoided, neither something that we should just be completely gung-ho about, but more like a kind of gate through which we will have to pass at some point.
All paths that are both plausible and lead to really great futures, I think, at some point involve the development of greater-than-human intelligence, machine intelligence.
And so our focus should be on getting our act together as much as we can in whatever period of time we have before that occurs.
Well, I mean, that might involve doing some research into various technical questions, such as how you build these systems so that we actually understand what they are doing and they have the intended impact on the world.
It would also help if we were able to get our act together a little bit on the global political scene—a little bit more peace and love in the world would be good, I think.
Where is the current state of technology now in regards to artificial intelligence and how far away do you think we are from AGI? Well, different people have different views on that.
I think the truth of the matter is that it's very hard to have accurate views about the timelines for these things, which still involve breakthroughs that have yet to happen.
Certainly, I mean, over the last eight or ten years, there has been a lot of excitement with the deep learning revolution.
I mean, it used to be that people thought of AI as this kind of autistic savant, really good at logic and counting and memorizing facts, but...
With no intuition.
And with this deep learning revolution, when you began to do these deep neural networks, you kind of solved perception in some sense.
You have computers that can see, that can hear, and that have visual intuition.
So that has enabled a whole wide suite of applications, which makes it commercially valuable, which then drives a lot of investment in it.
So there's now quite a lot of momentum in machine learning and trying to kind of stay ahead of that.
It's interesting that when we think about artificial intelligence and whatever potential form that it's going to take, if you look at films like 2001, like Hal, like, open the door, Hal, you know?
We think of something that's communicating to us, like a person would, and maybe is a little bit colder and doesn't share our values and has a more pragmatic view of life and death and things.
When we think of intelligence, though, I think intelligence in our mind is almost inextricably connected to all the things that make us human, like emotions and ambition and all these things—like the reason why we innovate.
It's not really clear.
We innovate because we enjoy innovation and because we want to make the world a better place and because we want to fix some problems that we've created and we want to solve some limitations of the human body and the environment that we live in.
But we sort of assume that intelligence that we create will also have some motivations.
Well, there is a fairly large class of possible structures you could do.
If you want to do anything that has any kind of cognitive or intellectual capacity at all, a large class of those would be what we might call agents.
So these would be systems that interact with the world in pursuit of some goal.
And the more sophisticated class of agents can plan ahead a sequence of actions.
Like more primitive agents might just have reflexes.
But the sophisticated agent might have a model of the world where it can kind of think ahead before it starts doing stuff.
It can kind of think, what would I need to do in order to reach this desired state?
And then reason backwards from that.
So I think it's a fairly natural...
It's not the only possible cognitive system you could build, but it's also not some weird, bizarre special case—it's a fairly natural thing to aim for.
If you're able to specify the goal, something you want to achieve, but you don't know how to achieve it, a natural way of trying to go about that is by building this system that has this goal and is an agent and then moves around and tries different things and eventually perhaps learns to solve that task.
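The planning loop Bostrom sketches here—a goal, a world model, and backward reasoning from the desired state—can be illustrated with a minimal toy agent. Everything below (the tiny world model, the state and action names) is an illustrative assumption of mine, not something from the conversation:

```python
from collections import deque

# A sketch of the kind of agent described above: it has a model of the
# world (here, a tiny graph of states and actions) and reasons backwards
# from a desired goal state to find a plan.

WORLD_MODEL = {
    # state: {action: resulting_state}
    "start":     {"pick_up_key": "has_key"},
    "has_key":   {"unlock_door": "door_open"},
    "door_open": {"walk_through": "goal"},
}

def plan_backwards(goal, model):
    """Search from the goal back to the starting state."""
    # Invert the model: resulting_state -> list of (previous_state, action).
    inverted = {}
    for state, actions in model.items():
        for action, result in actions.items():
            inverted.setdefault(result, []).append((state, action))
    # Breadth-first search backwards from the goal.
    frontier = deque([(goal, [])])
    seen = {goal}
    while frontier:
        state, plan = frontier.popleft()
        if state == "start":
            return plan  # actions in forward execution order
        for prev, action in inverted.get(state, []):
            if prev not in seen:
                seen.add(prev)
                frontier.append((prev, [action] + plan))
    return None

print(plan_backwards("goal", WORLD_MODEL))
# -> ['pick_up_key', 'unlock_door', 'walk_through']
```

The agent never tries actions blindly; it asks "what would have to be true just before the goal?" and chains backwards, which is exactly the "reason backwards from the desired state" idea.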
Whether or not we can replicate all the functions of the human brain in the way it functions and mimic it exactly, or whether we could have some sort of superior method that achieves the same results that the human brain does in terms of its ability to calculate and reason and do multiple tasks at the same time.
Yeah, and I also think that maybe once you have a sufficiently high level of this general form of intelligence, then you could use that maybe to emulate or mimic things that we do differently.
The cortex is quite limited, so we rely a lot on earlier neurological structures that we have.
We have to be guided by emotion because we can't just calculate everything out.
And instinct, and if we lost all of that, we would be helpless.
But maybe some system that had a sufficiently high level of this more abstract reasoning capability could maybe use that to substitute for things that weren't built in in the same way that we do.
Well, I mean, I do think that there are these significant risks that will be associated with this transition to the machine intelligence era, including existential risks, threats to the very survival of humanity or what we care about.
One of the things that scares me the most is the idea that if we do create artificial intelligence, then it will improve upon our design and create far more sophisticated versions of itself.
And that it will continue to do that until it's unrecognizable, until it reaches literally a godlike potential.
I mean, I forget what the real numbers were—maybe you could tell us—but someone from some reputable source had calculated the amount of improvement that sentient artificial intelligence would be able to achieve inside of a small window of time.
Like if it was allowed to innovate and then make better versions of itself and those better versions of itself were allowed to innovate and make better versions of itself.
You're talking about not an exponential increase of intelligence but an explosion.
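The distinction between steady exponential growth and an "explosion" can be made concrete with a toy model—my own construction, not a calculation from the conversation. With a fixed improver, capability compounds at a constant rate; if each new version does its own improving, the growth rate itself keeps increasing:

```python
# Toy illustration of the difference between ordinary exponential growth
# and recursive self-improvement. The 10%-per-step rate is an arbitrary
# assumption for illustration.

def fixed_improver(capability, steps, rate=0.1):
    # The original designers keep improving the system at a fixed rate.
    for _ in range(steps):
        capability *= 1 + rate
    return capability

def self_improver(capability, steps, rate=0.1):
    # Each generation's improvement rate scales with its own capability,
    # because the improved system is doing the improving.
    for _ in range(steps):
        capability *= 1 + rate * capability
    return capability

print(fixed_improver(1.0, 20))  # steady exponential: about 6.7x
print(self_improver(1.0, 20))   # explodes far, far past that
```

Same starting point, same nominal rate; the only change is who does the improving, and the second curve leaves the first behind almost immediately.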
So it's hard enough to forecast the pace at which we will make advances in AI. Because we just don't know how hard the problems are that we haven't yet solved.
And, you know, once you get to human level or a little bit above, I mean, who knows?
It could be that there is some level where, to get further, you would need to put in a lot of thinking time to kind of get there.
Now, what is easier to estimate is if you just look at the speed, because that's just a function of the hardware that you're running it on, right?
So there we know that there is a lot of room in principle.
If you look at the physics of computation, and you look at what an optimally arranged physical system optimized for computation would be, that would be many, many orders of magnitude above what we can do now.
And then you could have arbitrarily large systems like that.
So, from that point of view, we know that that could be things that would be like a million times faster than the human brain and with a lot more memory and stuff like that.
And then something, if it did have a million times more power than the human brain, it could create something with a million times more computational power than itself.
It could make better versions.
It could continue to innovate.
Like if we create something and we say, you are, I mean, it is sentient.
It is artificial intelligence.
Now, please go innovate.
Please go follow the same directive and improve upon your design.
Yeah, well, we don't know how long that would take then to get to something.
We already have sort of millions of times more thinking capacity than a human has.
I mean, we have millions of humans.
So if you kind of break it down, you think there's like one milestone when you have maybe an AI that could do what one human can do.
But then that might still be quite a lot of orders of magnitude until it would be equivalent of the whole human species.
And maybe during that time other things happen, maybe we upgrade our own abilities in some way.
So there are some scenarios where it's so hard to get even to one human baseline that we kind of use this massive amount of resources just to barely create a kind of village-idiot-level agent using billions of dollars of compute, right?
So if that's the way we get there, then, I mean, it might take quite a while, because you can't easily scale something that you've already spent billions of dollars building.
Yeah, some people think the whole thing is blown out of proportion, that we're so far away from creating artificial general intelligence that resembles human beings, that it's all just vaporware.
Well, I mean, one would be that I would want to be more precise about just how far away does it have to be in order for us to be rational to ignore it.
It might be that if something is sufficiently important and high stakes, then even if it's not going to happen in the next 5, 10, 20, 30 years, it might still be wise for our pool of 7 billion plus people to have some people actually thinking about this ahead of time.
So some of these disagreements, I guess this is my point, are more apparent than real.
Like, some people say it's going to happen soon, and some other people say, no, it's not going to happen for a long time.
And then, you know, one person means by soon, five years, and another person means by a long time, five years.
And, you know, it's more of different attitudes rather than different specific beliefs.
So I would first want to make sure that there actually is a disagreement.
Now, if there is, if somebody is very confident that it's not going to happen in hundreds and hundreds of years, then I guess I would want to know their reasons for that level of confidence.
What's the evidence they're looking at?
Do they have some ground for being very sure about this?
Certainly, the history of technology prediction is not that great.
You can find a lot of examples where even very eminent technologists and scientists declared that something was not going to happen in our lifetime.
In some cases, it actually already just happened in some other part of the world, or it happened a year or two later.
So I think some epistemic humility with these things would be wise.
I was watching a talk you were giving where you were talking about the growth of innovation, technology, and GDP over the last 100 years. You were talking about the entire history of life on Earth, what a short period of time humans have been here, and what a stunning amount of innovation and change we've enacted on the Earth in just a blink of an eye. And you had the scale of GDP over the course of the last hundred years.
It's crazy, because it's so difficult for us—with our current perspective, just being a person living and going about the day-to-day life that seems so normal—to put it in perspective time-wise and see what an enormous amount of change has taken place in an incredibly short amount of time, relatively speaking.
We think of this as sort of the normal way for things to be.
The idea that the alarm wakes you up in the morning and then you commute in and sit in front of a computer all day and you try not to eat too much.
And that if you sort of imagine that, you know, maybe in 50 years or 100 years or at some point in the future, it's going to be very different.
That's like some radical hypothesis.
But, of course, this quote-unquote normal condition is a huge anomaly any which way you look at it.
I mean, if you look at it on a geological timescale, the human species is very young.
If you look at it historically, you know, for more than 90% of our history we were just hunter-gatherers running around, and then agriculturalists—until the last couple of hundred years, when some parts of the world escaped the Malthusian condition, where you basically only have as much income as you need to be able to produce two children.
And we have the population explosion.
All of this is very, very, very recent.
And in space as well, of course, almost everything is ultra-high vacuum, and we live on the surface of this little special crumb.
And yet we think this is normal and everything else is weird, but I think that's a complete inversion.
And so when you do plot, if you do plot, for example, world GDP, which is a kind of rough measure for the total amount of productive capability that we have, right?
Right.
If you plot it over 10,000 years, what you see is just a flat line and then a vertical line.
And you can't really see any other structure.
It's so extreme, the degree to which humanity's productive capacity has grown.
So if I look at this picture and imagine that this is now the normal, the way it's going to be indefinitely—it just seems prima facie implausible.
It sort of doesn't look like we are in a static period right now.
It looks like we're in the middle of some kind of explosion.
And oddly enough, everyone involved in the explosion—everyone that's innovating, everyone that's creating all this new technology—they're all a part of this momentum that was created before they were even born.
So it does feel normal.
They're just a part of this whole spinning machine and they jump in, they're born, they go to college, next thing you know they have a job and they're contributing to making new technology and then more people jump in and add on to it and there's very little perspective in terms of like the historical significance of this incredible explosion technologically.
When you look at what you're talking about, that gigantic spike.
No one feels it, which is one of the weirdest things about it.
People lived and died and saw absolutely no technological change.
And in fact, you could have many, many generations.
The very idea that there was some trajectory in the material conditions is a relatively new idea.
I mean, people thought of history either as, you know, some kind of descent from a golden age, or some people had a cyclical view.
But it was all in terms of political organization, that would be a great kingdom, and then a wise ruler would rule for a while.
And then, like, a few hundred years later, you know, their great-great-grandchildren would be too greedy, and it would descend into anarchy, and then a few hundred years later it would come back together again.
So it would be all these pieces moving around, but no new pieces really entering.
Or if they did, it was at such a slow rate that you didn't notice.
But over the eons, the wheel slowly turns: somebody makes a slightly better wheel, somebody figures out how to irrigate a lot better.
They breed better crops.
And eventually there is enough that you could have enough of a population, enough brains that then create more ideas at a quick enough rate that you get this industrial revolution.
Like objectively, if you were outside of the human race and you were looking at all these various life forms competing on this planet for resources and for survival, you would look at humanity and you go, well, you know, clearly it's not finished.
So there's going to be another version of it.
It's like, when is this version going to take place?
Is it going to take place over millions and millions of years, like it has historically with biological organisms, or is it going to invent something that takes over from there, and then that's the new thing?
Something that's not based on tissue, something that's not based on cells—it doesn't have the biological limitations that we have, nor does it have all the emotional attachments to things like breeding, social dominance, hierarchies; all those things are of no consequence to it.
It doesn't mean anything, because it's not biological.
Yeah, I mean, I don't think millions of years, I mean, a number of decades or whatever.
But it's interesting that even if we set that aside, we say machine intelligence is possible for some reason.
Let's just play with that.
I still think that would be very rapid change, including biological change.
I mean, we are making great advances in biotech as well, and we'll increasingly be able to control what our own organisms are doing through different means and enhance human capacities through biotechnology.
So even there, it's not going to happen overnight, but over an historically very short period of time, I think you would still see quite profound change just from applying bioscience to change human capacities.
Yeah, one of the technologies or one of the things that's been discussed to sort of mitigate the dangers of artificial intelligence is a potential merge.
Some sort of symbiotic relationship with technology that you hear discussed, like...
I don't know exactly how Elon's neural link works, but it seems like a step in that direction.
There's some sort of a brain implant that interacts with an external device, and all of this increases the bandwidth for available intelligence and knowledge.
I mean, it's good that somebody tries it, you know, but I think it's quite technically hard to improve a normal, healthy human being's, say, cognitive capacity or other capacities by implanting things in them, and get benefits that you couldn't equally well get by having the gadget outside of the body.
So I don't need to have an implant to be able to use Google, right?
Well, hopefully you could do that even with an implant.
And once you start to look into the details—there are sort of these kinds of demos, but then if you actually look at the papers, often you find, well, then there were these side effects, and the person had headaches, or they had some deficit in their speech, you know, or an infection.
Like, it's just, biology is messy.
Yes.
So, maybe it will work better than I expect.
That could be good.
But otherwise, I think that the place where it will first become possible to enhance human biological capacities would be through genetic selection, which is technologically something very near.
Well, so this would just be in the context of, say, in vitro fertilization.
You have usually some half dozen or dozen embryos created during this fertility procedure, which is standardly used.
So rather than just a doctor kind of looking at these embryos and saying, well, that one looks healthy, I'm going to implant that, you could run some genetic test and then use that as a predictor and select the one you think has the most desirable attributes.
And so, I mean, to some extent, we already do this.
There are a lot of testing done for various chromosomal abnormalities that you can already check for.
But our ability to look beyond clear, stark diseases—where this one gene is wrong—to more complex traits is increasing rapidly.
So obviously there are a lot of ethical issues and different views that come into that.
But if we're just talking about what is technologically feasible, I think that already you could do a very limited amount of that today.
And maybe you would get two or three IQ points more in expectation if you selected using current technology based on 10 embryos, let's say.
So very small.
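The arithmetic behind that figure can be sketched with a small Monte Carlo simulation—an illustrative model of mine, not Bostrom's actual calculation. The fraction of IQ variance the predictor captures (1% here) is an assumed parameter chosen to land in the range he mentions, and real embryos would be correlated siblings, which this toy model ignores:

```python
import random

# Rough sketch of the embryo-selection expectation: each embryo's genetic
# IQ deviation is split into a part the genetic predictor can see and a
# part it cannot; we pick the embryo with the best predicted score and
# average its true score over many trials.

def selection_gain(n_embryos=10, var_explained=0.01, trials=200_000):
    sd_iq = 15.0  # population standard deviation of IQ
    total = 0.0
    for _ in range(trials):
        best_pred, best_true = None, None
        for _ in range(n_embryos):
            seen = random.gauss(0, sd_iq * var_explained ** 0.5)
            unseen = random.gauss(0, sd_iq * (1 - var_explained) ** 0.5)
            if best_pred is None or seen > best_pred:
                best_pred, best_true = seen, seen + unseen
        total += best_true
    return total / trials

print(round(selection_gain(), 1))  # typically about 2.3 IQ points
```

The expected maximum of 10 draws is about 1.54 standard deviations of whatever the predictor sees, so even a weak predictor yields a small but real expected gain—and a better predictor scales that gain up, which is the "more selection power" point below.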
But as genomics gets better at deciphering the genetic architecture of complex traits—whether it's intelligence or personality attributes—then you would have more selection power and you could do more.
And then there is a number of other technologies we don't yet have, but which if you did, would then kind of stack with that and enable much more powerful forms of enhancement.
So there, yeah, I don't think there are any major technological hurdles, really, in the way.
Just some small amount of incremental further improvement.
That's when you talk about doing something with genetics and human beings and selecting—selecting for the superior versions.
And then if everybody starts doing that.
The ethical concerns, when you start discussing that, people get very nervous.
Because they start to look at their own genetic defects.
And they go, oh my god, what if I didn't make the cut?
Like, I wouldn't be here.
And you start thinking about all the imperfect people that have actually contributed in some pretty spectacular ways to what our culture is.
And like, well, if everybody has perfect genes, would all these things even take place?
Like, what are we doing, really, if we're bypassing nature and we're choosing to select for the traits and the attributes that we find to be the most positive and attractive?
And you think what would have happened if, say, some earlier age had had this ability to kind of lock in their, you know, their prejudices—if the Victorians had had this, maybe we would all be, whatever, pious and patriotic now or something.
Yeah, we know, like the Nazis.
So, in general, with all of these powerful technologies we are developing, there is I think the ideal course would be that we would first gain a bit more wisdom, and then we would get all of these powerful tools.
But it looks like we're getting the powerful tools before we have really achieved a very high level of wisdom.
Isn't that fascinating, though, when you think about human beings and all the different things we do?
We have very little understanding of the mechanisms behind most of what we use in day-to-day life, yet we just use them. There are so many of us, and so many people understand various parts of all these different things, that together, collectively, we can utilize the intelligence of all these millions of people that have innovated—and we, with no work whatsoever, just go into the Verizon store and pick up the new phone.
I mean, and not just technology, but worldviews and political ideas as well.
It's not as if most people sit down with an empty table, try to think from the basic principles of what would be the ideal configuration of the state or something like that.
You just kind of absorb it and go with it.
You float in the stream of culture.
Yeah.
And it's amazing just how little of that actually, at any point, channels through your conscious attention, where you make some rational or otherwise deliberate decision. Most of it you just get carried along with.
But that again, I mean, if this is what we have to work with, then there's no other way.
There's no other way. And even you and I discussing this—discussing the history of this incredible spike of evolution, or innovation rather, in technology—it just doesn't feel like anything.
It feels normal.
So even though we can intellectualize it, even though we can have this conversation, talk about what an incredible time we're in and how terrifying it is that things are moving at such an incredibly rapid rate.
And no one's putting the brakes on it.
No one's thinking about the potential pros and cons.
I mean, when I got interested in these things in the 90s, it was very much a fringe activity.
There was some internet mailing lists, some people exchanging ideas.
But since then, I mean, there's now a small set of academic research institutes and some other groups that are actually trying to do more systematic thinking about some of these big-picture topics.
I think during the Second World War, they had computers that were useful for doing stuff.
Then before that, they had kind of tabulating machines.
And before that, they had designs for things that, if they had been put together, would have been able to calculate a lot of numbers.
And then before that, they had the abacus. The line from having some external tool like a notepad—with which you can calculate bigger numbers, right, if you can scribble on a piece of paper—to a modern-day supercomputer can be broken down into small steps, and they happened gradually.
Well, in the mid-50s, when people started using the word artificial intelligence, some of these AI researchers at the time were quite optimistic about the timelines.
In fact, there was some summer project where they were going to have a few students or whatever work over the summer, and they thought, oh, maybe we can solve vision over the summer. And now we've kind of solved vision, but that's, like, sixty years later.
It can be hard to know how hard the problem is until you've actually solved it.
But the really interesting thing to me is that even though I can understand why they were wrong about how difficult it is, because how would you know, right, if it's 10 years of work or 100 years of work?
Kind of hard to estimate at the outset.
But what is striking is that even the ones who thought it was 10 years away, they didn't think of what the obvious next step would be after that.
Like, if you actually succeeded at mechanizing all the functions of the human mind.
They couldn't think, well, it's obviously not going to stop there once you get human equivalence.
You're going to get superintelligence.
But it was as if the imagination muscle had so exhausted itself thinking of this radical possibility—that you could have a machine that does everything that a human does—that you couldn't take the next step.
Or for that matter, the immense ethical and social implications.
Even if all you could do was to replicate a human mind in a machine.
If you actually thought you were building that and you were 10 years away, it'd be crazy not to spend a lot of time thinking about how this is going to impact the world.
But that didn't really seem to have occurred much to them at all.
Like, it wasn't just, oh, it could be fun to do, right?
Sure.
And so with the Manhattan Project, obviously, it was during wartime, and they thought maybe Hitler had a program—you could easily see why that would motivate a lot of people.
But even before they actually started the Manhattan Project—the guy who first conceived of the idea that you could make a nuclear explosion, Leo Szilard, was a kind of eccentric physicist who came up with the idea of a chain reaction.
It had been known before that that you could split the atom and a little bit of energy came out. But if you're going to split one atom at a time, you're never going to get anything, because it's too little.
So the idea of a chain reaction was that if you split an atom and it releases two neutrons, then each of those neutrons can split another atom, those release four neutrons between them, and you get an exponential blow-up.
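The arithmetic Szilard saw can be sketched in a few lines—a simplified model assuming each fission's two neutrons each trigger exactly one further fission, so the count doubles every generation:

```python
# Minimal sketch of the chain-reaction arithmetic: the number of
# fissions doubles with each neutron generation.

def fissions_per_generation(generations):
    counts = []
    active_neutrons = 1  # one initial neutron starts the chain
    for _ in range(generations):
        counts.append(active_neutrons)
        active_neutrons *= 2  # each fission frees two neutrons
    return counts

print(fissions_per_generation(10))
# -> [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
```

After roughly 80 doublings you pass the number of atoms in a kilogram of uranium, and since each generation takes a tiny fraction of a second, that is why the release is explosive rather than gradual.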
So he thought of this...
I forget exactly when.
It must have been in the early 30s, probably.
And he was a remarkable person because he didn't just think, oh, this is a fun idea.
I should publish it and get a lot of citations.
But he thought, what would this mean for the world?
Gee, this is...
This could be bad for civilization.
And so he then went to try to persuade some of his colleagues who were also working in nuclear physics not to pursue this, not to publish related ideas—and he had some partial success.
That is the problem in those cases where you would actually prefer the innovation not to happen.
Historically, of course, we now look back and are glad a lot of dissenters could have their way, because a lot of cultures were quite resistant to innovation—they wanted to keep doing things the way they had always been done, whether it's social innovation or technological innovation.
The Chinese were at one point ahead in seafaring, exploring, and then they shut all of that down because the emperor at the time, I guess, didn't like it.
So there are many examples of kind of stasis, but as long as there were a lot of different places, a lot of different countries, a lot of different mavericks, then somebody would always do it.
And then once the others could see that it worked, they could kind of copy it, and things moved forward.
But of course if there is a technology you actually want not to be developed, then this multipolar situation makes it very, very hard to coordinate, to refrain from doing that.
Yeah, this I think is a kind of structural problem in the current human condition that is ultimately responsible for a lot of the existential risks that we will face in this century.
There's this kind of failure of ability to solve global coordination problems.
Yeah, and when you think about people like Oppenheimer and the people behind the Manhattan Project—they were inventing this to deal with this existential threat, this horrific threat from Nazi Germany and the Japanese in World War II,
you know, this idea that this evil empire is going to try to take over the world. And this created the momentum and the motivation to develop this incredible technology that wound up generating a great amount of our electricity and wound up creating enough nuclear weapons to destroy the entire world many times over.
And we're in this strange state now where it was motivated by this horrific moment in history—this evil empire trying to take over the world—and we came up with this incredible technological solution, the ultimate weapon, which we detonated a couple of times on some cities. And now we're in this weird state where, you know—we're how many years later?
80 years later?
And we're not doing it anymore.
We don't drop any bombs on people anymore, but we all have them and we all have them pointed at each other.
Yeah, I mean, war has had a way of focusing minds and stuff.
No, I think that nuclear energy we would have had anyway.
Maybe it would have been developed like five years or ten years later.
Reactors are not that difficult to do.
So I think we could have gotten to all the good uses of nuclear technology that we have today without having to have had kind of the nuclear bomb developed.
If we do eventually come to a time where those things are going to war for us instead of us, like if we get involved in robot wars, our robots versus their robots, and this becomes the next motivation for increased technological innovation to try to deal with superior robots by the Soviet Union or by China, right?
These are more things that could be threats that could push people to some crazy level of technological innovation.
I mean, I think there are other drivers for technological innovation as well that seem plenty strong commercial drivers, let us say, that we wouldn't have to rely on war or the threat of war to kind of stay innovative.
I mean, there has been this effort to try to see if it would be possible to have some kind of ban on lethal autonomous weapons.
There are a few technologies that we have.
There has been a relatively successful ban on chemical and biological weapons, which have by and large been honored and upheld.
There are kind of treaties on nuclear weapons, which has limited proliferation.
Yes, there are now maybe, I don't know, a dozen.
I don't know the exact number.
But it's certainly a lot better than 50 or 100 countries.
Yes.
And some other weapons as well, blinding lasers, landmines, cluster munitions.
So some people think maybe we could do something like this with lethal autonomous weapons, killer bots. Is that really what humanity needs most now, another arms race to develop killer bots?
It seems arguably the answer to that is no.
While a lot of my friends are supportive, I've kind of stood a little bit on the sidelines on that particular campaign, being a little unsure exactly what it is.
I mean, certainly I think it'd be better if we refrained from having some arms race to develop these than not.
But if you start to look in more detail, what precisely is the thing that you're hoping to ban?
So suppose the idea is the autonomous bit: the robot should not be able to make its own firing decision.
Well, if the alternative to that is...
There's some 19-year-old guy sitting in some office building and his job is whenever the screen flashes fire now, he has to press a red button.
And then exactly the same thing happens.
I mean, I'm not sure how much is gained by having that extra step.
Well, you've got to attack this group of surface ships here, and here are the general parameters, and you're not allowed to fire outside these coordinates.
I don't know.
I mean, another question is: it would be better if we had no wars, but if there is going to be a war, maybe it is better if it's robots versus robots. Or if there's going to be bombing, maybe you want the bombs to have high precision rather than low precision, to get fewer civilian casualties.
Or if it proliferates and you have these kind of mosquito-sized killer bots that terrorists have. It doesn't seem like a good thing to have a society where you have a facial recognition thing and then the bot flies out, and you just have a kind of dystopia.
That's us thinking rationally, given the overall view of the human race, that we want peace and everything to be well.
Realistically, if you were someone who is trying to attack someone militarily, you'd want the best possible weapons that give you the best possible advantage.
And that's why we had to develop the atomic bomb first.
It's probably why we'll try to develop the killer autonomous robot first.
It requires everybody to synchronize their actions.
And then you can have successes like we've had with some of these treaties.
Like, we've not had a big arms race in biological weapons or in chemical weapons.
I mean, there have been cheaters, even on the biological weapons program. The Soviet Union had massive efforts there, but there was still probably less use and less development than if there had been no such treaty.
Or just look at the amount of money being wasted every year to maintain these large arsenals so that we can kill one another if one day we decide to do it.
Yeah, and so if you look at the biggest efforts so far to make that happen, so after the First World War, people were really aware of this.
They said, this sucks, like war.
I mean, look at this.
Like a whole generation just ground up by machine guns.
Got to make sure this never happens again.
So they tried to do the League of Nations, but then didn't really invest it with very much power.
And then the second war, second world war happened.
And so then again, just after that, it's fresh in people's memory saying, well, never again.
This is it.
The United Nations and in Europe, the European Union is kind of both designed as ways to try to prevent this.
But again, with kind of maybe in the case of the United Nations, quite limited powers to actually enforce the agreements.
And there's a veto, which makes it hard if it's two of the major powers that are at loggerheads.
So it might be that if there were a third big conflagration, that then people would say, well, this time, you know, we've got to really put some kind of institutional solution in place that has enough enforcement power that we don't try this yet again.
And we were taught in schools about nuclear fallout and stuff.
It was like a very palpable sense that at any given point in time, there could be some miscalculation or crisis or something.
And all the way up to senior statesmen at the time, these concerns were very real and very serious.
And I feel that memory of just how bad it is to live in that kind of hair-trigger nuclear arms race Cold War situation has kind of faded, and now we think, wow, maybe the world didn't blow up, so maybe it wasn't so bad after all.
Well, I think that would be the wrong lesson to learn.
It's a bit like you're playing Russian roulette, and you survive one round and say, well, it isn't so dangerous after all to play Russian roulette.
I think I'm going to have another go.
You've got to realize, well, maybe that was a 10% chance or a 30% chance that the world would blow up during the Cold War and we were lucky, but it doesn't mean we want to have another one.
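The Russian-roulette point can be made concrete with a toy calculation. The 10% and 30% figures are just the hypothetical per-era risks mentioned above, not estimates from the conversation:

```python
# Toy calculation behind the Russian-roulette analogy: surviving one
# risky era tells you very little about how dangerous it was, and
# repeating the gamble compounds the risk. Probabilities are illustrative.
for p_catastrophe in (0.10, 0.30):
    p_survive_once = 1 - p_catastrophe
    p_survive_five = p_survive_once ** 5  # five such eras in a row
    print(f"per-era risk {p_catastrophe:.0%}: "
          f"survive one era {p_survive_once:.0%}, "
          f"survive five eras {p_survive_five:.0%}")
```

Surviving one era at 90% odds is unremarkable; stringing several such eras together quickly becomes close to a coin flip, and at 30% per era the five-era survival chance falls below one in five.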
And then a number of maneuvers are made, and then you find yourself in a kind of situation where there's honor at stake and reputation, and you feel you can't back down, and then another thing happens, and you get into this place where if you even say something kind about the other side, you seem soft, you're a pinko, a lightweight.
And on both sides, on the other side as well, obviously, they're going to have the same internal dynamic.
And each side says bad things about the other.
It makes the other side hate them even more.
And these things are then hard to reverse.
Like, once you find this dynamic happening, it's almost, well, it's not too late, you can still try, but it can be very hard to back out of that.
And so if you can prevent yourself from going down that path to begin with, that's much preferable.
When you see Boston Dynamics and you see those robots, is there something comparable that's being developed either in the Soviet Union or in China or somewhere else in the world where there's similar type robots?
Yeah, well, I mean, I think if it has a gun, it really doesn't matter whether it looks like a dog or if it's just a small flying platform.
I mean, in general with AI and robotics, I think the cooler something looks, usually the less technically impressive it is.
The extreme case of this is these robots that look exactly like a human, maybe shaped like a beautiful woman or something like that.
Do you anticipate, like when you see Ex Machina, do you think that's something that could realistically be implemented in a hundred years or so?
Like we really could have some form of artificial human that's indistinguishable?
Well, I think the action is not going to lie in the robotic part so much as in the brain part.
I think it's the AI part.
And robotics only insofar as it becomes enabled by having, say, much better learning algorithms.
So right now, if you have a robot in any one of these big factories, for the most part it's like a blind, dumb thing that executes a pre-programmed set of motions over and over again.
And if you want to change off the production, you need to get in some engineers to reprogram it.
But with a human, you could kind of show them how to do something once or twice, and then they can do it.
So it will be interesting to see over the next few years whether we see some kind of progress in robotics that enables this kind of imitation learning to work well enough that you could actually start using it.
There are demonstrations already, but not robustly enough that it would be useful and you could replace a lot of these industrial robotics experts with it.
So I think in terms of making things look like human, I think that's more for Hollywood and for press releases than the actual driver of progress.
The same would hold even if it were not a robot, but just a program inside a computer.
But yeah, the idea that you could have something that is strategic and deceptive and so forth.
But then there are other elements of the movie that are, in general, a reason why it's bad to get your map of the future from Hollywood. For example, the idea that it's this one guy, presumably some genius, living out in the middle of nowhere and inventing this whole system. In reality, it's like anything else.
There are hundreds of people programming away on their computers, writing on whiteboards, and sharing ideas with other people across the world.
It doesn't look like a human.
And there would often be some economic reason for doing it in the first place, not just, oh, we have this Promethean attitude that we want to kind of bring... So all of those things don't make for such good plot lines, so they just get removed.
But then I wonder if people actually think of the future in terms of some kind of supervillain and some hero, and it's going to come down to these two people and they're going to wrestle.
And it's going to be very personalized and concrete and localized.
Whereas a lot of things that determine what happens in the world are very spread out and bureaucracies churning away.
The iconic image of aliens from another world is these little gray things with no sexual organs and large heads and black eyes.
This is the iconic thing that we imagine when we think about things from another planet.
I've often wondered if what we think of as life from another planet is exactly that: an artificial creation.
Like, we understand the biological limitations of the body when it comes to traveling through space: dealing with radiation, death, the need for food, things along those lines. So what we would do is create some artificial thing to travel for us, like we've already done on Mars, right?
We have the rover that roams around Mars.
The next step would be an artificial, autonomous, intelligent creature that has no biological limitations like we do in terms of its ability to absorb radiation from space.
And we create one of those little guys just like that with an enormous head.
No sex organs.
Doesn't need sex organs.
And we have this thing pilot these ships that can defy our own physical limitations, in terms of what would happen to us if we had to deal with a million Gs because it's moving at some preposterous rate through space.
When we think of these things coming from another planet, if we think of life on another planet, If they can innovate in a similar fashion the way we do, we would imagine they would create an artificial creature to do all their dirty work.
Like, why would they want to, like, risk their body?
A civilization that is spacefaring in a serious way would have nanotechnology.
So they'd have basically the ability to arbitrarily configure matter in whatever structure they wanted.
They would have like nanoscale probes and things that could shapeshift.
It would not be that there would be this person sitting in a seat behind the steering wheel.
If they wanted to, they could be invisible, I think, like nanoscale things hiding in a rock somewhere, just connecting with an information link up to some planetary-sized computer somewhere.
I think that's the way that space is most likely to get colonized.
It's not going to be like with meat sacks kind of driving spaceships around and having Star Trek adventures.
It's going to be some spherical frontier emanating from whatever the home planet was, moving at some significant fraction of the speed of light and converting everything in its path into infrastructure of whatever type is maximally valuable for that civilization.
Maybe computers and launchers to launch more of these space probes so that the whole wavefront can continue to propagate.
I mean, one of the things you brought up earlier is that if human beings are going to continue and we're going to propagate through the universe, we're going to try to go to other places, try to populate other planets, are we going to do that with just robots?
Or are we going to try to do that biologically?
We're probably going to try to do it biologically.
One of the things you were saying earlier is one of the things that artificial intelligence could possibly do is accelerate our ability to travel to other lands or other planets.
I just think that's not going to lead to anything important until those efforts are made obsolete by some radical new technology wave, probably triggered by machine superintelligence, that then rapidly leads to something approximating technological maturity.
Once innovation happens at digital timescales rather than human timescales, then all these things that you could imagine we're doing, if we had 40,000 years to work on it, we would have space colonies and cures for aging and all of these things, right?
But if that thinking happens in, you know, digital space, then that long future gets telescoped, and I think you fairly quickly reach a condition where you have close to optimal technology.
And then you can colonize the space cost-effectively.
You just need to send out one little probe that then can land on some resource and set up a production facility to make more probes, and then it spreads exponentially everywhere.
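A back-of-envelope sketch shows why one probe is enough. The replication rate, starting count, and cycle count below are illustrative assumptions, not figures from the conversation:

```python
# Sketch of self-replicating probe growth. Assumes each probe, once
# landed, builds two copies per replication cycle and keeps operating;
# travel time and resource limits are ignored. All numbers are illustrative.
def probes_after(cycles: int, copies_per_cycle: int = 2, start: int = 1) -> int:
    """Total probe count after the given number of replication cycles."""
    # Each cycle, every existing probe adds `copies_per_cycle` new ones.
    return start * (1 + copies_per_cycle) ** cycles

# Starting from a single probe, the count passes a billion in 19 cycles.
print(probes_after(19))  # 1_162_261_467
```

The exponential count, not the size of the initial investment, is what does the work: this is why "one little probe" can seed a whole wavefront.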
And then if you want to, you could then, like, after that initial infrastructuring has happened, you could transport biological human beings to other planets if you wanted to.
But it's not really where the action is going to be.
Yeah, so my guess would be after technological maturity, like after superintelligence.
Now, with Mars, it's possible that there would be like a little kind of prototype colonization thing because people are really excited about that.
So you could imagine some little demo projects happening sooner.
But if we're talking about something, say, that would survive long term, even if the Earth disappeared, like some kind of self-sustaining civilization, I think that's going to be very difficult to do until you have super intelligence and then it's going to be trivial.
So you think superintelligence could potentially be what, I mean, one of the applications would be to terraform Mars, to change the atmosphere, to make it sustainable for biological life.
Now, technological maturity is a very radical condition. Maybe there are additional technologies we can't even think of yet, but even from what we already know about physics, we can sort of see possible technologies that we're not yet able to build, but we can see that they would be consistent with physics, that they would be stable structures.
And already that creates a vast space of things you could do.
And so, for example, I think it would be possible at technological maturity to upload human minds into computers, for example.
I mean, I'm kind of a bit cautious with these things.
At the very least, I'd rather think about it for a long time beforehand.
Also, I have attachments.
There are people I care about here and projects and maybe even opportunities to try to make some difference.
If we actually are in this weird time right now, different from all of earlier human history, where nothing much was yet happening, and we're not yet at the point where it's all out of our hands and superintelligence is running things.
If that actually is, if that's true, then that means we right now live in this very weird period where our actions might have cosmological consequences.
If we affect the precise time and way in which the transition to machine superintelligence happens, we would be hugely influential.
And if you have some ambition to try to do some good in the world, then that kind of can be a very exciting prospect as well.
Like, there might be no other better time to exist if your goal is to do good.
So it's an exciting, crazy time where all these changes are taking place really rapidly.
Like, if you were from the future, this might be the place where you would travel to, to experience what it was like to see this immense change take place almost instantaneously.
Like, if you could go back in time to a specific time in history and experience what life was like, to me, I think I'd probably pick ancient Egypt, like, during the days of the pharaohs.
I would imagine, I mean, I've really thought long and hard about the construction methods of ancient Egypt.
I would love to see what it looked like when they were building the pyramids.
How long did it take?
What were they doing?
How did they do it?
We still don't know.
It's all really theoretical.
There's all these ideas of how they constructed it with incredible precision, in terms of the way it's astronomically aligned to certain areas of our solar system and different constellations.
It's amazing.
I would love to have seen how they did that and what was the planning like and how they implemented and how many people did it take and how long did it take because we really don't know.
It's all speculation.
During the burning of the Library of Alexandria, we lost so much information.
We've got hieroglyphs and the physical structures that are still daunting.
We have no idea.
They look at the Great Pyramid of Giza, the huge one with two million-plus stones in it.
They don't even think it's slaves anymore, I don't think.
I think they think it's skilled labor based on their diet.
Based on the diet, the utensils that they found in these camps, these workers' camps...
They think that these were highly skilled craftspeople, that it wasn't necessarily slaves.
They used to think it was slaves, but now, from the bones of the food, they can tell the workers were eating really well. And also, given the level of sophistication involved, this is not something you just get slaves to do.
This seems to be that there was a population of structural engineers, that there was a population of skilled construction people, and that they tried to, you know, utilize all of these great minds that they had back then and put this thing together.
But it's still a mystery.
I think that's the spot that I would go to because I think it would be amazing to see so many different innovative times.
I mean, it would be amazing to be alive during the time of Genghis Khan or to be alive during some of the wars of 1,000, 2,000 years ago just to see what it was like.
The pyramids would be the big one.
But I think if I was in the future, some weird dystopian future where artificial intelligence runs everything and human beings are linked to some sort of neurological implant that connects us all together, and we long for the days of biological independence, we would want to see what it was like when they first started inventing phones.
What was it like when the internet was first opened up for people?
What was it like when someone had someone like you on a podcast and was talking about this really Goldilocks period of great change, where we're still human, but we're worried about privacy.
We're concerned our phones are listening to us.
We're concerned about surveillance states, and people put little stickers over the laptop camera.
We see it coming, but it hasn't quite hit us yet.
We're just seeing the problems that are associated with this increased level of technology in our lives.
I mean, I guess the intuitive way of thinking about it, like what are the chances that just by chance you would happen to be living in the most interesting time in history, being like a celebrity, like whatever, like that's pretty low prior probability.
And so that could just be, I mean, if there's a lottery, somebody's got to have the ticket, right?
Or, yeah, or we are wrong about this whole picture, and there is some very different structure in place, which would make our experiences more typical.
I mean, for the majority of my time, I'm not actively thinking about that.
I'm just living.
Now, I have this weird situation where my work is actually to think about big picture questions.
So it kind of comes in through my work as well.
When you're trying to make sense of our position, our possible future prospects, the levers which we might have available to affect the world, what would be a good and bad way of pulling those levers, then you have to try to put all of these constraints and considerations together.
And in that context, I think it's important.
I think if you are just going about your daily existence, then it might not really be very useful or relevant to constantly try to bring in hypotheses about the nature of our reality and stuff like that.
Because for most of the things you're doing on a day-to-day basis, they work the same, whether it's inside a simulation or in basement-level physical reality.
You still need to get your car keys out.
So in some sense, it kind of factors out and is irrelevant for many practical intents and purposes.
No, I mean, I remember when the simulation argument occurred to me. For as long as I can remember, it had seemed like a possibility, like, oh, it could all be a dream, it could be a simulation. But here there is this specific argument that narrows down the range of possibilities, where the simulation hypothesis is then one of only three options.
What are the three options?
Well, one is that almost all civilizations at our current stage of technological development go extinct before reaching technological maturity, meaning, say, having developed at least all those technologies that we already have good reason to think are physically possible.
So that would include the technology to build extremely large and powerful computers on which you could run detailed computer simulations of conscious individuals.
So that kind of would be a pessimistic, like if almost all civilizations at our stage failed to get there, that's bad news, right?
Because then we'll fail as well, almost certainly.
Option two is that there is a very strong convergence among all technologically mature civilizations in that they all lose interest in creating ancestor simulations or these kinds of detailed computer simulations of conscious people like their historical predecessors or variations.
So maybe they have all of these computers that could do it, but for whatever reason, they all decide not to do it.
Maybe there's an ethical imperative not to do it or some other...
I mean, we don't really know much about these post-human creatures and what they want to do and don't want to do.
Downloading consciousness into a computer, it almost ensures that there's going to be some type of simulation.
If you have the ability to download consciousness into a computer, once it's contained into this computer, what's to stop it from existing there?
As long as there's power and as long as these chips are firing and electricity is being transferred and data is being moved back and forth, you would essentially be in some sort of a simulation.
So we have these kind of virtual reality environments now that are imperfect but improving.
And you could kind of imagine that they get better and better and then you have a perfect virtual reality environment.
But imagine that instead of your brain sitting in a box with big headphones and some glasses on, the brain itself could also be part of the simulation.
But you could include in the simulation, just as you have maybe simulated coffee mugs and cars, etc., you could have simulated brains.
And so...
Here is one assumption coming in from outside the simulation argument, and one can talk about it separately, but it's the idea I call the substrate independence thesis: that you could in principle have conscious experiences implemented on different substrates.
It doesn't have to be carbon atoms, as is the case with the human brain.
It could be silicon atoms.
What creates conscious experiences is some kind of structural feature of the computation that is being performed, rather than the material that is used to underpin it.
So in that case, you could have a simulation with detailed simulations of brains in it, where maybe every neuron and synapse is simulated, and then those brains would be conscious.
Well, no, so the possibility number two is that these post-humans just are not at all interested in doing it.
And not just that some of them don't, but that all these civilizations that reach technological maturity pretty uniformly just don't do that.
But yeah, I've refrained from giving a very precise probability.
Partly because, I mean, if I said some particular number, it would get quoted, and it would create this sense of false precision.
The argument doesn't allow you to derive that the probability is X, Y, or Z. It's just that at least one of these three has to obtain.
So, yeah, so that narrows it down.
Because you might think...
How could we know what the future will be like? You could just make up any story, and we have no evidence for it.
But it seems that there are actually, if you start to think everything through, quite tight constraints on what probabilistically coherent views you could have.
And it's kind of hard even to find one overall hypothesis that fits this and various other considerations that we think we know.
The idea would be that if there is one day the ability to create a simulation, that it would be indiscernible from reality itself.
Say if we're not in a simulation yet.
If this is just biological life, we're just extremely fortunate to be in this Goldilocks period.
But we're working on virtual reality in terms of like Oculus and all these companies are creating these consumer-based virtual reality things that are getting better and better and really kind of interesting.
You've got to imagine that 20 years ago there was nothing like that.
20 years from now, it might be indiscernible.
You might be able to create a virtual reality that's impossible to discern from the reality that we're currently experiencing.
Now, as I said, I think if you simulate the brain also, you have a cheaper overall system than if you have a biological component in the center surrounded by virtual reality gear.
So you could, for a given cost, I think create many more ancestry simulations with simulated brains in them rather than biological brains with VR gear.
So in these scenarios where there would be a lot of simulations, most of them would be the kind where everything is digital.
Because it's just cheaper with mature technology to do it that way.
And aren't there people that have done some strange, impossible to understand calculations that are designed to determine whether or not there's a likelihood of us being involved in a simulation currently?
So there are these attempts to try to figure out the computational requirements that would be required if you wanted to simulate some physical system with perfect precision.
So if we have some human, a brain, a room, let's say, and we wanted to simulate every little part, every atom, every subatomic particle, the whole quantum wave function, what would be the computational load of that?
And would it be possible to build a computer powerful enough that you could actually do this?
Now, I think the way that this misses the point is that it's not necessary to simulate all the details of this environment that you want to create in an ancestry simulation.
You would only have to simulate it insofar as it is perceptible to the observer inside the simulation.
So, if some post-human civilization wanted to create a Joe Rogan doing a podcast simulation, they'd need to simulate...
Joe Rogan's brain, because that's where the experiences happen.
And then whatever parts of the environment that you are able to perceive.
So surface appearances, maybe of the table and walls.
Maybe they would need to simulate me as well, or at least a good enough simulacrum that I could sort of spit out words that would sound like they came from a real human, right?
I don't know.
Now we're getting quite good with this GPT-2, like this kind of AI that just spews out words with...
I don't know whether...
Anyway, but what is happening inside this table right now is completely irrelevant.
You have no way of knowing whether there even are atoms there.
Now, you could take a big electron microscope and look at finer structure, and then you could take an atomic force microscope and see individual atoms even, and you could perform all kinds of measurements.
And it might be important that if you did that you wouldn't see anything weird because physicists do these experiments and they don't see anything weird.
But then you could just fill in those details if and when somebody performs those experiments.
That would be vastly cheaper than continuously running all of this.
And so this is the way a lot of computer games are designed today, that they have a certain rendering distance.
You only actually render the virtual world when the character gets close enough to see it.
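The on-demand-fidelity idea described here is essentially lazy evaluation with caching. A minimal sketch, with purely illustrative names and a toy stand-in for the expensive computation:

```python
# Minimal sketch of "only simulate what is observed", in the spirit of a
# game engine's render distance. Fine-grained detail for a region is
# computed lazily, the first time something looks at it, then cached.
class LazyWorld:
    def __init__(self):
        self._detail = {}  # region name -> computed fine-grained state

    def _compute_detail(self, region: str) -> str:
        # Stand-in for an expensive fine-grained computation (e.g. atoms).
        return f"atomic-level detail for {region}"

    def observe(self, region: str) -> str:
        # Detail is filled in on demand, like if and when someone points
        # a microscope at the table.
        if region not in self._detail:
            self._detail[region] = self._compute_detail(region)
        return self._detail[region]

world = LazyWorld()
print(len(world._detail))   # 0: nothing has been simulated yet
world.observe("tabletop")
print(len(world._detail))   # 1: only the observed region has detail
```

Game engines apply the same trick via render distance and level-of-detail: the expensive computation happens only when, and where, an observer could tell the difference.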
And so I imagine these kind of super-intelligent post-humans doing this.
Obviously, they would have figured that out and a lot of other optimizations.
So in other words, these calculations or experiments, I think, don't really tell against the hypothesis.
I can unfold the argument a little bit more and look at it in a more granular way.
So suppose that the first two options are false.
So some non-trivial fraction of civilizations at our stage do get through.
And some non-trivial fraction of those are still interested.
Then I think you can convincingly show that by using just a small portion of their resources they could create very, very many simulations.
And you can argue for that by comparing, on the one hand, the computational power of systems that we know are physically possible to build: we can't currently build them, but we can see that you could build them with nanotech and planetary-sized resources. And on the other hand, estimates of how much compute power it would take to simulate a human brain.
And you find that a mature civilization would have many, many orders of magnitude more.
So that even if they just used 1% of their compute power of one planet for one minute, they could still run thousands and thousands and thousands of these simulations.
And they might have billions of planets and they might last for billions of years.
So the numbers are quite extreme, it seems.
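Bostrom's orders-of-magnitude claim can be sketched as back-of-the-envelope arithmetic. The figures below are illustrative assumptions only, roughly in the spirit of his 2003 paper, not established measurements: a planetary-scale computer at about 10^42 operations per second, and a human brain simulated in real time at about 10^17 operations per second.

```python
# Back-of-the-envelope sketch of the compute comparison.
# Both figures are illustrative assumptions, not measured values:
planet_ops_per_sec = 1e42   # hypothetical planetary-scale computer
brain_ops_per_sec = 1e17    # hypothetical cost of one brain in real time

# Using 1% of one planet's compute for one minute:
budget_ops = 0.01 * planet_ops_per_sec * 60

# Brain-seconds of simulation that budget buys:
brain_seconds = budget_ops / brain_ops_per_sec

# Expressed as full 80-year human lifetimes:
lifetime_seconds = 80 * 365 * 24 * 3600
lifetimes = brain_seconds / lifetime_seconds
print(f"{lifetimes:.1e} simulated 80-year lifetimes")  # roughly 2.4e+15
```

Even with these numbers off by many orders of magnitude in either direction, one percent of one planet's compute for one minute still buys an enormous number of simulated lives, which is the only feature of the estimate the argument relies on.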
So then what you get is this implication that if the first two options are false, it would follow that there would be many, many more simulated experiences of our kind than there would be original experiences of our kind.
So the idea is that if we continue to innovate, if human beings or intelligent life in the cosmos continues to innovate, that creating a simulation is almost inevitable?
The first option, if human beings do figure out a way to not die and stay innovative and we don't have any sort of natural disasters or man-made disasters, then step two, if we don't decide to not pursue this.
If we continue to pursue all various forms of technological innovation, including simulations, that it becomes inevitable.
If we get past those two first options, it becomes inevitable that we pursue it.
I mean, I don't think it's ridiculous to consider.
I think it might be beyond us, but maybe we would be able to form some abstract conception of what it is.
I mean, in fact, if the path to believing the simulation hypothesis is the simulation argument, then we have a bunch of structure there that gives us some idea.
There would be some advanced civilization that would have developed a lot of technology over time, including compute technology, ability to do virtual reality very well.
We'd imagine probably they would have used that technology for a whole host of other purposes as well.
You wouldn't just get that technology and not be able to create a train or something like that.
They'd probably be super intelligent and have the ability to colonize the universe and do a whole host of other things.
And then for one reason or another, they would have decided to use some of the resources to create simulations.
And inside one of those simulations, perhaps, our experiences would be taking place.
So you could more speculatively fill in more details there.
But I still think that fundamentally our ability to grok this whole thing would be very limited.
And...
There might be other considerations that we are oblivious to.
I mean, if you think about the simulation argument, it's quite recent, right?
So it's less than 20 years old.
So if you think that...
So I suppose it's correct for the sake of argument.
Then up to this point, everybody was missing something like hugely important and fundamental, right?
Really smart people, hundreds of years, like this massive piece right in the center.
But what's the chances that we now have figured out the last big missing piece?
Presumably, there must be some further big, giant realization that is like beyond us currently.
So I think having some...
Yeah, I mean, that looks kind of plausible, but maybe there are further big discoveries or revelations that would kind of maybe not falsify the simulation, but maybe change the interpretation, like do something that is hard to know in advance what that would be.
Well, there are different options there, and there might be many different simulations that are configured differently.
There could be ones that run for a very long time, ones that run for a short period of time, ones that simulate everything and everybody, others that just focus on some particular scene or person.
It's just a vast space of possibilities there.
And which ones of those would be most likely is really hard to say much about because it would depend on the reasons for creating these simulations, like what would the interests of these hypothetical post-humans be.
Have you ever had a conversation with a pragmatic, capable person who really understands what you're saying, but they disagree about even the possibility of a simulation?
Well, I mean, I move in kind of unrepresentative circles.
So I think amongst the folk I interact with a lot, I think a common reaction is that it's plausible and still there is some uncertainty because these things are always hard to figure out.
But we should assign it some probability.
But I'm not saying that would be the typical reaction if you kind of did a Gallup survey or something like that.
I mean, another common thing is, I guess, to misinterpret it in some way or another.
And there are different versions of that.
So one would be this idea that in order for the simulation hypothesis to be true, it has to be possible to simulate everything around us to perfect microscopic detail, which we discussed earlier.
Then some people might not immediately get this idea that the brain itself could be part of the simulation.
So they imagine it would be plugged in with like a big cable, and that if you could just somehow reach behind you, you would find it. So that would be another possible common misconception, I guess.
Then I think a common thing is to conflate the simulation hypothesis with the simulation argument.
The simulation hypothesis is we are in a simulation.
The argument is that one of these three options is true, only one of which is the simulation hypothesis.
That is that whether or not we are in a simulation, people presumably still have dreams and there are other reasons and explanations for why that would happen.
The concept of creativity, how does that play into a simulation?
If during the simulation you're coming up with these unique creative thoughts, are these unique creative thoughts your own or are these unique creative thoughts stimulated by the simulation?
I think it would be potentially as much your own in the simulation as it would be outside the simulation.
I mean, unless the simulators had, for whatever reason, set it up with the view that they just wanted to have, oh, this is Rogan coming up with this particular idea, and had configured the initial conditions in just the right way to achieve that.
Maybe then, when you come up with it, maybe it's less your achievement than the people who set up the initial conditions.
Because the reason I ask that is all ideas, everything that gets created, all innovation, initially comes from some sort of a point of someone figuring something out or coming up with a creative idea.
All of it.
Like everything that you see in the external world, like everything from televisions to automobiles, was an idea.
And then somebody implemented that idea or groups of people implemented the technology involved in that idea and then eventually it came to fruition.
If you're in a simulation, How much of that is being externally introduced into your consciousness by the simulation?
And is it pushing the simulation in a certain direction?
I think the kind of simulation that would make the clearest case for why that would be possible would be one where all the people you perceive are simulated, each with their own brain. Because then you could get the realistic behavior out of a brain if you simulated the whole brain at a sufficient level of detail.
Well, that type of simulation should certainly be possible.
Then it's more of an open question whether it would also be possible to create simulations where there was, say, only one person conscious and the others were just like simulacra.
They acted like humans, but there's nothing inside.
So these would be in philosopher's parlance zombies, that is...
It's like a technical term, but it means when philosophers discuss it, somebody who acts exactly like a human but with no conscious experience.
Now, whether those things are possible or not is an open question.
One is that you'd probably have some probability distribution over all these different kinds of situations that you could be in.
Maybe all of those situations are simulated in different frequencies and stuff.
Different numbers of times, that is.
So there would be some probability distribution there.
That would be the first thought.
That in reality you're always kind of uncertain.
The second would be that even if you were in that kind of simulation, it might still be that behaviorally what you should do is exactly the same as if you were in the other simulation.
So it might not have that much day-to-day implications.
It could also, I guess, if you sort of interpret it in the wrong way, maybe lead you to feel more alienated or something like that.
I don't know.
But I think to a first approximation, the same things that would work well and make a lot of sense to do in physical reality would also be our best bets in a simulated reality.
Like, even if it's a simulation, you must behave in each and every instance as if it's not.
If you know, if you had a test you could take, like a pregnancy test, when you went to the CVS and you pee on a strip and it tells you, guess what, Nick?
This shit isn't real.
You're in a simulation.
100% proven, absolutely positive.
You know from now on, from this moment on, that everything you interact with is some sort of a creation.
It's not real.
But it is real, because you're having the same exact experience as if it was real.
I think there are certain possibilities that look kind of far-fetched if we're not in a simulation that become, like, more realistic if we are.
So one obvious one is, like, that a simulation could be shut off, like if the plug is pulled on the computer where the simulation is running, right?
So we don't think the physical universe, as we normally understand it, can just suddenly pop out of existence.
There's a conservation of energy and momentum and so forth.
But a simulated universe, that seems like something that could happen.
It doesn't mean it is likely to happen, and it doesn't say anything about what time frame, but at least it enters as a possibility where it was not there before.
Other things as well become more maybe similar to various theological possibilities that exist.
Like afterlife and stuff like that.
And in fact, it kind of maybe, through a very different path, leads to some similar destinations as people have arrived at through thinking about theology and stuff.
I mean, it's kind of different.
I think there is no logically necessary connection either way.
But there are some kind of structural parallels, analogs, between the situation of a simulated creature to their simulators and a created entity to their creator, that are interesting, although kind of different.
So that might be kind of comparisons there that you could make that would give you some possible ways of proceeding.
It's interesting how much it has changed just over the last 10 to 15 years. It's interesting how ideas can migrate from some kind of extreme radical fringe, and some decade or two later, they're just kind of almost common sense.
Well, we have a great ability to get used to things.
I mean, this comes back to our discussion about the pace of technological progress.
It seems like the normal way for things to be.
We are very adaptable creatures, right?
You can adjust to almost everything, and we have no kind of external reference point, really, and mostly these judgments are based on what we think other people think.
So if it looks like some high-status individual, Elon Musk or whatever, seems to take the simulation argument seriously, then people think, oh, it's a sensible idea.
And it only takes like one or two or three of those people that are highly regarded and suddenly it becomes normalized.
I mean, it got attention pretty much right off the bat, including public attention. I mean, it was published in an academic journal, Philosophical Quarterly. But yeah, it quickly spread, a lot of it.
And then it's kind of come in waves, like every year or so.
There would be like some new group, either a new generation or some new community, that hears about it for the first time, and it kind of gets a new wave of attention.
But in parallel to these waves, there's also this chronic trend towards it becoming more part of the mainstream conversation and seeming kind of less far out there.
And I think, yeah, that's maybe partly just that if there were some big flaw in the idea, it would have been discovered by now.
So if it's been around for a while, it makes it a little bit more credible.
It might also be slightly assisted by just technological progress.
If you see virtual reality getting better and stuff, it becomes maybe easier to imagine how it could become so good one day that you could create a perfectly flawless virtual reality.
Is option four the possibility that one day we could conceivably create some sort of an amazing simulation, but it hasn't been done yet?
And this is why it's become this topic of conversation: that there's some need for concern, because as you extrapolate technology and you think about where it's going now and where it's headed, there could conceivably be one day where this exists.
Well, so I'd say that would be highly unlikely, in that if the first two are wrong, right, then there are many, many more simulated ones than non-simulated ones over the course of all of history.
- Right, but so then the question is, given that you know that by the end of time there will have been, let's say, just a million simulations and one original history.
- Sure. - And that all of these simulated people and the original history people all have subjectively indistinguishable experiences.
You can't from the inside tell the difference.
then what, given that assumption, would it be rational for you to believe?
Should you think you are one of the exceptional ones?
Or should you think you are one amongst the larger set, the simulated ones?
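The bare arithmetic behind this question is just a ratio. The million-to-one split is the hypothetical number used in the conversation, not an estimate, so the sketch below only illustrates how the indifference reasoning assigns credence.

```python
# If, by the end of time, there are n_sim simulated histories and one
# original, and all observers are subjectively indistinguishable,
# indifference assigns each observer-position equal credence.
n_sim = 1_000_000   # hypothetical figure from the conversation
n_orig = 1

p_simulated = n_sim / (n_sim + n_orig)
p_original = n_orig / (n_sim + n_orig)
print(p_simulated)  # approximately 0.999999
```

On this rule the credence that you are the original falls to about one in a million; the conversation that follows is precisely about whether that indifference step is the right one to take.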
Let's just look in the narrow case of just the Earth.
In the narrow case of just the Earth, if the historical record is accurate, if it's not a simulation, then it seems very reasonable that we're just dealing with incremental increases in technology that are pretty stunning and pretty profound currently, but that we haven't... Well, that's how it looks, right?
Let's say for the sake of argument, because I don't really have an opinion on this, pro or con, up in the air.
But if I was going to argue about pragmatic reality, the practicality of biological existence as a person that has a finite lifespan, you're born, you die, you're here right now, and we're a part of this just long line of humanity that's created all these incredible things that's led up to civilization.
That's led up to this moment right now where you and I are talking into these microphones.
It's being broadcast everywhere.
Why isn't it likely that a simulation hasn't occurred yet?
That we are in the process of innovating and one day could potentially experience a simulation.
But why are you not factoring in the possibility or the probability that that hasn't taken place yet?
Now, why would we assume… Why would a simulation be the most likely scenario when we've experienced, at least we believe we've experienced, all this innovation in our lifetime?
We see it moving towards a certain direction.
Why wouldn't we assume that that hasn't taken place yet?
Yeah, I think to try to argue for the premise that conditional on there being first an initial segment of non-simulated Joe Rogan experiences and then a lot of other segments of simulated ones, that conditional on that being the way the world in totality looks, you should think you're one of the simulated ones.
Well, to argue for that, I think then you need to roll in this piece of probability theory called anthropics, which I alluded to.
And just to pull one little element out of there to kind of create some initial plausibility for this.
If you think in terms of rational betting strategies for this population of Joe Rogan experiences, the ones that would lead to the overall maximal amount of winning would be if you all thought you're probably one of the simulated segments.
If you had the general reasoning rule that in this kind of situation you should think that you're the initial segment of non-simulated Rogan, then the great preponderance of these simulated experiences would lose their bets.
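The betting point can be made concrete with a toy calculation: if every Rogan-experience follows the rule "bet that I am simulated", almost all of them win their bets, while the rule "bet that I am the original" loses almost everywhere. The population size is the hypothetical million from the conversation.

```python
# Toy version of the betting argument: one original segment plus a
# million simulated ones, each following the same betting rule.
n_sim, n_orig = 1_000_000, 1
population = ["original"] * n_orig + ["simulated"] * n_sim

# Rule A: every experience bets "I am simulated".
wins_a = sum(1 for who in population if who == "simulated")

# Rule B: every experience bets "I am the original".
wins_b = sum(1 for who in population if who == "original")

print(wins_a, wins_b)  # 1000000 versus 1
```

The asymmetry is the whole argument in miniature: a rule followed by everyone in the population can be scored by how many of its followers it makes right, and "I am simulated" dominates.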
Well, the two alternatives being that intelligent life goes extinct before they create any sort of simulation or that they agree to not create a simulation.
But what about if they're going to create a simulation?
There has to be a time before the simulation is created.
Why wouldn't you assume that that time is now currently happening when you've got a historical record of all the innovation that's leading up to today?
I mean, there would be one Joe Rogan experience in the real original history, and then, like, maybe a million, let's just say.
In simulated realities later.
But if you think about your actions, which kind of can't distinguish between these different possible locations in space-time where you could be, most of the impact of your decisions will come from impacting all of these million Joe Rogan instances.
You would have weaker probabilistic evidence insofar as you had evidence against the two alternatives.
So, for example, if you got some evidence that suggested it was less likely that all civilizations at our stage go extinct before maturity.
Let's say we get our act together, we eliminate nuclear weapons, we become prudent and...
We check all the asteroids, nothing is on collision course with Earth.
That would kind of tend to lower the probability of the first, right?
Okay.
So that would tend to shift probability over on the remaining alternatives.
Let's suppose that we moved closer ourselves to becoming post-human.
We develop more advanced computers and VR, and we're getting close to this point ourselves, and we still remain really interested in running ancestor simulations.
We think this is what we really want to spend our resources on as soon as we can make it work.
That would move probability over from the second alternative.
It's less likely that there is this strong convergence among all post-human technologically mature civilizations if we ourselves are almost post-human and we still have this interest in creating ancestor simulations.
So that would shove probability over to the remaining alternative.
Take the extreme case of this.
Imagine if we...
A thousand years from now have built our own planetary-sized computer that can run these simulations, and we are just about to switch it on, and it will create the simulation of precisely people like ourselves.
And as we move towards the big button to sort of initiate this, then the probability of the first two hypotheses basically goes to zero, and then we would have to conclude with near certainty that we are ourselves in a simulation as we push this button to create a million simulations.
We should assume that we are ignorant as to which of these different time slices we are, which of these different Joe Rogan experiences is the present one.
We just can't tell from the inside which one it is.
If you could see some objective clock and say that, well, as yet the clock is so early that no simulations have happened, then obviously you could conclude that you're in the original history.
But if we can't see that clock outside the window, if there is no window in the simulation to look out, then it would look the same.
And then I'd say we have no way of telling which of these different instances we are.
But let's not even condition on those other alternatives being wrong.
Let's say that human beings haven't blown themselves up yet.
Let's say that human beings haven't come up with – there is no need to make the decision to not activate the simulation because the simulation hasn't been invented yet.
Isn't that also a possibility?
Isn't it also a possibility that the actual timeline of technological innovation that we all agree on is real, and that we're experiencing this as real, live human beings not in a simulation, and that one day the simulation could potentially take place but has not yet?
Yeah, so in your option, the vast majority of all these experiences that will ever have existed will also be simulated, if I understand your option correctly.
But as I understand it, your option is that if we look at the universe at the end of time and we look back, there will be a lot of simulated versions of you and then one original one.
I'm just thinking maybe we could think of some simpler thought experiment which has nothing to do with simulations and stuff, but...
Imagine if...
So I'm making this up as I go along, so we'll see if it actually works.
You're taken into a room, and then you're awake there for one hour, and then a coin is tossed.
And if it lands heads, then the experiment ends and you exit the room and everything is normal again.
But if it lands tails, then you're given an amnesia drug, and then you're woken up in the room again.
You think you're there for the first time because you don't remember having been there before.
And then this is repeated 10 times.
So we have a world where either there is one hour experience of you in the room or else it's a world with 10 Joe Rogan experiences in the room with an episode of amnesia in between.
But when you're in the room now, you find yourself in this room, you're wondering, hmm...
Is this the first time I'm in this room?
It could be.
But it could also be that I'm later on and I was just given an amnesia drug.
So the question now is, when you wake up in this room, you have to assign probabilities to these different places you could be in time.
And maybe you have to bet or make some decision that depends on where you are.
So...
I guess I could ask you, like, so if you wake up in this room, what do you think the probability should be that the coin, that you're, like, at time one versus at some later time?
Well, what is the probability that I'm actually here versus what is the probability of this highly unlikely scenario that I keep getting drugged over and over again every hour?
Well, we assume that, like, you're certain that the setup is such that there was this mad scientist who had the means to do this and he was going to flip this coin.
So we're assuming that you're sure about that either way.
The only thing you're unsure about is how the coin landed.
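One way to put numbers on the thought experiment is a Monte Carlo sketch under the rule Bostrom gestures at, where every hour in the room counts as one observer-moment. This is one contested way of assigning the probabilities, not the only one.

```python
import random

# Monte Carlo sketch of the amnesia-room experiment.
# Heads: one awakening, then the experiment ends.
# Tails: ten awakenings with amnesia in between.
random.seed(0)
first_awakenings, total_awakenings = 0, 0
for _ in range(100_000):
    awakenings = 1 if random.random() < 0.5 else 10
    first_awakenings += 1        # every run has exactly one first hour
    total_awakenings += awakenings

# Fraction of all awakenings that are the first one.
# Analytically: 1 / (0.5 * 1 + 0.5 * 10) = 1 / 5.5, about 0.18.
print(first_awakenings / total_awakenings)
```

On this awakening-counting rule, a random hour in the room is the first one only about 18 percent of the time, even though the coin is fair; on a run-counting rule the answer would instead be 50 percent, which is exactly the kind of disagreement anthropic reasoning is about.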
Well, if that was a scenario where I knew that there was a possibility of a mad scientist and I could wake up over and over again, that seems like a recipe for insanity.
Well, I'm trying to distill the probability theory part from the wider simulation.
But I guess I could also ask you, if we were to move closer to this point where we ourselves can create simulations, if we survive, we become a multi-planetary, we build planetary-sized computers.
Yeah.
How would your probability in the simulation hypothesis change as we kind of develop?
That's not a way out, in that it would require you to postulate that you are this very unusual and special observer amongst all the observers that will exist.
Since we don't know what time it is now in external reality, and we therefore can't tell from looking at our evidence where we are, in a world where either there is just an original history and then it ends, or there is a world with an original history and then a lot of simulations.
We need to think about how to assign probabilities given each of these two scenarios.
And so then we have a situation that is somewhat analogous to this one with the amnesia room, where you have some number of episodes.
And so the question is, in those types of situations, how do you allocate probability over the different hypotheses about how the world is structured?
And this kind of betting argument is one type of argument that you can adduce to kind of get some grip on that.
And another is by looking at various applications in cosmology and stuff where you have multiverse theories.
Which say the universe is very big, maybe there are many other universes, maybe there are a lot of observers, maybe all possible observers exist out there in different configurations.
How do you drive probabilistic predictions from that?
It seems like whatever you observe would be observed by somebody, so how could you test that kind of theory?
And this same kind of anthropic reasoning that I want to use in the context of the simulation argument also plays a role, I think, in deriving observational predictions from these kinds of cosmological theories, where you need to assume something like: you're most likely a typical observer from amongst the observers that will ever have existed, or so I would suggest.
Now, I should admit, as an asterisk, that this field of anthropic reasoning is tricky and not fully settled yet.
And there are things there that we don't yet fully understand.
But still, the particular application of anthropic reasoning that is relevant for the simulation argument, I think, is one of the relatively less problematic ones.
So that, conditional on there being, by the end of time, a large number of simulated Joe Rogans and only one original one, I think it would seem that most of your probability should be on being one of the simulated ones.
But I'm not sure I have any other ways of making it more vivid or plausible.
The way I see it is that I have taken that into account and it receives the same probability that I'm that initial segment as I would give to any of the other Nick Bostrom segments that all have the same evidence.
See, that's where we differ because I would give much more probability to the fact that we are existing right now in the current state as we experience it in real life, carbon life, no simulation, but that potentially one day there could be a simulation which leads us to look at the possibilities and look at the probabilities that it's already occurred.
All right, so what we think happened is there was a big bang, planets formed, and then some billions of years later, we evolved, and here we are now, right?
Suppose some physicists told you that, well, the universe is very big, and early on in the universe, on very, very rare occasions, there was some big gas cloud.
In an infinite universe, this will happen somewhere, right?
Where, just by chance, there was a kind of Joe Rogan-like brain coming together for a minute and then dissolving in the gas.
And yeah, if you have an infinite universe, it's going to happen somewhere.
But there's got to be many, many fewer Joe Rogan brains in such situations than will exist later on planets, because evolution helps funnel probability into these kinds of organized structures, right?
So, if some physicists told you that, well, this is the structure of our part of space-time: there are a few very, very rare spontaneously materialized brains from gas clouds early in the universe, and then there are the normal Rogans much later.
And there are, of course, many, many more normal ones.
The normal ones happen in one out of every, you know, 10 to the power of 50 planets, whereas the weird ones happen in one out of 10 to the power of 100.
Normal versus weird, how so?
I think the fact that it matches reality is, I think, irrelevant to the point I want to make.
So if this turned out to be the way the world works, a few weird ones happening from gas clouds and then the vast majority are just normal people living on a planet.
Would you similarly say, given that model, that you should think, oh, I might just as well be one of these gas cloud ones?
Because after all, the other ones might not have happened yet.
Anyway, I think that this would be a structurally similar situation where there would be a few exceptional early living versions that would be very small in numbers compared to the later ones.
And if they allow themselves the same kind of reasoning where they would say, well, the other ones may or may not come to exist later on planets.
I have no reason to believe I'm one of the planet living ones.
Then it seems that in this model of the universe, you should think you're one of these early gas cloud ones.
And as I said, I mean, this looks like it probably actually is the world we're living in, in that it looks like it's infinitely big and there would have been a few Joe Rogans spontaneously generated very early from random processes.
They are going to be very few in number compared to ones that have, you know, arisen on planets.
So that by taking the path you want to take with relation to the simulation argument, I wonder if you would not then be committed to thinking that you would be like, in effect, a Boltzmann brain in a gas cloud super early in the universe.
If you believe in science and if you believe in the discoveries that so far people have all currently agreed to, we've agreed that clouds are formed and that planets are created and that all the matter comes from inside of the explosions of a star, and that it takes multiple times for this to coalesce before we can develop carbon-based life forms.
All that stuff science currently agrees on, right?
And then we believe in single-celled organisms, become multi-celled organisms through random mutation and natural selection.
We get evolution, and then we agree that we have...
We've come to a point now where technology has hit this gigantic spike that you described earlier.
So human beings have created all this new innovation.
Why wouldn't we assume that all this is actually taking place right now with no simulation?
Yeah, I mean, the simulation argument is the answer to that, but with a qualification that A, the simulation argument doesn't even purport to prove the simulation hypothesis, because there are these two alternatives.
B, that even if the simulation hypothesis is true, in many versions of it, it would actually be the case that, in the simulation, all of these things have taken place.
And the simulation might go back a long time, and it might be a reality tracking simulation.
Maybe these same things also happened before or outside the simulation.
Well, to me it seems probable only if at least one of the other alternatives is true.
Or, I admit that there is also this general possibility, which is always there, that I'm confused about some big thing, like maybe the simulation argument is wrong in some way.
I'm just looking at the track record of science and philosophy, we find we're sometimes wrong.
So I attach some probability to that.
But if we're working within the parameters of what currently seems to me to be the case, that we would be the first civilization in a universe where there will later be many, many simulations seems unlikely for those exact reasons.
And that if we are the first, it's probably because one of the alternatives is true.
And what else is there that we haven't figured out yet?
If we come back in 50 years, even just with human beings thinking about stuff...
And I think I have this concept of a crucial consideration.
I alluded to it a little bit earlier.
But the idea of some argument or data or insight that, if only we got it, would radically change our mind about our overall scheme of priorities.
Not just change the precise way in which we go about something, but kind of totally reorient ourselves.
An example would be if you are an atheist and you have some big conversion experience and suddenly your life feels very different.
What were you doing before?
You were basically wasting your time and now you found what it's all about.
But there could be sort of slightly smaller versions of this.
I wonder what the chances are that we have discovered all crucial considerations now.
Because it looks like...
At least up until very recently, we hadn't, in that there are these important considerations. Like, if this stuff about AI is true, maybe that's the one most important thing that we should be focusing on, and the rest is kind of frittering away our time as a civilization.
We should be focused on AI alignment.
So we can see that it looks like all earlier ages, up until very recently, were oblivious to at least one crucial consideration, insofar as they wanted to have maximum positive impact on the world.
They just didn't know what the thing was to focus on.
And it also seems kind of unlikely that we just now have found the last one.
That just seems kind of...
Given that we keep discovering these up until quite recently, we are probably missing out on one or more likely several more crucial considerations.
And if that's the case, then it means that we are fundamentally in the dark.
We are basically clueless.
We might try to improve the world, but we are overlooking maybe several factors, each one of which would make us totally change our mind about how to go about this.
And so it's less of a problem, I think, if your goal is just to lead your normal life and be happy and have a happy family. Because there we have a lot more evidence, and it doesn't seem to keep changing every few years.
Like we still know, yeah, have good relationships, you know, don't ruin your body, don't jump in front of trains, like these are tried and tested, right?
But if your goal is to somehow steer humanity's future in such a way that you maximize expected utility, there it seems our best guesses keep jumping around every few years, and we haven't kind of settled down into some stable conception of that.