Speaker | Time | Text |
---|---|---|
And here we go. | ||
Alright Nick, this is one of the things that scares people more than anything: the idea that we're creating something, or someone's going to create something, that's going to be smarter than us. | ||
That's going to replace us. | ||
Is that something we should really be concerned about? | ||
I presume you're referring to babies? | ||
I'm referring to artificial intelligence. | ||
Ah, yes. | ||
Well, it's the big fear and the big hope, I think. | ||
Both? | ||
At the same time, yeah. | ||
How is it the big hope? | ||
Well, there are a lot of things wrong with the world as it is now. | ||
Pull this up to your face if you would. | ||
All the problems we have, most of them could be solved if we were smarter or if we had somebody on our side who were a lot smarter with better technology and so forth. | ||
Also, I think if we want to imagine some really grand future where humanity or our descendants one day go out and colonize the universe, I think that's likely to happen, if it's going to happen at all, after we have super intelligence that then develops the technology to make that possible. | ||
The real question is whether or not we would be able to harness this intelligence or whether it would dominate. | ||
Yeah, that certainly is one question. | ||
Not the only. | ||
You could imagine that we harness it, but then use it for bad purposes as we have a lot of other technologies through history. | ||
So I think there are really two challenges we need to meet. | ||
One is to make sure we can align it with human values and then make sure that we together do something better with it than fighting wars or oppressing one another. | ||
I think, well, what I'm worried about more than anything is that human beings are going to become obsolete. | ||
That we're going to invent something that's the next stage of evolution. | ||
I'm really concerned with that. | ||
I'm really concerned with if we look back on ancient hominids, Australopithecus, just think of some primitive ancestor of man. | ||
We don't want to go back to that. | ||
That's a terrible way to live. | ||
I'm worried that what we're creating is the next thing. | ||
I think... | ||
We don't necessarily want, or at least I wouldn't be totally thrilled with a future where humanity as it is now was the last and final word. | ||
The ultimate version beyond that. | ||
I think there's a lot of room for improvement. | ||
But not anything that is different is an improvement. | ||
So the key would be, I think, to find some path forward where the best in us can continue to exist and develop to even greater levels. | ||
And maybe at the end of that path, it looks nothing like we do now. | ||
Maybe it's not two-legged, two-armed creatures running around with three pounds of thinking matter, right? | ||
It might be something quite different. | ||
But as long as what we value is present there and ideally in a much higher degree than in the current world, then that could count as a success. | ||
Yeah, the idea that we're in a state of evolution, that just like we look back at ancient hominids, we are eventually going to become something more advanced, or at least more complicated, than we are now. | ||
But what I'm worried about is that biological life itself has so many limitations. | ||
When we look at the evolution of technology, if you look at Moore's Law, or if you just look at new cell phones, like they just released a new iPhone yesterday and they're talking about all these incremental increases in the ability to take photographs, and wide-angle lenses and night mode and a new chip that works even faster. | ||
These things, the word evolution is incorrect, but the innovation of technology is so much more rapid than anything we could ever even imagine biologically. | ||
Like if, instead of artificial intelligence in terms of like something in a chip or computer, we had created a life form, a biological life form, but this biological life form was improving radically every year. | ||
It didn't even exist. | ||
The iPhone has only existed since 2007. That's when it was invented. | ||
If we had something that was 12 years old, but all of a sudden was infinitely faster and better and smarter and wiser than it was 12 years ago, the newest version of it, version X1, we would start going, whoa, whoa, whoa, hit the brakes on this thing, man. | ||
How many more generations before this thing's way smarter than us? | ||
How many more generations before this thing thinks that human beings are obsolete? | ||
It's coming at us fast, it feels like. | ||
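As a rough illustration of the doubling dynamic being described (the two-year doubling time below is the classic Moore's Law figure, an assumption for this sketch rather than something from the conversation):

$$P(t) = P_0 \cdot 2^{t/T}, \qquad T = 2\ \text{years} \implies P(12\ \text{years}) = 2^{6} P_0 = 64\, P_0$$

Under that assumption, twelve years of doublings already amounts to a sixty-four-fold increase, which is why a twelve-year-old product line can look unrecognizable next to its first version.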
But some people think, oh, it's slowing down now. | ||
Who thinks it's slowing down? | ||
Well, you have, like, Tyler Cowen, and even Peter Thiel sometimes goes on about the pace of innovation not really being what it needs to be. | ||
I mean, maybe it was faster in like 1890s, but still compared to almost all of human history, it seems like a period of unprecedented rapid progress right now. | ||
Unprecedented. | ||
I'd say so. | ||
Yeah, I mean, except for maybe a couple of decades, a hundred years ago, when there was a lot of, you know, electricity, the whole thing. | ||
Yeah. | ||
No, I agree. | ||
I just, I don't think of it as a concern so much as a curiosity. | ||
I mean, I am concerned, but the more I look at it and go, well, this is, it seems inevitable. | ||
Yeah. | ||
That we're going to run into artificial intelligence. | ||
But the questions are so open-ended. | ||
We really don't know when. | ||
We really don't know what form it's going to take. | ||
And we really don't know what it's going to do to us. | ||
Yeah, so I see it as not something that should be avoided, neither something that we should just be completely gung-ho about, but more like a kind of gate through which we will have to pass at some point. | ||
All paths that are both plausible and lead to really great futures, I think, at some point involve the development of greater-than-human intelligence, machine intelligence. | ||
And so our focus should be on getting our act together as much as we can in whatever period of time we have before that occurs. | ||
Prepare ourselves. | ||
Well, I mean, that might involve doing some research into various technical questions, such as how you build these systems so that we actually understand what they are doing and they have their intended impact on the world. | ||
It might also help if we are able to get our act together a little bit on the kind of global political scene. A little bit more peace and love in the world would be good, I think. | ||
Sure. | ||
That would be nice. | ||
And then, like, refraining from destroying ourselves through some other means before we even get a chance to try to needle our way through this gate. | ||
Well, that's certainly possible. | ||
We're certainly capable of screwing it all up. | ||
Where is the current state of technology now in regards to artificial intelligence, and how far away do you think we are from AGI? | ||
Well, different people have different views on that. | ||
I think the truth of the matter is that it's very hard to have accurate views about the timelines for these things, which still involve big breakthroughs that have to happen. | ||
Certainly, I mean, over the last eight or ten years, there has been a lot of excitement with the deep learning revolution. | ||
Things that... | ||
I mean, it used to be that people thought of AI as this kind of autistic savant, really good at logic and counting and memorizing facts, but... | ||
With no intuition. | ||
And with this deep learning revolution, when you began to do these deep neural networks, you kind of solved perception in some sense. | ||
You have computers that can see, that can hear, and that have visual intuition. | ||
So that has enabled a whole wide suite of applications, which makes it commercially valuable, which then drives a lot of investment in it. | ||
So there's now quite a lot of momentum in machine learning and trying to kind of stay ahead of that. | ||
It's interesting that when we think about artificial intelligence and whatever potential form that it's going to take, if you look at films like 2001, like Hal, like, open the door, Hal, you know? | ||
We think of something that's communicating to us, like a person would, and maybe is a little bit colder and doesn't share our values and has a more pragmatic view of life and death and things. | ||
When we think of intelligence, though, I think intelligence in our mind is almost inexorably connected to all the things that make us human, like emotions and ambition and all these things, like the reason why we innovate. | ||
It's not really clear. | ||
We innovate because we enjoy innovation and because we want to make the world a better place and because we want to fix some problems that we've created and we want to solve some limitations of the human body and the environment that we live in. | ||
But we sort of assume that intelligence that we create will also have some motivations. | ||
Well, there is a fairly large class of possible structures you could do. | ||
If you want to do anything that has any kind of cognitive or intellectual capacity at all, a large class of those would be what we might call agents. | ||
So these would be systems that interact with the world in pursuit of some goal. | ||
And if they are a sophisticated class of agents, they can plan ahead a sequence of actions. | ||
Like more primitive agents might just have reflexes. | ||
But the sophisticated agent might have a model of the world where it can kind of think ahead before it starts doing stuff. | ||
It can kind of think, what would I need to do in order to reach this desired state? | ||
And then reason backwards from that. | ||
So I think it's a fairly natural... | ||
It's not the only possible cognitive system you could build, but it's also not this weird, bizarre, special case that, you know, it's a fairly natural thing to aim for. | ||
If you're able to specify the goal, something you want to achieve, but you don't know how to achieve it, a natural way of trying to go about that is by building this system that has this goal and is an agent and then moves around and tries different things and eventually perhaps learns to solve that task. | ||
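For concreteness, here is a minimal sketch of the kind of agent just described: a system with a model of the world that searches for a sequence of actions leading to a goal state. Everything here (state names, actions, the forward breadth-first search) is an invented toy illustration; as noted above, the same idea can also be run backward from the goal.

```python
# Minimal sketch of a planning agent: a world model plus a search
# for an action sequence that reaches a desired goal state.
from collections import deque

def plan(start, goal, model):
    """Search the world model for actions leading from start to goal.

    `model` maps state -> {action_name: resulting_state}. All names
    are hypothetical, made up for this toy example.
    """
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions  # the plan: an ordered list of actions
        for action, nxt in model.get(state, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # no plan exists within this model

# Toy world model: what each action does in each state.
world = {
    "at_door": {"open_door": "in_room"},
    "in_room": {"pick_up_key": "holding_key"},
    "holding_key": {"unlock_safe": "goal_reached"},
}

print(plan("at_door", "goal_reached", world))
# -> ['open_door', 'pick_up_key', 'unlock_safe']
```

A reflex agent, by contrast, would just map the current percept to an action; the planning step over a model is what makes "think ahead, then reason backwards" possible.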
Do you anticipate different types of artificial intelligence? | ||
Like artificial intelligence that mimics the human emotions? | ||
Do you think that people will construct something that's very similar to us in a way that we can interact with it in common terms? | ||
Or do you think it will be almost like communicating with an alien? | ||
So there are different scenarios here. | ||
My guess is that the first thing that actually achieves superintelligence would not be very human-like. | ||
There are different possible ways you could try to get to this level of technology. | ||
One would be by trying to reverse engineer the human brain. | ||
We have an existence proof in the limiting case. | ||
Imagine if you just made an exact duplicate in silicon of the human brain, like every neuron had some counterpart. | ||
So that seems technologically very difficult to do, but it wouldn't require a big theoretical breakthrough to do it. | ||
You could just, if you had sufficiently good microscopy and large enough computers and enough elbow grease, you could kind of... | ||
But it seems to me plausible that what will work before we are able to do it that way will be some more synthetic approach. | ||
That would bear only a very rough resemblance, maybe to the neocortex. | ||
Okay. | ||
Yeah, that's one of the big questions, right? | ||
Whether or not we can replicate all the functions of the human brain in the way it functions and mimic it exactly, or whether we could have some sort of superior method that achieves the same results that the human brain does in terms of its ability to calculate and reason and do multiple tasks at the same time. | ||
Yeah, and I also think that maybe once you have a sufficiently high level of this general form of intelligence, then you could use that maybe to emulate or mimic things that we do differently. | ||
The cortex is quite limited, so we rely a lot on earlier neurological structures that we have. | ||
We have to be guided by emotion because we can't just calculate everything out. | ||
And instinct, and if we lost all of that, we would be helpless. | ||
But maybe some system that had a sufficiently high level of this more abstract reasoning capability could maybe use that to substitute for things that weren't built in in the same way that we do. | ||
Have you ever talked to Sam Harris about this? | ||
Yeah, a little bit. | ||
Have you ever had a podcast with him? | ||
Yeah, actually he had me on his podcast half a year ago. | ||
I'll have to listen to it because he has the worst view of the future in terms of artificial intelligence. | ||
He's terrified of it. | ||
And when I talk to him, he terrifies me. | ||
And Elon Musk is right up there. | ||
He also has a terrifying view of what artificial intelligence could potentially be. | ||
What do you say to those guys? | ||
Well, I mean, I do think that there are these significant risks that will be associated with this transition to the machine intelligence era, including existential risks, threats to the very survival of humanity or what we care about. | ||
So why are we doing this? | ||
There are a lot of things we're doing that maybe globally it would be better if we didn't do. | ||
Why do we build thousands of nuclear weapons? | ||
Why do we overfish the oceans? | ||
If you actually ask why different individuals work on AI research, or why different companies invest in it... | ||
If you can make the Google search engine 1% better, that's got to be worth like a billion dollars right off the bat. | ||
It's become a kind of prestige thing now where nations want to have some sort of strategy because it's seen as this new frontier. | ||
Just like when you had steam engines and industrialization a few hundred years ago and electricity. | ||
It's going to just open up a lot of economic opportunities. | ||
You want to be in there. | ||
You don't want to be the one saying, we are going to do subsistence agriculture while the rest of the world is moving on. | ||
It's kind of overdetermined. | ||
You could remove some of these reasons and there would still be enough reasons for why people would be pushing forward with this. | ||
One of the things that scares me the most is the idea that if we do create artificial intelligence, then it will improve upon our design and create far more sophisticated versions of itself. | ||
And that it will continue to do that until it's unrecognizable, until it reaches literally a godlike potential. | ||
I mean, I forget what the real numbers were, maybe you could tell us, but someone, some reputable source, had calculated the amount of improvement that sentient artificial intelligence would be able to create inside of a small window of time. | ||
Like if it was allowed to innovate and then make better versions of itself and those better versions of itself were allowed to innovate and make better versions of itself. | ||
You're talking about not an exponential increase of intelligence but an explosion. | ||
Well, we don't know. | ||
So it's hard enough to forecast the pace at which we will make advances in AI. Because we just don't know how hard the problems are that we haven't yet solved. | ||
And, you know, once you get to human level or a little bit above, I mean, who knows? | ||
It could be that there is some level where, to get further, you would need to put in a lot of thinking time to kind of get there. | ||
Now, what is easier to estimate is if you just look at the speed, because that's just a function of the hardware that you're running it on, right? | ||
So there we know that there is a lot of room in principle. | ||
If you look at the physics of computation, and you look at what an optimally arranged physical system optimized for computation would be, that would be many, many orders of magnitude above what we can do now. | ||
And then you could have arbitrarily large systems like that. | ||
So, from that point of view, we know that there could be things that would be like a million times faster than the human brain, and with a lot more memory and stuff like that. | ||
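To put rough numbers on that headroom (these are common order-of-magnitude estimates, not figures from the conversation): the human brain is often estimated at around $10^{16}$ operations per second, while Bremermann's limit for one kilogram of optimally arranged matter is

$$\frac{m c^2}{h} \approx \frac{(1\ \mathrm{kg}) \cdot (3 \times 10^{8}\ \mathrm{m/s})^2}{6.6 \times 10^{-34}\ \mathrm{J\,s}} \approx 1.4 \times 10^{50}\ \text{bit operations per second,}$$

more than thirty orders of magnitude above the brain estimate. On those numbers, "a million times faster" is, in principle, very conservative.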
And then something, if it did have a million times more power than the human brain, it could create something with a million times more computational power than itself. | ||
It could make better versions. | ||
It could continue to innovate. | ||
Like if we create something and we say, you are, I mean, it is sentient. | ||
It is artificial intelligence. | ||
Now, please go innovate. | ||
Please go follow the same directive and improve upon your design. | ||
Yeah, well, we don't know how long that would take then to get to something. | ||
We already have sort of millions of times more thinking capacity than a human has. | ||
I mean, we have millions of humans. | ||
So if you kind of break it down, you think there's like one milestone when you have maybe an AI that could do what one human can do. | ||
But then that might still be quite a lot of orders of magnitude until it would be equivalent to the whole human species. | ||
And maybe during that time other things happen, maybe we upgrade our own abilities in some way. | ||
So there are some scenarios where it's so hard to get even to one human baseline that we kind of use this massive amount of resources just to barely create a kind of village idiot using billions of dollars of compute, right? | ||
So if that's the way we get there, then, I mean, it might take quite a while, because you can't easily scale something that you've already spent billions of dollars building. | ||
Yeah, some people think the whole thing is blown out of proportion, that we're so far away from creating artificial general intelligence that resembles human beings, that it's all just vaporware. | ||
What do you say to those people? | ||
Well, I mean, one thing would be that I would want to be more precise about just how far away it has to be in order for it to be rational for us to ignore it. | ||
It might be that if something is sufficiently important and high stakes, then even if it's not going to happen in the next 5, 10, 20, 30 years, it might still be wise for our pool of 7 billion plus people to have some people actually thinking about this ahead of time. | ||
Yeah, for sure. | ||
So some of these disagreements, I guess this is my point, are more apparent than real. | ||
Like, some people say it's going to happen soon, and some other people say, no, it's not going to happen for a long time. | ||
And then, you know, one person means by soon, five years, and another person means by a long time, five years. | ||
And, you know, it's more of different attitudes rather than different specific beliefs. | ||
So I would first want to make sure that there actually is a disagreement. | ||
Now, if there is, if somebody is very confident that it's not going to happen in hundreds and hundreds of years, then I guess I would want to know their reasons for that level of confidence. | ||
What's the evidence they're looking at? | ||
Do they have some ground for being very sure about this? | ||
Certainly, the history of technology prediction is not that great. | ||
You can find a lot of examples where even very eminent technologists and scientists were sure something was not going to happen in our lifetime. | ||
In some cases, it actually already just happened in some other part of the world, or it happened a year or two later. | ||
So I think some epistemic humility with these things would be wise. | ||
I was watching a talk that you were giving, and you were talking about the growth of innovation, technology, and GDP over the last 100 years, and about the entire history of life on earth, what a short period of time humans have been here, and then, during what a short period of time, what a stunning amount of innovation and how much change we've enacted on the earth in just a blink of an eye. And you had the scale of GDP over the course of the last hundred years. | ||
It's crazy, because it's so difficult for us with our current perspective, just being a person, living, going about the day-to-day life that seems so normal, to put it in perspective time-wise and see what an enormous amount of change has taken place in relatively an incredibly short amount of time. | ||
Yeah. | ||
We think of this as sort of the normal way for things to be. | ||
The idea that the alarm wakes you up in the morning and then you commute in and sit in front of a computer all day and you try not to eat too much. | ||
And that if you sort of imagine that, you know, maybe in 50 years or 100 years or at some point in the future, it's going to be very different. | ||
That's like some radical hypothesis. | ||
But, of course, this quote-unquote normal condition is a huge anomaly any which way you look at it. | ||
I mean, if you look at it on a geological timescale, the human species is very young. | ||
If you look at it historically, you know, for more than 90% of our history, we were just hunter-gatherers running around, and then agriculturalists. | ||
It's only in the last couple of hundred years that some parts of the world have escaped the Malthusian condition, where you basically only have as much income as you need to be able to produce two children. | ||
And we have the population explosion. | ||
All of this is very, very, very recent. | ||
And in space as well, of course, almost everything is ultra-high vacuum, and we live on the surface of this little special crumb. | ||
And yet we think this is normal and everything else is weird, but I think that's a complete inversion. | ||
And so when you do plot, if you do plot, for example, world GDP, which is a kind of rough measure for the total amount of productive capability that we have, right? | ||
Right. | ||
If you plot it over 10,000 years, what you see is just a flat line and then a vertical line. | ||
And you can't really see any other structure. | ||
It's so extreme, the degree to which humanity's productive capacity has grown. | ||
So if I look at this picture, now we imagine this is the normal, this is the way it's going to be now, indefinitely. | ||
It just seems prima facie implausible. | ||
It sort of doesn't look like we are in a static period right now. | ||
It looks like we're in the middle of some kind of explosion. | ||
Explosion. | ||
And oddly enough, everyone involved in the explosion, everyone that's innovating, everyone that's creating all this new technology, they're all a part of this momentum that was created before they were even born. | ||
So it does feel normal. | ||
They're just a part of this whole spinning machine and they jump in, they're born, they go to college, next thing you know they have a job and they're contributing to making new technology and then more people jump in and add on to it and there's very little perspective in terms of like the historical significance of this incredible explosion technologically. | ||
When you look at what you're talking about, that gigantic spike, no one feels it, which is one of the weirdest things about it. | ||
I mean, you kind of expect every year there will be a better iPhone or whatever, right? | ||
Yes, if not, we'd be upset. | ||
For almost all of human history, people lived and died with absolutely no technological change. | ||
And in fact, you could have many, many generations. | ||
The very idea that there was some trajectory in the material conditions is a relatively new idea. | ||
I mean, people thought of history either as, you know, some kind of descent from a golden age, or some people had a cyclical view. | ||
But it was all in terms of political organization, that would be a great kingdom, and then a wise ruler would rule for a while. | ||
And then, like, a few hundred years later, you know, their great-great-grandchildren would be too greedy, and it would come into anarchy, and then a few hundred years later it would come back together again. | ||
So it would be all these pieces moving around, but no new pieces really entering. | ||
Or if they did, it was at such a slow rate that you didn't notice. | ||
But over the eons, the wheel slowly turns, and somebody makes a slightly better wheel, somebody figures out how to irrigate a lot better, somebody breeds better crops. | ||
And eventually there is enough that you could have enough of a population, enough brains that then create more ideas at a quick enough rate that you get this industrial revolution. | ||
And that's where we are now, I think. | ||
Elon Musk had the most terrifying description of humanity. | ||
He said that we are the biological bootloader for artificial intelligence. | ||
That's what we're here for. | ||
Well, bootloaders are important. | ||
They are important, but I think... | ||
There's like objectively and there's personally. | ||
Like objectively, if you were outside of the human race and you were looking at all these various life forms competing on this planet for resources and for survival, you would look at humanity and you go, well, you know, clearly it's not finished. | ||
So there's going to be another version of it. | ||
It's like, when is this version going to take place? | ||
Is it going to take place? | ||
Over millions and millions of years, like it has historically when it comes to biological organisms, or is it going to invent something that takes over from there, and then that's the new thing? | ||
Something that's not based on tissue, something that's not based on cells; it doesn't have the biological limitations that we have, nor does it have all the emotional attachments to things like breeding, social dominance, hierarchies. All those things would be of no consequence to it. | ||
It doesn't mean anything, because it's not biological. | ||
Yeah, I mean, I don't think millions of years, I mean, a number of decades or whatever. | ||
But it's interesting that even if we set that aside, we say machine intelligence is possible for some reason. | ||
Let's just play with that. | ||
I still think that would be very rapid change, including biological change. | ||
I mean, we are making great advances in biotech as well, and we'll increasingly be able to control what our own organisms are doing through different means and enhance human capacities through biotechnology. | ||
So even there, it's not going to happen overnight, but over an historically very short period of time, I think you would still see quite profound change just from applying bioscience to change human capacities. | ||
Yeah, one of the technologies or one of the things that's been discussed to sort of mitigate the dangers of artificial intelligence is a potential merge. | ||
Some sort of symbiotic relationship with technology that you hear discussed, like... | ||
I don't know exactly how Elon's neural link works, but it seems like a step in that direction. | ||
There's some sort of a brain implant that interacts with an external device, and all of this increases the bandwidth for available intelligence and knowledge. | ||
Yeah, I'm sort of skeptical that that will work. | ||
I mean, good that somebody tries it, you know, but I think it's quite technically hard to improve a normal, healthy human being's, say, cognitive capacity or other capacities by implanting things in them. | ||
And get benefits that you couldn't equally well get by having the gadget outside of the body. | ||
So I don't need to have an implant to be able to use Google, right? | ||
Right. | ||
And there are a lot of advantages to having it external. | ||
You can kind of upgrade it very easily. | ||
You can shut it off. | ||
Well, hopefully you could do that even with implant. | ||
And once you start to look into the details, there are sort of these kinds of demos, but then if you actually look at the papers, often you find, well, then there were these side effects, and the person had headaches, or they had some deficit in their speech, you know, or an infection. | ||
Like, it's just, biology is messy. | ||
Yes. | ||
So, maybe it will work better than I expect. | ||
That could be good. | ||
But otherwise, I think that the place where it will first become possible to enhance human biological capacities would be through genetic selection, which is technologically something very near. | ||
You mean like CRISPR type? | ||
So that would be editing, right? | ||
When you actually go in and change things. | ||
That also is moving. | ||
What do you mean by selection? | ||
Well, so this would just be in the context of, say, in vitro fertilization. | ||
You have usually some half dozen or dozen embryos created during this fertility procedure, which is standardly used. | ||
So rather than just a doctor kind of looking at these embryos and saying, well, that one looks healthy, I'm going to implant that, you could run some genetic test and then use that as a predictor and select the one you think has the most desirable attributes. | ||
And so this could be a trend in terms of how human beings reproduce, that we... | ||
Instead of just randomly having sex, woman gets pregnant, gives birth to a child, we don't know what it's going to be, what's going to happen. | ||
We just hope that it's a good kid. | ||
Instead of that, you start looking at all the various components that we can measure. | ||
Yeah. | ||
And so, I mean, to some extent, we already do this. | ||
There is a lot of testing already done for various chromosomal abnormalities that you can check for. | ||
But our ability to look beyond clear, stark diseases, where this one gene is wrong, and to look at more complex traits, is increasing rapidly. | ||
So obviously there are a lot of ethical issues and different views that come into that. | ||
But if we're just talking about what is technologically feasible, I think that already you could do a very limited amount of that today. | ||
And maybe you would get two or three IQ points in expectation more if you selected using current technology based on 10 embryos, let's say. | ||
So very small. | ||
But as genomics gets better at deciphering the genetic architecture of intelligence or personality attributes, then you would have more selection power and you could do more. | ||
And then there are a number of other technologies we don't yet have, but which, if we did, would kind of stack with that and enable much more powerful forms of enhancement. | ||
So there, yeah, I don't think there are any major technological hurdles, really, in the way. | ||
Just some small amount of incremental further improvement. | ||
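The "two or three IQ points in expectation" figure can be sanity-checked with a toy selection model. The sketch below assumes a polygenic predictor correlating 0.25 with the trait and a within-family trait SD of 8 points; both parameters are illustrative guesses, not figures from the conversation.

```python
# Toy Monte Carlo of embryo selection: pick the embryo with the
# highest predictor score and measure the average true-trait gain.
import random, statistics

def expected_gain(n_embryos=10, r=0.25, sigma=8.0, trials=100_000):
    """r: assumed score-trait correlation; sigma: assumed within-family
    trait SD in IQ-like points. Both are illustrative assumptions."""
    gains = []
    for _ in range(trials):
        best_score, best_trait = float("-inf"), 0.0
        for _ in range(n_embryos):
            trait = random.gauss(0.0, 1.0)                   # standardized true trait
            noise = random.gauss(0.0, 1.0)
            score = r * trait + (1 - r * r) ** 0.5 * noise   # corr(score, trait) = r
            if score > best_score:
                best_score, best_trait = score, trait
        gains.append(sigma * best_trait)                     # gain of the chosen embryo
    return statistics.mean(gains)

print(f"expected gain ~ {expected_gain():.1f} points")       # ~3 with these assumptions
```

Analytically, the expected maximum of ten standard normal draws is about 1.54, so the expected gain is roughly $0.25 \times 1.54 \times 8 \approx 3$ points, in the same ballpark as the figure quoted above.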
That's when you talk about doing something with genetics and human beings and selecting. | ||
Selecting for the superior versions. | ||
And then if everybody starts doing that. | ||
The ethical concerns, when you start discussing that, people get very nervous. | ||
Because they start to look at their own genetic defects. | ||
And they go, oh my god, what if I didn't make the cut? | ||
Like, I wouldn't be here. | ||
And you start thinking about all the imperfect people that have actually contributed in some pretty spectacular ways to what our culture is. | ||
And like, well, if everybody has perfect genes, would all these things even take place? | ||
Like, what are we doing, really, if we're bypassing nature and we're choosing to select for the traits and the attributes that we find to be the most positive and attractive? | ||
Like, that gets slippery. | ||
And you think, what would have happened if, say, some earlier age had had this ability to kind of lock in their, you know, their prejudices? If the Victorians had had this, maybe we would all be, whatever, pious and patriotic now or something. | ||
Yeah, we know, like the Nazis. | ||
So, in general, with all of these powerful technologies we are developing, I think the ideal course would be that we would first gain a bit more wisdom, and then we would get all of these powerful tools. | ||
But it looks like we're getting the powerful tools before we have really achieved a very high level of wisdom. | ||
But we haven't earned them. | ||
The people that are using them are sort of... | ||
Think about the technology that all of us use. | ||
How many... | ||
How many pieces of technology do you use in a day and how much do you actually understand any of those? | ||
Most people have very little understanding of how any of the things they use work. | ||
They put no effort at all into creating those things, but yet they've inherited the responsibility of the power that those things possess. | ||
Yeah, I mean, that's the only way we can do it. | ||
It's just way too complex for any person. | ||
If you had to sort of learn how to build every tool you use, you wouldn't get very far. | ||
Isn't that fascinating, though, when you think about human beings and all the different things we do? | ||
We have very little understanding of the mechanisms behind most of what we need for day-to-day life, yet we just use them because there's so many of us and so many people are understanding various parts of all these different things that together, collectively, we can utilize the intelligence of all these millions of people that have innovated and we, with no work whatsoever, just go into the Verizon store and pick up the new phone. | ||
I mean, and not just technology, but worldviews and political ideas as well. | ||
It's not as if most people sit down with an empty table, try to think from the basic principles of what would be the ideal configuration of the state or something like that. | ||
You just kind of absorb it and go with it. | ||
You float in the stream of culture. | ||
Yeah. | ||
And it's amazing just how little of that actually, at any point, channels through your conscious attention, where you make some rational or otherwise deliberate decision. | ||
Most of it you just get carried along with. | ||
But that again, I mean, if this is what we have to work with, then there's no other way. | ||
There's no other way. | ||
There's no other way, and there's no way, even like you and I discussing this, discussing the history of this incredible spike of evolution, or innovation rather, in technology. | ||
It just doesn't feel like anything. | ||
It feels normal. | ||
So even though we can intellectualize it, even though we can have this conversation, talk about what an incredible time we're in and how terrifying it is that things are moving at such an incredibly rapid rate. | ||
And no one's putting the brakes on it. | ||
No one's thinking about the potential pros and cons. | ||
We're just pushing ahead. | ||
Yeah. | ||
Well, not nobody. | ||
I mean, there are a few people. | ||
I've got my research group. | ||
Yes. | ||
There's actually increased interest now. | ||
I mean, when I got interested in these things in the 90s, it was very much a fringe activity. | ||
There was some internet mailing lists, some people exchanging ideas. | ||
But since then, I mean, there's now a small set of academic research institutes and some others that are actually trying to do more systematic thinking about some of these big picture topics. | ||
When did it seem like it was possible? | ||
You got involved in it in the 90s. | ||
It must have seemed like some very fringe sort of pie in the sky idea of a general artificial intelligence. | ||
So we're talking specifically about AI? | ||
Yeah. | ||
Well, actually, the field of artificial intelligence sometimes is kind of dated to 1956. | ||
That was a conference. | ||
I mean it's somewhat arbitrary, but roughly that's when it got started. | ||
But the pioneers, even right back at the beginning, thought that they were going to be able to do all the things that the human brain does. | ||
In fact, they were quite optimistic. | ||
They thought maybe 10 years or something like that. | ||
Back then? | ||
Yeah, many of them. | ||
Really? | ||
Even before computers? | ||
No, they had computers in 1956. | ||
How did they? | ||
What kind of computers? | ||
Well, slow. | ||
Slow computers. | ||
When was the computer invented? | ||
Well, it's one of those things. | ||
I think during the Second World War, they had computers that were useful for doing stuff. | ||
Then before that, they had kind of tabulating machines. | ||
And before that, they had designs for things that, if they had been put together, would have been able to calculate a lot of numbers. | ||
And then before that, they had an abacus. | ||
It kind of... | ||
There's, like, a line from having some external tool like a notepad, with which you can calculate bigger numbers, right, if you can scribble on a piece of paper, to a modern-day supercomputer. You can break it down into small steps, and they happened gradually. | ||
But, yeah, roughly since the 40s or so. | ||
That's when they first invented code? | ||
Like, electrical, yeah. | ||
Yeah. | ||
I think. | ||
So even back then, they thought we're only about 10 years away. | ||
Well, in the mid-50s, when people started using the word artificial intelligence, some of these AI researchers at the time were quite optimistic about the timelines. | ||
In fact, there was some summer project where they were going to have a few students or whatever work over the summer. | ||
And they thought, oh, maybe we can solve vision over the summer. | ||
And now we've kind of solved vision, but that's like sixty years later. | ||
It can be hard to know how hard the problem is until you've actually solved it. | ||
But the really interesting thing to me is that even though I can understand why they were wrong about how difficult it is, because how would you know, right, if it's 10 years of work or 100 years of work? | ||
Kind of hard to estimate at the outset. | ||
But what is striking is that even the ones who thought it was 10 years away, they didn't think of what the obvious next step would be after that. | ||
Like if you actually succeeded at mechanizing all the functions of the human mind. | ||
They couldn't think, well, it's obviously not going to stop there once you get human equivalence. | ||
You're going to get superintelligence. | ||
But it was as if the imagination muscle had so exhausted itself thinking of this radical possibility, a machine that does everything that a human does, that you couldn't kind of take the next step. | ||
Or for that matter, the immense ethical and social implications. | ||
Even if all you could do was to replicate a human mind in a machine. | ||
If you actually thought you were building that and you were 10 years away, it'd be crazy not to spend a lot of time thinking about how this is going to impact the world. | ||
But that didn't really seem to have occurred much to them at all. | ||
Well, sometimes it seems that people just want to do it. | ||
Like, even with the creation of the atomic bomb, I mean, they felt like they had to do it because we had to develop it before the Germans did. | ||
Right. | ||
But that was a specific reason. | ||
Like, it wasn't just, oh, it could be fun to do, right? | ||
Sure. | ||
And so with the Manhattan Project, obviously, it was during wartime, and maybe Hitler had a program, they thought. | ||
You could easily see why that would motivate a lot of people. | ||
But even before they actually started the Manhattan Project, the guy who kind of first conceived of the idea that you could make a nuclear explosion, Leo Szilard, he was a kind of eccentric physicist who conceived of the idea of a chain reaction. | ||
So it had been known before that that you could split the atom and a little bit of energy came out. | ||
But if you're going to split one atom at a time, you're never going to get anything, because it's too little. | ||
So the idea of a chain reaction was that if you split an atom and it releases two neutrons, then each of those can split another two atoms that then release four neutrons and you get an exponential blow-up. | ||
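The "exponential blow-up" is just geometric growth: in an idealized chain (ignoring neutron losses), if each fission triggers $k$ further fissions, then after $n$ generations the count is

$$N_n = k^n, \qquad k = 2:\quad N_{80} = 2^{80} \approx 1.2 \times 10^{24},$$

so on the order of eighty doublings already reaches roughly a mole of atoms, which is why the energy release goes from negligible to explosive so abruptly.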
So he thought of this... | ||
I forget exactly when. | ||
It must have been in the early 30s, probably. | ||
And he was a remarkable person because he didn't just think, oh, this is a fun idea. | ||
I should publish it and get a lot of citations. | ||
But he thought, what would this mean for the world? | ||
Gee, this is... | ||
This could be bad for civilization. | ||
And so he then went to try to persuade some other of his colleagues who were also working in nuclear physics not to pursue this, not to publish related ideas. | ||
So there was some partial success where his colleagues agreed. | ||
Some things were not published immediately. | ||
Not all of his colleagues listened to him. | ||
Of course. | ||
Isn't that the problem? | ||
That is the problem. | ||
Some people are always going to want to be the ones that sort of innovate. | ||
That is the problem in those cases where you would actually prefer the innovation not to happen. | ||
Historically, of course, we now look back and think there were a lot of dissenters that we are now glad could have their way, because a lot of cultures were quite resistant to innovation, and they wanted to do things the way they had always been done, whether it's social innovation or technological innovation. | ||
The Chinese were at one point ahead in seafaring, exploring, and then they shut all of that down because the emperor at the time, I guess, didn't like it. | ||
So there are many examples of kind of stasis, but as long as there were a lot of different places, a lot of different countries, a lot of different mavericks, then somebody would always do it. | ||
And then once the others could see that it worked, they could kind of copy and... | ||
Things move forward. | ||
But of course if there is a technology you actually want not to be developed, then this multipolar situation makes it very, very hard to coordinate, to refrain from doing that. | ||
Yeah, this I think is a kind of structural problem in the current human condition that is ultimately responsible for a lot of the existential risks that we will face in this century. | ||
There's this kind of failure of ability to solve global coordination problems. | ||
Yeah, and when you think about the people that did it, Oppenheimer and the people behind the Manhattan Project, they were inventing this to deal with this existential threat, this horrific threat from Nazi Germany and the Japanese in World War II, | ||
you know, this idea that this evil empire is going to try to take over the world, and this created the momentum and this created the motivation to develop this incredible technology that wound up making a great amount of our electricity and wound up creating enough nuclear weapons to destroy the entire world many times over. | ||
And we're in this strange state now where it was motivated by this horrific moment in history, this evil empire that tries to take over the world, and we come up with this incredible technological solution, the ultimate weapon, that we detonate a couple of times on some cities. And then now we're in this weird state where, you know, we're how many years later? | ||
80 years later? | ||
And we're not doing it anymore. | ||
We don't drop any bombs on people anymore, but we all have them and we all have them pointed at each other. | ||
Well, not all. | ||
Well, yes. | ||
Which is a good thing, I think. | ||
Quite a few. | ||
But it's incredible that the motivation for this incredible technology, this amazing technology, was actually to deal with something that was awful. | ||
Yeah, I mean, war has had a way of focusing minds and stuff. | ||
No, I think that nuclear energy we would have had anyway. | ||
Maybe it would have been developed like five years or ten years later. | ||
Reactors are not that difficult to do. | ||
So I think we could have gotten to all the good uses of nuclear technology that we have today without having to have had kind of the nuclear bomb developed. | ||
Now, you pay attention to Boston Dynamics and all these different robotic creations that they've made? | ||
They seem to have a penchant for doing really sinister-looking bots. | ||
I think all robots that are, you know, anything that looks autonomous is kind of sinister-looking. | ||
Well, I mean, you see the Japanese have these big-eyed, sort of rounded, so it's a different... | ||
They're trying to trick us. | ||
Boston Dynamics is, I guess, they want the Pentagon to give them funding or something. | ||
Right, DARPA. They look like they're developing Terminators. | ||
Yeah. | ||
Yeah. | ||
But what I was thinking is... | ||
If we do eventually come to a time where those things are going to war for us instead of us, like if we get involved in robot wars, our robots versus their robots, and this becomes the next motivation for increased technological innovation to try to deal with superior robots by the Soviet Union or by China, right? | ||
These are more things that could be threats that could push people to some crazy level of technological innovation. | ||
Yeah, it could. | ||
I mean, I think there are other drivers for technological innovation as well that seem plenty strong, commercial drivers, let us say, so that we wouldn't have to rely on war or the threat of war to kind of stay innovative. | ||
I mean, there has been this effort to try to see if it would be possible to have some kind of ban on lethal autonomous weapons. | ||
There are a few precedents that we have. | ||
There has been a relatively successful ban on chemical and biological weapons, which have by and large been honored and upheld. | ||
There are kind of treaties on nuclear weapons, which have limited proliferation. | ||
Yes, there are now maybe, I don't know, a dozen. | ||
I don't know the exact number. | ||
But it's certainly a lot better than 50 or 100 countries. | ||
Yes. | ||
And some other weapons as well, blinding lasers, landmines, cluster munitions. | ||
So some people think maybe we could do something like this with lethal autonomous weapons, killer bots. Is that really what humanity needs most now, like another arms race to develop killer bots? | ||
It seems arguably the answer to that is no. | ||
A lot of my friends are supportive, but I've kind of stood a little bit on the sidelines on that particular campaign, being a little unsure exactly what it is. | ||
I mean, certainly I think it'd be better if we refrained from having some arms race to develop these than not. | ||
But if you start to look in more detail, what precisely is the thing that you're hoping to ban? | ||
So if the idea is the autonomous bit, like the robot should not be able to make its own firing decision. | ||
Well, if the alternative to that is... | ||
There's some 19-year-old guy sitting in some office building and his job is whenever the screen flashes fire now, he has to press a red button. | ||
And then exactly the same thing happens. | ||
I mean, I'm not sure how much is gained by having that extra step. | ||
But it is something, it feels better for us. | ||
For some reason, someone is pushing the button. | ||
Right. | ||
But exactly what does that mean? | ||
Like in every particular firing decision? | ||
Or is it like some... | ||
Well, you've got to attack this group of surface ships here, and here are the general parameters, and you're not allowed to fire outside these coordinates. | ||
I don't know. | ||
I mean, another question is, it would be better if we had no wars, but if there is going to be a war, maybe it is better if it's robots versus robots. | ||
Or if there's going to be bombing, maybe you want the bombs to have high precision rather than low precision, to get fewer civilian casualties. | ||
And operating under artificial intelligence so it makes better decisions. | ||
Well, it depends exactly on how. | ||
So I don't know. | ||
On the other hand, you could imagine it kind of reduces the threshold for going to war if you think that you wouldn't fear any casualties. | ||
Maybe you would be more eager to do it. | ||
Right. | ||
Or if it proliferates and you have these kind of mosquito-sized killer bots that terrorists have. It doesn't seem like a good thing to have a society where you have a facial recognition thing and then the bot flies out. You just have a kind of dystopia. | ||
I think we're thinking rationally. | ||
We're thinking rationally given the overall view of the human race that we want peace and everything to be well. | ||
Realistically, if you were someone who is trying to attack someone militarily, you'd want the best possible weapons that give you the best possible advantage. | ||
And that's why we had to develop the atomic bomb first. | ||
It's probably why we'll try to develop the killer autonomous robot first. | ||
Yeah, yeah. | ||
Someone else would have it. | ||
Right, the fear that the other side would have it first. | ||
So this is why it's basically a coordination problem. | ||
Like, it's hard for any one country unilaterally to make sure that the world is peaceful and... | ||
Sure. | ||
And kind, right? | ||
It requires everybody to synchronize their actions. | ||
And then you can have successes like we've had with some of these treaties. | ||
Like, we've not had a big arms race in biological weapons or in chemical weapons. | ||
I mean, there have been cheaters, even on the biological warfare program; the Soviet Union had massive efforts there. But still, there was probably less use of that and less development than if there had been no such treaty. | ||
Or just look at the amount of money being wasted every year to maintain these large arsenals so that we can kill one another if one day we decide to do it. | ||
There's got to be a better way. | ||
But getting there is hard. | ||
We would hope that we would get to some point where all this would be irrelevant because there's no more war. | ||
Yeah, and so if you look at the biggest efforts so far to make that happen, so after the First World War, people were really aware of this. | ||
They said, this sucks, like war. | ||
I mean, look at this. | ||
Like a whole generation just ground up by machine guns. | ||
Got to make sure this never happens again. | ||
So they tried to do the League of Nations, but then didn't really invest it with very much power. | ||
And then the Second World War happened. | ||
And so then again, just after that, it's fresh in people's memory saying, well, never again. | ||
This is it. | ||
The United Nations, and in Europe the European Union, were both kind of designed as ways to try to prevent this. | ||
But again, in the case of the United Nations, maybe with quite limited powers to actually enforce the agreements. | ||
And there's a veto, which makes it hard if it's two of the major powers that are at loggerheads. | ||
So it might be that if there were a third big conflagration, that then people would say, well, this time, you know, we've got to really put some kind of institutional solution in place that has enough enforcement power that we don't try this yet again. | ||
So we don't have a second robot war. | ||
So once we get through the first robot war... | ||
I mean, but the kind of memories fade, right? | ||
Yes, that's the problem, right? | ||
So even the Cold War, I mean, I grew up... | ||
I'm Swedish, I remember. | ||
We were kind of in between, right? | ||
And we were taught in schools about nuclear fallout and stuff. | ||
It was like a very palpable sense that at any given point in time, there could be some miscalculation or crisis or something. | ||
And all the way up to senior statesmen at the time, these were like very real and very serious. | ||
And I feel that memory of just how bad it is to live in that kind of hair-trigger nuclear arms race Cold War situation has kind of faded, and now we think, well, the world didn't blow up, so maybe it wasn't so bad after all. | ||
Well, I think that would be the wrong lesson to learn. | ||
It's a bit like you're playing Russian roulette, and you survive one round, and you say, well, it isn't so dangerous at all to play Russian roulette. | ||
I think I'm going to have another go. | ||
You've got to realize, well, maybe that was a 10% chance or a 30% chance that the world would blow up during the Cold War and we were lucky, but it doesn't mean we want to have another one. | ||
When I was in high school, it was a real threat. | ||
When I was in high school, everyone was terrified that we were going to go to war with Russia. | ||
It was a big thing. | ||
And you talk to people from my generation about that, and everybody remembers it. | ||
Remember that feeling that you had in high school. | ||
Like, at any day, something could go wrong, and we could be at war with another country that's a nuclear superpower. | ||
But that's all gone now. | ||
Like, that feeling, that fear. | ||
People are so confident that that's not going to happen, that that's not even in people's consciousness. | ||
Yeah. | ||
And then a number of maneuvers are made, and then you find yourself in a kind of situation where there's like honor at stake and reputation, and you feel you can't back down, and then another thing happens, and you get into this place where if you even say something kind about the other side, you seem to be, like, you know, soft, a pinko, a lightweight. | ||
And on both sides, on the other side as well, obviously, they're going to have the same internal dynamic. | ||
And each side says bad things about the other. | ||
It makes the other side hate them even more. | ||
And these things are then hard to reverse. | ||
Like, once you find this dynamic happening, it's kind of almost... well, it's not that it's too late, you can still try, but it can be very hard to back out of that. | ||
And so if you can prevent yourself from going down that path to begin with, that's much preferable. | ||
When you see Boston Dynamics and you see those robots, is there something comparable that's being developed either in the Soviet Union or in China or somewhere else in the world where there's similar type robots? | ||
Well, I think a lot of the Boston Dynamics thing seems more showy than actually useful. | ||
Really? | ||
These kind of animal-like things that hop around at 150 decibels or something. | ||
If I were a special ops guy trying to sneak in, I wouldn't want this kind of big alarm. | ||
But I think a lot of action would be more in terms of flying drones, maybe submarine stuff, missiles, that kind of stuff. | ||
But when you see these robots and you see the ones that look like dogs or insects, Couldn't you imagine those things being armed with guns? | ||
I could. | ||
When they are, then it doesn't really look showy anymore. | ||
It seems pretty effective. | ||
You can't even kick those things over. | ||
Yeah, well, I mean, I think if it has a gun, it really doesn't matter whether it looks like a dog or if it's just a small flying platform. | ||
I mean, in general, I think with AI and robotics, the cooler something looks, usually the less technically impressive it is. | ||
You see, the extreme case of this is these robots that look exactly like a human, maybe shaped like a beautiful woman or something like that. | ||
They're complete hype. | ||
Like ex machina. | ||
Well, so the movies, obviously, they do it because that's what works on film. | ||
But every once in a while, you have some press release. | ||
I forget what the name is of this female-looking robot that got citizenship in Saudi Arabia a few years ago. | ||
It's like a pure publicity stunt, but the media just laps it up. | ||
Wow, they've created this. | ||
It's exactly like a human. | ||
What a big breakthrough. | ||
And it's like nothing. | ||
Do you anticipate, like when you see Ex Machina, do you think that that's something that could realistically be implemented in a hundred years or so? | ||
Like we really could have some form of artificial human that's indistinguishable? | ||
Well, I think the action is not going to lie in the robotic part so much as in the brain part. | ||
I think it's the AI part. | ||
And robotics only insofar as it becomes enabled by having, say, much better learning algorithms. | ||
So right now, if you have a robot, for the most part, in any one of these big factories, it's like a blind, dumb thing that executes a pre-programmed set of motions over and over again. | ||
And if you want to change over the production, you need to get in some engineers to reprogram it. | ||
But with a human, you could kind of show them how to do something once or twice, and then they can do it. | ||
So it will be interesting to see over the next few years whether there's progress in robotics that enables this kind of imitation learning to work well enough that you could actually start using it. | ||
There are demonstrations already, but it would need to work robustly enough to be useful, so that you could replace a lot of these industrial robotics experts. | ||
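A minimal sketch of the "show it once or twice" idea described here, i.e. imitation learning by behavioral cloning. The data, network, and names below are hypothetical illustrations, not any system discussed in the conversation:

```python
# Behavioral cloning: fit a policy to logged (state, action) demonstrations,
# so the robot "does what the human did" instead of replaying fixed motions.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical demonstration data: a human guides the arm a few times and we
# log joint states plus the commanded actions at each timestep.
rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(500, 6))        # e.g. 6 joint angles
actions = np.tanh(states @ rng.normal(size=(6, 6)))   # stand-in for logged commands

# Fit a policy mapping states to actions.
policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
policy.fit(states, actions)

# At run time the robot queries the learned policy rather than a
# pre-programmed motion sequence, so retasking means new demonstrations,
# not reprogramming.
new_state = rng.uniform(-1.0, 1.0, size=(1, 6))
print(policy.predict(new_state))
```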
So in terms of making things look human, I think that's more for Hollywood and for press releases than the actual driver of progress. | ||
More for press releases than the actual driver of progress, sure, but someone is probably going to try to replicate a human being once the technology becomes viable. | ||
Did you see the movie Ex Machina? | ||
It's a little bit of a blur. | ||
I've seen some of these and not others. | ||
Ex Machina was the one where the guy lives in a very remote location. | ||
Yeah, like a beautiful place in Norway. | ||
He created this beautiful girl robot that seduces this man. | ||
At the end of it, she leaves him locked up in this thing and just takes off and gets on the helicopter and flies away. | ||
The thing that's disturbing is that she knew how to manipulate his emotions to achieve a desired result, which was him helping her escape. | ||
But then once she did, she had no real emotions. | ||
So he was screaming and she had no compassion and no empathy. | ||
She just hopped on the helicopter and left him there to starve to death inside that locked box. | ||
And that is what scares people. | ||
This idea that we're going to create something that's intelligent, it has intelligence like us, but it doesn't have all the things that we have. | ||
Like... | ||
Caring, love, friendship, compassion, the need for other human beings. | ||
If you develop an autonomous robot that's really autonomous, it has no need for other people, that's where we get weirded out. | ||
Like, it doesn't need us. | ||
Right, yeah. | ||
I mean, I think... | ||
The same would hold even if it were not a robot, but just a program inside a computer. | ||
But yeah, the idea that you could have something that is strategic and deceptive and so forth. | ||
But other elements of the movie illustrate why, in general, it's bad to get your map of the future from Hollywood. | ||
It depicts this one guy, presumably some genius, living out in the middle of nowhere and inventing this whole system, when in reality it's like anything else. | ||
There are hundreds of people programming away on their computers, writing on whiteboards, and sharing ideas with other people across the world. | ||
And the result doesn't look like a human. | ||
And there would usually be some economic reason for doing it in the first place, not just some Promethean attitude of wanting to bring it into being. | ||
All of those things don't make for such good plot lines, so they just get removed. | ||
But then I wonder if people actually think of the future in terms of some kind of... | ||
Super villain and some hero and it's going to come down to these two people and they're going to wrestle. | ||
And it's going to be very personalized and concrete and localized. | ||
Whereas a lot of things that determine what happens in the world are very spread out and bureaucracies churning away. | ||
Sure. | ||
Yeah, a big problem a lot of people had with the movie was the idea that this one man could innovate at such a high level and be so far beyond everyone else. | ||
It's ridiculous. | ||
That he's just doing it by himself on this weird compound somewhere. | ||
Come on. | ||
But that makes a great movie, right? | ||
unidentified
|
Yeah. | |
Fly in in the helicopter, drop you off in a remote location. | ||
This guy shows you something he's created that is going to change the whole world. | ||
And it looked beautiful. | ||
I mean, I could imagine doing some writer's retreat there or something. | ||
Well, when... | ||
The iconic image of aliens from another world is these little gray things with no sexual organs and large heads and black eyes. | ||
This is the iconic thing that we imagine when we think about things from another planet. | ||
I've often wondered whether what we think of as life from another planet is actually that: an artificial creation. | ||
Like, we understand the biological limitations of the body when it comes to traveling through space: dealing with radiation, death, the need for food, things along those lines. So what we would do is create some artificial thing to travel for us, like we've already done on Mars, right? | ||
We have the rover that roams around Mars. | ||
The next step would be an artificial, autonomous, intelligent creature that has no biological limitations like we do in terms of its ability to absorb radiation from space. | ||
And we create one of those little guys just like that with an enormous head. | ||
No sex organs. | ||
Doesn't need sex organs. | ||
And we'd have this thing pilot ships that defy our own physical limitations, in terms of what would happen to us if we had to deal with a million Gs because the ship is moving at some preposterous rate through space. | ||
When we think of these things coming from another planet, if we think of life on another planet, if they can innovate in a similar fashion to the way we do, we would imagine they would create an artificial creature to do all their dirty work. | ||
Like, why would they want to, like, risk their body? | ||
unidentified
|
Right. | |
Yeah, I mean, except I think "creature" might conjure up the wrong picture. | ||
I mean, if you have this spaceship, you don't have to build a little thing that sits and turns the steering wheel. | ||
This could all be automated. | ||
Sure. | ||
And you'd imagine a civilization that is spacefaring in a serious way would have nanotechnology. | ||
So they'd have basically the ability to arbitrarily configure matter in whatever structure they wanted. | ||
They would have like nanoscale probes and things that could shapeshift. | ||
It would not be that there would be this person sitting in a seat behind the steering wheel. | ||
If they wanted to, these could be invisible, I think, like nanoscale things hiding in a rock somewhere, just connected by an information link up to some planetary-sized computer somewhere. | ||
I think that's the way that space is most likely to get colonized. | ||
It's not going to be like with meat sacks kind of driving spaceships around and having Star Trek adventures. | ||
It's going to be some spherical frontier emanating from whatever the home planet was, moving at some significant fraction of the speed of light and converting everything in its path into infrastructure of whatever type is maximally valuable for that civilization. | ||
Maybe computers and launchers to launch more of these space probes so that the whole wavefront can continue to propagate. | ||
But we are... | ||
I mean, one of the things you brought up earlier is that if human beings are going to continue and we're going to propagate through the universe, we're going to try to go to other places, we're going to try to populate other planets. And are we going to do that with just robots? | ||
Or are we going to try to do that biologically? | ||
We're probably going to try to do it biologically. | ||
One of the things you were saying earlier is one of the things that artificial intelligence could possibly do is accelerate our ability to travel to other lands or other planets. | ||
I mean, we're going to try. | ||
I mean, in fact, some people are, right? | ||
I just think that's not going to lead to anything important until those efforts are made obsolete by some radical new technology wave, probably triggered by machine superintelligence, that then rapidly leads to something approximating technological maturity. | ||
Once innovation happens at digital timescales rather than human timescales, then all these things that you could imagine we're doing, if we had 40,000 years to work on it, we would have space colonies and cures for aging and all of these things, right? | ||
But if that thinking happens in digital space, then that long future gets telescoped, and I think you fairly quickly reach a condition where you have close to optimal technology. | ||
And then you can colonize space cost-effectively. | ||
You just need to send out one little probe that then can land on some resource and set up a production facility to make more probes, and then it spreads exponentially everywhere. | ||
And then if you want to, you could then, like, after that initial infrastructuring has happened, you could transport biological human beings to other planets if you wanted to. | ||
But it's not really where the action is going to be. | ||
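To make the "spreads exponentially" point concrete, here is a toy doubling calculation; the replication factor and star count are arbitrary assumptions for illustration:

```python
# If each probe can build copies of itself from local resources, how many
# generations until there are more probes than star systems? Toy numbers only.
import math

copies_per_probe = 2     # assumed replication factor per generation
star_systems = 4e11      # rough star count for a Milky Way-sized galaxy

generations = math.ceil(math.log(star_systems) / math.log(copies_per_probe))
print(f"~{generations} doublings to exceed {star_systems:.0e} systems")
# ~39 generations: probe count quickly stops being the bottleneck, and
# travel time dominates how fast the frontier expands.
```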
But what if we were concerned there's some sort of a threat to the Earth? | ||
Like... | ||
Some sort of asteroid impact, something. | ||
I mean, at that stage of technology, averting some asteroid would be, I think, trivial. | ||
Really? | ||
It would be like a gift of free energy. | ||
Like, oh, here comes an energy package. | ||
Great. | ||
That's a funny way to look at it. | ||
Do you think we're going to eventually colonize Mars? | ||
Well, I think the answer is if and only if we manage to get through these key technological transitions. | ||
And then I think we will colonize not just Mars, but everything else that is accessible in the universe. | ||
When you talk about these things, people always want to know when. | ||
When do you think it's going to happen? | ||
What's the timeline? | ||
Yeah, so my guess would be after technological maturity, like after superintelligence. | ||
Now, with Mars, it's possible that there would be like a little kind of prototype colonization thing because people are really excited about that. | ||
So you could imagine some little demo projects. | ||
Happening sooner. | ||
But if we're talking about something, say, that would survive long term, even if the Earth disappeared, like some kind of self-sustaining civilization, I think that's going to be very difficult to do until you have super intelligence and then it's going to be trivial. | ||
So one of the applications of superintelligence would be to terraform Mars, to change the atmosphere, to make it sustainable for biological life? | ||
Yeah, for biological life. | ||
So we have like a second spot. | ||
Yeah, for example. | ||
Like a vacation house. | ||
Now, technological maturity is a very radical context. Maybe there are additional technologies we can't even think of yet, but even just from what we already know about physics, etc. | ||
We can see possible technologies that we're not yet able to build, but we can see that they would be consistent with physics, that they would be stable structures. | ||
And already that creates a vast space of things you could do. | ||
And so, for example, I think it would be possible at technological maturity to upload human minds into computers. | ||
You think that's going to happen, like Ray Kurzweil stuff? | ||
Well, I think, again, it would be technologically possible at technological maturity to do it. | ||
Now, whether it's actually going to happen then depends, A, do we reach technological maturity? | ||
And B, are we interested in using our technology for that purpose at that time? | ||
But both of those seem kind of reasonably... | ||
unidentified
|
Possible? | |
Yeah, reasonably possible. | ||
Possible, yeah. | ||
Especially in comparison to what we've already achieved. | ||
If I had a time machine and it could jump you 1,000 years from now into the future, would you do it? | ||
Would you jump in? | ||
I mean, I think just going on a long jet flight is kind of already stretching my... | ||
What if it was an instantaneous trip to 1,000 years? | ||
Could I come back? | ||
No. | ||
Well... | ||
I probably wouldn't. | ||
I don't know. | ||
I mean, I'm kind of a bit cautious with these things. | ||
At the very least, I'd rather think about it for a long time before. | ||
Also, I have attachments. | ||
There are people I care about here and projects and maybe even opportunities to try to make some difference. | ||
If we actually are in this weird time right now, different from all of earlier human history, where nothing much was happening, and not yet at the point where it's all out of our hands and superintelligence is running things. | ||
If that actually is, if that's true, then that means we right now live in this very weird period where our actions might have cosmological consequences. | ||
If we affect the precise time and way in which the transition to machine superintelligence happens, we would be hugely influential. | ||
And if you have some ambition to try to do some good in the world, then that kind of can be a very exciting prospect as well. | ||
Like, there might be no other better time to exist if your goal is to do good. | ||
Yeah, we might be in the golden years. | ||
In terms of the ability to take actions that have large consequences. | ||
Also this very unique transitionary period between the times of old and the times of new. | ||
Like we're really in the heat of the change in terms of like we, you know, the internet is only 20 plus years old. | ||
Phones are only, you know, cell phones at least, people carrying them all the time, it's only 15 plus years old. | ||
This is very, very new. | ||
Yeah. | ||
So it's an exciting, crazy time where all these changes are taking place really rapidly. | ||
Like, if you were from the future, this might be the place where you would travel to, to experience what it was like to see this immense change take place almost instantaneously. | ||
Like, if you could go back in time to a specific time in history and experience what life was like, to me, I think I'd probably pick ancient Egypt, like, during the days of the pharaohs. | ||
I would love to see what it was. | ||
No, no, you just need to watch. | ||
Just to see what it looks like, you know, what it's like to experience life back then. | ||
But if I was from the future, where things were... | ||
Just out of curiosity, what do you think it would look like? | ||
Like, what do you imagine yourself seeing in this? | ||
I would imagine, I mean, I've really thought long and hard about the construction methods of ancient Egypt. | ||
I would love to see what it looked like when they were building the pyramids. | ||
How long did it take? | ||
What were they doing? | ||
How did they do it? | ||
We still don't know. | ||
It's all really theoretical. | ||
There are all these ideas of how they constructed it with incredible precision, in terms of the way it's astronomically aligned to certain areas of our solar system and different constellations. | ||
It's amazing. | ||
I would love to have seen how they did that and what was the planning like and how they implemented and how many people did it take and how long did it take because we really don't know. | ||
It's all speculation. | ||
During the burning of the Library of Alexandria, we lost so much information. | ||
We've got hieroglyphs and the physical structures that are still daunting. | ||
We have no idea. | ||
Look at the Great Pyramid of Giza, the huge one with two million-plus stones in it. | ||
Who made that? | ||
unidentified
|
How? | |
How did you guys do it? | ||
What? | ||
Did you draw it out first? | ||
How did you get all the rocks there? | ||
I mean, I think that would be probably the spot that I would want to go to. | ||
I would want to be there in the middle of the construction of the pyramids just to watch. | ||
So those certainly would be big tourist destinations for time travelers, I guess. | ||
But if one is thinking about what was actually going on back then... | ||
We think of the pyramids and the slave gangs. | ||
But of course, for most Egyptians, most of the time, they would be picking weeds from their field or putting their baby to sleep or stuff like that. | ||
So kind of the typical moment of human existence. | ||
They don't even think it's slaves anymore, I don't think. | ||
I think they think it's skilled labor based on their diet. | ||
Based on the diet, the utensils that they found in these camps, these workers' camps... | ||
They think that these were highly skilled craftspeople, that it wasn't necessarily slaves. | ||
They used to think it was slaves, but now, because of the food remains, they know the workers were eating really well. And there's also the level of sophistication involved. | ||
This is not something you just get slaves to do. | ||
It seems there was a population of structural engineers and a population of skilled construction people, and that they utilized all of these great minds they had back then to put this thing together. | ||
But it's still a mystery. | ||
I think that's the spot that I would go to because I think it would be amazing to see so many different innovative times. | ||
I mean, it would be amazing to be alive during the time of Genghis Khan or to be alive during some of the wars of 1,000, 2,000 years ago just to see what it was like. | ||
The pyramids would be the big one. | ||
But if I were from some weird dystopian future where artificial intelligence runs everything and human beings are linked together by some sort of neurological implant, and we longed for the days of biological independence, we would want to see: what was it like when they first started inventing phones? | ||
What was it like when the internet was first opened up for people? | ||
What was it like when someone had someone like you on a podcast, talking about all of this? | ||
This really Goldilocks period of great change, where we're still human, but we're worried about privacy. | ||
We're concerned our phones are listening to us. | ||
We're concerned about surveillance states, and people put little stickers over the laptop camera. | ||
We see it coming, but it hasn't quite hit us yet. | ||
We're just seeing the problems that are associated with this increased level of technology in our lives. | ||
Which is, yeah, that is a strange thing. | ||
If we add up all these pieces, it does put us in this very weirdly special position. | ||
And you wonder, hmm, isn't it a little bit too much of a coincidence? | ||
It might just be the case, but yeah, it does put some strain on that picture. | ||
unidentified
|
When you say a little too much of a coincidence, how so? | |
I mean, I guess the intuitive way of thinking about it, like what are the chances that just by chance you would happen to be living in the most interesting time in history, being like a celebrity, like whatever, like that's pretty low prior probability. | ||
Oh, you mean like for me? | ||
Well, from you, I mean for all of us, really. | ||
For all of us. | ||
And so that could just be, I mean, if there's a lottery, somebody's got to have the ticket, right? | ||
Or, yeah, or we are wrong about this whole picture, and there is some very different structure in place, which would make our experiences more typical. | ||
That's what I was getting to. | ||
Yeah, I gathered. | ||
Yeah, so... | ||
How much have you considered the possibility of a simulation? | ||
Well, a lot. | ||
I mean, I developed the simulation argument back in the early 2000s. | ||
unidentified
|
And so, yeah. | |
But I mean, I know that you developed this argument and I know that you've spent a great deal of time working on this. | ||
But personally, the way you view the world, how much does it play into your vision of what reality is? | ||
Well, it's hard to say. | ||
I mean, for the majority of my time, I'm not actively thinking about that. | ||
I'm just living. | ||
Now, I have this weird job where my work is actually to think about big-picture questions. | ||
So it kind of comes in through my work as well. | ||
When you're trying to make sense of our position, our possible future prospects, the levers which we might have available to affect the world, what would be a good and bad way of pulling those levers, then you have to try to put all of these constraints and considerations together. | ||
And in that context, I think it's important. | ||
I think if you are just going about your daily existence, then it might not really be very useful or relevant to constantly try to bring in hypotheses about the nature of our reality and stuff like that. | ||
Because for most of the things you're doing on a day-to-day basis, they work the same, whether it's inside a simulation or in basement-level physical reality. | ||
You still need to get your car keys out. | ||
So in some sense, it kind of factors out and is irrelevant for many practical intents and purposes. | ||
Do you remember when you started to contemplate the possibility of a simulation? | ||
No, I mean, I remember when the simulation argument occurred to me, which is different. For as long as I can remember, it's been a possibility: it could all be a dream, it could be a simulation. But there is this specific argument that narrows down the range of possibilities, where the simulation hypothesis is then one of only three options. | ||
What are the three options? | ||
Well, one is that almost all civilizations at our current stage of technological development go extinct before reaching technological maturity. | ||
That's like option one. | ||
Could you define technological maturity? | ||
Well, say having developed at least all those technologies that we already have good reason to think are physically possible. | ||
So that would include the technology to build extremely large and powerful computers on which you could run detailed computer simulations of conscious individuals. | ||
So that kind of would be a pessimistic, like if almost all civilizations at our stage failed to get there, that's bad news, right? | ||
Because then we'll fail as well, almost certainly. | ||
That's one possibility. | ||
Yeah, so that's option one. | ||
Option two is that there is a very strong convergence among all technologically mature civilizations in that they all lose interest in creating ancestor simulations or these kinds of detailed computer simulations of conscious people like their historical predecessors or variations. | ||
So maybe they have all of these computers that could do it, but for whatever reason, they all decide not to do it. | ||
Maybe there's an ethical imperative not to do it or some other... | ||
I mean, we don't really know much about these post-human creatures and what they want to do and don't want to do. | ||
Post-human creatures. | ||
Well, I'd imagine that by the time they have the technology to do this, they would also have enhanced themselves in many different ways. | ||
Right. | ||
Perhaps enhancing their ability to recognize the consequences. | ||
unidentified
|
Right. | |
Yeah. | ||
Of creating some sort of simulation. | ||
Yeah, they would almost certainly have cognitively enhanced themselves, for example. | ||
Well, doesn't the concept of downloading consciousness into a computer almost ensure that there's going to be some type of simulation? | ||
If you have the ability to download consciousness into a computer, once it's contained into this computer, what's to stop it from existing there? | ||
As long as there's power and as long as these chips are firing and electricity is being transferred and data is being moved back and forth, you would essentially be in some sort of a simulation. | ||
Well, I mean, if you have the capability to do that and also the motive... | ||
It would have to simulate something that resembles some sort of a biological interface. | ||
Otherwise, it's not going to know what to do, right? | ||
Yeah. | ||
So we have these kind of virtual reality environments now that are imperfect but improving. | ||
And you could kind of imagine that they get better and better and then you have a perfect virtual reality environment. | ||
But imagine also that your brain, instead of sitting in a box with big headphones and some glasses on, the brain itself also could be part of the simulation. | ||
The matrix. | ||
Well, I think in the matrix there are biological humans outside that plug in, right? | ||
Right. | ||
But you could include in the simulation, just as you have maybe simulated coffee mugs and cars, etc., you could have simulated brains. | ||
And so... | ||
Here is one assumption coming in from outside the simulation argument, and one can talk about it separately, but it's the idea I call the substrate-independence thesis: that you could in principle have conscious experiences implemented on different substrates. | ||
It doesn't have to be carbon atoms, as is the case with the human brain. | ||
It could be silicon atoms. | ||
What creates conscious experiences is some kind of structural feature of the computation that is being performed, rather than the material that is used to underpin it. | ||
So in that case, you could have a simulation with detailed simulations of brains in it, where maybe every neuron and synapse is simulated, and then those brains would be conscious. | ||
And that's possibility number two? | ||
Well, no, possibility number two is that these post-humans just are not at all interested in doing it. | ||
And not just that some of them don't, but that of all these civilizations that reach technological maturity, pretty uniformly, they just don't do it. | ||
And what's number three? | ||
That we are in a simulation, the simulation hypothesis. | ||
And where do you lean? | ||
Well, I generally tend to punt on the question of precise probabilities there. | ||
I mean, I think it would be a probability thing, right? | ||
Yes. | ||
You assign some to each. | ||
But yeah, I've refrained from giving a very precise probability. | ||
Partly because, if I said some particular number, it would get quoted, and it would create this sense of false precision. | ||
The argument doesn't allow you to derive that the probability is X, Y, or Z. It's just that at least one of these three has to obtain. | ||
So, yeah, so that narrows it down. | ||
Because you might think: how could we know anything about the future? | ||
You could just make up any story, and we have no evidence for it. | ||
But it seems that if you start to think everything through, there are actually quite tight constraints on what probabilistically coherent views you could have. | ||
And it's kind of hard even to find one overall hypothesis that fits this and various other considerations that we think we know. | ||
The idea would be that if there is one day the ability to create a simulation, that it would be indiscernible from reality itself. | ||
Say if we're not in a simulation yet. | ||
If this is just biological life, we're just extremely fortunate to be in this Goldilocks period. | ||
But we're working on virtual reality in terms of like Oculus and all these companies are creating these consumer-based virtual reality things that are getting better and better and really kind of interesting. | ||
You've got to imagine that 20 years ago there was nothing like that. | ||
20 years from now, it might be indiscernible. | ||
You might be able to create a virtual reality that's impossible to discern from the reality that we're currently experiencing. | ||
Or maybe 20,000 years or 20 million years. | ||
Like the argument makes no assumption at all about how long it will take. | ||
Yeah. | ||
But one day. | ||
Yeah. | ||
If things continue to improve. | ||
Yeah. | ||
Computational power, the ability to replicate experiences and even feedback in terms of like biological feedback, touch and feel and smell. | ||
If they figure out a way to do that, one day they will have an artificial reality that's indiscernible from reality itself. | ||
And if that is the case, how do we know if we're in it? | ||
Right. | ||
That is roughly the gist of it. | ||
Now, as I said, I think if you simulate the brain also, you have a cheaper overall system than if you have a biological component in the center surrounded by virtual reality gear. | ||
So for a given cost, I think you could create many more ancestor simulations with simulated brains in them than with biological brains in VR gear. | ||
So most, in these scenarios where there would be a lot of simulations, most of those scenarios, it would be the kind of where everything is digital. | ||
Because it's just cheaper with mature technology to do it that way. | ||
This is one of the biggest, for lack of a better term, mindfucks. | ||
When you really stop and think about reality itself. | ||
That if we are living in a simulation, like, what is it? | ||
And why? | ||
And where does it go? | ||
And how do I respond? | ||
How do I move forward? | ||
If I really do believe this is a simulation, what am I doing here? | ||
Yeah, those are big questions. | ||
Huge questions. | ||
And some of them arise even if we're not in a simulation. | ||
Yeah. | ||
And aren't there people that have done some strange, impossible to understand calculations that are designed to determine whether or not there's a likelihood of us being involved in a simulation currently? | ||
unidentified
|
Yeah. | |
Yeah, I think it slightly misses the point. | ||
So there are these attempts to figure out the computational resources that would be required if you wanted to simulate some physical system with perfect precision. | ||
So if we have some human, a brain, a room, let's say, and we wanted to simulate every little part, every atom, every subatomic particle, the whole quantum wave function, what would be the computational load of that? | ||
And would it be possible to build a computer powerful enough that you could actually do this? | ||
Now, I think the way that this misses the point is that it's not necessary to simulate all the details of the environment that you want to create in an ancestor simulation. | ||
You would only have to simulate it insofar as it is perceptible to the observer inside the simulation. | ||
So, if some post-human civilization wanted to create a Joe Rogan doing a podcast simulation, they'd need to simulate... | ||
Joe Rogan's brain, because that's where the experiences happen. | ||
And then whatever parts of the environment that you are able to perceive. | ||
So surface appearances, maybe of the table and walls. | ||
Maybe they would need to simulate me as well, or at least a good enough simulacrum that I could sort of spit out words that would sound like they came from a real human, right? | ||
I don't know. | ||
Now we're getting quite good at this with GPT-2, this kind of AI that just spews out plausible-sounding words. | ||
I don't know whether... | ||
Anyway, but what is happening inside this table right now is completely irrelevant. | ||
You have no way of knowing whether there even are atoms in there. | ||
Now, you could... | ||
Take a big electron microscope and look at finer structure and then you could take an atomic force microscope and you could see individual atoms even and you could perform all kinds of measurements. | ||
And it might be important that if you did that you wouldn't see anything weird because physicists do these experiments and they don't see anything weird. | ||
But then you could fill in those details if and when somebody performs those experiments. | ||
That would be vastly cheaper than continuously running all of this. | ||
And so this is the way a lot of computer games are designed today, that they have a certain rendering distance. | ||
You only actually simulate the virtual world when the character goes close enough that you could see it. | ||
And so I imagine these kind of super-intelligent post-humans doing this. | ||
Obviously, they would have figured that out and a lot of other optimizations. | ||
So in other words, these calculations or experiments, I think, don't really bear on the hypothesis. | ||
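A minimal sketch of the "only simulate what is observed" optimization described above, in the style of a game's rendering distance; everything here is a hypothetical illustration:

```python
# Lazily generate fine-grained detail only when an observer is close enough
# to perceive it, and cache it so repeat observations stay consistent.
import math

RENDER_DISTANCE = 50.0   # only fill in detail within this radius of an observer

class World:
    def __init__(self):
        self.detail_cache = {}   # regions filled in on demand, keyed by position

    def observe(self, observer_pos, target_pos):
        if math.dist(observer_pos, target_pos) > RENDER_DISTANCE:
            return None   # too far away: nothing is computed at all
        if target_pos not in self.detail_cache:
            self.detail_cache[target_pos] = self.expensive_microphysics(target_pos)
        return self.detail_cache[target_pos]

    def expensive_microphysics(self, pos):
        # Stand-in for simulating atoms or wave functions on demand.
        return f"atomic detail at {pos}"

world = World()
print(world.observe((0.0, 0.0), (3.0, 4.0)))       # in range: detail generated
print(world.observe((0.0, 0.0), (300.0, 400.0)))   # out of range: skipped
```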
Right. | ||
Without assigning a probability to any one of those three scenarios, what makes you think? | ||
If you do stop and think, "I think we're in a simulation," what are the things that are convincing to you? | ||
Well, it would mainly go through the simulation argument. | ||
To the extent that I think the two alternative hypotheses are improbable, that would shift the probability mass onto the third remaining one. | ||
Is it really only three? | ||
So the ones are... | ||
That human beings go extinct. | ||
And also other civilizations at our stage in the cosmos or whatever. | ||
Yes. | ||
It's a strong filter. | ||
That they either go extinct or they decide not to pursue it. | ||
They all lose interest, yeah. | ||
Or it becomes a simulation. | ||
Is that really the only three options? | ||
Well, I think the only three live options. | ||
So you can... | ||
I can unfold the argument a little bit more and look at it in a more granular way. | ||
So suppose that the first two options are false. | ||
So some non-trivial fraction of civilizations at our stage do get through. | ||
And some non-trivial fraction of those are still interested. | ||
Then I think you can convincingly show that by using just a small portion of their resources they could create very, very many simulations. | ||
And you can show that or argue for that by comparing the computational power of systems that we know are physically possible to build. | ||
We can't currently build them, but we can see that you could build them with nanotech and planetary-sized resources, on the one hand. | ||
And on the other hand, estimates of how much compute power it would take to simulate a human brain. | ||
And you find that a mature civilization would have many, many orders of magnitude more. | ||
So that even if they just used 1% of their compute power of one planet for one minute, they could still run thousands and thousands and thousands of these simulations. | ||
And they might have billions of planets and they might last for billions of years. | ||
So the numbers are quite extreme, it seems. | ||
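A rough back-of-the-envelope version of that comparison, using order-of-magnitude figures like those in Bostrom's published paper; all numbers are illustrative estimates, not measurements:

```python
# Compare the cost of simulating one human lifetime of experience with the
# estimated power of a planetary-mass computer.
brain_ops_per_sec = 1e17          # upper-end estimate for emulating one brain
seconds_per_century = 3.15e9      # roughly 100 years of simulated experience
ops_per_simulated_life = brain_ops_per_sec * seconds_per_century   # ~3e26

planet_computer_ops_per_sec = 1e42   # rough estimate for a planetary-mass computer

lives_per_second = planet_computer_ops_per_sec / ops_per_simulated_life
print(f"~{lives_per_second:.0e} century-long lives simulated per second")
# ~3e15 lives per second, versus roughly 1e11 humans who have ever lived,
# which is the sense in which "the numbers are quite extreme."
```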
So then what you get is this implication: if the first two options are false, it would follow that there are many, many more simulated experiences of our kind than original experiences of our kind. | ||
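The counting step can be written as a single fraction, roughly as in the published version of the argument. With \(f_P\) the fraction of civilizations that reach a posthuman stage, \(\bar{N}\) the average number of ancestor simulations such a civilization runs, and \(\bar{H}\) the average number of pre-posthuman individuals per civilization, the fraction of observers who are simulated is

$$ f_{\mathrm{sim}} = \frac{f_P \, \bar{N} \, \bar{H}}{f_P \, \bar{N} \, \bar{H} + \bar{H}} = \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}, $$

which is close to 1 unless \(f_P \bar{N}\) is very small, corresponding to options one and two.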
So the idea is that if we continue to innovate, if human beings or intelligent life in the cosmos continues to innovate, that creating a simulation is almost inevitable? | ||
No, no. | ||
I mean, the second might be... | ||
That we decide not to. | ||
Yeah, and others with the same capability. | ||
But what if they don't decide not to? | ||
If they don't decide not to... | ||
The first option: human beings figure out a way to not die out, to stay innovative, and we don't have any natural disasters or man-made disasters. Then step two: we don't decide not to pursue this. | ||
If we continue to pursue all the various forms of technological innovation, including simulations, then it becomes inevitable. | ||
If we get past those first two options, it becomes inevitable that we pursue it. | ||
Well, so if they have the capacity... | ||
Then they will do it. | ||
And the motive, or like the desire to do it. | ||
Yes. | ||
So then they would create hugely many of these. | ||
So not just one simulation, right? | ||
Because it's so cheap at technological maturity, if you have a cosmic empire of resources, they don't have to have a very big desire to do this. | ||
They might just think, well, you know... | ||
Well, that was the big question that Elon said he would ask artificial intelligence. | ||
He said, what's beyond the simulation? | ||
That's the real question. | ||
If this is a simulation, if there's many, many simulations running currently, What's beyond the simulation? | ||
Well, yeah, you might be curious about that. | ||
I mean, I think the more important question would be, like, what do we all things considered have the most reason to do in our situation? | ||
Like, what would it be wise for us to do? | ||
Is there, like, some way that we can be helpful or have the best life or whatever your goal is? | ||
Or is that ridiculous to even consider? | ||
Maybe it's beyond us. | ||
The question of what is outside? | ||
Yes. | ||
Well... | ||
I mean, I don't think it's ridiculous to consider. | ||
I think it might be beyond us, but maybe we would be able to form some abstract conception of what it is. | ||
I mean, in fact, if the path to believing the simulation hypothesis is the simulation argument, then we have a bunch of structure there that gives us some idea. | ||
There would be some advanced civilization that would have developed a lot of technology over time, including compute technology, ability to do virtual reality very well. | ||
We'd imagine probably they would have used that technology for a whole host of other purposes as well. | ||
You wouldn't just get that one technology and not also be able to build, say, a train or something like that. | ||
They'd probably be super intelligent and have the ability to colonize the universe and do a whole host of other things. | ||
And then for one reason or another, they would have decided to use some of the resources to create simulations. | ||
And inside one of those simulations, perhaps, our experiences would be taking place. | ||
So you could more speculatively fill in more details there. | ||
But I still think that fundamentally our ability to grok this whole thing would be very limited. | ||
And... | ||
There might be other considerations that we are oblivious to. | ||
I mean, if you think about the simulation argument, it's quite recent, right? | ||
So it's less than 20 years old. | ||
So suppose that it's correct, for the sake of argument. | ||
Then up to this point, everybody was missing something like hugely important and fundamental, right? | ||
Really smart people, hundreds of years, like this massive piece right in the center. | ||
But what are the chances that we have now figured out the last big missing piece? | ||
Presumably, there must be some further big, giant realization that is beyond us currently. | ||
So, yeah, that looks kind of plausible: maybe there are further big discoveries or revelations that would not falsify the simulation argument, but change its interpretation in some way that is hard to know in advance. | ||
Now, is the concept that, if there is a simulation, all the historical record is simulated as well? Or when did it kick in? | ||
Well, there are different options there, and there might be many different simulations that are configured differently. | ||
There could be ones that run for a very long time, ones that run for a short period of time, ones that simulate everything and everybody, others that just focus on some particular scene or person. | ||
It's just a vast space of possibilities there. | ||
And which ones of those would be most likely is really hard to say much about because it would depend on the reasons for creating these simulations, like what would the interests of these hypothetical post-humans be. | ||
Have you ever had a conversation with a pragmatic, capable person who really understands what you're saying, but they disagree about even the possibility of a simulation? | ||
Yeah. | ||
It must have occurred, but it doesn't tend to be the place where the conversation usually goes. | ||
Where does the conversation usually go? | ||
Well, I mean, I move in kind of unrepresentative circles. | ||
So I think amongst the folk I interact with a lot, I think a common reaction is that it's plausible and still there is some uncertainty because these things are always hard to figure out. | ||
But we should assign it some probability. | ||
But I'm not saying that would be the typical reaction if you kind of did a Gallup survey or something like that. | ||
I mean, another common thing is, I guess, to misinterpret it in some way or another. | ||
And there are different versions of that. | ||
So one would be this idea that in order for the simulation hypothesis to be true, it has to be possible to simulate everything around us to perfect microscopic detail, which we discussed earlier. | ||
Then some people might not immediately get this idea that the brain itself could be part of the simulation. | ||
So they imagine the brain would be plugged in with a big cable, and that if you could somehow reach behind you, you would find it. | ||
That would be another possible common misconception, I guess. | ||
Then I think a common thing is to conflate the simulation hypothesis with the simulation argument. | ||
The simulation hypothesis is we are in a simulation. | ||
The argument is that one of these three options is true, only one of which is the simulation hypothesis. | ||
Some conflation there happens. | ||
How do you factor dreams into the simulation hypothesis? | ||
Well, I think they are irrelevant to it. | ||
That is that whether or not we are in a simulation, people presumably still have dreams and there are other reasons and explanations for why that would happen. | ||
So you have dreams even if you're in the simulation? | ||
Well, why not? | ||
unidentified
|
Hmm. | |
Okay, okay. | ||
Why not? | ||
So I sometimes get these kinds of random emails that are like, oh, well, yes, thank you, Bostrom. | ||
Your theory is very interesting, and I found proof. | ||
And like, oh, when I looked in my bathroom mirror, I saw pixels. | ||
Like, random things like that. | ||
Crazy people. | ||
Varying degrees. | ||
I mean, maybe we're all crazy. | ||
Yes, for sure. | ||
But I think that those things are not evidence. | ||
Generally speaking, even if we're not in a simulation, you would expect there still to be various people who claim to perceive various things. | ||
Sometimes people have hallucinations, sometimes they misremember, sometimes they make stuff up. | ||
So even if we are in a simulation, the most likely explanation for those things is not that there was a glitch in the simulation. | ||
It's that one of these normal psychological phenomena took place. | ||
Right. | ||
So, yeah, I would not be inclined to think that this would be an explanation. | ||
If somebody has those kinds of experiences, even if the simulation hypothesis is true, it's probably not the explanation. | ||
The concept of creativity, how does that play into a simulation? | ||
If during the simulation you're coming up with these unique creative thoughts, are these unique creative thoughts your own or are these unique creative thoughts stimulated by the simulation? | ||
They would be your own in the sense that it would be your brain that was producing them. | ||
Something else would have produced your brain. | ||
But obviously there are some incredible influences on your brain if you're inside some sort of external simulation. | ||
That's true in physical reality as well. | ||
It doesn't come from nowhere. | ||
But it's still your brain. | ||
I think it would be potentially as much your own in the simulation as it would be outside the simulation. | ||
I mean, unless the simulators had, for whatever reason, set it up because they wanted to have Rogan coming up with this particular idea, and configured the initial conditions in just the right way to achieve that. | ||
Maybe then, when you come up with it, maybe it's less your achievement than the people who set up the initial conditions. | ||
But other than that, I think... | ||
Because the reason I ask that is all ideas, everything that gets created, all innovation, initially comes from some sort of a point of someone figuring something out or coming up with a creative idea. | ||
All of it. | ||
Like everything that you see in the external world, like everything from televisions to automobiles, was an idea. | ||
And then somebody implemented that idea or groups of people implemented the technology involved in that idea and then eventually it came to fruition. | ||
If you're in a simulation, how much of that is being externally introduced into your consciousness by the simulation? | ||
And is it pushing the simulation in a certain direction? | ||
Yeah, I don't know. | ||
I mean, you could imagine both kinds of simulations. | ||
Like simulations where you just set up the initial conditions and let it run to see what happens. | ||
Right. | ||
And others where maybe you want to just simulate this particular historical counterfactual. | ||
unidentified
|
Right. | |
What would have happened if Napoleon had been defeated? | ||
Maybe that's our simulation. | ||
You put in some specific thing there. | ||
You could imagine either or both of those types of ways of doing it. | ||
But your simulation hypothesis, if we're in it, it's running. | ||
Now, is it running and we independently interact with the simulation? | ||
Or is the simulation introducing ideas into our minds that then come to fruition inside the simulation? | ||
Is that how things get done? | ||
Like, if we are in a simulation, right? | ||
And if during the simulation someone has created a new iPhone, why are they doing that? | ||
Are there other people in the simulation? | ||
Or is this simulation entirely unique to the individual? | ||
Is each individual involved in a different... | ||
Co-existing simulation? | ||
Right. | ||
I think the kind of simulation where it would be clearest that this is possible would be one where every person you perceive is simulated, each with their own brain. | ||
Because then you could get the realistic behavior out of the brain if you simulated the whole brain at a sufficient level of detail. | ||
So everyone you interact with is also a simulation? | ||
Well, that type of simulation should certainly be possible. | ||
Then it's more of an open question whether it would also be possible to create simulations where there was, say, only one person conscious and the others were just like simulacra. | ||
They acted like humans, but there's nothing inside. | ||
So these would be, in philosophers' parlance, zombies. | ||
It's a technical term: when philosophers discuss it, it means somebody who acts exactly like a human but has no conscious experience. | ||
Now, whether those things are possible or not is an open question. | ||
Do you consider that ever when you're communicating with people? | ||
Do you ever stop and think? | ||
Not really. | ||
I mean, it has occurred to me, but not regularly, no. | ||
Yeah, but does it ever get to your head where you're like, this might not be real. | ||
Like, this person... | ||
Might not be a real person. | ||
This might be a simulation. | ||
Right. | ||
I mean, I guess there are two things. | ||
One is that you'd probably have some probability distribution over all these different kinds of situations that you could be in. | ||
Maybe all of those situations are simulated in different frequencies and stuff. | ||
Different numbers of times, that is. | ||
So there would be some probability distribution there. | ||
That would be the first thought. | ||
That in reality you're always kind of uncertain. | ||
The second would be that even if you were in that kind of simulation, it might still be that behaviorally what you should do is exactly the same as if you were in the other simulation. | ||
So it might not have that much day-to-day implications. | ||
Do you think there's psychological benefits for interacting with life as if it's a simulation? | ||
No, I don't think that would be an advantage. | ||
Maybe a disadvantage in some cases. | ||
What, alleviation of existential angst? | ||
Yeah, maybe, but who knows? | ||
It could also, I guess, if you sort of interpret it in the wrong way, maybe lead it to feel more alienated or something like that. | ||
I don't know. | ||
But I think, to a first approximation, the same things that work well and make a lot of sense to do in physical reality would also be our best bets in a simulated reality. | ||
That's where it gets really weird. | ||
Like, if it's a simulation, but you must behave in each and every instance as if it's not. | ||
If you know, if you had a test you could take, like a pregnancy test, when you went to the CVS and you pee on a strip and it tells you, guess what, Nick? | ||
This shit isn't real. | ||
You're in a simulation. | ||
100% proven, absolutely positive. | ||
You know from now on, from this moment on, that everything you interact with is some sort of a creation. | ||
It's not real. | ||
But it is real, because you're having the same exact experience as if it was real. | ||
Right. | ||
How do you proceed? | ||
Yeah, I think there might be very subtle reprioritizations that would happen. | ||
What would you do, personally? | ||
Well... | ||
I don't know the full answer to that. | ||
I think there are certain possibilities that look kind of far-fetched if we're not in a simulation that become, like, more realistic if we are. | ||
So one obvious one is that a simulation could be shut off, if the plug is pulled on the computer where the simulation is running, right? | ||
We don't think the physical universe, as we normally understand it, can just suddenly pop out of existence. | ||
There's conservation of energy and momentum and so forth. | ||
But a simulated universe, that seems like something that could happen. | ||
It doesn't mean it's likely to happen, and it doesn't say anything about the time frame, but it at least enters as a possibility where it was not there before. | ||
Other things become maybe more similar to various theological possibilities as well. | ||
Like afterlife and stuff like that. | ||
And in fact, through a very different path, it maybe leads to some similar destinations that people thinking about theology have arrived at. | ||
I mean, it's kind of different. | ||
I think there is no logically necessary connection either way. | ||
But there are some kind of structural parallels, analogs, between the situation of a simulated creature to their simulators and a created entity to their creator. | ||
That are interesting, although kind of different. | ||
So there might be comparisons you could make there that would give you some possible ways of proceeding. | ||
It seems like paralysis by analysis. | ||
You just sit there and think about it, at least I would. | ||
I would almost wind up not being able to do anything or not being able to act or move or think. | ||
That seems kind of likely to be suboptimal, right? | ||
Suboptimal for sure. | ||
But the concept is so prevalent now, so common, and so often discussed. | ||
It's interesting how much that has changed over the last 10 to 15 years, and how ideas can migrate from some kind of extreme radical fringe to being almost common sense a decade or two later. | ||
Why do you think that is? | ||
Well, we have a great ability to get used to things. | ||
I mean, this comes back to our discussion about the pace of technological progress. | ||
It seems like the normal way for things to be. | ||
We are very adaptable creatures, right? | ||
You can adjust to almost anything, and we have no external reference point, really, and mostly these judgments are based on what we think other people think. | ||
So if it looks like some high-status individual, Elon Musk or whatever, seems to take the simulation argument seriously, then people think, oh, it's a sensible idea. | ||
And it only takes like one or two or three of those people that are highly regarded and suddenly it becomes normalized. | ||
Is there anyone highly regarded that openly dismisses this possibility? | ||
There must be, but I'm not sure they would have bothered to go on the record specifically. | ||
I guess the people who are dismissive of it wouldn't maybe even bother to address it or something. | ||
I'm trying to think, yeah, and I'm drawing a blank on whether there's a particular person I could point to. | ||
I would love to hear the argument against it. | ||
I would love to hear someone like you or Elon interact with them and try to volley back and forth these ideas. | ||
That could be interesting. | ||
Yeah. | ||
So you've never had some sort of a debate with someone who openly dismisses it? | ||
Well, like a big public debate? | ||
I don't know. | ||
Or even private. | ||
Yeah. | ||
I don't know. | ||
Well, it's been a long time since I first put this article out. | ||
I guess I had more conversations about the argument itself. | ||
What was the reaction when you first put it out? | ||
There was a lot of attention, right? | ||
I mean, pretty much right off the bat, including public attention. | ||
It was published in an academic journal, Philosophical Quarterly. | ||
But yeah, it quickly drew a lot of attention. | ||
And then it's kind of come in waves, like every year or so. | ||
There would be some new group, either a new generation or some new community that hears about it for the first time, and it gets a new wave of attention. | ||
But in parallel to these waves, there's also this chronic trend towards it becoming more part of the mainstream conversation and seeming kind of less far out there. | ||
And I think that's maybe partly just this: if there were some big flaw in the idea, it would have been discovered by now. | ||
So if it's been around for a while, it makes it a little bit more credible. | ||
It might also be slightly assisted by just technological progress. | ||
If you see virtual reality getting better and so on, it becomes easier to imagine how it could one day become so good that you could create something perfectly flawless. | ||
I was going to introduce that as option four. | ||
Is option four the possibility that one day we could conceivably create some sort of amazing simulation, but it hasn't been done yet? | ||
And this is why it's become a topic of conversation: there's some need for concern, because as you extrapolate technology and think about where it's going now and where it's headed, there could conceivably one day be a point where this exists. | ||
Should we consider this and deal with it now? | ||
Well, I'd say that would be highly unlikely, in that if the first two are wrong, then there will have been many, many more simulated ones than non-simulated ones over the course of all of history. | ||
Over the course of all of history, but what if it hasn't yet happened? | ||
Right, but then the question is: given that you know that by the end of time there will have been, let's say, a million simulations and one original history... | ||
Sure. | ||
And that all of these simulated people and the original-history people have subjectively indistinguishable experiences. | ||
You can't tell the difference from the inside. | ||
Then what, given that assumption, would it be rational for you to believe? | ||
Should you think you are one of the exceptional ones? | ||
Or should you think you are one amongst the larger set, the simulated ones? | ||
Or should you think that it just has not happened yet? | ||
But that would be equivalent to saying that you would be one of the non-simulated ones. | ||
You're talking about in the universe. | ||
Yeah, but you could make it even just, you could look at the narrow case of just the Earth. | ||
Let's just look at the narrow case of the Earth. | ||
In the narrow case of just the Earth: if the historical record is accurate, if it's not a simulation, then it seems very reasonable that we're just dealing with incremental increases in technology, pretty stunning and pretty profound currently, but we haven't... | ||
Well, that's how it looks, right? | ||
Sure. | ||
Yeah, but that's also how it would look if you were in a simulation. | ||
Yes, but it's also how it would look if you're not in a simulation yet. | ||
That's also a possibility too, no? | ||
Right, yeah. | ||
But for most people for whom it looks like that, it would be the case that they would be simulated. | ||
Why? | ||
Well, by assumption, if there are all these simulations created... | ||
Well, not yet. | ||
Well, right. | ||
But you don't know what time it is in external reality. | ||
Right, but why would we assume something so unbelievably fantastic when just life itself is preposterous?
Because life itself, just being a human being on a planet... | ||
You know, this planet spinning a thousand miles an hour, hurtling through infinity.
That, in itself, is fairly preposterous if it didn't exist. | ||
But it does exist. | ||
And we know that we, at least, we're all agreeing upon a certain historical record. | ||
We're agreeing upon Oppenheimer, the Manhattan Project, World War I, World War II. We're agreeing on Korea and Vietnam. | ||
We're agreeing on Reagan and Kennedy. | ||
We're agreeing on all these things, historically. | ||
unidentified: Right.
If we are all agreeing that there's a sort of historical process and we're all agreeing, I remember when the first iPhone was invented. | ||
I remember when the first computer. | ||
I remember when this. | ||
I remember the internet. | ||
Why would we assume that there's a simulation? | ||
We could assume that there's a possibility of a simulation, but why would we assume the simulation has occurred?
Why wouldn't we assume the simulation hasn't occurred yet? | ||
Right. | ||
I mean, so it is a possibility that we would be in the first time segment of all of these. | ||
Wouldn't that be more likely? | ||
Well, I'd say no. | ||
I mean, so it comes down then to this field, which is tricky and problematic called anthropics. | ||
So this is about how to assign probabilities in situations where you have uncertainty about who you are, what time it is, where you are. | ||
Right. | ||
So if you imagine, for example, all of these people who would exist in this scenario having to place bets on whether they're simulated or not. | ||
And you think about two possible different ways of reasoning about this. | ||
So one is you assume you're a randomly selected individual from all these individuals and you bet accordingly. | ||
Randomly selected individuals. | ||
Yeah, so then you would bet you're one of the simulated ones, because for a randomly selected one, if most are simulated... it's like most lottery tickets are...
But why are we assuming that most are simulated? | ||
This is where I'm getting confused. | ||
Well, most will have been simulated by the end of time.
By the end of time. | ||
This is like a timeless claim. | ||
But why already when it hasn't existed yet? | ||
Let's say, for the sake of argument, because I don't really have an opinion on this, pro or con, it's up in the air.
But if I was going to argue about pragmatic reality, the practicality of biological existence as a person that has a finite lifespan, you're born, you die, you're here right now, and we're a part of this just long line of humanity that's created all these incredible things that's led up to civilization. | ||
That's led up to this moment right now where you and I are talking into these microphones. | ||
It's being broadcast everywhere. | ||
Why isn't it likely that a simulation hasn't occurred yet? | ||
That we are in the process of innovating and one day could potentially experience a simulation. | ||
But why are you not factoring in the possibility or the probability that that hasn't taken place yet? | ||
Yeah, I mean, so it's in there. | ||
But if you imagine that people all followed this general principle of assuming that they would be the ones in the original history, before the simulations had happened...
Right. | ||
Then almost all of them would turn out to be wrong and they would lose their bets. | ||
Once a simulation has actually... | ||
unidentified: Right.
I mean, if you kind of integrate over the universe... | ||
But there's no evidence that a simulation has taken place. | ||
But there is evidence that you're alive. | ||
You have a mother, you have a father. | ||
Those things could be true in the simulation as well. | ||
Could be, but isn't that a pipe dream? | ||
Well, it depends on what simulation, right? | ||
I mean, a lot of simulations might run for a long time and have...
Might.
Yeah, yeah. | ||
But we know that if someone shoots you, you'll die. | ||
We know if you eat food, you get full. | ||
We know these things. | ||
These things could be objective facts. | ||
These could be...
Yeah, I think they are true, yeah.
Yes, right? | ||
Now, why would we assume… Why would a simulation be the most likely scenario when we've experienced, at least we believe we've experienced, all this innovation in our lifetime? | ||
We see it moving towards a certain direction. | ||
Why wouldn't we assume that that hasn't taken place yet? | ||
Yeah, I think to try to argue for the premise that conditional on there being first an initial segment of non-simulated Joe Rogan experiences and then a lot of other segments of simulated ones, that conditional on that being the way the world in totality looks, you should think you're one of the simulated ones. | ||
Why? | ||
Well, to argue for that, I think then you need to roll in this piece of probability theory called anthropics, which I alluded to. | ||
And just to pull one little element out of there to kind of create some initial plausibility for this. | ||
If you think in terms of rational betting strategies for this population of Joe Rogan experiences, the one that would lead to the overall maximal amount of winning would be if you all thought you're probably one of the simulated segments.
If you had the general reasoning rule that in this kind of situation you should think that you're the initial segment of non-simulated Rogan, then the great preponderance of these simulated experiences would lose their bets.
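A minimal sketch of this betting argument, assuming (purely for illustration) a million simulated observers and one original, all with indistinguishable experiences:

```python
# Sketch of the betting argument: which universal betting rule wins
# for the most observers? The counts are illustrative assumptions only.
N_SIM = 1_000_000   # hypothetical number of simulated observers
N_REAL = 1          # the single original-history observer

observers = ["simulated"] * N_SIM + ["original"] * N_REAL

# Strategy A: every observer bets "I am simulated".
wins_a = sum(o == "simulated" for o in observers)

# Strategy B: every observer bets "I am the original" -- the intuition
# that the simulations "just haven't happened yet".
wins_b = sum(o == "original" for o in observers)

print(f"bet 'simulated': {wins_a / len(observers):.6%} of observers win")
print(f"bet 'original':  {wins_b / len(observers):.6%} of observers win")
# Betting "simulated" wins for 99.9999% of observers; betting
# "original" wins for exactly one observer in 1,000,001.
```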
But there's no evidence of a simulation. | ||
Well, I'd say that there is indirect evidence insofar as there is evidence against these two alternatives. | ||
Well, the two alternatives being that intelligent life goes extinct before they create any sort of simulation or that they agree to not create a simulation. | ||
But what about if they're going to create a simulation? | ||
There has to be a time before the simulation is created. | ||
Why wouldn't you assume that that time is now currently happening when you've got a historical record of all the innovation that's leading up to today? | ||
I think the historical record would be there in the simulation. | ||
But why would it have to be there in a simulation and not be there in reality? | ||
Well, I mean, it could be there in the simulation if it's a kind of simulation that tracks the original, yeah. | ||
If it's a fantasy simulation, then, you know, maybe it wouldn't be there. | ||
Right, but it could just be reality. | ||
It doesn't have to be a simulation. | ||
I mean, in some sense, it would be both, right? | ||
I mean, there would be one Joe Rogan experience in the real original history, and then, like, maybe a million, let's just say. | ||
In simulated realities later. | ||
But if you think about your actions that kind of can't distinguish between these different possible locations in space-time where you could be, most of the impact of your decisions will come from impacting all of these million Joe Rogan instances.
Yeah, but this is once a simulation has been proven to exist, which it hasn't been. | ||
We have, at least in terms of what we all agree on, been proven to have biological lives.
We breed, we sleep, we eat, we travel on planes. | ||
All these things are very tangible and real. | ||
I'd say those are true, probably even if we're in a simulation. | ||
But why would you assume we're in a simulation? | ||
This is where I'm stuck. | ||
Because why wouldn't you assume that a simulation is one day possible? | ||
There's no proof or evidence that makes any sense to me that there is currently any simulation.
Right, I mean, so it's a matter of probabilities and number schemes, right? | ||
Is it? | ||
That's what I would assert, yes. | ||
But what would point to the possibility that it's more probable that we're in a simulation? | ||
This is what escapes me. | ||
Okay, so I could mention some possibilities that would... | ||
unidentified: Okay.
So the most obvious, like a big window pops up in front of you saying, you're in a simulation. | ||
Click here for more information. | ||
That would be pretty conclusive. | ||
Right, yes. | ||
Right. | ||
So short of that, you would have weaker probabilistic evidence insofar as you had evidence against the two alternatives.
So, for example, if you got some evidence that suggested it was less likely that all civilizations at our stage go extinct before maturity. | ||
Let's say we get our act together, we eliminate nuclear weapons, we become prudent and... | ||
We check all the asteroids, nothing is on collision course with Earth. | ||
That would kind of tend to lower the probability of the first one, right?
Okay. | ||
So that would tend to shift probability over on the remaining alternatives. | ||
Let's suppose that we moved closer ourselves to becoming post-human.
We develop more advanced computers and VR, and we're getting close to this point ourselves, and we still remain really interested in running ancestor simulations.
We think this is what we really want to spend our resources on as soon as we can make it work. | ||
That would move probability over from the second alternative. | ||
It's less likely that there is this strong convergence among all post-human technologically mature civilizations if we ourselves are almost post-human and we still have this interest in creating ancestor simulations. | ||
So that would shove probability over to the remaining alternative. | ||
Take the extreme case of this. | ||
Imagine if we, a thousand years from now, have built our own planetary-sized computer that can run these simulations, and we are just about to switch it on, and it will create simulations of precisely people like ourselves.
And as we move towards the big button to sort of initiate this, the probability of the first two hypotheses basically goes to zero, and then we would have to conclude with near certainty that we are ourselves in a simulation as we push this button to create a million simulations.
Once we achieve that state, but we have not achieved that state, why would we not assume that we are in the actual state that we currently experience? | ||
Well, I'd say yes: we shouldn't assume.
We should assume that we are ignorant as to which of these different time slices we are, which of these different Joe Rogan experiences is the present one. | ||
We just can't tell from the inside which one it is. | ||
If you could see some objective clock and say that, well, as yet the clock is so early that no simulations have happened, then obviously you could conclude that you're in the original history. | ||
But if we can't see that clock outside the window, if there is no window in the simulation to look out, then it would look the same. | ||
And then I'd say we have no way of telling which of these different instances we are. | ||
One of them might be that there is no simulation and that we're moving towards that simulation. | ||
That one day it could be technologically possible. | ||
It could be a one in a million. | ||
unidentified: Really?
So one in a million is that life is what you experience right now. | ||
unidentified: One in a million.
No, no, no. | ||
Conditional on the other... | ||
But not even conditional on those other alternatives being wrong.
Let's say that human beings haven't blown themselves up yet. | ||
Let's say that human beings haven't come up with – there is no need to make the decision to not activate the simulation because the simulation hasn't been invented yet. | ||
Isn't that also a possibility? | ||
Isn't it also a possibility that the actual timeline of technological innovation that we all agree on is real, and that we're experiencing this as real, live human beings not in a simulation, and that one day the simulation could potentially take place but has not yet?
Isn't that also a possibility? | ||
Yeah, I mean sure. | ||
It's just a question of how probable that is given the...
Why isn't it super probable?
Because we're experiencing it. | ||
Well, I mean, it would be a very unusual situation for somebody with your experiences to be in. | ||
What about your experiences? | ||
For my experiences, the same there, yeah. | ||
It would be extremely unusual. | ||
But there's 7 billion unusual experiences taking place simultaneously. | ||
Why would you assume that's... | ||
Well, if there were, like, say, a million simulations, then, you know, that would be a million times more. | ||
But why would there be any simulations? | ||
Why would there not just be 7 billion people experiencing life? | ||
Right, yeah. | ||
I mean, there would have to be something that prevents these simulations from being created.
This is where you lose me. | ||
Yeah. | ||
So I think maybe the difference is I tend to think in terms of the world as a four-dimensional structure with time being one dimension, right? | ||
Okay. | ||
So you think in the totality of existence... | ||
That will have happened by the end of time. | ||
You look at all the different experiences that match your current experience. | ||
Given these various assumptions, the vast majority of those would be simulated. | ||
unidentified: Why?
The various assumptions being that option one and two are false, basically. | ||
What about option, my option? | ||
Yeah, so in your option, the vast majority of all these experiences that will ever have existed will also be simulated, if I understand your option correctly. | ||
No, no, no. | ||
My option is that nothing's happened yet. | ||
Yeah, but there will have been. | ||
Maybe. | ||
But not yet. | ||
Right. | ||
But as I understand your option is that if we look at the universe at the end of time and we look back, there will be a lot of simulated versions of you and then one original one. | ||
But I'm not even considering that. | ||
And you think you might be the original one. | ||
No, I'm not even considering that. | ||
What I'm saying is we may just be here. | ||
That there is no simulation. | ||
And that maybe it will take place someday, but maybe it will not. | ||
unidentified: Right.
But you have to pick which of those scenarios you're considering.
That is the scenario I'm considering. | ||
The scenario I'm considering is we are just here. | ||
We are actually live. | ||
But what happens after? | ||
So I want the scenario to say what's happened in the past, what happens now, and what will happen in the future. | ||
Well, we don't know what's going to happen in the future. | ||
That's right. | ||
So we can consider both options, right? | ||
Yes. | ||
One option where there are no simulations created later. | ||
Right. | ||
Then I would say that means one of the first two alternatives. | ||
But another option is there could be a simulation created later, but it has not taken place yet. | ||
And that there will be simulations later. | ||
That it's a possibility, but it has not happened yet. | ||
Right, but that there will be later. | ||
That's one possibility. | ||
And so then I say, if that's the world that we are looking at, then most experiences of your kind exist inside the simulation. | ||
I still don't understand that. | ||
Why can it not have happened yet? | ||
Well, it kind of depends on which of these experiences is your present moment in that scenario, right? | ||
So there's going to be a million of them plus an initial one. | ||
You can't tell from the inside.
Maybe there will be a million of them, but there's right now no evidence that there's going to be. | ||
No evidence that there is. | ||
No evidence that it's ever even going to be possible technologically. | ||
We think there could be, but it hasn't happened yet. | ||
So why would you assume that we are in a simulation currently when there's no evidence whatsoever that it's even possible to create a simulation? | ||
Maybe there is some alternative way of trying to explain how I'm thinking. | ||
I understand what you're saying. | ||
I'm thinking like suppose... | ||
I understand you're saying that there's... | ||
I'm sorry to interrupt you. | ||
Sorry. | ||
I'm just thinking maybe we could think of some simpler thought experiment which has nothing to do with simulations and stuff, but... | ||
Imagine if... | ||
So I'm making this up as I go along, so we'll see if it actually works. | ||
You're taken into a room, and then you're awake there for one hour, and then a coin is tossed.
And if it lands heads, then the experiment ends and you exit the room and everything is normal again. | ||
But if it lands tails, then you're given an amnesia drug, and then you're woken up in the room again.
You think you're there for the first time because you don't remember having been there before. | ||
And then this is repeated 10 times. | ||
So we have a world where either there is one hour experience of you in the room or else it's a world with 10 Joe Rogan experiences in the room with an episode of amnesia in between. | ||
But when you're in the room now, you find yourself in this room, you're wondering, hmm... | ||
Is this the first time I'm in this room? | ||
It could be. | ||
But it could also be that I'm later on and I was just given an amnesia drug. | ||
So the question now is, when you wake up in this room, you have to assign probabilities to these different places you could be in time. | ||
And maybe you have to bet or make some decision that depends on where you are. | ||
So... | ||
I guess I could ask you: if you wake up in this room, what do you think the probability should be that you're, like, at time one versus at some later time?
Well, what is the probability that I'm actually here versus what is the probability of this highly unlikely scenario that I keep getting drugged over and over again every hour? | ||
Well, we assume that, like, you're certain that the setup is such that there was this mad scientist who had the means to do this and he was going to flip this coin.
So we're assuming that you're sure about that either way. | ||
The only thing you're unsure about is how the coin landed. | ||
Okay. | ||
Well, if that was a scenario where I knew that there was a possibility of a mad scientist and I could wake up over and over again, that seems like a recipe for insanity. | ||
Yeah. | ||
Well, it's a philosophical thought experiment, so we can abstract away from the possibility of it. | ||
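A rough Monte Carlo sketch of the amnesia-room setup as just described, with a fair coin, one awakening on heads and ten on tails (the trial count is my own choice, for illustration):

```python
import random

# Simulate many runs of the amnesia-room experiment: heads -> one
# awakening, tails -> ten awakenings with amnesia in between.
TRIALS = 100_000
first_awakenings = 0
total_awakenings = 0

for _ in range(TRIALS):
    awakenings = 1 if random.random() < 0.5 else 10
    total_awakenings += awakenings
    first_awakenings += 1  # each run has exactly one first awakening

print(f"fraction of awakenings that are first awakenings: "
      f"{first_awakenings / total_awakenings:.3f}")
# Analytically: 1 / (0.5*1 + 0.5*10) = 1/5.5 ~ 0.18, so a randomly
# chosen awakening is most likely not the first one -- the analogue of
# most observer-moments being simulated ones in the wider argument.
```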
My point initially, and I'll get back to it, is there's no evidence at all that we're in a simulation. | ||
So why wouldn't we assume that the most likely scenario is taking place, which is we are just existing, and life is as it seems, but strange. | ||
Okay, so if you don't want to do this thought experiment... | ||
No, I do want to do a thought experiment, but it seems incredibly limited. | ||
unidentified: Right.
Well, I'm trying to distill the probability theory part from the wider simulation argument.
But I guess I could also ask you: if we were to move closer to this point where we ourselves can create simulations, if we survive, we become multi-planetary, we build planetary-sized computers...
Yeah. | ||
How would your probability in the simulation hypothesis change as we kind of develop? | ||
Well, it would change based on the evidence of some profound technological innovation that actually would allow...
Yeah, I think it's an outcome that would require you to postulate that you are this very unusual and special observer amongst all the observers that will exist.
But everyone is unusual in their own way. | ||
That's true. | ||
Because there's no clones. | ||
There's no one person that's a version that's living the same exact life in a million different scenarios. | ||
But in this respect, if there are all these simulations, then most of these people are not special in this way.
Most of them are simulated. | ||
And only a tiny minority. | ||
If there's a simulation. | ||
There are many simulations. | ||
Or if there's no simulations. | ||
If there are no simulations and there will never be any simulations, then... | ||
Well, who's saying there never will be? | ||
Well, so this... | ||
Since we don't know what time it is now in external reality, and we therefore can't tell from looking at our evidence where we are, in a world where either there is just an original history and then it ends, or there is a world with an original history and then a lot of simulations...
We need to think about how to assign probabilities given each of these two scenarios. | ||
And so then we have a situation that is somewhat analogous to this one with the amnesia room, where you have some number of episodes. | ||
And so the question is, in those types of situations, how do you allocate probability over the different hypotheses about how the world is structured? | ||
And this kind of betting argument is one type of argument that you can try to use to kind of get some grip on that.
And another is by looking at various applications in cosmology and stuff where you have multiverse theories. | ||
Which say the universe is very big, maybe there are many other universes, maybe there are a lot of observers, maybe all possible observers exist out there in different configurations. | ||
How do you drive probabilistic predictions from that? | ||
It seems like whatever you observe would be observed by somebody, so how could you test that kind of theory? | ||
And this same kind of anthropic reasoning that I want to use in the context of the simulation argument also plays a role, I think, in deriving observational predictions from these kinds of cosmological theories, where you need to assume something like: you're most likely a typical observer from amongst the observers that will ever have existed, or so I would suggest.
Now, I should admit as an asterisk that this field of anthropic reasoning is tricky and not fully settled yet.
And there are things there that we don't yet fully understand. | ||
But still, the particular application of anthropic reasoning that is relevant for the simulation argument, I think, is one of the relatively less problematic ones. | ||
So that, conditional on there being, by the end of time, a large number of simulated Joe Rogans and only one original one, I think it would seem that most of your probability should be on being one of the simulated ones.
But I'm not sure I have any other ways of making it more vivid or plausible.
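For reference, the "typical observer" principle being appealed to here is Bostrom's Self-Sampling Assumption; a rough formalization, in notation of my own choosing:

$$P(\text{I am observer } o \mid w) = \frac{1}{|O(w)|} \quad \text{for each } o \in O(w),$$

where $O(w)$ is the set of observers in one's reference class that ever exist in world $w$. Applied to the scenario just described, with $N$ simulated Joe Rogan experiences and one original:

$$P(\text{original}) = \frac{1}{N+1}, \qquad P(\text{simulated}) = \frac{N}{N+1} \approx 1 \ \text{ for } N = 10^{6}.$$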
No, I completely understand what you're saying. | ||
I completely understand what you're saying. | ||
But I don't know why you're not willing to take into account the possibility that it hasn't occurred yet. | ||
The way I see it is that I have taken that into account and it receives the same probability that I'm that initial segment as I would give to any of the other Nick Bostrom segments that all have the same evidence. | ||
See, that's where we differ, because I would give much more probability to the possibility that we are existing right now in the current state as we experience it, in real life, carbon life, no simulation, but that potentially one day there could be a simulation, which leads us to look at the possibilities and the probabilities that it's already occurred.
All right, so what about this? | ||
Suppose it is the case that... | ||
All right, so what we think happened is there was a big bang, planets formed, and then some billions of years later, we evolved, and here we are now, right? | ||
Suppose some physicists told you that, well, the universe is very big, and early on in the universe, on very, very rare occasions, there was some big gas cloud.
In an infinite universe, this will happen somewhere, right? | ||
Where, just by chance, there was a kind of Joe Rogan-like brain coming together for a minute and then dissolving in the gas.
And yeah, if you have an infinite universe, it's going to happen somewhere. | ||
But there have got to be many, many fewer Joe Rogan brains in such situations than will exist later on on planets, because evolution helps funnel probability into these kinds of organized structures, right?
So, if some physicists told you that, well, this is the structure of our part of space-time.
Like, there are a few very, very rare spontaneously materialized brains from gas clouds early in the universe, and then there are the normal Rogans much later.
And there are, of course, many, many more normal ones. | ||
The normal ones happen in one out of every, you know, 10 to the power of 50 planets, whereas the weird ones happen in one out of 10 to the power of 100.
Normal versus weird, how so?
How are you defining it? | ||
Well, the normal ones are ones that have evolved on planets and had the mother and... | ||
Different planets. | ||
Is that what you're talking about? | ||
Yeah, different planets. | ||
Okay, but we only have one planet, right? | ||
Right, but this again is like a... | ||
Well, I mean, actually, there are a lot of planets in the universe, and if it's infinite, there's got to be a lot of copies of you, right? | ||
Right, but one planet that you're aware of that has life. | ||
This is pure speculation, right? | ||
But this is a thought experiment, which in fact actually probably matches reality in this respect. | ||
Most likely there's some other planets out there. | ||
I think the fact that it matches reality is, I think, irrelevant to the point I want to make. | ||
So if this turned out to be the way the world works, a few weird ones happening from gas clouds and then the vast majority are just normal people living on a planet. | ||
Would you similarly say, given that model, that you should think, oh, I might just as well be one of these gas cloud ones?
Because after all, the other ones might not have happened yet. | ||
Or have I lost you? | ||
You lost me. | ||
Sorry. | ||
Yeah. | ||
Anyway, I think that this would be a structurally similar situation where there would be a few exceptional early living versions that would be very small in numbers compared to the later ones. | ||
And if they allow themselves the same kind of reasoning where they would say, well, the other ones may or may not come to exist later on planets. | ||
I have no reason to believe I'm one of the planet living ones. | ||
Then it seems that in this model of the universe, you should think you're one of these early gas cloud ones. | ||
And as I said, I mean, this looks like it probably actually is the world we're living in, in that it looks like it's infinitely big, and there would have been a few Joe Rogans spontaneously generated very early from random processes.
They are going to be very few in number compared to ones that have, you know, arisen on planets.
So by taking the path you want to take with relation to the simulation argument, I wonder if you would not then be committed to thinking that you would be, in effect, a Boltzmann brain in a gas cloud super early in the universe.
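Using the illustrative rates from this exchange (an evolved brain on roughly one in $10^{50}$ planets, a spontaneous gas-cloud brain on roughly one in $10^{100}$), the same observer-counting gives:

$$\frac{P(\text{gas-cloud brain})}{P(\text{evolved brain})} \approx \frac{10^{-100}}{10^{-50}} = 10^{-50},$$

so on this model you should be all but certain you are one of the later, evolved observers. The simulation argument's claim is structurally parallel, except there the majority falls on the simulated side.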
I still don't understand what you're saying. | ||
What I'm saying is that scientists agree.
If you believe in science and if you believe in the discoveries that so far people have all currently agreed to, we've agreed that clouds form and that planets are created and that all the matter comes from inside of the explosions of a star, and that it takes multiple cycles of this for matter to coalesce before carbon-based life forms can develop.
All that stuff science currently agrees on, right? | ||
And then we believe in single-celled organisms, become multi-celled organisms through random mutation and natural selection. | ||
We get evolution, and then we agree that we have... | ||
We've come to a point now where technology has hit this gigantic spike that you described earlier. | ||
So human beings have created all this new innovation. | ||
Why wouldn't we assume that all this is actually taking place right now with no simulation? | ||
Yeah, I mean, the simulation argument is the answer to that, but with a qualification that A, the simulation argument doesn't even purport to prove the simulation hypothesis, because there are these two alternatives. | ||
B, that even if the simulation hypothesis is true, in many versions of it, it would actually be the case that, in the simulation, all of these things have taken place.
And the simulation might go back a long time, and it might be a reality tracking simulation. | ||
Maybe these same things also happened before or outside the simulation. | ||
I understand that. | ||
Or, all these things have actually happened, and there is no simulation yet.
That's possible, too. | ||
unidentified: Doesn't that seem really probable?
Well, to me it seems probable only if at least one of the other alternatives is true. | ||
Or, I admit that there is also this general possibility, which is always there, that I'm confused about some big thing, like maybe the simulation argument is wrong in some way. | ||
Just looking at the track record of science and philosophy, we find we're sometimes wrong.
So I attach some probability to that. | ||
But if we're working within the parameters of what currently seems to me to be the case, that we would be the first civilization in a universe where there will later be many, many simulations seems unlikely, for those exact reasons.
And that if we are the first, it's probably because one of the alternatives is true. | ||
It's a mind blower, Nick. | ||
The more you sit and think about it, the more you ponder these concepts, and I'm not on one side or the other. | ||
It's scary, but it's also amazing. | ||
And what else is there that we haven't figured out yet? | ||
If we come back in 50 years, even just with human beings thinking about stuff...
And I think I have this concept of a crucial consideration. | ||
I alluded to it a little bit earlier. | ||
But the idea is of some argument or data or insight that, if only we got it, would radically change our mind about our overall scheme of priorities.
Not just change the precise way in which we go about something, but kind of totally reorient ourselves. | ||
An example would be if you are an atheist and you have some big conversion experience and suddenly your life feels very different. | ||
What were you doing before? | ||
You were basically wasting your time and now you found what it's all about. | ||
But there could be sort of slightly smaller versions of this. | ||
I wonder what the chances are that we have discovered all crucial considerations now. | ||
Because it looks like, at least up until very recently, we hadn't, in that there are these important considerations, whether it's AI: like, if this stuff about AI is true, maybe that's the one most important thing that we should be focusing on, and the rest is kind of frittering away our time as a civilization.
We should be focused on AI alignment. | ||
So we can see that it looks like all earlier ages, up until very recently, were oblivious to at least one crucial consideration, insofar as they wanted to have maximum positive impact on the world.
They just didn't know what the thing was to focus on. | ||
And it also seems kind of unlikely that we just now have found the last one. | ||
That just seems kind of... | ||
Given that we keep discovering these up until quite recently, we are probably missing out on one or more likely several more crucial considerations. | ||
And if that's the case, then it means that we are fundamentally in the dark. | ||
We are basically clueless. | ||
We might try to improve the world, but we are overlooking maybe several factors, each one of which would make us totally change our mind about how to go about this.
And so it's less of a problem, I think, if your goal is just to lead your normal life and be happy and have a happy family, because there we have a lot more evidence, and it doesn't seem to keep changing every few years.
Like we still know, yeah, have good relationships, you know, don't ruin your body, don't jump in front of trains, like these are tried and tested, right? | ||
But if your goal is to somehow steer humanity's future in such a way that you maximize expected utility, there it seems our best guess keeps jumping around every few years, and we haven't kind of settled down into some stable conception of that.
Nick, I'm going to have to process the conversation for a long time. | ||
But I appreciate it. | ||
And thank you for being here, man. | ||
It was really cool, very fascinating discussion. | ||
Good to meet you, yeah. | ||
Thank you. | ||
Thank you very much. | ||
Thank you. | ||
If people would like to read any of your stuff, where can they get it? | ||
NickBostrom.com. | ||
Probably the best starting point. | ||
Okay. | ||
Thank you. | ||
My brain's broken. |