Danny Jones Podcast - #158 - Why Discovering Advanced Alien Life Would be Bad for Humanity | Robin Hanson Aired: 2022-10-18 Duration: 02:48:01 === Singing at Caltech (04:55) === [00:00:08] Yeah, you can sing for us. [00:00:09] Are you a singer? [00:00:10] I used, I mean, in high school. [00:00:14] In high school, you were a singer. [00:00:16] In college, you know, just in the choir. [00:00:18] But, you know, just like everybody, everybody likes to sing as long as nobody's listening, right? [00:00:22] Yeah, that's for sure. [00:00:23] Yeah, I love it. [00:00:23] People like music. [00:00:25] I like to hum along in the shower, right? [00:00:26] What's your favorite kind of music? [00:00:28] I don't have, I'm not very specific. [00:00:30] It's more about like emotional, the emotional palette that it's playing with. [00:00:35] Different genres of music have different emotional palettes. [00:00:39] Exactly. [00:00:39] I'm interested in it. [00:00:40] It depends what kind of mood you're in. [00:00:41] Right, which kind of palette I like. [00:00:43] But I particularly like sort of glorious music. [00:00:46] Glorious music. [00:00:47] Vangelis. [00:00:49] Right? [00:00:50] Like sort of like epic orchestras. [00:00:52] Like as if there was like grand big things. [00:00:55] Yes. [00:00:56] Like a big battle scene in a movie, the kind of music you would see behind that. [00:00:59] Like an epic. [00:01:00] Or just Brian Eno, for example. [00:01:04] I don't even know what that means. [00:01:05] He's a musician. [00:01:06] Sorry. [00:01:06] Oh, he's a guy. [00:01:07] Oh, a person. [00:01:07] Right. [00:01:08] But I'm just saying I could play you some. [00:01:11] Oh, yeah. [00:01:12] Do you play any instruments? [00:01:13] No. [00:01:13] I mean, I played a recorder a long time ago. [00:01:16] Oh, okay. [00:01:17] Like in college, I would relax by going into like stairwells in the university and playing my recorder in the stairwell.
[00:01:24] And I really got into, well, somebody told me I was doing that near their class and they said they were hearing this music from the like air ducts in their classroom. [00:01:35] They couldn't figure out where it was coming from. [00:01:37] Interesting. [00:01:38] That's very interesting. [00:01:41] Okay. [00:01:41] Well, let's officially get this thing going. [00:01:43] Mr. Robin Hanson, it's great to meet you, first of all. [00:01:47] Nice to meet you, sir. [00:01:48] For our listeners and our viewers, can you give me a brief description of who you are, your background, and your areas of interest? [00:01:57] That's going to be hard because I've just been all over the place. [00:02:00] I know you really have. [00:02:01] But if you could give me some sort of a synopsis. [00:02:03] So long ago, I started out in engineering in college. [00:02:07] I switched to physics, got an undergraduate degree in physics, and then I decided I wanted to understand what the hell science was. [00:02:13] So I went off to the University of Chicago to study philosophy of science. [00:02:17] But then I kind of answered the questions for myself and switched back to physics. [00:02:21] And then I saw cool things happening in Silicon Valley and heard about AI and hypertext. [00:02:27] And so I left school with two master's degrees in physics and philosophy of science to go off to Silicon Valley, where I did AI research and Bayesian statistics as a job. [00:02:37] And on the side, did hypertext publishing and a bunch of other stuff for nine years. [00:02:44] And then I went back to school to get my PhD at the age of 34 with two kids, age zero and two. [00:02:52] Wow. [00:02:53] In a four-years-and-you're-out program, because they guarantee, you know, basically everybody gets kicked out after four years. [00:02:58] That was good. [00:02:59] And I got my degree in social science from Caltech.
[00:03:04] And then I went on the market and did better in political science than I did in econ, but I didn't get a job. [00:03:09] I got a postdoc in health policy for two years at Berkeley. [00:03:13] And then I went on the market again and got my current job as a professor of economics at George Mason University. [00:03:21] And since I got tenure in 2006, I allowed myself to spread out a bit more and I started a blog called Overcoming Bias. [00:03:30] I've since written two books: a futurist book called The Age of Em: Work, Love, and Life When Robots Rule the Earth, and then a psychology book called The Elephant in the Brain: Hidden Motives in Everyday Life. [00:03:45] I'm still pretty much all across the map. [00:03:48] I did a lot of different things. [00:03:49] The thing I'm most famous for is the idea of prediction markets, which I was working on when I was that [00:03:55] computer researcher, back before I went back to school, and that's why I picked Caltech, because they did experimental econ, and I thought that would help in these institution ideas. [00:04:08] I have a wide range of like big institution ideas I'm into too. [00:04:11] So, but as you know, recently I went back to my astrophysics work and I did this grabby aliens stuff, picking up on the great filter stuff I did while I was a postdoc doing health policy at Berkeley, back [00:04:28] long ago. [00:04:29] And so I think most intellectuals are just naturally broad. [00:04:36] And in fact, that's probably one of the main failure modes when people want to become an academic, they don't focus enough. [00:04:43] And I was at risk of failing in that way by not focusing enough because the academic world really wants you to be the best at one thing and stick with it for the rest of your life. [00:04:54] And doing a lot of different things, it doesn't reward so much. [00:04:57] So I slipped by, I managed to sneak by [00:05:02] being a little too broad.
=== Focusing on the Future (05:26) === [00:05:04] The Future of Humanity Institute at Oxford. [00:05:08] You're a researcher there. [00:05:09] I have an affiliation there. [00:05:11] When they were starting up a long time ago, they wanted to collect prestigious affiliations so they could take credit for what everyone who was listed as associated with them did every year. [00:05:21] And so I was happy to be included in that when they first got started, when they were small, and now they are big and don't need me anymore for that purpose, but I'm happy to still be affiliated. [00:05:31] So the purpose of that was to look at [00:05:35] the big picture questions about humanity, right? [00:05:38] Absolutely. [00:05:39] And what to you, what are the fundamental big picture questions about humanity? [00:05:44] Well, there's where we came from and where we're going. [00:05:50] So, the Future of Humanity Institute is, of course, focused on that future. [00:05:53] But, you know, I think the first job, as always, is diagnosis before you do prognosis. [00:06:01] So, the first thing you need to do is just have the foggiest idea of what might happen, when, and how, and then you can start to think about how we could move it a little. [00:06:13] So, I think people start way too quick to say, What do I want the future to be like? and sort of map out their ideal case. [00:06:22] And actually, it's pretty hard to actually just figure out what's likely to happen if you do nothing. [00:06:27] And you're probably not going to be able to do a lot. [00:06:30] So, figure out what's likely to happen if you do nothing and then maybe nudge it a bit. [00:06:35] So, people thinking about the future, they of course think about things like global warming, they think about population and fertility, they think about the future of innovation, technology like space travel, energy, but innovation of social institutions too.
[00:06:52] I've been especially interested in forms of governance, ways that you can interact with the economy. [00:07:01] And people are usually focused on the future in our era, mediated by technology. [00:07:08] That is, people imagine particular new technologies coming and then they think about how that will affect the future. [00:07:13] So that's been true for a while. [00:07:17] And so, obviously, artificial intelligence is one kind of technology. [00:07:21] I mean, since I did AI research for nine years, that was especially salient for me. [00:07:27] Obviously, people think about future energy sources, they think about space travel, they think about environmental impacts. [00:07:38] But there's just a lot more to think about, really, if you get creative about it. [00:07:42] So, I always thought that there was a neglect of the intersection of social science and tech in thinking about the future. [00:07:52] So, most futurists that you hear about are tech people. [00:07:57] They know about some kind of a tech and they want to tell you where they think that tech is going and then what social implications that will have. [00:08:06] And, like myself, when I was a physics student a long time ago, they were probably taught that social science doesn't exist. [00:08:14] Most tech people are told that basically hard sciences like physics or computer science or chemistry, even biology, those are real sciences and people really know stuff there. [00:08:25] Over in those social sciences, that's all fluff and made up, and there isn't really anything there. [00:08:28] So, when tech people go to try to figure out the social implications of the technologies they're imagining, they kind of wing it with their intuitions. [00:08:37] They don't really use expert social science. [00:08:40] Right. [00:08:41] Whereas the social scientists who do understand social science, when they think about the future, they tend to be skeptical about all those tech projections. [00:08:50] Right.
[00:08:50] They look at the world around them and they figure the technology of the future couldn't possibly be that different. [00:08:55] Not that different from the technology they see around them, and the science-fiction-y projections must just be fantasy. [00:09:01] So they don't really take seriously the tech and therefore they don't really project what future tech might be and then what the social implications are. [00:09:11] So there's this missing intersection. [00:09:14] On the one hand, we have tech people who can envision future technologies. [00:09:18] And on the other hand, we have social scientists who can project social changes as a result of whatever perturbations come, including technologies, but they don't put it together much. [00:09:29] And that's what you do. [00:09:30] That's one of the things I've tried to do. [00:09:31] So, my highest, most meta level principle is to look for important, neglected things. [00:09:38] So, you know, that's neglected there, right? [00:09:41] You find a way that people are neglecting something that's important, and then you can have it all to yourself. [00:09:48] So, what is the future of humanity and artificial intelligence? [00:09:53] If we look into the near future and the distant future, there's a lot of people who have very dystopian views of how this might look. [00:10:01] What are your views on this? [00:10:05] Are they dark or dystopian or are they optimistic? [00:10:12] So, first of all, in terms of ordinary artificial intelligence of the sort that most people see and have seen for a long time, of the sort that I did as a researcher, I see that as accumulating gradually, improving steadily, and having a long road still to go. === Westworld and Exponential Growth (08:25) === [00:10:31] So, that sort of process looks like it will give you plenty of warning before you have bigger problems to deal with. [00:10:41] So, we are a long way away from machines being able to do most everything humans do, perhaps even centuries.
[00:10:50] And as they start to become more capable, then they will basically take up a larger fraction of world income. [00:10:57] The key signature of AI getting more capable is you pay it more money instead of people. [00:11:03] So, because most income goes to pay people today, you can be pretty sure that people are the main valuable thing today. [00:11:10] But later on, with time, we will pay more and more for the machines and the software because they'll be doing the important things. [00:11:17] And then eventually they'll do most things. [00:11:20] But there's a long way to go before that. [00:11:22] And that process will be pretty gradual. [00:11:24] That is, I don't foresee a sudden change where all of a sudden, you know, people, humans were doing, you know, half the jobs yesterday and next week they're doing almost nothing, right? [00:11:36] The scale, the timeline isn't necessarily linear, though, right? [00:11:43] The growth of technology becomes exponential, right? [00:11:45] Like what we did in the last 100 years compared to the next 100 years will be extremely sped up? [00:11:51] Well, I actually think you should think in terms of a log-normal distribution over task difficulty, which is a mouthful, I guess. [00:11:59] So the idea is the difficulty of tasks in automating varies enormously. [00:12:07] So they are spread across a very wide spectrum in terms of how difficult tasks are to automate. [00:12:14] So, as you know, Moore's Law. [00:12:17] For a while, computers got twice as cheap every two years. [00:12:21] Right. [00:12:23] But we didn't see an exponential displacement of humans from jobs, even as technology was increasing exponentially. [00:12:34] That is, we've seen a relatively linear, steady displacement of jobs, even though technology is increasing exponentially. [00:12:42] And a way to understand that is to imagine there's a distribution for each job
[00:12:47] of how hard it is, how much computing power does it take to automate that job? [00:12:52] And think of it in terms of many, many orders of magnitude, right? [00:12:55] So if it was a number, like some jobs take one and some take 10, and some take 100, and 1,000, and a million, and a billion, and the jobs are spread really far across this really wide range of difficulty. [00:13:07] So the very first machines automated some very simple jobs that people were doing. [00:13:13] And then over time, as computers have gotten better and cheaper, we've been able to automate [00:13:19] more jobs by moving up that spectrum. [00:13:21] Like, first it was the jobs that only had a difficulty of one, and then the jobs at 10, and then the jobs at 100. [00:13:27] But the idea is the range is really wide. [00:13:31] Like, there's jobs up at a trillion, and jobs at a quadrillion, right? [00:13:35] And so that's why, even as technology has been improving exponentially, the fraction of jobs that the machines do has not been increasing exponentially, because the difficulty of jobs is just really widely spread. [00:13:48] And at the moment, we are, you know, automating a new set of jobs that we couldn't automate 10 years ago. [00:13:54] But we have just a little, still really a long way to go. [00:13:58] And so that would be my picture for the ordinary kind of AI that we've seen so far. [00:14:04] Now, my book, The Age of Em, is about a somewhat different kind of AI. [00:14:08] And that different kind of AI has more of a threshold effect where nothing happens until something happens and then a lot suddenly happens. [00:14:17] And that's the idea of a brain emulation. [00:14:20] So you have a brain in your head, and it has a whole bunch of cells of different types, which are connected to each other with little wires.
[00:14:27] And there's a system by which, when signals come into your eyes or your hand, it goes to the brain, and signals go through the cells, and each cell takes signals in and sends signals out. [00:14:39] And we could, in principle, copy that whole system in your brain and make a computer model of a brain like yours that has a substitute for each cell and a model of how each cell works that takes signals in and sends signals out, that sends it to the other cells. [00:14:55] Just like in your brain. [00:14:57] And if we could make a brain emulation of your brain, then it would behave just like you in the same situation. [00:15:03] And that would be powerful human level artificial intelligence. [00:15:07] We are a long way from being able to do that, but we will sometime in the next few centuries be able to do that. [00:15:14] And we might, I think, be able to do that before we can do other kinds of human level artificial intelligence. [00:15:21] And this kind of AI has a threshold. [00:15:24] That is, when we make this emulation of your brain, say we get it wrong, like the cell models are off or the connections are off. [00:15:31] If it's wrong enough, it just won't work. [00:15:33] It'll just be mush, it'll just be a mess. [00:15:36] But once we pass a threshold of the model being close enough to you, then it'll basically work. [00:15:43] And so that's this threshold where brain emulations, before a certain point, they're just not really valuable at all. [00:15:50] And then after a certain point, they're really valuable. [00:15:53] And then that would have more of a sudden transition into this world where brain emulations were cheap and common. [00:16:00] And that's what my book is all about is what that world looks like. [00:16:03] Have you ever seen the movie or the show Westworld? [00:16:05] Yes, of course. [00:16:06] Yes. [00:16:06] Yeah, that is almost exactly what Westworld is about, right? [00:16:10] It's about a park.
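Hanson's picture above, a log-normal spread of task difficulties met by exponentially growing compute, can be illustrated with a toy simulation. Everything here is an assumption for illustration: the distribution parameters, the doubling time, and the job count are made-up numbers, not estimates.

```python
import math
import random

random.seed(0)

# Hypothetical model (all numbers made up): each job's automation difficulty,
# i.e. the compute needed to automate it, is drawn from a log-normal
# distribution, so difficulties span many orders of magnitude.
difficulties = [math.exp(random.gauss(20.0, 8.0)) for _ in range(100_000)]

# Available compute doubles every 2 years, Moore's-Law style (exponential).
for year in range(0, 101, 20):
    compute = 2.0 ** (year / 2)
    automated = sum(d <= compute for d in difficulties) / len(difficulties)
    print(f"year {year:3d}: {automated:5.1%} of jobs cheap enough to automate")
```

Even though compute grows exponentially, the printed automated fraction creeps up only gradually, because the log-normal tail keeps most jobs many orders of magnitude out of reach at any given time, which is Hanson's explanation for steady rather than exponential job displacement.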
[00:16:12] Can you explain? [00:16:13] Right. [00:16:13] Well, so the. [00:16:15] I mean, initially, it was a movie long ago when I was a kid. [00:16:18] It was also a movie long ago, yeah. [00:16:20] Right. [00:16:21] And so the idea is it's an amusement park where there are robots to entertain the guests. [00:16:25] And they don't really specify how the robots are designed. [00:16:28] So, in fact, in the TV show Westworld, they also have brain emulations later in the show as a different thing. [00:16:35] And so the robots in Westworld are not brain emulation robots, but they are AI that are very effectively similar to humans. [00:16:43] So they present them as very human like AI. [00:16:46] Right. [00:16:47] Even though they're apparently not designed that way. [00:16:52] And of course, it's crazy stupid, like a lot of fiction is. [00:16:55] If you could actually make those robots as they made them in that show, you wouldn't mainly use them in an amusement park. [00:17:01] You'd use them everywhere in the economy. [00:17:03] I mean, so the key thing is human workers get more than half of world income. [00:17:09] So if you could have a substitute for human workers, you can make trillions of dollars basically replacing human workers with it. [00:17:16] So the idea that you would just use that in an amusement park and nowhere else. [00:17:21] It's kind of crazy. [00:17:23] Well, the idea, the main incentive for Westworld, correct me if I'm wrong, I may be misinterpreting it. [00:17:29] It's been a while, but I believe the main incentive of it was longevity, right? [00:17:36] To live forever, to be able to learn enough from these real humans that were interacting with them to be able to copy them, to copy their minds. [00:17:46] So when their biological body dies, they can take that copy and put it in a cyborg or a robot. [00:17:56] There were three main lines of business that I recall from the Westworld TV show.
[00:18:01] The first main line of business was the entertainment of the amusement park. [00:18:06] A second line of business was they were trying to make uploads, and I believe they showed that as failing. [00:18:11] That is, they just didn't work. [00:18:14] And then the third secret line of business that they reveal later on is that they're watching the visitors to the park very closely and therefore learning to predict human behavior. [00:18:25] And somehow the idea is that that was much more valuable than just having human-level robots, because somehow you could market to them, right? [00:18:32] You could send them advertisements and predict what they do. [00:18:35] And so, in the later seasons, they have these big spherical computers that are predicting the world, what will everybody do, right? [00:18:43] As if that was some enormously economically valuable thing compared to the robots, which seems crazy because, like, predicting future behavior at the moment doesn't command much income in the world, there's not a lot of demand for that. === High Bandwidth Brains (03:29) === [00:18:56] There is some, but it's not that big. [00:18:58] But people working. [00:18:59] There's a huge demand for that. [00:19:01] The world is full of people working. [00:19:02] And if you can substitute for people working, you can make trillions. [00:19:06] I don't think you can make trillions just because you can predict what TV show somebody will watch next or which way they will turn on the street. [00:19:12] There's just not that much money there. [00:19:14] But it sets up this dystopian thing: oh, my God, they can predict what I'll do. [00:19:20] I just feel dehumanized now. [00:19:22] So the show went for this emotional threat of you feeling threatened by the fact that a machine can predict you. [00:19:30] As the emotional anchor of the show, as opposed to these machines can take your jobs. [00:19:35] Right.
[00:19:36] It's more of like an advertising type thing, right? [00:19:39] They can make money with advertising to you or selling something. [00:19:42] Right, which is tiny compared to work. [00:19:45] Right. [00:19:45] Like advertising is like 2% or 3% of GDP, right? [00:19:50] Whereas work is like 70% of GDP. [00:19:53] Oh, wow. [00:19:53] Right. [00:19:53] So, you know, rather than take over the advertising industry, you'd want to take over the work industry. [00:20:00] Right. [00:20:01] Do you imagine those little pearls, as they called them, is that like a realistic physical form of what a brain might be uploaded to? [00:20:15] I mean, that's just to have a dramatic physical thing to look at. [00:20:18] I mean, obviously, almost all computers these days have wires by which they transfer files from one computer to another. [00:20:27] Sometimes when they want to send a file physically, they have them on tape and they mail them. [00:20:32] But I mean, the fact that they have these little shiny silver balls doesn't make much sense. [00:20:37] But that's forgivable because it's also not terribly wrong. [00:20:40] I mean, you could do it that way. [00:20:41] Yeah, that you would. [00:20:43] It seems like we're at least beginning one of the first steps towards that. [00:20:51] For example, the Neuralink that Elon Musk is producing that directly interfaces with the brain. [00:20:56] I mean, it's essentially like a super evolved version of the iPhone, you know, but it's actually attached to your brain. [00:21:04] You would think about things and be able to communicate without speech. [00:21:08] In principle, it would just have a higher bandwidth. [00:21:10] Right, exactly. [00:21:10] Your brain could talk to a machine and back and forth with a higher bandwidth. [00:21:13] So, I mean, obviously, bandwidth is valuable. [00:21:16] Key question is for what?
[00:21:18] What is exactly the value of high bandwidth? [00:21:21] So, well, I mean, your brain was designed for high bandwidth through your eyes, say, and ears, right? [00:21:27] So, your brain really wasn't designed to take high bandwidth input in through other channels. [00:21:31] That is, your brain, you know, has a whole system there for where it expects to get input and then has a whole system for processing that input. [00:21:39] So, it's not obvious that we can actually usefully give your brain a lot of input other than through the input channels it was designed to take. [00:21:48] Interesting. [00:21:50] Yeah, because I always think about it. [00:21:51] The first thing that I think about when I think of things like Neuralink is being able to, like, the advantages that you would have over other human beings, like being able to, for example, download a book into your brain and know it in 30 seconds, or, you know, the advantages you have in the economic world if you had that technology. [00:22:07] That would be hard. [00:22:08] So the brain emulation scenario is a scenario where, you know, you have the Neuralink equivalent easily, because they're just on computers anyway. [00:22:16] So it would be easy to send information in. [00:22:19] But it's not clear you could just, you know, download a book and they would know it. [00:22:24] Because that requires that you restructure the brain. === Downloading Books Instantly (02:34) === [00:22:26] Oh, how so? [00:22:30] Well, you know, if you imagine having files on your computer, you could download a movie and then the movie's on the computer. [00:22:37] And if you hit play, you play the movie. [00:22:38] You need software, right? [00:22:39] But if you have a large, say, a library, right? [00:22:43] A large library full of books, and now you want all those books to reflect some new insight in math, right?
[00:22:50] You don't just like put the math book somewhere in the library, right? [00:22:53] All the other books don't update on the math book sitting in the library. [00:22:57] You'll have to get all the other books to change in response to this new thing you know, so that they've integrated into the rest of the books, right? [00:23:05] So, taking any large library and getting it to reflect some new math insight would be hard, would be time consuming. [00:23:12] You'd have to go find all the books that actually are related to that, understand them, and rewrite those chapters in order to reflect the new insight that you were trying to get in. [00:23:25] Right. [00:23:25] That makes perfect sense. [00:23:29] So, one of the most memorable quotes from that movie to me, which I've talked about on this podcast many times, and I think about a lot, which I'm fascinated to hear your opinion on, was when Ford, the man who developed the theme park with the AIs, said that human intellect is like peacock feathers. [00:23:52] It's an elaborate display to attract a mate. [00:23:55] All of the best of Mozart. [00:23:57] All the best of Shakespeare, the best of Michelangelo, even the Empire State Building, just an elaborate mating ritual. [00:24:04] And maybe that all the things that humanity has accomplished doesn't even matter because it was done so for the basest of reasons. [00:24:13] Even the peacock can barely fly, it lives in the dirt and it picks insects out of the muck. [00:24:20] What are your thoughts on that quote? [00:24:23] I do expect that peacock-feather kind of [00:24:27] drive was an important drive in human mind evolution, but it's also clear that human minds are spectacularly capable compared to most other animal minds. [00:24:40] It's clear that human civilization, together with human minds, has been able to accomplish amazing things.
[00:24:47] Now, it could be it's only accomplishing them with 10% of the brain and the rest of the 90% is peacock feathers, but still, something in there is amazingly capable compared to all the other animals. [00:24:58] That's just... Yes. === Useful Capabilities vs Waste (02:45) === [00:25:01] So then you might want to understand, like, which parts are which. [00:25:05] Try to disentangle the fundamentally, you know, more useful parts from the others. [00:25:10] But certainly something in there is useful. [00:25:11] But the brain is huge. [00:25:13] So it could well not be most of it. [00:25:17] Well, it goes back to the main, one of the main, or the main theory of your book, The Elephant in the Brain, right? [00:25:24] Can you explain what that book is and what the basic idea of it is? [00:25:30] So the idea is that you are wrong about why you do many things. [00:25:36] Right. [00:25:38] That is, if I ask you why you do most of the things you do, you'll give me a reason. [00:25:43] And it turns out most of those reasons are wrong. [00:25:46] That is, it's the reason you have in your head. [00:25:49] It's the reason you can sort of explain and justify. [00:25:51] And to some degree, your behavior is driven by that thing, but not mostly. [00:25:58] But for most of the things you are doing, the motives you have instead are reasonable motives to have. [00:26:05] They just don't look as nice as the motives [00:26:07] you'd like to point to. [00:26:10] So, for example, you go to school not to learn the material because you don't actually learn much material. [00:26:16] You go to school to show off, to show that you are capable, and also to sort of learn modern workplace habits. [00:26:25] You go to the doctor not to get well, but to show that you care about people and to let them show they care about you. [00:26:32] You vote not to help the nation do better, but to show your allegiance to your political tribe.
[00:26:41] Each of these things is a useful function for a social animal. [00:26:46] It's just not the function you claim. [00:26:48] Now, some of these functions are more zero sum or have waste at least. [00:26:54] So many of these functions that you don't like to admit, that you're not really aware of, are showing off. [00:27:01] And showing off is often wasteful. [00:27:04] Like going all those years to school just to show off how smart and conscientious and conformist you are [00:27:10] is a lot of waste, but it's not all waste. [00:27:16] It is important for people to be able to judge who's how capable and to sort them into different kinds of tasks and roles based on knowing about their features. [00:27:27] And, you know, in large social worlds, we do need to show and judge loyalty. [00:27:34] And a lot of these signals are to show and judge loyalty. [00:27:38] So, like the peacock feathers are just showing that you can put up with the feathers. === Simulating Human History (15:12) === [00:27:46] That is, you have enough, [00:27:48] you know, other capabilities that if a predator comes after you, you can usually survive, even though you're carrying around this big, wasteful set of feathers. [00:27:58] But those feathers aren't really useful for much else. [00:28:03] But many of the capabilities that humans have acquired that we show off are actually more useful for other things. [00:28:11] Now, maybe evolution just got lucky and didn't realize how useful they would be, but clearly, something in the human brain, the capacity the human brain has, allowed us to do enormous things compared to most animals. [00:28:24] So, acquiring wealth would be an example of something that would be impressive and also has many other uses. [00:28:28] Right, exactly. [00:28:29] Sure. [00:28:30] Or language. [00:28:30] So, for example, most people use a larger vocabulary than they really need to, because they show off how much they know by using a larger vocabulary.
[00:28:40] And I noticed I use the word vocabulary instead of just saying you know many words. [00:28:43] So, I was showing off that I had a bigger vocabulary. [00:28:46] But words are useful. [00:28:50] So, you are learning things like that. [00:28:53] In many ancient societies, [00:28:55] people who were rich would show off by getting things from far away. [00:29:01] Strange things, you know, that local people wouldn't have, so that they would have an unusual thing. [00:29:06] And that encouraged trade and long distance travel interactions. [00:29:12] War has actually been one of the largest drivers of innovation over the last 10,000 years. [00:29:17] War is terribly destructive. [00:29:19] Nevertheless, the drive to improve war technology has, in fact, you know, driven a lot of technological improvements. [00:29:27] Right. [00:29:28] So the key thing is once you open some new territory of learning and expanding, even if you're wasting most of your effort there, the remaining effort can just produce huge returns. [00:29:41] Yeah. [00:29:42] One of the most absurd examples I've thought of when it comes to the idea of the elephant in the brain is Elon Musk. [00:29:49] You know, you think of him as this just crazy flamboyant, you know, the richest man in the world. [00:29:55] And he also builds rockets and wants to make our species interplanetary. [00:30:00] Or does he? [00:30:01] Or does he just want money and women? [00:30:04] I'm not sure I care. [00:30:06] I'm not sure I care. [00:30:07] Right. [00:30:07] Because at the same time, whether that is his drive or not, the things that he's accomplishing are valuable and very important to us. [00:30:17] I mean, I think if you're going to pick one person on the planet today that you want to be really impressed with what they've accomplished, it would have to be him. [00:30:25] That's true. [00:30:26] So, you know, I have to say hats off. [00:30:27] I mean, and he's going on to accomplish more things.
[00:30:30] No doubt many of them will fail, but. [00:30:33] But yes, we, humanity, are on a tear, and we have a bright future ahead of us if we so choose, even if we are now wasting enormous fractions of our civilization on peacock feathers. [00:30:52] And in some sense, that makes the prospects even brighter. [00:30:55] If we could cut the waste, we could get even more. [00:30:59] Right. [00:30:59] Simulation theory. [00:31:01] How far do you go into simulation theory in your books, and what are your thoughts? [00:31:08] Your estimations of the probability of simulation theory. [00:31:11] So many decades ago, the idea that we might be living in a simulation was a common theme on some of the mailing lists I was on. [00:31:20] And then at one point, one of the people who was on the mailing list went off and wrote a philosophy paper based on that, Nick Bostrom. [00:31:27] And then that got more attention. [00:31:29] And then at the time, that inspired me to write a paper on how to live in a simulation because that was neglected. [00:31:37] So there are many of these topics where, again, people want to mainly focus on the technology itself. [00:31:43] When it might happen, is it possible? [00:31:44] And they largely neglect the social implications. [00:31:47] So I thought, let's just think about the social implications of a simulation, i.e., if you were living in one, how should you live your life differently? [00:31:54] And so I do have that paper on that. [00:31:57] But the key idea here is that in the future, things like brain emulations will be possible. [00:32:07] And therefore, it will be possible to create creatures who are in a world that [00:32:12] looks completely real to them, but is in some sense not. [00:32:17] It's fake. [00:32:18] So, like in the movie The Matrix or things like that, it will be possible to make such creatures. [00:32:23] And some of those creatures would be set in a world like the past of that world.
[00:32:30] Just like today, when we make movies, some of our movies are set in our past and some of our games are set in our past or something like that. [00:32:37] And so, if there was a creature set in some fictional world who doesn't know they're in a fictional world. [00:32:44] And if that fictional world is in the past of that civilization, then if you are sitting here in your world, you could ask, how do I know I'm not that creature? [00:32:57] Exactly. [00:32:57] I could be, in principle, you know, in fact, that future creature who thinks they're in this current day, but is actually in the future in a simulated world made to look like their past. [00:33:10] So the relative chances that I am a simulation like that really comes down to a numbers game like that. [00:33:16] How many people are there here today, really? [00:33:20] And how many people would there be in the future simulating my time now? [00:33:26] And that's, you have to come up with estimates of those relative numbers in order to guess could I be in a simulation? [00:33:32] So if you think there are relatively few people in the future who would be simulations of the past, then you think it's not very likely that I'm in a simulation. [00:33:41] But if you think there's enormous numbers of them in the future, then you would have to think, well, then it's pretty likely, right? [00:33:47] Now, we should first notice, like, if you were simulating the past, you're not going to equally randomly select from the past to simulate. [00:33:58] So, when we do movies or we do games, right, we do Caesar all the time or Napoleon all the time, and all the other people not so often, right? [00:34:08] So, that means your chance of being in a simulation does depend on how photogenic your life is. [00:34:17] In the sense, is your life the sort of life somebody might want to simulate? [00:34:21] Right. [00:34:22] Would you be an interesting player? [00:34:23] Right. [00:34:24] Or are you a famous person?
[00:34:25] Did you have a big impact on the future? [00:34:27] Right. [00:34:28] So we'll often pick famous people in the past who we see as having had a big historical impact as the people we simulate in our past when we do novels or games or things like that. [00:34:39] So then, you know, that would modify your estimate for yourself. [00:34:44] If you're a random person, that's different than if your life seems to be a specially good story, especially a story that the future might want to tell. [00:34:53] So Elon Musk himself should assign a higher probability than the rest of us to living in a simulation because it would be more likely somebody would simulate his life compared to ours. [00:35:05] Right. [00:35:05] Okay. [00:35:07] But let's set that aside and let's just think about an ordinary average person today and ask what's the chance they're living in a simulation. [00:35:16] Well, I'd say we're going to basically do an integral where we multiply two numbers. [00:35:24] So basically, there's all these different years in the future, and each year in the future will have a certain size economy, which has some amount of past simulations that are happening in that economy, right? [00:35:37] So in each future year, there's some percentage of that economy that they spend doing these past simulations, right? [00:35:43] So we could count that now in terms of, say, how many historical novels we have or historical games or something, right? [00:35:49] And we could also ask which past they're interested in, you see. [00:35:54] So as the future, as we get farther in the future, we expect the future to get bigger, the economy to get bigger. [00:36:01] And then even if there's just a constant fraction of that future economy that's spent thinking about the past, that amount gets bigger. [00:36:10] Right. [00:36:10] Right. [00:36:11] So the farther we go into the future, the more past simulations we expect to be happening at that moment in that distant future time. [00:36:18] Okay.
[00:36:19] But we expect those simulations to be spread out across their past according to how interested they are in their past. [00:36:26] That is, which past are they interested in? [00:36:28] Right. [00:36:29] And so now when we're thinking about the future, we're asking, you know, it's an integral over the future. [00:36:34] For each future year, one number is how big is that economy and, you know, what fraction of it is devoted to historical simulations. [00:36:42] And the other is how [00:36:45] large a fraction of that we would count, because we are competing with all the other dates at which they could be trying to make simulations. [00:36:53] Wow, that's complex, right? [00:36:55] But there's a simple answer, okay, which is that if interest in the past fades faster than the world grows, then most of the simulations of you will be in the relatively near future. [00:37:16] So, say [00:37:17] the future economy grows by a factor of two every 15 years at the moment. [00:37:26] But if interest in the past falls by a factor of two every 10 years, then as you can see, even though the future gets much bigger, it's even less interested in you, in simulating your date. [00:37:40] Because they'll be much more interested in simulating things nearer their own date in terms of their history. [00:37:46] So the final bottom line is what we want to ask is: how fast does interest in the past decline with time compared to the rate at which the population or the economy grows in time? [00:38:00] And there's something called Google Ngrams. [00:38:03] And you can just type in a year like 1900 into Google Ngrams and it will show you how the interest in that year rose with time up until that year and fell with time after that year. [00:38:13] Really? [00:38:14] And you can just pick all sorts of dates and see that. [00:38:16] And so you can see how fast does interest in the past fall away as you move away from that year.
[00:38:23] And in fact, it falls away faster than the economy or the population grows. [00:38:28] But there are certain years that are way more interesting than others. [00:38:31] Certain times in history, like the ancient Egyptians, when the pyramids were built, or World War II. [00:38:36] But we're just trying to do the overall average first. [00:38:38] Okay. [00:38:40] So the overall average seems to be that our interest in the past fades faster. [00:38:46] And that also is roughly true. [00:38:48] That is, say, in the last 10,000 years, the world population doubled roughly every 1,000 years up until recently. [00:38:54] But I think it seems to me obvious that interest in the past falls faster than a factor of two every thousand years. [00:39:00] So, right. [00:39:02] Okay. [00:39:02] I mean, so if you look at most historical simulations, you know, a thousand years ago, you know, the percentage of historical simulations that are set after the year 1000 is probably well over 90%. [00:39:14] Right. [00:39:15] And you go back another thousand years and it falls even faster. [00:39:18] And, you know, the fraction of historical simulations that are of a thousand BC, almost nothing. [00:39:26] Right. [00:39:26] Look on Netflix or some other thing. [00:39:28] Find me a historical story that took place [00:39:32] more than 3,000 years ago. [00:39:34] Right. [00:39:34] There's just almost nothing. [00:39:35] But 3,000 years ago is only a factor of eight reduced population, because it goes by a factor of two every thousand years. [00:39:44] So, if it were proportional to population, then one eighth, say, of historical simulations should be set more than 3,000 years ago. [00:39:53] Right. [00:39:54] But far fewer. [00:39:55] I see. [00:39:56] So, here's the bottom line, right? [00:39:59] We should only expect the future to simulate us when it's relatively soon. [00:40:03] But actually, we can't do simulations yet.
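The growth-versus-fading comparison above can be put into a few lines of code. This is a toy sketch using only the illustrative rates from the conversation (economy doubling every 15 years, interest in a given past date halving every 10 years, population doubling every 1,000 years); the constants and the 2,000-year horizon are arbitrary illustration choices, not fitted values:

```python
# Toy version of the "who would simulate our date?" integral.
# All rates are the illustrative ones from the conversation, not fitted values.
ECON_DOUBLING = 15.0       # years for the future economy to double
INTEREST_HALVING = 10.0    # years for interest in our particular date to halve

def sim_weight(t):
    """Relative number of simulations of our date run in future year t."""
    economy = 2.0 ** (t / ECON_DOUBLING)        # the economy grows...
    interest = 2.0 ** (-t / INTEREST_HALVING)   # ...but interest in us decays faster
    return economy * interest                   # net decay: 2**(-t/30)

total = sum(sim_weight(t) for t in range(0, 2001))
next_century = sum(sim_weight(t) for t in range(0, 101))
print(f"share of all simulations of us run within 100 years: {next_century / total:.0%}")

# Same logic for which past gets simulated: if population doubles every
# 1,000 years, 3,000 years back is only a factor of 2**3 = 8 fewer people,
# yet almost no stories or games are set that far back.
print(2 ** (3000 / 1000))  # 8.0
```

Because interest decays faster than the economy grows, the per-year weight shrinks geometrically, so under these toy rates the bulk of any simulations of our date would be run within roughly the next century.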
[00:40:06] It'll be a while till we can do simulations of the past. [00:40:10] It'll be a while before we can create creatures in a simulation who don't know they're in a simulation. [00:40:15] We're not able to do that yet. [00:40:16] Right. [00:40:17] And so, by the time they can do it, it'll be far enough in the future, they'll be hardly interested in us at all. [00:40:23] Right. [00:40:23] And then they'll only be able to do a few at first. [00:40:26] And as they get able to do more, they get even less interested in us faster. [00:40:29] Why would they be interested in us? [00:40:30] We're just too far in the past. [00:40:31] We're too far in the past. [00:40:33] Unless we are somehow special. [00:40:34] So, some people have claimed that we're in the special era. [00:40:38] That the future will be unusually interested in us, especially because we are different. [00:40:46] Why would they be interested in us right now? [00:40:48] Well, many people are attracted or tied to this idea that this is a pivotal moment in history. [00:40:55] And that's what motivates them to do a lot of things they do because this is the moment that matters. [00:41:02] And of course, it's a bit of arrogance and self-sufficiency. [00:41:05] It sounds like something that every... [00:41:06] Obviously, it could be true in principle. [00:41:08] I don't [00:41:09] actually think it is that true. [00:41:11] I mean, some important things are happening now, but important things happened a century ago and a century before that. [00:41:16] What would be the social implication of everybody being aware that they are in a simulation? [00:41:23] Of human beings being aware that we're in a simulation? [00:41:24] What would change? [00:41:25] What would be the social implications? [00:41:28] Well, you have to ask what's the purpose of the simulation? [00:41:33] So clearly, somebody who made a simulation could have told people in the simulation that we're in a simulation, right?
[00:41:38] If they had wanted to do that. [00:41:39] Okay. [00:41:40] So clearly, they didn't want to do that, right? [00:41:43] Right. [00:41:43] So, if people in a simulation figure out they're in a simulation against the wishes of the people who set up the simulation, the question is, what can the people who set up the simulation do in response? [00:41:53] And they have enormous powers to deal with that. [00:41:55] So, I don't think they're actually going to suffer that problem. [00:41:58] So, they wouldn't allow us to figure it out. [00:42:00] If they don't want to, and it doesn't seem like they want to. [00:42:03] Unless they want to do a simulation of a world that finds out and see what they do. [00:42:07] But that's, you know, that's going to be a kind of unusual, special thing to simulate. [00:42:11] No doubt they do that sometimes, but not very often. [00:42:14] Right. [00:42:14] So, one of the main things you can do in a simulation is you can sort of have noise and error in how you do the simulation. [00:42:21] And then, if something happens that wasn't supposed to happen, you just back it up and you do it again. [00:42:28] Right. [00:42:29] So, simulations don't have to just run forward, they can back up. [00:42:36] So, that's just the nature of the way we do computer simulations of all sorts of systems today. [00:42:41] So, that means if there's anything you don't want to have happen in a simulation, it's relatively easy to prevent it. [00:42:48] That is, if you see the thing go wrong, then you stop, you back it up, you change something, and then you roll it back forward and you keep doing that until you get the thing you wanted. === Filtering Cosmic Signals (08:49) === [00:42:59] Right. [00:43:00] Right. [00:43:00] That makes sense. [00:43:02] So, you know, if you're in a simulation, you are very much at their mercy and you will not know things they don't want you to know. [00:43:09] What if they did want us?
[00:43:10] What if they did decide to let us know we were in a simulation? [00:43:12] Well, then we're in a simulation where they want to find out what happens when they let us know. [00:43:15] And of course, they will try different variations on that. [00:43:18] And then, you know, they will see the different variations. [00:43:22] Wow. [00:43:24] That's fascinating. [00:43:25] And people like Elon Musk, people who have more interesting lives, who have made a greater impact on history, those are people that are more inclined to believe... [00:43:33] Well, they should be more inclined to believe that for them, it is in fact true that they are more likely to be living in a simulation. [00:43:41] Right. [00:43:44] The Great Filter. [00:43:46] What inspired you to write this paper on the Great Filter? [00:43:51] I believe you collaborated on it with a few people. [00:43:53] Is that right? [00:43:54] And then what is the basic idea of the Great Filter? [00:43:57] So, the Great Filter is something I did over 20 years ago, 25 years ago almost now. [00:44:04] In 1996, right, [00:44:06] it started, and that was about just reframing, uh, what they call Fermi's question: where is everybody? [00:44:15] Uh, so we look up at the universe and it looks empty and dead, and we look at our future, we see this prospect for us to be lively and visible, and there's a bit of a conflict here. [00:44:27] We don't see around us the things we expect to be in the future, so [00:44:35] I reframed that question, just asked it a different way. [00:44:39] And I said, well, big visible things of the sort that we hope to become soon, they will have to appear by going down a path. [00:44:47] They start with simple dead matter early in the universe, and then they're on a planet, say, and life appears, and life goes through stages, and eventually it gets to our stage, and eventually it goes from our stage onto this big visible stage.
[00:45:00] And going down that path, it must be very hard to go from the beginning and get all the way to the end by this point in time [00:45:08] in the history of the universe, because almost nothing has done it. [00:45:10] So, that would be to say there's a filter on that path. [00:45:13] Things start at one end of a pipe and almost nothing comes out the other end of the pipe. [00:45:18] Because, look, this whole universe started down the pipe. [00:45:22] Every piece of rock anywhere could have started down this path to evolve life and spread. [00:45:28] But we're the only thing we can see that's farther down the path. [00:45:35] And we don't see anything that's farther beyond where we hope to be. [00:45:39] So, that's the claim that [00:45:41] to get all the way to where we hope to be is very hard. [00:45:44] There's a very large filter between that final destination and the initial stage of simple dead matter. [00:45:52] And when you say filter, what specifically are you referring to? [00:45:55] Like, what would a filter be? [00:45:57] What's an example of a filter? [00:45:58] So, you could have a planet where it never evolved even life in the first place, right? [00:46:03] And then it failed to pass that first step. [00:46:05] And then you could have some place with simple life that didn't become more complicated life, like with sexual reproduction, say, or photosynthesis, or, you know, a whole bunch of multicellular things, right? [00:46:19] So, on Earth, life went through a bunch of stages, and we could say each stage, when it made a key change, it passed through a filter step. [00:46:27] It succeeded in accomplishing something that maybe most planets never do. [00:46:33] And that's the great filter: the difficulty of going all the way down this path to the place where we are now and then on forward. [00:46:41] So, the fact the universe looks dead and empty says that filter is big. [00:46:46] And it also raises a question: how far along the filter are we?
[00:46:49] Because we're not all the way. [00:46:51] And so clearly it's a huge filter. [00:46:53] And even if just a small fraction is ahead of us, that says bad news about our future. [00:46:58] A small fraction is ahead of us. [00:46:59] What is it? [00:46:59] What do you mean by that? [00:47:00] So, it could be, say, the total filter is maybe 10 to the 24, for example, like only one planet out of 10 to the 24 ever reaches advanced life that becomes visible in the universe. [00:47:13] Well, it could be that we've gone through 10 to the 22 of it so far, and we only have 10 to the 2 left, but that would still mean we only have a one percent chance of getting from here to that final destination. [00:47:26] That is, a 99% chance we won't, probably by dying, which is kind of scary, yes. [00:47:34] What are the chances that we've already made it through the great filter? [00:47:37] Well, that's again about how much of it we have passed. [00:47:40] So you could look at our history and try to guess which things were hard. [00:47:49] But the problem is actually the timing doesn't tell you much. [00:47:52] So it turns out that if we were very lucky on Earth to get as far as we did, then even if the different steps before us had different difficulties, the timing would be roughly equally spaced. [00:48:05] And therefore, we do see roughly equally spaced timing between major transitions. [00:48:10] And so that actually doesn't tell us much about how hard those steps were. [00:48:15] If we were to find other life out there in the universe that had gone independently down this path, that would be big information about how hard the steps were. [00:48:23] So if we found some other life out there, say, that was just primitive life that didn't share an origin with us, well, that would be big news about how hard that first step was. [00:48:35] And it would suggest it's easier than we might have feared. [00:48:38] So.
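The one-percent figure follows mechanically from the filter sizes quoted above. A minimal check, using the conversation's illustrative numbers (a total filter of 10 to the 24, with 10 to the 22 already behind us); these are examples, not measured values:

```python
# Illustrative filter sizes from the conversation, not measured values.
total_filter = 1e24     # only ~1 planet in 1e24 ever becomes visibly advanced
passed = 1e22           # the portion of the filter already behind us

remaining = total_filter / passed   # filter still ahead of us: 1e2
p_survive = 1.0 / remaining         # chance of passing what's left
print(f"remaining filter: {remaining:.0e}, survival chance: {p_survive:.0%}")
```

The point of the example is the asymmetry: even if 99.99% of the filter (in log terms) is behind us, the small remaining factor of 100 still implies only a 1% chance of reaching the visible stage.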
[00:48:39] Paradoxically, any evidence we see of independent life out there in the universe that's gone anywhere along the path to where we are is bad news about our future. [00:48:51] If we discover any sort of life out there, that's bad news for us because it would suggest that maybe these earlier steps aren't as hard as we thought, and therefore the later steps we still have to go are harder than we thought. [00:49:05] So that's all the great filter stuff. [00:49:08] You were talking about the work I did with some other people, that's in the last [00:49:11] two and a half years. [00:49:12] Yes. [00:49:12] Like, that's the work we call grabby aliens. [00:49:15] Right. [00:49:16] And that's basically giving numerical estimates for the great filter. [00:49:20] Right. [00:49:20] Telling you just how big it is and therefore saying more specifically the distribution of aliens in space and time. [00:49:28] And we claim that we kind of have the answer to that. [00:49:31] So, what are grabby aliens? [00:49:35] So, there's two kinds of aliens in the universe: the kinds you can see and the kinds you can't. [00:49:42] So, we just postulate there are some kinds of aliens that would just be pretty obvious. [00:49:46] That is, they started somewhere, but then they expanded and kept expanding. [00:49:51] And in the volume they expanded into, they changed things. [00:49:54] They did stuff. [00:49:56] We don't know exactly what they'll do, but just like humans and life on Earth have changed stuff wherever they've gone, they would change stuff. [00:50:04] And if they do that to a big enough volume, they would be noticeable. [00:50:08] And so we can think about what our data says about the obvious ones [00:50:14] in a much easier way than we can about the other kind, the quiet ones. [00:50:18] So we can come back to the quiet ones, but we're first going to focus on the big obvious ones. [00:50:22] Okay.
[00:50:23] And, you know, and the key point is we don't actually see them, right? [00:50:28] They would be big and obvious, and we don't see them. [00:50:30] But if we think carefully, we can figure out a lot about what that implies. [00:50:36] You might not realize that it says a lot that we can't see them. [00:50:40] So what we have is a three-parameter model of where aliens are in space and time. [00:50:47] And each of the parameters can be fit to data. [00:50:50] And then you kind of need to believe this model to explain why we're so early in time, which is one of the key data points, our current date. [00:51:00] And so, you know, I'm happy to walk through if you want. [00:51:03] Yeah, absolutely. [00:51:03] Each of these parameters. [00:51:04] Austin, we should be able to pull up some sort of graphics that represent this concept and this study. [00:51:11] Yeah, let's go through that. [00:51:13] We are here at 14 billion years into the history of the universe. [00:51:19] That sounds late. [00:51:20] 14 billion years seems like a long time, but it's actually pretty early. [00:51:25] The average star will last for 5 trillion years. [00:51:30] Now, the peak of star formation in the history of the universe was like 4 billion years after the beginning. [00:51:35] So we're well past the peak of star formation, but most stars will last for trillions of years. [00:51:43] And so we are early in terms of when most stars will exist. === The End of Life Deadline (11:50) === [00:51:48] And we have a simple theory of when advanced life should appear. [00:51:54] That says that the most likely time for advanced life to appear would be much later in the history of these stars. [00:52:01] So, I've told you about how life on Earth had to go through a number of difficult steps. [00:52:06] Yes. [00:52:07] And that Earth is probably lucky in having gone through all of those steps in an unusually short time.
[00:52:12] That is, most planets out there never get to our point before that planet ends. [00:52:17] Right. [00:52:18] Certainly by now in the history of the universe. [00:52:21] So, that suggests that it's just hard [00:52:26] to do each of these steps, and perhaps each of the steps just has a crazy long expected time, but that our planet got lucky. [00:52:34] So, an analogy is cancer. [00:52:37] In your body, you have many billions of cells, and to produce cancer, a cell needs to undergo like six different mutations in that same cell in the entire history of your life. [00:52:48] Now, these mutations are so unlikely that most of your cells never have any of these mutations, and only a few have a couple of them. [00:52:57] But by the end of your life, typically one of them [00:53:01] will get all six mutations. [00:53:04] And that's analogous to all these planets in the universe who need to go through a bunch of steps to produce advanced life. [00:53:12] And most of them never even get to the first step, and hardly any of them get to, you know, four steps. [00:53:17] And maybe our life got through all six steps to get to where we are before the deadline of life closing on the planet Earth. [00:53:26] So that's like a cell in your body getting cancer before the deadline of your lifespan ending. [00:53:32] In other words, planet Earth has developed malignant cancer. [00:53:36] Right. [00:53:36] Although this cancer is great. [00:53:38] Depends on how you look at it. [00:53:39] Sure. [00:53:40] But I'm going to take that stand. [00:53:43] This life is great. [00:53:44] But so what we know is that the chance of you getting cancer as a function of time goes as a power law in time, where the power is the number of these mutations that have to happen. [00:53:56] So typically around six. [00:53:58] So you are far more likely to get cancer near the end of your life than the beginning. [00:54:02] It's not constant in time.
[00:54:04] The chance just increases dramatically toward the end of your life. [00:54:07] It goes as this power law [00:54:09] of the sixth power of time, because there's roughly six mutations. [00:54:13] Okay. [00:54:14] So, similarly for planets, if advanced life like us is going to appear on a planet like Earth, it's not equally likely to appear at any moment in the history of that planet. [00:54:23] It's far more likely to happen near the end of that planet's life. [00:54:28] And it actually seems like there's roughly six hard steps that need to happen on a planet like Earth for us to produce life at our level as well. [00:54:37] So, that means the chance of advanced life increases with time. [00:54:43] Right. [00:54:45] Now, but longer-lived planets, they've got a lot more time for all these hard steps to happen. [00:54:52] And so the actual chance of life appearing on a longer-lived planet also increases by this power of six. [00:54:59] So we are a planet with, say, five billion years of life, and say the typical planet out there will have five trillion years of life. [00:55:08] That's a factor of a thousand more. [00:55:10] And if there's six of these steps, the chance that life would have appeared on one of those planets by the end of its life, compared to on our planet by the end of our planet's life, is a thousand to the power of six, or 10 to the 18. [00:55:24] That is, it's crazy unlikely for life to appear on a planet like ours so early in the universe compared to on a longer-lived planet later in the universe. [00:55:37] So there's something wrong with this analysis, because our analysis is saying we are just crazy early. [00:55:42] Sorry, pull that thing just a little bit closer to you. [00:55:44] You can adjust, you can pull it like this. [00:55:46] Pull it closer. [00:55:47] There you go. [00:55:48] So we're just crazy early compared to when we should expect life to appear.
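The hard-steps arithmetic above can be written out explicitly. A small sketch under the stated assumptions (roughly six hard steps, so the chance of completing them all by time t scales as t to the sixth; planetary lifetimes of 5 billion versus 5 trillion years):

```python
N_HARD_STEPS = 6   # roughly six hard steps, per the conversation

def relative_chance(lifetime_ratio, n=N_HARD_STEPS):
    """If the chance of finishing n rare steps by time t scales as t**n,
    a planet with `lifetime_ratio` times more habitable time is this much
    likelier to host advanced life by the end of its life."""
    return lifetime_ratio ** n

earth_lifetime = 5e9    # ~5 billion years of habitability
long_lived = 5e12       # ~5 trillion years, a typical long-lived star
ratio = long_lived / earth_lifetime    # factor of 1000
print(f"{relative_chance(ratio):.0e}")  # 1000**6 = 1e+18
```

This is the sense in which we look "crazy early": under these assumptions, a long-lived planet is 10 to the 18 times likelier to eventually host advanced life than one like Earth, so naively we should have expected to find ourselves there, much later.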
[00:55:52] But the key assumption we made in this analysis is that the universe would just sit empty and wait until we appeared. [00:56:00] And that's the assumption we're saying is wrong. [00:56:03] So, what's actually happening is life is popping up and appearing at various places in the universe, advanced life. [00:56:10] And then, sometime, it becomes grabby, it becomes visible. [00:56:13] And when it becomes grabby, it expands at some speed. [00:56:17] And then these civilizations appearing at different times expand, and then their expanding waves eventually meet each other. [00:56:24] And then at that point, the universe is full of grabby aliens. [00:56:28] And then it's too late after that for life like us to appear. [00:56:32] So there's a deadline in the history of the universe. [00:56:33] First, the universe starts out empty, and then advanced life starts to appear, and then it expands, and then fills everything up. [00:56:41] And then too late, you can't appear anymore. [00:56:44] And that's the explanation for why we are here so early in the history of the universe. [00:56:48] We are now at a period where the universe is filling up with aliens. [00:56:52] And in a billion years or two, it'll be all full. [00:56:55] And we couldn't appear five trillion years in the future because it will have long since been full then. [00:57:01] So we are just extremely early compared to other solar systems; [00:57:08] we're extremely early compared to when we would have appeared had the universe just waited empty for us. [00:57:14] The universe is not waiting empty for us, is the key point. [00:57:17] There's a deadline, and that's why we're appearing so early. [00:57:20] Hmm. [00:57:22] So, the first thing to say is: the reason you need to believe this theory is that otherwise we are crazy early. [00:57:29] You have to believe there are aliens out there and they are filling up the universe right now.
[00:57:33] And in fact, right now, roughly half the volume of the universe is full of alien civilizations that we can't see. [00:57:40] And in another few billion years, it'll just all be full. [00:57:43] And we may well soon join them, i.e., going out and expanding and becoming a grabby alien civilization. [00:57:49] So, this model of aliens appearing has three key parameters. [00:57:55] First, we're going to assume they just appear at random places in space; on the large scale of the universe, [00:58:01] the universe is pretty similar everywhere. [00:58:02] They just appear in a random galaxy, say. [00:58:05] And then they expand at some speed after they appear. [00:58:10] So that's one of the parameters. [00:58:11] How fast do they expand? [00:58:13] And then the other two parameters are when they appear in time. [00:58:17] So I told you there's a power law. [00:58:19] That is, the number of hard steps makes them appear later in time more rapidly according to a power law. [00:58:25] So a power law has two parameters: it has the power and it has a constant. [00:58:30] And so these are the three parameters that we can fit to data. [00:58:33] So the power comes from the history of life on Earth. [00:58:36] That is, the timing of when life appeared on Earth, how much time we seem to have left from now, and other data we have suggest there's roughly six hard steps on Earth, therefore a power of six, roughly somewhere between three and 12 or something, but roughly a power of six. [00:58:54] The constant of the power law comes from our current date, because we are a random sample of when an advanced civilization appears. [00:59:04] That is, we're not advanced enough to become grabby yet, but if we do so, it'll be within a million years or so. [00:59:09] So, a million years, [00:59:10] which is a short time on the scale of 14 billion years. [00:59:14] So, relatively short time from now, we will either become grabby or not.
[00:59:17] But that makes our date a random sample from when these things happen. [00:59:21] So, that gives us the constant in front of the power law. [00:59:24] So, we have the constant in front of the power law. [00:59:25] We have the power of the power law. [00:59:27] And all we need is the speed of expansion. [00:59:29] And then we've got the whole model. [00:59:31] So, how can we get the speed of expansion? [00:59:34] Well, if you run this model and simulate it, you find that when civilizations are born and they look out in the universe, we ask, can they see other civilizations in the universe? [00:59:43] And what we find is if they expand slowly, when they're born and they look out, they see lots of other civilizations. [00:59:50] Huge, you know, spherical volumes in the sky, much bigger than the full moon, full of aliens. [00:59:56] Spherical volumes in the sky. [00:59:58] Well, because they start at some point and they expand spherically away from there. [01:00:02] Okay. [01:00:02] And so you would look in the sky and you'd basically see vast spheres of alien civilizations. [01:00:07] If everybody grew slowly, that's what the typical civilization would see. [01:00:12] Okay. [01:00:13] But if they grow very fast, most of them won't see any others when they're born, because they don't see them until they're almost there. [01:00:23] Because there's a selection effect: you couldn't appear within the volume of a different alien civilization, because they would preclude you. [01:00:32] That is, if they had been born and were using the volume, you would no longer be able to evolve there, at least if they so chose. [01:00:41] So the fact that we don't see any aliens in our sky [01:00:45] is the evidence that they must be expanding at a very fast speed, say roughly half the speed of light or even faster, which is extremely fast. [01:00:54] But it explains why we wouldn't see them coming until they're actually here. [01:00:58] Until they're nearly here, but it might be.
[01:01:00] I mean, it might be millions of years before they're here; you wouldn't see them, but still, at a typical birth date, you won't see them yet. [01:01:06] Right. [01:01:07] Okay. [01:01:08] And now, what is this model? [01:01:11] Explain... Austin, is this... there's no video? [01:01:16] There's a video farther down. [01:01:19] There you go. [01:01:20] Yeah, that's the video. [01:01:23] The following is a simulation of the Grabby Aliens model of alien civilizations. [01:01:28] Civilizations are born and then expand outward at a constant speed. [01:01:30] A spherical region of space is shown. [01:01:33] By the time we get to 13.8 billion years, the sphere will be... [01:01:37] So, this is a random place in space. [01:01:39] Civilizations appear, they grow, and then they meet each other, and then eventually the volume is full. [01:01:46] And so, in the upper right hand corner, you can see the date of the simulation. [01:01:49] So, we're past our current date now. [01:01:51] Okay. [01:01:52] And it's showing that the universe is expanding, and it counts how many of them there are. [01:01:57] Or you can see the same thing again, but now we can see through the volumes, maybe see more of what's going on. [01:02:02] So you can see, up on the upper right, we're before today at the moment. [01:02:05] The universe is expanding. [01:02:06] It shows how many civilizations there are. [01:02:10] There are roughly four times 10 to the eight galaxies in this volume. [01:02:14] So it's a very large region of space. [01:02:16] And then it's all full. [01:02:18] So that's the idea. [01:02:21] Okay. [01:02:21] So it's a relatively simple model. [01:02:23] Things just appear at random times, and then they all expand at the same speed. [01:02:27] But the time at which they appear goes as this power law. [01:02:30] That's the key thing here, right? [01:02:31] Right. [01:02:31] At the beginning, hardly anything was happening. [01:02:33] And later, toward the end, everything was just going crazy.
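The model as described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the candidate count, time units, and speed below are assumed values chosen just to show the two key mechanisms, the power-law birth times (power of six from the "hard steps" argument) and the selection effect that an origin point inside an earlier civilization's expanding sphere is precluded.

```python
import math
import random

random.seed(0)

N_CANDIDATES = 2000  # candidate origin events (assumed value)
POWER = 6            # roughly six hard steps -> power-law exponent
T_MAX = 1.0          # model time horizon, arbitrary units (assumed)
SPEED = 0.5          # expansion speed in box-widths per time unit (assumed)

def sample_birth_time():
    # Inverse-CDF sampling for F(t) = (t / T_MAX)**POWER: with a large
    # power, births cluster strongly toward late times, so "at the
    # beginning hardly anything happens and near the end it goes crazy".
    return T_MAX * random.random() ** (1.0 / POWER)

# Candidate (birth_time, position) pairs in a unit box, sorted by birth.
candidates = sorted(
    (sample_birth_time(), (random.random(), random.random(), random.random()))
    for _ in range(N_CANDIDATES)
)

civs = []  # surviving civilizations
for t, pos in candidates:
    # Selection effect: you can't appear inside a volume that an earlier
    # civilization's sphere (radius = SPEED * elapsed time) already covers.
    precluded = any(math.dist(pos, p0) < SPEED * (t - t0) for t0, p0 in civs)
    if not precluded:
        civs.append((t, pos))

print(f"{len(civs)} civilizations survive out of {N_CANDIDATES} candidates")
```

Running this with a slower `SPEED` leaves many more surviving civilizations visible to each other, which is the qualitative point made in the conversation: fast expansion means a typical newborn civilization sees an empty sky.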
[01:02:35] Everything goes crazy. [01:02:36] Right. [01:02:36] Okay. [01:02:37] Now, what makes a civilization grabby? [01:02:42] So, the definition of grabby is that it expands and changes the appearance of the volume it's in so that you would notice it. [01:02:54] That's the key idea. [01:02:55] Now, we could ask, why would that happen? [01:02:59] And, you know, in some sense, you could just imagine it's like life on Earth: once there's some competition of different parts trying to colonize things, eventually it all gets colonized. [01:03:12] We will, in our future, perhaps face a choice to become grabby or not, and that may illuminate better the issues involved. [01:03:23] So, if you just imagine technology just arbitrarily getting improved, then eventually people could leave here and go places, and eventually they would then, you know, compete to grab things, and when they grab things, they'd use them in some way, and then they would change them. === Loud Civilizations in Space (03:12) === [01:03:39] And that would just be a natural way that they could become grabby. [01:03:42] Of course, they have to be capable enough to actually do interstellar travel, which we are not yet. [01:03:48] But within a million years, it seems like we would be. [01:03:52] So the simplest argument for becoming grabby is just to say: if technology and abilities just keep improving, then eventually there'll be a point when interstellar travel is possible, and then it would be done. [01:04:06] And then people would move in different directions, and then they'd land somewhere, and they'd colonize that and grow there and expand out. [01:04:12] So, just in the way life expands across the Earth when it's possible, life would expand through the universe. [01:04:22] Now, of course, there's that prerequisite. [01:04:25] You have to be advanced enough to be able to do interstellar travel, and then you have to not be dead at that point. [01:04:32] That's important.
[01:04:34] And you have to allow it. [01:04:37] So, I actually think it's not clear what fraction of them will allow it. [01:04:41] In fact, it may be that most civilizations don't allow it. [01:04:46] That's a plausible possibility. [01:04:49] But even so, the universe is big enough that some of them will. [01:04:54] And so the Grabby Aliens model... [01:04:55] So the key idea is you start out as a quiet civilization and then eventually have the choice to become loud. [01:05:05] And there's this key question of the ratio between the quiet and the loud. [01:05:08] What fraction of quiet eventually give rise to a loud? [01:05:12] And that fraction matters for two different things. [01:05:16] One is our future. [01:05:18] If, say, only one in a thousand quiet become loud, then until we know otherwise, the best estimate for our choice is that there's only a one in a thousand chance we would become loud. [01:05:29] And the ratio also matters for SETI, the Search for Extraterrestrial Intelligence. [01:05:36] This statistical model I was just telling you about, we can fit each of these parameters to data. [01:05:41] And I didn't tell you the actual final numbers, which are, roughly: loud alien civilizations appear roughly once per million galaxies. [01:05:50] Okay. [01:05:51] And right now is sort of the middle of the distribution of when they appear. [01:05:55] And if we start to expand at the speed, you know, half the speed of light, like the rest of them are, then we would meet them in roughly a billion years. [01:06:03] Okay. [01:06:03] So those are the answers. [01:06:04] But those are the statistics for the loud ones. [01:06:08] But if there's a ratio of quiet to loud, then the higher that ratio is, the closer is the nearest quiet one. [01:06:14] Because we know the loud ones appear like once per million galaxies. [01:06:20] But if, say, there was a million to one ratio of quiet to loud, well, now the nearest quiet one might be in our galaxy.
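The quiet-to-loud ratio arithmetic above can be sketched as follows. The once-per-million-galaxies figure is the rough number given in the conversation; the function name is my own, and "galaxies per quiet civilization" is used as the simple proxy for distance that the conversation uses.

```python
# Loud (grabby) civilizations appear roughly once per million galaxies.
# If there are R quiet civilizations for every loud one, quiet ones are
# R times as common, so the nearest quiet one is correspondingly closer
# in galaxy counts. Illustrative arithmetic only.

GALAXIES_PER_LOUD = 1_000_000  # rough fit from the Grabby Aliens model

def galaxies_per_quiet(ratio_quiet_to_loud):
    # One quiet civilization per this many galaxies.
    return GALAXIES_PER_LOUD / ratio_quiet_to_loud

print(galaxies_per_quiet(10))         # 10:1 ratio -> once per 100,000 galaxies
print(galaxies_per_quiet(1_000_000))  # million-to-one -> roughly one per galaxy
```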
[01:06:31] Whereas if it's only a 10 to one ratio, you see, the nearest quiet one is once per 100,000 galaxies, which is a long way away. [01:06:38] Yes. [01:06:38] It seems like if we were to become that advanced and be able to achieve interstellar transportation, based on what we are like now, we would become a loud one. === Integrated World Communities (16:08) === [01:06:52] A loud civilization, because that's just human nature now. [01:06:54] We expand, we influence, we grab things, and that's the way economics are involved. [01:06:59] But there are contrary forces at play. [01:07:03] And so that's worth thinking through. [01:07:08] So, as you know, the world is pretty integrated at the moment. [01:07:13] Once upon a time, empires were separated across the globe and didn't interact very much. [01:07:17] Distances were too far to communicate or travel. [01:07:20] But as we've made travel and communication easier, the world has become more integrated over the last century. [01:07:27] Right. [01:07:29] That integration has made an integrated economy, [01:07:32] where we trade across the globe, and an integrated society, [01:07:35] where people meet and interact across the globe and create communities across the globe. [01:07:41] And the size of nations and empires has also been increasing over centuries. [01:07:46] That is, we now have things like the European Union, large coordinations of nations on large scales. [01:07:54] Yeah. [01:07:55] Now, you know, for a long time, people have thought: doesn't that mean we'll have a world government soon? [01:08:01] Won't we merge these into an even larger unit? [01:08:04] And people have been concerned about that and wary of that. [01:08:09] And so far, we haven't. [01:08:12] But what they haven't noticed is that we have created something like a world mob or world community, which we did not have a century ago.
[01:08:22] So if you look at many different areas of regulation, say nuclear power or medical ethics or organ sales or electromagnetic spectrum or plane safety or things like that, [01:08:33] it turns out that the world does it pretty similarly everywhere. [01:08:38] So we have a world community that talks to each other about these things, and people want to be respected in those world communities. [01:08:46] And there are some conformity pressures in those world communities. [01:08:49] And those conformity pressures basically induce a pretty high level of convergence of regulation all around the world. [01:08:57] It was especially dramatic in COVID. If you recall, at the beginning of COVID, at the very beginning, the usual [01:09:04] health experts had their usual recommendations, which were against masks and against travel restrictions. [01:09:10] And then elites around the world talked about the subject a lot and came to a different conclusion. [01:09:18] And then the whole world did it the different way. [01:09:21] That is, almost everywhere, people followed the new consensus about how to treat COVID, which was different than what the elite consensus had been a few months before. [01:09:32] Because we just have this large world community where people talk a lot, and then they feel some social pressures to [01:09:38] go along with what everybody else thinks, in the way that most communities do. [01:09:41] But now we have a world level community. [01:09:43] We have world level communities on many kinds of topics, especially with regulation and other sorts of business practices, where we have a whole world where we do things together in that way. [01:09:57] And I think a lot of people like that. [01:10:00] That is, instead of having a world in great war and conflict, people like that. [01:10:05] Yes. [01:10:05] Right. [01:10:06] Instead of a world at war and in conflict around the world, we have a world where...
[01:10:11] We have a lot more peace, because we talk things out and decide together rather than fighting and competing them out. [01:10:19] And that's been going on for the last century. [01:10:22] And I think we have to predict that will continue. [01:10:25] And so the obvious prediction is that we will have even stronger world communities that limit variation even more in the future. [01:10:33] We may have a stronger world government. [01:10:35] We may not, but we will have the stronger world community. [01:10:38] And we will correctly credit it, and maybe incorrectly also, for many advances, like [01:10:44] dealing with global warming or dealing with inequality or dealing with overfishing. [01:10:49] That is, these world communities have been trying to tackle those sorts of problems, and they have often had success. [01:10:56] So I think over the coming centuries, we will have a more strongly integrated world community that likes the fact that they can talk together and decide what to do and, in fact, prevent destructive competition that they don't like, i.e., making the regulations everywhere in the world the same, including preventing war. [01:11:17] That's the near term future that we should expect given recent trends. [01:11:21] But now project that forward and imagine the point in time, a few centuries from now, when interstellar travel becomes possible. [01:11:30] And imagine the choice to allow that. [01:11:34] People will know that as soon as they allow an interstellar colony to leave Earth and go off to another star, that will be the end of this era when we could all talk and decide together and exert enough social pressure to make sure we all do it together the same way. [01:11:51] Once you send those colonies out, they are beyond your control to regulate. [01:11:56] You cannot stop their competition and you cannot stop their evolution. [01:12:00] They will evolve and become strange things.
[01:12:03] So, one of the things I think people will like over the next few centuries is that they will prevent evolution. [01:12:08] They will prevent our descendants from changing to become strange, descended creatures. [01:12:14] Wait, wait, wait, wait. [01:12:15] Explain that again, that last part. [01:12:17] Why would they want to stop evolution? [01:12:19] Why would they want to stop us from evolving? [01:12:20] So, you're saying they would want to stop us from becoming different? [01:12:23] Yes. [01:12:24] So, [01:12:25] as a futurist, I've seen many conversations over the last few decades about the possibility of, say, genetic engineering or other sorts of changes to people, including artificial intelligence. [01:12:36] And a consistent theme is a fear, even horror, of the changes that might happen and the ways that our descendants might be weird. [01:12:46] People have been pretty against allowing substantial genetic engineering to make our descendants genetically different. [01:12:53] They are wary of allowing AI descendants to be mentally different. [01:12:59] And they are often just wary of, say, population competition. [01:13:02] That is, people have often been worried about overpopulation. [01:13:06] And one of the fears they've had is that even if most places limit their population, the few places that don't eventually dominate the population. [01:13:14] And so people have wanted to, and talked about, having regulation of fertility to prevent overpopulation, which is competition by population size. [01:13:24] Okay. [01:13:26] So all of these themes together are showing a wariness [01:13:30] of allowing evolution, of allowing our descendants to become different from us. [01:13:34] That is, we are human in a certain way, and we like being human, and we are wary that our descendants might just not be very human. [01:13:40] They could have very different mental styles that are in conflict with ours. [01:13:46] And that scares people.
[01:13:48] But when we have this world community, they will just stop those things that they don't like. [01:13:54] Just like today, we have basically stopped nuclear power of the sort that scared people. [01:14:00] So, are you saying, if we did become a well integrated world community, or if there was a one world governance somehow, that we would decide not to become interplanetary or interstellar? [01:14:14] The key choice at that point would be to continue to have a unified world community. [01:14:19] Now, by world, I mean the whole solar system. [01:14:21] So, within a few centuries, right, they would be spread across the solar system, but still, the solar system is small enough to allow this centralization and coordination. [01:14:30] Our solar system. [01:14:30] So, if we had communities on Mars, all these planets, we would still be able to control it within the solar system. [01:14:35] Yes, right. [01:14:35] Anything in the solar system, you can throw a rock and smash it if it misbehaves. [01:14:40] Okay. [01:14:40] Okay. [01:14:41] That's it. [01:14:41] Basically. [01:14:42] Yeah. [01:14:42] Wow. [01:14:43] So the solar system is still small enough to be within the reach of central control. [01:14:50] Okay. [01:14:51] So say we continue to enjoy this world community, where we have reduced war and conflict and harmful competition and things we are scared of, like genetic engineering or nuclear power or whatever, by having this global community that agrees on what things to stop. [01:15:10] And then at one point, we have the choice to send out an interstellar colony, and we know the consequences of that. [01:15:18] That will be a hard choice, I predict. [01:15:20] That is, the consequence of allowing an interstellar colony is that we no longer regulate behavior out there. [01:15:26] We'd just be letting go. [01:15:28] They would be allowed to do whatever they want, allowed to change however they want, to adopt technologies however they want.
[01:15:35] And then they could come back here, contest us, contest here, control here, right? [01:15:40] Whatever they develop out there, whatever advances they produce out there, they can come back here. [01:15:46] Wow. [01:15:47] And the question is, would we then allow that? [01:15:49] So I can see it going either way, but the point is, I can see it going either way. [01:15:54] Right. [01:15:54] It's not a crazy thing for people to want: to continue their community that they understand and like, versus allowing strange [01:16:06] evolution and competition, and perhaps fierce, unchecked competition. [01:16:12] Right. [01:16:13] So, the idea of panspermia is that a rock came here a long time ago containing genetic material from somewhere else in the universe, and then we derived from that. [01:16:24] I've heard you explain, even in more depth, the idea that when the universe was created, the same life forms or biology that was on one rock expanded into everywhere. [01:16:38] So, the idea is, [01:16:39] if there are aliens out there that are expanding or not, that they all came from the same source. [01:16:46] So, the motivation here is that the simple Grabby Aliens model we just described, again, has the nearest aliens being a million galaxies away, right? [01:16:57] Once per million galaxies. [01:16:58] So, it has almost no prospect of any aliens nearby under that simple model. [01:17:06] But you might ask, well, couldn't there be aliens closer than that? [01:17:12] Might that not make sense? [01:17:14] And you could ask me to try to come up with a story where that makes sense. [01:17:18] And then I realized I can. [01:17:21] The simple model has things appearing randomly and independently. [01:17:27] But what if they appear in a correlated way? [01:17:30] What if clumps appear together? [01:17:33] That would then allow the nearest aliens to us to be much closer.
[01:17:38] Even if, on average, aliens appear only once per million galaxies, if we're here and they're correlated, then they could be here too. [01:17:46] So, the question is, what sort of correlation process could there be that would make aliens appear together in clumps rather than appear independently around the universe? [01:17:57] And panspermia is a process that could create that correlation. [01:18:01] That's why panspermia is interesting in this context: it would allow there to be aliens much closer than they would otherwise be. [01:18:13] So the panspermia story would be that life appeared somewhere on another planet, not Earth. [01:18:20] And then that actually allows life to have had a much longer time to evolve. [01:18:24] So if you actually look at the earliest life we can see on Earth, it looks really complicated, [01:18:30] making it hard to believe that evolution could have produced that complexity in such a short time. [01:18:36] So, postulating an Eden, another planet before Earth, gives a much longer time scale for much simpler life to have slowly gotten more complicated on Eden. [01:18:48] And then maybe the life we see at the beginning of Earth is like halfway down the whole path. [01:18:54] So it jumped to another place to evolve? [01:18:57] Right. [01:18:58] It wouldn't do it on purpose, of course. [01:19:00] As you know, rocks are falling from the sky all the time. [01:19:03] And when rocks hit the ground, they often knock rocks from the ground back up into the sky. [01:19:08] And inside those rocks is often life. [01:19:11] So that's the obvious route by which life could move around in the universe: in rocks. [01:19:18] Now, at the beginning of our solar system, when the solar system started, it was born in a nursery where, say, a thousand other stars were all being born in the same place at the same time. [01:19:31] And at the beginning of our solar system, there were lots of rocks flying back and forth.
[01:19:36] You know, enormous numbers of rocks flying back and forth in that early solar system. [01:19:41] So, if a rock came from outside that nursery into that nursery, it would have plenty of opportunity not only to land places, but to get, you know, smashed and knocked around and spread around. [01:19:53] So, it would be plausible that life that seeded Earth at that early point of time in our stellar nursery would have also seeded many other planets in that nursery at the same time. [01:20:06] So now we would have the scenario: life appeared on some Eden many billions of years ago, maybe lasted five billion years there. [01:20:15] Then a rock, you know, hit that and flew off, maybe drifted for millions of years, actually, and then eventually landed in our stellar nursery, where it seeded not just Earth, but many other planets. [01:20:31] And then that makes the correlation. [01:20:34] These stellar nurseries only last for, you know, a few million years, and then they drift apart. [01:20:40] These planets that were all born together drift away, and they basically form a ring around the galaxy. [01:20:47] Right. [01:20:47] And we can actually find them in the sky. [01:20:49] That is, they all have the same mixture of chemicals, because the stellar nursery was an integrated mixture of chemicals. [01:20:55] So they just all have the same proportion of chemicals that you can see in a stellar spectrum. [01:20:58] So we can actually find them in the sky, find our stellar siblings, and they could find us. [01:21:06] So if life started on a whole bunch of planets that were seeded in this nursery, then over the last four billion years, they would have all been trying to advance and evolve. [01:21:20] And if the advancement of life from that stage wasn't terribly hard, then it might be that other planets also went a long way down that path, [01:21:31] and that one of them might have reached our level before us. [01:21:35] And then there could be aliens nearby.
[01:21:40] This would be the best explanation for, like, the modern UFOs that we see today, right? [01:21:46] If you're trying to ask how UFOs could be aliens, how that could make any sense at all, then this is the best story I can come up with. [01:21:55] Because remember, otherwise, if they're once per million galaxies, they're just crazy far away, and it would just be very unlikely for them to be anywhere near here. [01:22:06] This puts them close. [01:22:09] Now, but we can go farther in trying to come up with a scenario to explain, say, UFOs by thinking more about the scenario I just described. [01:22:19] So, if another alien civilization appeared as a panspermia sibling, there's another thing that we know about it, which is that it did not get grabby. [01:22:33] Right. [01:22:34] Right. [01:22:34] Because otherwise, our galaxy would be full of them. [01:22:37] Right. [01:22:37] Right. [01:22:38] So, we also know the timing: they would have appeared, say, roughly 100 million years ago; that would be sort of a random time. [01:22:44] Not in the last thousand years at all. [01:22:45] 100 million years ago would be roughly when they would have appeared. [01:22:50] So, the scenario would be they appeared 100 million years ago somewhere in our galaxy, and then in that 100 million years, they did not colonize the galaxy, they did not expand and things. === Choosing Not to Expand (05:08) === [01:23:00] So, they must be one of these civilizations that chose not to expand; they chose to prevent it and succeeded at that. [01:23:06] So, you have to realize that's actually a pretty impressive achievement. [01:23:10] Remember, they're a big civilization, and if any one part of them ever left with an interstellar colony, that's the end. [01:23:17] The era of this unified thing ends, and competition returns. [01:23:21] So, for 100 million years, they managed to stop that. [01:23:26] Right.
[01:23:26] Now, couldn't an explanation, if there are aliens here, if these UFOs are aliens, couldn't they just be sort of like hall monitors or security guards making sure that... [01:23:40] We could just list a large number of logical possibilities, but it seems more useful to walk through what we do know about them and then collect those implications, because we can draw some conclusions from the things we do know to limit the space of theories, right? [01:23:57] So, one thing we know is [01:23:59] they did not colonize the universe. [01:24:01] Right. [01:24:02] Therefore, they had a rule against that in their civilization, which they successfully enforced for 100 million years. [01:24:10] So, that's the thing we know about them, right? [01:24:13] We know how old they are, and we know they've chosen not to allow expansion. [01:24:17] And then we know that they could find us, that they could easily see where their panspermia siblings were, and they could look at them in telescopes and track them. [01:24:27] So, they could have been tracking us [01:24:29] all that time. [01:24:32] And we know that they would know that we threaten to break their rule. [01:24:38] They've had this rule against expansion, and we haven't been told about the rule. [01:24:45] And if they did nothing, we might well break their rule. [01:24:49] And then the thing they were trying to prevent would happen, right? [01:24:54] The universe gets colonized nearby by something else. [01:24:58] So that creates a plausible motive for them to be here. [01:25:01] Right. [01:25:03] So, I mean, the key question is: if they've had this rule against expansion, that's probably a rule against a lot of travel, too, right? [01:25:09] They probably prevented people from traveling very far from their home. [01:25:12] It could be a risk. [01:25:13] Because every traveler becomes a risk to create this interstellar explosion, right?
[01:25:17] So now we have a reason why they would want to be here and take a chance, an unusual chance, to actually allow someone to come here. [01:25:27] Because this is a high priority. [01:25:30] Like, we are at risk of breaking the rule, a rule they've kept successfully for 100 million years. [01:25:36] And a rule that's obviously very important to them, because they've paid large costs to keep it. [01:25:42] So, already we can see, we have a reason why they would be here and why they wouldn't be everywhere else. [01:25:52] Those are the key things we're trying to explain. [01:25:55] And another thing we conclude is they didn't just kill us. [01:25:59] That would have been really possible, right? [01:26:02] So, they must feel some affiliation, some reluctance to just kill us. [01:26:08] So, they have some other plan to prevent us from expanding other than just killing us. [01:26:15] Right. [01:26:16] Which goes against... well, didn't Stephen Hawking have a famous quote on this subject, about another civilization coming here? [01:26:23] He said if it did come here, it would be the end of us. [01:26:26] They would farm us or they would consume us or something like that. [01:26:29] Are you aware of it? [01:26:29] Right, right. [01:26:30] But certainly, if you just took random past contacts, you know, when humans met each other in the past, as your data set, you might well say, yes, those don't usually go well. [01:26:42] Right. [01:26:43] Which is fine. [01:26:43] But we know more things here, you see. [01:26:46] So we aren't just making an analogy to a random past Earth meeting. [01:26:50] We are drawing step by step conclusions from the things we know. [01:26:54] That is, you know, we know that they would be a panspermia sibling. [01:26:58] Therefore, even though they're near here, the nearest other alien could be a million galaxies away, right? [01:27:06] So they're near here, but hardly anything else is, right?
[01:27:08] So it's not a galaxy populated full of alien civilizations. [01:27:12] It's just them and us, and maybe one other or something like that. [01:27:15] That would be all there is. [01:27:17] How deep do you go? [01:27:18] I know you're a very science based, analytical, coldly calculating person when it comes to this stuff, but how much attention do you pay to the stories of, like, interactions with other beings that are accounted for? [01:27:33] I mean, there's books and movies and documentaries about this kind of stuff. [01:27:37] Like UFOs being around nuclear weapons, shutting them down? [01:27:41] There's a famous story of a UFO landing in Africa in front of a school and communicating with children telepathically about the environment and [01:27:52] the dangers of technology. [01:27:53] What do you make of those stories? [01:27:55] And how much credibility do you give them? [01:27:59] So, this entire topic is in disrepute in general among elites, elite academics, and elites of all sorts. === UFOs and Elite Disrepute (14:44) === [01:28:09] So, if I'm going to venture into this subject, I want to do it very carefully so that my thoughts won't be wasted. [01:28:17] If I am sloppy about this, then I will be tainted with the same disrepute that [01:28:22] everybody else has been who touches the subject. [01:28:24] Right. [01:28:25] So I'm very eager to be methodical and careful here in exactly how I do this. [01:28:30] Right. [01:28:31] So that means I want to sort of logically lay out the space of possibilities, and then lay out sort of the general inference task, and then place myself somewhere in that set of tasks and say which things I'm doing, which things I'm not. [01:28:47] I'm not going to present myself as an expert on all the tasks that are relevant for thinking about this topic, but I will venture into some of them. [01:28:56] For some of them, I have relevant expertise, and I will speak to those.
[01:28:59] Okay, so first of all, we just say, look, there are things people think they see, right? [01:29:08] And they have to have some explanation, and we can categorize the kinds of explanations they could be, right? [01:29:13] So, for example, they could be aliens, sure, or they could be just some other hidden Earth organization who has capabilities beyond what they've advertised. [01:29:23] Or it could all just be mistakes and delusions: [01:29:27] drunkards and blowhards, and, you know, people trying to get attention, right? [01:29:32] Or there could be some organized hoax behind it all. [01:29:35] Somebody's, like, planning and purposely having people lie and putting things up there that look like things, and just trying on purpose to make us think of something, right? [01:29:45] Those are the four main categories of explanation, right? [01:29:48] And we're interested in judging which is the truth. [01:29:51] So, in general, what we want is what's called a Bayesian analysis here. [01:29:55] That is, for each of these categories of explanation, we want what's called a likelihood and we want a prior. [01:30:01] A prior is: what's the chance of that scenario, ignoring all this evidence? Just a priori, what would be the chance you would think of something like this happening? [01:30:11] And the likelihood is: given that this theory was true, how likely is it that you'd see the sort of things you'd see? [01:30:20] And then, once you have a likelihood and a prior for each of these, you multiply those two and you do a weighted average and you get your posterior, [01:30:28] i.e., which theory you believe. [01:30:29] So that's our task. [01:30:31] We need likelihoods and we need priors. [01:30:34] Okay. [01:30:36] So I am more of an expert on the priors. [01:30:41] That is, the likelihoods have to involve looking at these cases in particular, in some detail, right?
[01:30:48] Because we're talking about how likely it is that these theories would account for the particular things people see. [01:30:54] Whereas the prior is just about what kind of things could have happened in the universe, what kind of things do ever happen. [01:31:02] And so my expertise, as someone who's done physics and astrophysics and economics, et cetera, means I can speak to the prior at least, at least for some of these, and less so to the likelihoods. [01:31:18] Now, I'll just say I've looked at enough UFO reports to say it's more compelling than the data I can find for fairies and ghosts. [01:31:30] I go look up online what's the best evidence for fairies and ghosts, and they don't impress me very much by comparison. [01:31:36] Okay. [01:31:38] I can also say, in the past, there were things that were similarly in disrepute, like UFOs are, that in fact turned out to be true. [01:31:46] So, once, with asteroids, people thought rocks falling from the sky was crazy. [01:31:53] Ball lightning is something people thought was pretty crazy, except they made it in the lab, and they decided it must be true, even though the actual evidence for it outside the lab still looks pretty weak. [01:32:03] But they made it in the lab, so they figured, okay, I guess that must be true. [01:32:07] And then there was this very high altitude lightning, pink lightning, that looks kind of like an octopus. [01:32:12] A long time ago, [01:32:14] pilots first said they saw it, and people thought that was crazy. [01:32:16] And then a couple decades ago, NASA took pictures. [01:32:18] They said, oh, okay, I guess it's real. [01:32:20] So that would be my context: just because this is in disrepute doesn't mean it couldn't be true, because we've often thought things were crazy and then changed our minds about them. [01:32:33] And the actual evidence I can see, I think, again, it's better than ghosts and fairies. [01:32:43] Okay.
[01:32:43] Okay. [01:32:44] But how much better? [01:32:46] That's what you need an expert to go into those details for. [01:32:49] But what I do want to say is the prior is high enough that you should be taking the data seriously. [01:32:56] So think about a murder trial. [01:32:57] Okay. [01:32:58] Okay. [01:32:59] In a murder trial, they claim A murdered B, right? [01:33:03] Now, if the prior on that was crazy low, you would just say, no, that's crazy. [01:33:07] I'm not even going to think about that, right? [01:33:10] So how low is the prior in a murder trial? [01:33:12] Well, let's say roughly on average, one out of a thousand people is killed. [01:33:16] And they might have a thousand other people nearby who could plausibly be the one who did it. [01:33:22] Okay. [01:33:22] So I'd say the prior is roughly one in a million for any given murder accusation. [01:33:27] Okay. [01:33:29] So a one-in-a-million prior is something that evidence can typically overcome. [01:33:34] That's not so crazy unlikely. [01:33:36] It's not like a one-in-a-quadrillion prior, where you'd say, you know, go away, that's silly. [01:33:41] It's high enough that you've got to take it seriously. [01:33:44] So, my rough guess for the UFOs-as-aliens prior is roughly one in a thousand. [01:33:50] Now, that's quite a bit stronger than a typical murder trial, which means you've got to look at the evidence. [01:33:54] You can't just dismiss it, just as in a murder trial. [01:33:57] Now, if you have an accusation in a murder trial, you don't stop there. [01:33:59] You don't just say, well, okay, you accused, and so they're guilty, right? [01:34:02] You have to look at the evidence, right? [01:34:03] Yes. [01:34:04] And so, similarly here, just because this is in the realm of high enough probability that it could be true doesn't make it true. [01:34:12] It just means you've got to look at the evidence.
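The murder-trial arithmetic above, and how it compares with the one-in-a-thousand UFOs-as-aliens guess, is a trivial calculation to check (just restating the numbers given in the conversation):

```python
# Rough prior that a specific accused person committed a specific murder,
# using the two one-in-a-thousand figures stated in the conversation.
p_victim = 1 / 1000    # roughly one person in a thousand is murdered
p_suspect = 1 / 1000   # roughly a thousand plausible nearby suspects
murder_prior = p_victim * p_suspect          # about one in a million

ufo_aliens_prior = 1 / 1000                  # Hanson's stated rough guess

print(f"murder accusation prior: {murder_prior:.0e}")
print(f"UFOs-as-aliens prior is {ufo_aliens_prior / murder_prior:.0f}x higher")
```

So the claimed UFO prior is about a thousand times stronger than the prior a murder trial starts from, which is the sense in which "you can't just dismiss it".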
[01:34:14] So I could walk through my calculation for the one-in-a-thousand prior, but I might also just mention one of the other categories I think has an even higher prior, which is the hoax category. [01:34:27] That is, the US government and other governments have at times organized big hoaxes about capabilities of this sort. [01:34:40] And I would give that a 1% chance. [01:34:44] That's still like a 99% chance of no, right? [01:34:48] But I'd still say that's the sort of thing that does happen at times. [01:34:51] And so you've got to take that seriously as well as a possible explanation. [01:34:56] And then again, it comes down to looking at the data: how plausible is it that they could pull it off for the particular things we see? [01:35:06] I'm not very impressed with this delusions-and-mistakes theory. [01:35:12] That seems to me, [01:35:14] it's just harder to account for all these stories as all delusions and mistakes. [01:35:20] Something more systematic would seem to be going on, I would judge. [01:35:24] But again, there are several other candidates. [01:35:28] What do you think the probability is that there are private organizations, aerospace organizations, companies that hold information, knowledge, or technology that they are withholding from the public or academia that could explain some of these anti-gravity sightings? [01:35:56] I mean, if you combine a theory saying there really are aliens, and then some people really got that technology, now you're basically combining two of these theory categories, right? [01:36:07] If you just want to say there are advanced organizations out there that have advanced technology, they didn't get it from aliens, they just made it themselves, then it's hard to understand how they could be much more advanced than everybody else. [01:36:22] Because we know a lot about technology and how it advances in the world, and how it happens by diffusion and people hearing of other things.
[01:36:28] So we just almost never see enormous advances by some side compared to others, right? [01:36:35] It's usually within a range of the kinds of things we expect. [01:36:38] So the kinds of abilities reported for UFOs do seem to me substantially beyond the sort of things you might expect, at least several decades ago. [01:36:52] Now, more recently, there's this technology whereby you might, say, send a laser to some place in space, in the sky, in the atmosphere, and heat it up and make a little plasma there, and then, like, be able to draw with that. [01:37:07] And then basically, like on a TV screen, draw an image in space and then move it around. [01:37:12] And you can move an image that you drew around very quickly. [01:37:15] So, that seems to be an ability that's near feasible now that could explain UFOs. [01:37:21] But that wouldn't explain UFOs seen in 1960 or something. [01:37:25] Right. [01:37:25] That would only explain [01:37:27] very recent UFOs, because that's a very recent capability that apparently people have now, the ability to draw. [01:37:34] Like a laser pointer. [01:37:35] Right, basically. [01:37:36] Except it doesn't go against a wall; it stops at a place in space and draws something there. [01:37:42] Right. [01:37:43] That's exactly how some of these pilots have been describing the things that they've been seeing, like Commander Fravor and Ryan Graves. [01:37:49] So, for very recent observations, that becomes a more plausible story, but not for older observations. [01:37:57] Aerospace organizations like Lockheed Martin having some sort of technology that could explain this stuff: it would have to be because they discovered some sort of alien technology. [01:38:08] It wouldn't be something that was just created here. [01:38:11] If it was far more advanced than the stuff that everybody else has, which it is, right?
[01:38:17] I mean, I don't know. [01:38:18] If it's true, if it is true, it is clearly way more advanced than the kind of propulsion that we have. [01:38:24] So I didn't finish describing the most plausible scenario. [01:38:28] So there's another thing we know about UFOs that you would need to explain to explain UFOs as aliens. [01:38:35] It's a crucial thing many people have noted, which is they could have been either completely invisible or completely visible. [01:38:43] I mean, a civilization 100 million years more advanced than ours. [01:38:48] They could have just been completely dark, orbiting and seeing everything without being at all visible, or they could have just shown up on the White House lawn, as they say, and been completely visible. [01:38:59] So apparently, they are not doing either of those things. [01:39:03] So we need an explanation for why they're doing this weird thing of hanging out at the edge of visibility, [01:39:09] to sort of make themselves somewhat visible, but not more visible. [01:39:14] That's weird. [01:39:16] So, you know, if I weren't trying to explain that part, I might give a higher prior to UFOs as aliens, but I need to explain that part too. [01:39:26] Because, again, the other weird thing about aliens is, like, why aren't they in the rest of the universe, right? [01:39:31] Why didn't they take over the galaxy? [01:39:32] Why are they only here and nowhere else? [01:39:35] That's the first thing to explain. [01:39:36] And so we postulated for that, that they have this rule against expansion. [01:39:40] And that's why they're here. [01:39:43] But then we also need to explain why they are hanging out at the edge of visibility, not making themselves known to everybody, right? [01:39:48] But also not completely hiding, exactly, right? [01:39:51] Why be partially visible? [01:39:53] What's the point, right? [01:39:54] Because they clearly could have chosen anything else that they wanted.
[01:39:58] So we want to explain that, and we want to do that in the context of what we know of their motives for being here, right? [01:40:05] They're here to convince us not to expand, without killing us, right? [01:40:10] Because they could have killed us. [01:40:11] Those are the other things we know. [01:40:12] So they somehow are going to use this method of hanging out at the edge of our visibility [01:40:17] to convince us not to expand, and not kill us. [01:40:22] That's the purpose of this. [01:40:23] So, how does that work? [01:40:25] So, the thing I notice is that humans have domesticated other animals primarily by being the top of their pecking order. [01:40:38] If we make dogs do what we want, it's because we are the top dog. [01:40:43] Okay. [01:40:43] We go into a pack of animals and we convince them we are in their status hierarchy and we are at the top. [01:40:50] And that's how [01:40:52] they do what we say. [01:40:53] That's how we domesticate other animals. [01:40:55] And it's how we domesticate ourselves. [01:40:56] That is, all through history, emperors have convinced people to obey the emperor because they showed they were the top person in the society. [01:41:04] They had a big crown and a big palace and a big army. [01:41:08] Right. [01:41:08] And they displayed themselves as the top of the pecking order to convince others to go along. [01:41:15] Right. [01:41:16] So the hypothesis is that's what the aliens want to do here. [01:41:20] They want to be the top of our pecking order. [01:41:24] How do they do that? [01:41:25] They show that they are better than us. [01:41:28] They show very impressive abilities, but no violent confrontation with us. [01:41:35] They are not hurting us, but they are better than us. [01:41:39] Just like you can't really make a bunch of dogs obey you if you kill half of them. [01:41:43] Right. [01:41:44] You have to be in their tribe, but not be, like, physically threatening. [01:41:48] Right.
[01:41:49] Not threatening, but better. [01:41:52] Better. [01:41:52] Clearly better. [01:41:53] Right. [01:41:53] Right. [01:41:55] And we should be smart enough to figure out why they're here and what their agenda is. [01:42:00] They don't even have to say anything. [01:42:02] They just have to show themselves as very impressive, better creatures nearby. [01:42:09] Now you might think, well, yeah, but why don't they show a little more? [01:42:12] What's the harm in that? [01:42:13] Why would it be a problem if they were to give us a little more detail? [01:42:17] And then you realize, well, think of how much humans have hated other humans for pretty minor differences between human civilizations. [01:42:26] Right. [01:42:27] Humans are pretty similar to each other. [01:42:29] Nevertheless, we find these reasons to hate each other based on what to an alien must seem like a pretty minor difference. [01:42:36] Right. [01:42:37] So they would know that. [01:42:38] They would know that if they showed us details of their lives and history, something about it could really put us off. [01:42:46] Right. [01:42:47] Maybe they eat babies. [01:42:47] They don't think that's a problem, but we hate eating babies. [01:42:50] Maybe they have tentacles. [01:42:51] We hate tentacles. [01:42:52] Who knows? [01:42:52] Right. === Hiding from Rivals (07:50) === [01:42:53] Right. [01:42:53] The point is, they don't want to take that chance. [01:42:56] So, they're just going to not ever show us much detail about them, but just clearly show they exist, that they are much more capable than us, and that they're peaceful. [01:43:07] That's it. [01:43:07] That's all they have to do. [01:43:08] And that's a very simple strategy they could have approved from the beginning. [01:43:12] So remember, another constraint in this whole process is they're scared of any expedition getting out of control and ending their era of no interstellar colonization, right?
[01:43:22] So when they authorize this one expedition, not only do they authorize only the one, but they have to ask, how are we going to keep control of it? [01:43:28] How do we limit its behavior? [01:43:30] How do we make sure it only does the thing we want it to do? [01:43:34] So they would want to pick a very clear strategy, a simple strategy, that they could authorize it to do and only do, not be creative and come up with a bunch of new ideas when it got here. [01:43:44] That's taking too many chances, right? [01:43:47] They want to say, let's have a plan. [01:43:48] They're going to go there, execute the plan. [01:43:50] That's all they're going to be authorized to do. [01:43:51] That's all they're going to be capable of doing, hopefully. [01:43:54] And then we are taking the minimal risk with this expedition out there to go convince these people not to expand. [01:44:01] Right. [01:44:02] And so you have to think what would be a plan they could think of ahead of time without knowing much about us. [01:44:07] Like, they can't wait till it gets here and, like, learn a lot and then make a plan based on that, because that would give too much discretion to the local team. [01:44:13] They have to say, from a distance, what can we know that will make a plan that they could execute? [01:44:18] It's interesting, though, if you look just in the last 50 to 60 years, how the stigma of the subject has become [01:44:34] less and less. [01:44:35] It's still pretty high. [01:44:36] But now the government is talking about it and the New York Times is talking about it. [01:44:44] It's still pretty high. I mean, if you talk to other intellectuals, professors of various sorts, physicists, astrophysicists, you know, talk to just all sorts of prestigious people, [01:44:55] I still think you'll find that stigma is pretty high. [01:44:57] Where does it go?
[01:44:58] If it keeps going on this trajectory of less and less stigma, more people entertaining this conversation that you and I are having right now, and a larger percentage of human beings on Earth being aware of this idea and understanding it, [01:45:17] how does that affect us? [01:45:18] If, say, the US government, now willing to flirt with the idea, produced more concrete evidence that then persuaded other people, that would make a big switch. [01:45:28] But if all they do is just release the sort of thing they've released so far, and they don't release anything more that's more persuasive, I don't think it'll tip the balance for a long time. [01:45:40] I mean, it's basically just going to be the slow accumulation of more and more solid evidence that tips the balance, because people are just not that willing to accept this. [01:45:49] It'll just take a long time. [01:45:53] But of course, that's fine under this theory. [01:45:55] That's fine for the aliens. [01:45:56] They're not in a rush. [01:45:58] We're nowhere near threatening to do a large-scale colonization yet. [01:46:02] And what would human beings do if they did discover it? [01:46:05] Like, if a small group of human beings discovered this somehow? [01:46:10] Well, I mean, many people claim they already have, right? [01:46:13] Yeah, exactly. [01:46:14] Small groups discover it, but they can't convince the rest of the people yet. [01:46:16] Well, would they even want to convince them? [01:46:18] Some do, some don't. [01:46:19] Or would they want to just find some economic benefit from it and try to find some leverage or competition [01:46:24] between nations or between... [01:46:25] It's hard to see much benefit here. [01:46:27] I mean, the whole point is, like, look, they're not handing out Christmas goodies or something. [01:46:32] Yeah, but, like, they're just off in the distance being impressive. [01:46:36] Right.
[01:46:36] But I'm saying, like, if we did find evidence of this. Like, the story of Bob Lazar is the most famous one. [01:46:43] I'm sure you're familiar with it. [01:46:45] Yeah. Where he claims he worked on reverse-engineering these anti-gravity craft, and the government has been working on it for decades, but we haven't been able to figure it out. [01:46:53] And we want to use this as sort of leverage against other nations, [01:46:57] for weapons and war. [01:46:59] So, again, there's this vast space of possibilities, but I have to sort of stay anchored on the possibilities that make sense. [01:47:06] Right. [01:47:06] I understand. [01:47:06] That I can make sense of. [01:47:07] And so, in the simple theory, the aliens are just here to impress us and, like, convince us they exist and that we should do what they say. [01:47:15] Under that simple theory, they don't have any particular reason to hand out goodies to some people or to, like, reveal their technology to us. [01:47:22] Yeah. [01:47:22] It seems kind of crazy that they would just accidentally crash their vehicle and leave it behind. [01:47:26] I mean, again, 100 million years more advanced, it's like... and then we would be able to capture it. [01:47:31] And similarly, I've got to say, abductions still don't make that much sense in this story, right? [01:47:39] What they need to do is just be visibly impressive. [01:47:41] Why would they need to pick some people up and, like, study them in person? [01:47:45] Yeah, there would be no need for that. [01:47:48] Again, 100 million years more advanced, they could study us just fine from orbit. [01:47:51] They wouldn't need to physically pick us up and poke us with needles. [01:47:54] No, exactly. [01:47:55] No, that would be crazy. [01:47:59] A very primitive civilization would need to do that sort of thing. [01:48:04] Really? [01:48:06] So, if they had that, they'd have to have the technology to traverse light years to get here.
[01:48:13] Let's talk about what we can roughly guess about very advanced civilizations, actually. [01:48:19] So, the first thing we should be able to guess about very advanced civilizations is they are just artificial. [01:48:25] Like, they are made in factories, they're designed, you know, they can swap their parts out, just like we make artificial cars instead of growing horses. [01:48:36] Right. [01:48:36] Right. [01:48:37] They are just artificial creatures, and their minds are even artificially designed. [01:48:43] And, you know, there's a new product line that comes out from some source, and they're all the same in the product line. [01:48:48] The modern economy, where you make things in factories and you have firms that make them at scale and distribute them, that's the way they're made and the way they run everything, basically. [01:49:01] So they don't reproduce with cell mitosis or whatever, the way we do it. [01:49:11] Surely that's how they would be. [01:49:14] So, they've long since become artificial. [01:49:17] So, they've long since been able to take any physical form they want to. [01:49:20] It's not like they have a certain shape of body that has to fit into a craft or something, right? [01:49:25] They can move into any kind of body they want to make, right? [01:49:28] Just like on a construction site, we make different kinds of trucks and different kinds of shovels or whatever. [01:49:33] It's not like it's in the nature of a construction worker to have a certain kind of shovel. [01:49:36] You have a bunch of different things for whatever purpose you want to make them for, right? [01:49:41] So, there's not going to be "the size" of the aliens, right? [01:49:45] Or even their lifespan or how many eyes they have. [01:49:47] None of that makes sense. [01:49:49] They are artificial creatures who can
[01:49:51] have as many eyes as they want to in any given circumstance. [01:49:55] And they can add eyes and take them away just fine, right? [01:49:57] Just like we can add a headlight to a car. [01:50:02] So that's one thing I think we can say robustly about aliens: they are artificial. [01:50:09] The other thing we might want to say depends on how much they've controlled their competition. [01:50:17] So, you know, they prevent interstellar colonization, and they probably prevent some other kinds of evolution, but how much evolution do they prevent? [01:50:26] How restrictive are they about their evolution? [01:50:29] So, I think we can imagine two extremes. [01:50:32] One extreme is they are minimally restrictive: they prevent people from leaving their solar system, but for most everything else inside their solar system, they just allow relatively free evolution and competition, right? === Immortality and Fragility (15:46) === [01:50:43] All they do is just prevent stuff from leaving, right? [01:50:48] Under that scenario, we should expect evolution to continue, and then we can ask what we expect evolution to produce in the long run for various features of creatures. [01:50:58] The opposite extreme is where they have a very restrictive, very centrally controlling society that has prevented most competition and evolution. [01:51:10] Under that other extreme, their technology may have stagnated at some point, when they could no longer innovate more because they were being so restrictive. [01:51:22] And even worse, their society might have come to rot. [01:51:26] Rotting is a feature that we see consistently in large, complicated systems that humans have built and observed. [01:51:34] And rotting is the sort of thing that central control risks, and it becomes a big issue for a long-term civilization that has maintained central control for a long time. [01:51:46] I think most people don't know enough about rotting.
[01:51:48] Can you give me an example of rotting in today's world? [01:51:52] Awesome. [01:51:52] Hold on. [01:51:53] Can you turn the air down? [01:51:54] It's getting hot in here. [01:51:57] Sorry. [01:51:59] Do you want to pause while he does that? [01:52:01] No, we can keep going. [01:52:02] It's fine. [01:52:03] Okay. [01:52:04] Unless you want to take a bathroom break. [01:52:09] No, I'm fine. [01:52:09] Okay, cool. [01:52:10] So, most large software systems, not tiny systems, rot. [01:52:16] Think of the Apple operating system or the Google operating system, or the system that runs any large bank or anything like that. [01:52:25] Large software systems over time become harder to usefully change, they become more fragile, and eventually they are thrown away and replaced with brand-new systems. [01:52:37] This is the consistent software practice, [01:52:38] at enormous cost. [01:52:41] If you think of any big company, most large companies have a big software system somewhere in their process that they developed, and it's a big part of the value they have compared to their competitors. [01:52:52] And all of these large software systems rot with time and are typically just thrown away entirely and started over from scratch. [01:53:01] It's not because they're overlooked because they're so large and complex. [01:53:05] They are looked at carefully, but still, the key thing is the process of changing them makes them rot. [01:53:11] So, if you have a large software system and nothing changes, then you never have to change anything. [01:53:17] It can just keep going. [01:53:18] But typically, you're trying to add features. [01:53:21] The environment it's in changes. [01:53:23] Maybe the kind of hardware it connects to changes. [01:53:26] And so, slowly over time, things are changing and you need to adapt it to those changes. [01:53:30] Customers may change: customer preferences, customer desires.
[01:53:34] Customer desires. [01:53:35] Firmware updates. [01:53:36] Right. [01:53:36] All sorts of changes. [01:53:37] But the key thing is you have a big, complicated thing and stuff is changing. [01:53:41] So, you constantly have to adapt the system to deal with these changes. [01:53:45] And the key thing that happens is you have dependencies between different parts of the system. [01:53:51] So, if you don't have any dependencies, then you'll just change one part of the system and you don't have to change anything else. [01:53:58] But when you do have dependencies, when you change one part, you also have to change the other parts that are connected to it, in a matching way. [01:54:06] Otherwise, it's broken. [01:54:08] And the problem is, when you take a system with two parts that are connected to each other, and you change both parts to manage something, you tend to make those things more fragile, so they become harder to change in the future. [01:54:22] This is sort of tied to what we were talking about at the beginning, of downloading books to the human brain. [01:54:28] Right, exactly. [01:54:29] So, in fact, the human brain rots too. [01:54:34] Not just the cells rot; your mind rots, in the sense that you start out with what's called fluid intelligence and you end up with what's called crystallized intelligence. [01:54:43] Crystallized intelligence means you know many specific things, but you can't learn new general skills very easily, not compared to when you were young and had fluid intelligence. [01:54:54] So your mind over your lifetime becomes harder to usefully change. [01:54:59] Old dogs, in fact, find it hard to learn new tricks. [01:55:02] That's actually true about brains. [01:55:05] So human minds rot with time, just like software does, and plausibly even large rule systems, like legal systems or systems of organizations, rot with time. [01:55:18] Corporations rot.
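The dependency mechanism described here, where two connected parts of a system must change in matching ways or break, can be made concrete with a toy sketch (an invented example for illustration, not anything from the conversation):

```python
# Toy illustration of a dependency between two parts of a system:
# the writer and the reader share a hidden assumption about the format.

def save_record(name, age):
    # Part 1: writes records in a fixed format.
    # Hidden assumption: comma-separated, exactly two fields.
    return f"{name},{age}"

def load_record(line):
    # Part 2: must parse exactly what Part 1 writes.
    # If Part 1's format changes, this breaks unless changed to match.
    name, age = line.split(",")
    return name, int(age)

line = save_record("Ada", 36)
print(load_record(line))
# If save_record later adds a third field or switches to tabs,
# load_record must change in a matching way, and every such coupled
# change tends to make both parts harder to change the next time.
```

Multiply this pattern across thousands of coupled parts and you get the "rot" being described: each adaptation adds constraints that future changes must respect.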
[01:55:19] That is, most corporations eventually die and are replaced by new ones. [01:55:23] Even when they try to, you know, prevent that, it consistently happens. [01:55:29] Right. [01:55:30] So, rot. [01:55:30] And the only robust solution we've ever had to rotting is death and replacement. [01:55:38] Right. [01:55:38] So, human bodies rot and are replaced with new organisms. [01:55:42] Our minds rot and die or are replaced. [01:55:44] Our software has to be replaced, our firms have to be replaced, and all at great expense. [01:55:51] We have made enormous investments in things that slowly rot and then have to be thrown away. [01:55:57] So far, that's been the nature of history, but now we are starting to create global structures that we don't want to replace at a global level, like these global communities and these global habits of regulation. [01:56:10] And so now we're entering a new era. [01:56:11] What happens when the world as a whole coordinates on, say, how we regulate nuclear energy, and nobody is allowed to do it differently? [01:56:18] And then that system [01:56:20] rots with time. [01:56:22] What will happen then? [01:56:24] Will we be willing to throw it out at some later point? [01:56:28] Or will we just deal with it slowly getting less efficient? [01:56:33] Because, like, systems that rot typically can still function, just at a lower efficiency. [01:56:39] Like, the old software was working; it just got harder and harder to change. [01:56:44] You could just change it less, and gain fewer advantages from changing, if you insisted on sticking with it. [01:56:52] The reason why we throw it away is because the new thing will be [01:56:54] cheaper and easier to change, and we want to change things. [01:56:57] Yeah, but don't we just install... now we have these firmware updates for everything, for all these systems; they're constantly being updated with improvements and fixes. [01:57:04] That's part of the rot.
[01:57:06] The firmware updates are part of the rot. [01:57:08] Yes, absolutely. [01:57:09] They are changes that are trying to make other things fit each other. [01:57:14] Okay. [01:57:14] The reason why you have the firmware update is because something changed, and this is the strategy to deal with that. [01:57:20] But in the process, what is becoming crystallized or hardened? [01:57:26] I don't understand. [01:57:26] So the key idea is that it's harder to usefully change. [01:57:30] The bits of the software don't disappear, right? [01:57:33] They stay there, but still it gets harder to change them. [01:57:37] Okay. [01:57:37] I mean, I don't know, think of the Marvel Universe or something, right? [01:57:40] The Marvel Universe is rotting in some sense, in that, like, they keep making new assumptions and new characters, but they are limited in how many new things they can do, because it has to be consistent with all the old things they've done. [01:57:52] I understand. [01:57:53] And slowly over time, they will be more limited in what they can do in the Marvel Universe. [01:57:57] Right. [01:57:58] I mean, it may still be a fun universe, but eventually it will be thrown away and replaced with other universes, right? [01:58:07] So, what is this rot? [01:58:08] I forgot what this was in context to. [01:58:11] So, first, it's a rot that, say, the aliens would suffer if they had a central government that limited their changes. [01:58:20] And it's a rot that we may suffer in our future if we, you know, have a strong central government, or a global mob, that makes us all do everything together the same. [01:58:34] But it's a very difficult choice, because, you know, the alternative is competition, and therefore conflict. [01:58:42] And what do we do? [01:58:46] Right. [01:58:47] So, say, think of nuclear power. [01:58:49] Apparently, we could have had a vastly more powerful nuclear power industry than we do now.
[01:58:53] We were scared and we over-regulated. [01:58:56] And now we've over-regulated everywhere, in similar ways, to similar degrees. [01:59:01] What if some place wants nuclear power not to be so heavily regulated? [01:59:06] Are other people going to be scared of that? [01:59:09] Will they allow it? [01:59:11] Or will they just insist that everybody keep on with the current program, to prevent the things they're worried about? [01:59:20] And that'll be true for lots of different ways we're organized. [01:59:23] So, the key thing to notice is, up until recently, the world has had competition at the highest level. [01:59:30] Empires competed, species competed, right? [01:59:34] The fundamental thing that happened was empires rose and fell, and empires fell in substantial part because they rotted, but humanity could go on because humanity wasn't tied to any one empire. [01:59:48] We were the thing that lasted past the empires. [01:59:52] But once humanity becomes tied to an empire, and then the empire starts to rot, well, what happens then? [02:00:02] And that would be, like, a world civilization. [02:00:05] Right. [02:00:06] That's what we'd be tied to, [02:00:08] where we agree together to do things the same way and collect more and more rules about such things. [02:00:15] And that begins to rot. [02:00:16] The question is, what happens then? [02:00:18] Yes. [02:00:19] Are we willing to allow enough competition to drastically replace big parts of such things? [02:00:24] Or do we just slowly get less competent [02:00:27] and less capable? [02:00:29] You've mentioned in the past that you plan on doing something with your body after your physical being passes away. [02:00:39] Right. [02:00:39] So I am a cryonics customer. [02:00:41] Cryonics, yes. [02:00:42] I'm holding this neck tag up here, if you're not watching. [02:00:48] And basically, it says that if there's a medical emergency, they should call this number.
[02:00:52] And the plan is then to freeze my brain in liquid nitrogen. [02:00:57] And once it's frozen, it won't change, basically, for centuries, [02:01:01] if they keep it frozen. [02:01:03] And then that would give me a chance to come back as a brain emulation later, if the world I describe in The Age of Em comes to pass. [02:01:12] So that's a chance I have at immortality. [02:01:17] I'd say another thing we can know about advanced civilizations is they are immortal. [02:01:25] But immortality doesn't necessarily mean what you think it does. [02:01:29] The civilizations are immortal, and entities within them are, in principle, immortal. [02:01:35] That is, immortality is just another robust feature of advanced civilization. [02:01:38] Meaning their consciousness is immortal? [02:01:40] It's capable of being immortal. [02:01:42] That's the key distinction. [02:01:43] So it's capable of being immortal. [02:01:45] Today, houses and cars are immortal in the sense that if you keep maintaining them and repairing them, they can go on indefinitely. [02:01:54] But we don't always choose to keep repairing and maintaining them. [02:02:00] That is, over time they become more expensive to maintain, and their value goes down because new things are more competitive or, you know, more capable compared to them. [02:02:11] So that will also happen for our descendants who are immortal, or aliens who are immortal. [02:02:17] That is, in principle, they could last forever, but it will cost to keep them going, and their value may decline because new things are just more capable than they are. [02:02:29] Right. [02:02:30] So that would be an issue for me wanting to be immortal. [02:02:34] That is, they could bring me back in the future. [02:02:39] But will they? [02:02:41] And even if they do, how long would they keep me? [02:02:44] And so that's an issue I describe in my book, The Age of Em, about a world where everything is in principle immortal, but it's still a matter of choice.
[02:02:53] Right. [02:02:54] Right. [02:02:54] What would be the incentive to bring you back? [02:02:57] Or to keep me around, if I had been brought back temporarily. [02:03:00] Right. [02:03:00] This kind of is the same thing as why would aliens abduct human beings and poke them and prod them to try to study them? [02:03:05] It's like, what's the point? [02:03:07] Right. [02:03:07] Because it's so out of date. [02:03:11] Now, I do expect that because there are so few of us cryonics customers, like only maybe 200 people have been frozen so far. [02:03:19] That's it. [02:03:19] Maybe, you know, 3,000 people are signed up for it. [02:03:22] Wow. [02:03:23] Even though it's had free national publicity for a half century. [02:03:28] So, unfortunately, if it continues that very few people do this, then the few of us who are available later on would be celebrities of sorts. [02:03:37] And so they would almost surely bring back the few of us. [02:03:41] However, if, as I would prefer, a large fraction of the Earth's population were cryonics customers, well, now it's much more of a question how many of those billions would be brought back. [02:03:53] If you're relying on the generosity of the future to help your future self out, you may be taking a needlessly large chance. [02:04:01] You know, what we often do is protect ourselves in the future. [02:04:05] That is, when you retire, you often retire on the basis of assets you've saved yourself rather than [02:04:11] throwing yourself on the mercy of your grandchildren or something to help you when you're retired. [02:04:16] So, similarly, I might think, well, the best thing would be for cryonics patients to pay for their own revival, to set aside assets that are available later to cover their own case. [02:04:27] And this is now done. [02:04:28] It is often done? [02:04:29] Yes. [02:04:29] Oh, it is often done. [02:04:30] Yes. [02:04:30] Oh, wow. [02:04:32] Who are the type of people?
[02:04:33] What type of people typically do this cryonics brain freezing? [02:04:37] Do you have any friends that do it? [02:04:39] Well, yes, but it's a very small fraction. [02:04:41] As I told you, there's only, like, 200 people who have been frozen so far, and maybe 3,000 people have signed up or something. [02:04:47] Yeah. [02:04:47] So far, a very tiny fraction. [02:04:48] So basically, first of all, contrarians, right? [02:04:51] Yeah, obviously. [02:04:52] You have to be willing to sort of put up with the social pressure that disapproves and do it anyway. [02:04:58] What is the process like? [02:04:59] So, they actually take your brain out of your skull and they put it in like a bag, or they freeze it in carbonite. [02:05:04] Like, what does this look like? [02:05:07] So, the usual approach today, there's the option of whole body, where they would freeze your whole body, or there's head only, where they would just freeze your head and not freeze the body. [02:05:21] With all of this? [02:05:22] Yeah, with everything. [02:05:23] So, I mean, the key thing is they're just trying to get you cold fast without too much damage happening while they get you cold. [02:05:31] So, then the main thing is just they basically pump [02:05:34] antifreeze through your veins. [02:05:37] Okay. [02:05:37] Right. [02:05:38] And so they don't want to rip off any skin or anything. [02:05:41] They just want to pump you as fast as they can with antifreeze so that you get cold as fast as possible. [02:05:49] And then they just want to preserve you in that state, where, again, the usual approach is liquid nitrogen. [02:05:55] Now, there is an alternative technological approach that seems feasible, which is sometimes called plastination, basically, where they would fill your
[02:06:06] body with plastic, in essence, and sort of freeze the chemical reactions that way, not just by being cold, but by basically putting stuff in between stuff that prevents the reactions. [02:06:20] And then that could, in principle, be preserved at room temperature, which would make it easier to preserve these things. [02:06:26] But the problem is that that's a whole separate technology that would have to be developed. === Freezing the Body (03:33) === [02:06:30] There are just too few customers here to justify having separate technologies. [02:06:34] Many of the customers today would not like the plastic approach. [02:06:39] Because they're hoping that their current bodies will be revived. [02:06:42] I'm skeptical about that. [02:06:43] But again, the key problem is just not enough customers here. [02:06:47] And I mean, honestly, I think this is a terrible shame in the sense that it seems to me that most people today don't need to die. [02:06:57] That is, the main risk about cryonics is that these organizations can't last because they don't have enough customers. [02:07:04] So if they had lots of customers, this would just be much cheaper and more reliable. [02:07:09] So if millions of people used this stuff, then the price would come way down. [02:07:13] And the chance of you not surviving would go way down. [02:07:18] What is the name of the company that does this? [02:07:19] Is there more than one company? [02:07:20] There are several. [02:07:20] There are several. [02:07:21] Okay. [02:07:22] My supplier is called Alcor. [02:07:25] Alcor. [02:07:26] Okay. [02:07:26] There's also the Cryonics Institute, which is an alternative, and several others. [02:07:30] But again, the key thing is they just hardly have any customers. [02:07:33] What is the price of doing this? [02:07:36] I'm sure it ranges. [02:07:37] A few tens of thousands of dollars and up, depending on how much you want and which firm you go with. [02:07:43] Right.
[02:07:43] Basically, it's quite accessible via life insurance for most people. [02:07:47] At least first world people. [02:07:49] Right. [02:07:50] Wow. [02:07:51] So, an interesting comparison that often strikes me is there are some people who, when they die, have their ashes thrown into space. [02:08:01] Into space? [02:08:02] Yes. [02:08:02] Some people are turned into ashes and have their ashes thrown into space. [02:08:07] And that costs a similar amount. [02:08:11] And a similar number of people do it. [02:08:14] That's pretty wild. [02:08:15] It is. [02:08:16] And that causes a lot fewer marital problems. [02:08:22] So, cryonics customers do actually have a lot of problems where their family and friends or spouses are upset that they are going to use cryonics. [02:08:31] Interesting. [02:08:31] And people are not upset about ashes into space. [02:08:36] Isn't that what it is? [02:08:36] So, it's a similar amount of money. [02:08:38] But, I mean, obviously, one would seem to have a chance of working. [02:08:42] Ashes into space is not going to bring you back. [02:08:44] But it's in fact the fact that you think it might work that bothers people. [02:08:50] That is. [02:08:51] It's like betrayal and abandonment. [02:08:53] Like you're willing to leave them to go to this future world without them. [02:08:56] Right, right. [02:08:57] And that bothers people. [02:08:59] Whereas ashes into space doesn't. [02:09:01] So that's one of the main obstacles to cryonics: often people who are inclined in that direction face resistance from family and friends who not only find it weird, but find it somewhat emotionally threatening in the sense that you're willing to leave them and go somewhere else. [02:09:20] Yeah. [02:09:21] No, it's, I mean, the idea of wanting to preserve your body after you die to be saved for, [02:09:28] potentially, thousands of years, to be brought back by a future civilization.
[02:09:33] It goes back to something that you've talked about before, of wanting to create more descendants of yourself or more versions of yourself to last into the future. [02:09:50] How does that idea relate to us compared to potential alien civilizations? [02:09:59] The main way, in the past, === Planning the Future (03:51) === [02:10:03] our ancestors have influenced the future is via descendants, right? [02:10:09] Even in the last 10,000 years, with a lot of cultural evolution, still, I'd say the main way most people have influenced the future is via descendants. [02:10:19] That's because biological and cultural evolution have just been powerful forces in the world. [02:10:24] Um, so the question is, for any one person, how do they want to be involved with that process? [02:10:33] That is, do you want to influence the future, and if so, [02:10:37] how do you expect to do that? [02:10:41] You know, if you imagine a corporation or something, right, it has a large structure that plans, and then it creates a future version of itself in many ways more through planning than through sort of random variation and selection, right? [02:10:59] So sometimes firms have spin-off firms and then new firms are based on that. [02:11:02] But firms influence their future selves more through planning, in some way, in a sense, right? [02:11:09] They decide to make a division and they decide to build a factory and they decide to promote somebody and things like that, right? [02:11:17] So there's a sense in which we influence the future, at least in the short term, often through planning. [02:11:24] And you might think of that as sort of the main alternative to evolution as a way to influence the future, right? [02:11:32] And that's very popular in the sense that when people think about how we want to influence the future, they often think in terms of plans. [02:11:39] They say global warming. [02:11:40] How are we going to solve global warming?
[02:11:41] Let's come up with a plan and implement it or something, right? [02:11:44] Or war. [02:11:44] How are we going to stop war? [02:11:45] Let's come up with a plan and implement it, right? [02:11:47] So I would see, you know, planning versus some sort of competitive adaptation as the two main forces that we know about for influencing the future. [02:12:01] And up until recently, planning only ever had much impact on pretty short timescales. [02:12:06] You could plan your near future, within your lifetime, right? [02:12:10] You could plan a town, plan a firm. [02:12:13] But over longer time scales, it was evolution and adaptation that had most of the impact on change. [02:12:22] If those are the only two processes, I guess aliens would have to choose between them too, right? [02:12:28] Right. [02:12:30] And planning has become ascendant, right? [02:12:34] In recent centuries, we are doing more planning. [02:12:37] In the past, we didn't even have large firms, right? [02:12:39] Most people were small artisans working on a small scale. [02:12:43] Firms are a new thing, or large government agencies, those are a new thing. [02:12:48] Large, longer term projects, building huge dams, things like that. [02:12:51] Those are all relatively new things. [02:12:53] So planning is ascendant, right? [02:12:57] More of the world today is structured by plans than was true in the past. [02:13:03] But still, even so, at the larger scale of our society, most of it isn't planned, right? [02:13:10] New firms arise, old ones go away, you know, new technologies arise, mostly not through long term planning, right? [02:13:18] But planning is growing. [02:13:20] And I think that tempts people to think, what if we could do everything with planning and then we wouldn't need competition? [02:13:28] And that's, I think, what's happening with these world mobs or world government things.
[02:13:32] That's sort of the key choice when we're thinking about which future we're going with, the quiet or the loud aliens, really. [02:13:39] That is, in some sense, the quiet aliens are likely the ones that liked planning, liked collective decision making, liked having a world community that decided things together and chose consciously what to do and controlled things consciously. === Returning to Foraging (16:00) === [02:13:55] And the loud aliens are the ones who allowed competition, allowed evolution, and allowed adaptation. [02:14:03] And all the conflict and strangeness that would produce. [02:14:07] Right. [02:14:09] So the loud aliens are the ones that cared more about descendants. [02:14:17] Or they failed at the other approach. [02:14:18] I mean, or they just failed at the other approach. [02:14:20] They could have wanted to do the other one and just failed at it. [02:14:23] Right. [02:14:23] Remember, it just takes one colonist who ever leaves, through the entire history of the civilization, and goes off to another star. [02:14:30] That's all it takes to end the era of central control. [02:14:33] So it's a high bar. [02:14:35] Right. [02:14:36] So, quite likely, they never intended for it. [02:14:38] It was the outcome that, you know, happened because they failed to achieve the other one. [02:14:44] You've talked about how humans today are becoming more and more like foragers because of the way societies have been structured in the last few centuries compared to before, and this is something that you've written extensively about. [02:15:01] How are we more like foragers? [02:15:03] Foragers are the way humans lived for a million years before, say, 10,000 years ago, roughly. [02:15:08] Right. [02:15:09] And we lived in small groups, and these small groups were governed sort of informally by collective decision making, where we sat around the campfire and decided things together. [02:15:18] And if people didn't like it, they could leave.
[02:15:21] And that's how we did things for a million years. [02:15:24] And we had a lot of cultural flexibility that allowed us to enter a lot of different regimes. [02:15:28] Some of us became Eskimos, and some of us lived in the rainforest, et cetera. [02:15:33] But we still mostly had a similar social structure of these small bands who lived collectively. [02:15:40] Basically, you know, they had a lot of things that are different from us. [02:15:43] They were relatively promiscuous. [02:15:45] They didn't so much have marriage. [02:15:46] They shared a lot of food, made decisions collectively. [02:15:49] They had a lot of leisure time. [02:15:51] They had a lot of art and travel. [02:15:55] And then at some point, farming became possible, in which I'm including herding and farming. [02:16:01] And farming just requires a very different social organization and sort of attitude toward life in the world. [02:16:08] Farmers lived in higher densities, for example, maybe communities of a thousand instead of communities of 30. [02:16:13] They had to plant seed and save for the future, and they had to have self control. [02:16:18] They had property. [02:16:20] Instead of sharing everything, they were living close enough to each other that they could have war and trade. [02:16:26] They needed to manage self control for war, and they needed to deal with inequality because they had these large groups that would produce inequality. [02:16:33] And farmers were just very different than foragers. [02:16:37] And you might think it just couldn't happen that foragers could become farmers, but humans had enough cultural plasticity that we could change over maybe 50,000 years. [02:16:47] Slowly change to become farmers. [02:16:50] And farmers then accept more slavery, inequality, marriage, property, war, trade, working harder, less nutritious food, less travel, all sorts of ways in which farmers were just different than foragers.
[02:17:06] But that's the way the farming world lasted for 10,000 years under this new economic thing, which was much more productive in the sense that people could just live much more densely, many more people per square mile living as farmers than as foragers. [02:17:19] So farmers won in that sense of just displacing foragers. [02:17:24] And then in the last few centuries, we've gotten rich. [02:17:29] That is, we found ways to generate wealth faster than we grow the population. [02:17:32] So the wealth per person has gone up. [02:17:35] And in that time period, as we've gotten rich, the cultural pressures that turned foragers into farmers have weakened. [02:17:44] That is, a lot of what made foragers into farmers was cultural pressures that were mediated by the threat of poverty. [02:17:51] That is, so, foragers were pretty promiscuous. [02:17:55] Okay, they didn't really marry. [02:17:57] They stayed with a partner for a couple of years and switched. [02:17:59] Okay, farmers were supposed to be monogamous, right, for a lifetime. [02:18:05] And so, if you take, say, a young farming woman who is tempted to be promiscuous, what makes her not? [02:18:13] Well, she is told over and over again, at great expense and clearly, if you break our rules about promiscuity, you and your child may starve. [02:18:24] And we can show you examples [02:18:26] of people to whom that's happened. [02:18:28] And that tended to be persuasive. [02:18:32] Farming young women and men learned enough self control to follow the farming norms that were different than the foraging norms, because they would die otherwise. [02:18:42] And they had substantial threats that were powerful. [02:18:46] But when you get rich and somebody says, you know, if you don't follow the rules, then we're going to disapprove of you. [02:18:53] And you say, yeah, but I'm rich. [02:18:55] I can afford this. [02:18:57] Then that doesn't work so well.
[02:18:59] So, in the last few centuries, as we've gotten rich, we have felt less inclined to go along with the farmer norms and rules. [02:19:07] We've looked inside ourselves and said, yeah, but what do I feel like? [02:19:09] And we've done those things. [02:19:12] And we continue to move in that direction of doing those things that we feel like doing because we can. [02:19:18] We're rich enough that we can. [02:19:19] So, today, a young woman is told, hey, if you have a child out of wedlock, then you and your kids may have problems. [02:19:24] And she goes, okay, but I've seen other women, they have kids, they seem to do okay. [02:19:28] I think I'll do okay. [02:19:29] So, you know, don't bother me. [02:19:33] And so we've been moving back to forager values outside of work. [02:19:38] Now, work is different in the sense that work is the goose that lays the golden egg. [02:19:42] It's the reason we are rich. [02:19:44] So, in fact, at work, we are hyper farmers. [02:19:48] We put up with domination and ranking and stuff at work that most farmers wouldn't put up with, and certainly not most foragers. [02:19:54] So, we are schizoid in that sense at work. [02:19:57] Wait, what do you mean by that last part? [02:19:59] We put up with more dominance and ranking. [02:20:02] At work, most people in modern workplaces are told what to do all the time, and they're told they're doing it wrong, and they're shown they are worse than somebody else who's shown to be higher than them, even if they don't agree with that. [02:20:16] And most farmers would never put up with that. [02:20:20] Most farmers were relatively independent farmers or cobblers or fishers or whatever. [02:20:27] They worked in small groups as relative equals, and they didn't have a boss telling them what to do all the time.
[02:20:33] And after an initial apprenticeship, they were in charge of how they did things, and they could just do it that way for the rest of their life because things hardly changed. [02:20:41] And even that, so most foragers, again, are very egalitarian. [02:20:48] They're hyper egalitarian. [02:20:49] They just, they refuse to, they won't tolerate any sort of implication that anybody's better than anybody else. [02:20:55] They're really sensitive to that and they will really squash that very quickly. [02:20:58] Is this also because there are way less people? [02:21:01] No, that's not central to it. [02:21:04] It was more just the norm that nobody should stand out. [02:21:07] So compare it to chimpanzees. [02:21:10] In a chimpanzee group, there's a dominant set of chimps who rule over the group and are unequal. [02:21:16] Okay. [02:21:16] And humans didn't want that to happen. [02:21:18] So humans are [02:21:20] consciously and purposely preventing themselves from having a dominant group like chimps do and making sure everybody is treated equally. [02:21:27] But that means they're constantly wary of somebody thinking, I'm the big chimp and I'm going to take over here. [02:21:32] And so they have to be wary of that and stomp it as soon as they can. [02:21:35] Okay. [02:21:35] Because foragers are very egalitarian and they enforce that vigorously. [02:21:42] And so part of that is nobody tells anybody what to do in forager groups. [02:21:46] And even children are not told much what to do. [02:21:50] You know, children raised in a forager group get bored and then they follow an adult around. [02:21:55] Maybe the adults will show them what they're doing if they're nice. [02:21:58] And that's how children learn to do what adults do. [02:22:03] And so farmers have more ranking because farmers have classes, right? [02:22:07] There's the king and there's lords and maybe slaves, but there's usually only a limited range of classes.
[02:22:12] And most people interact mainly within their class. [02:22:14] So most farmers still have a relatively egalitarian world where most of the people are at their level and nobody's giving orders. [02:22:23] And that's how most of farmer life is. [02:22:25] But at work, we put up with a lot of domination and ranking. [02:22:32] And it seems like school is a big part of why that's possible. [02:22:35] School seems to make the modern worker in a way that non-school doesn't. [02:22:41] So we have many stories from around the world where modern companies would go into some local area and try to put up a factory or some modern workplace and try to get the locals trained to work in the modern workplace. [02:22:53] And that often failed badly. [02:22:55] Because the locals would just not do what they're told. [02:22:58] Even if they're offered large wages and would be fired if they wouldn't do what they're told, they just won't do what they're told. [02:23:03] So, like, one classic example was in India and England at the same time, they had the same factory for, you know, spinning, making clothes or whatever, you know, making textiles, right? [02:23:16] And in England, the factory was six times as productive as the Indian factory, because an Indian worker would, [02:23:26] you know, have a machine and they would sit next to the machine, they would tend that machine, and the English worker would have six machines and sit next to them and tend them. [02:23:33] If you told the Indian worker to tend six machines, he'd say, no, that's not what I'm going to do. [02:23:40] I refuse. [02:23:42] And quite consistently, across a wide range of stories that I've heard, it's just hard to get local people to do what they're told at a workplace, hard to get them to show up on time, and especially hard to tell them that they're doing it wrong.
[02:23:55] That is, workers are typically just very proud, and they think however they do it is right, and they don't want this foreign person telling them they're wrong. [02:24:01] And especially commonly, like, I heard a story: someone I knew worked near a United States Indian reservation, and basically, people from the reservation coming into a workplace would have this sense of status in their work from their reservation, and they insisted that all their work tasks in a job reflect that relative status. [02:24:20] They would not want to do a task that was lower status than somebody else they thought was lower status than them in the workplace. [02:24:26] And so, I mean, for example, I understand when the United States goes into, say, Arab countries and tries to train militaries in other parts of the world, what they often find is, if they have a room full of people they're training, and some of them are higher status than others in the room, then they can't do anything in the room that would ever make a higher person look lower than somebody else in terms of their ability on any task. [02:24:50] Okay. [02:24:51] Right. [02:24:52] And so they either have to only train people all at the same level, or train them in a way that they never actually do anything, because they might look worse. [02:25:00] And this is just a very common sort of thing. [02:25:02] People around the world are just very sensitive to the status they think is due to them. [02:25:07] And then the tasks on the job might threaten that status and make them look worse than somebody else who's supposedly lower status. [02:25:14] So, but the key thing is that school changes this, because the main thing that happens at school is, every day, you are ranked and rated many times in a way that most adults wouldn't put up with. [02:25:27] Like, on most jobs, most people get an annual evaluation, right? [02:25:31] And that's a big sensitive thing. [02:25:32] You're not ranked.
every damn day, right? [02:25:35] Constantly. But students in school are ranked every day. [02:25:39] And in fact, we know that that level of frequency of rating and ranking of students makes them learn slower. [02:25:44] That's actually a thing we know. [02:25:46] We know they would learn faster if they weren't rated so often, but we do rate them so often, plausibly because that's the product. [02:25:52] That's what we're trying to do: [02:25:53] trying to produce kids who will accept a modern workplace. [02:25:58] Why are we trying to do that, though? [02:26:00] Because that's actually more productive. [02:26:01] That is, because it makes us rich. [02:26:03] That's the thing that makes us rich. [02:26:05] We get rich because modern workplaces require a high degree of organization, a high degree of specialization, and they change fast. [02:26:12] And so people have to be told what to do and told how to change it. [02:26:16] And that's the goose that lays the golden egg. [02:26:18] That's how we can be rich. [02:26:20] More agreeable, or people that are more willing to take orders. [02:26:24] Right. [02:26:25] And we want to produce people like that because it makes us more rich and it makes us more productive. [02:26:33] Wow. [02:26:34] At the beginning of the Industrial Revolution, people saw this and they were terrified. [02:26:38] Many people. [02:26:39] So you can see, like, famous movies and novels around, say, you know, the year 1900, as people were seeing the Industrial Revolution spreading, and they were seeing this degree of regimentation and structure in workplaces. [02:26:53] And they were terrified that that would spread to everything. [02:26:57] And that was plausible in the sense that they could see that in a factory or a shipyard or one of these very structured modern workplaces, [02:27:05] people did not have much discretion about how they did things or what to do. [02:27:08] They were told exactly what to do and how to do it and when.
[02:27:11] And that seemed inhuman compared to prior workplace habits. [02:27:16] And then they thought that'll spread to everything. [02:27:19] And so people wrote science fiction novels in the late 1800s describing what would happen if everything were regimented that way. [02:27:26] What if society told you who to marry and where to live, what to eat for breakfast, and what clothes to wear, all the time, always? [02:27:33] Right. [02:27:34] And that was this terrifying vision of the future [02:27:36] that people were imagining, because they were seeing how regimented work was. [02:27:40] And they were thinking that will happen to everything. [02:27:43] And it could have happened to everything if we hadn't gotten rich. [02:27:47] But as we got rich, we could spend our extra wealth on not having that happen outside of work. [02:27:54] So, you know, in a modern world, living in a big dormitory with a big cafeteria where you all wear the same clothes is much cheaper to supply. [02:28:05] So, like, I visited a commune, and there are communes out there [02:28:10] that are living very poorly because they're basically supplying their own food and clothes and stuff. [02:28:15] But they can do it by having just a high degree of regimentation and structure, where people live in communal housing and have communal food and clothes. [02:28:24] And that is much cheaper than the way we do things. [02:28:28] But as we got rich, we decided instead of living in big communal housing and things, we would have personalized housing and personalized kitchens and personalized clothes. [02:28:36] Because, in fact, that's what we've spent most of our wealth on in the last century. [02:28:40] So, a century ago, people were looking at it. [02:28:43] Exactly. [02:28:44] Variety, individual variety. [02:28:46] That's what we've spent most of our wealth on. [02:28:49] We could have, say, vastly more food.
[02:28:51] We could, say, maybe work two hours a week or something with all this technology we have. [02:28:56] We don't need to work very much. [02:28:58] If only we would be willing to accept very standardized communal food and clothes and housing, et cetera. [02:29:05] But we weren't. [02:29:07] We chose, we'd rather work a lot more and have a lot of personal individual variety. [02:29:12] That is what you're explaining when you're saying we're becoming more like foragers. [02:29:16] The individuality is part of the forager trend, I think. [02:29:20] So, a whole bunch of trends are explained by this forager return. [02:29:23] So, we say over the last few centuries, we have less slavery, less religion, less war, less marriage, more promiscuity, more leisure, more travel, more art, less overt domination of all sorts at workplaces. [02:29:46] And in all sorts of contexts, these are all trends that are explained by this moving back to forager styles outside of work. === Wealth as a Shot at Power (17:28) === [02:29:55] And many people have painted science fiction futures where this trend continues. [02:29:59] And they love this idea, like Star Trek is partially like this, or the Culture novels, where basically in the future, people work less, even less than they do now. [02:30:07] They have even more variety. [02:30:09] They have even more promiscuity. [02:30:11] They have even less religion. [02:30:13] They have even less war. [02:30:17] So, where does that go? [02:30:18] Even less fertility. [02:30:19] Where does that end up? [02:30:21] Oh, less fertility is part of that. [02:30:22] Yes. [02:30:23] Fertility, also, foragers had lower fertility than farmers. [02:30:26] Why is that? [02:30:30] And why would we, why would our fertility be declining? [02:30:35] So, I have a more detailed theory, but the first thing to notice is, in fact, foragers have lower fertility than farmers.
[02:30:44] So, farmers are often in a situation where they can have a lot of kids, and they do. [02:30:49] Whereas foragers are usually in a relatively stable environment, where they control their fertility in order to make sure that they can stay in that stable environment. [02:30:56] So farmers have had just more variety in the size of their environment, where, if they go into a new area, they can just make a lot of farms, et cetera, and they can just expand a lot. [02:31:05] Whereas foragers haven't had that same ability. [02:31:07] But it is basically also just true that foragers just tended to have lower fertility. [02:31:12] They space births; say, you know, farmers might space births once a year and foragers might space them every three or four years or something. [02:31:20] Okay. [02:31:21] But I have another theory that also can try to explain this lower fertility, because this is a central fact about our civilization today. [02:31:28] That is, if you want to complain about our civilization, it's like one of the biggest things to complain about. [02:31:34] You can say, well, any decent civilization wouldn't do this if it was hoping to survive and reproduce, right? [02:31:41] Why would it have such lower fertility? [02:31:44] So I do think there's a serious puzzle to be explained in why we have low fertility. [02:31:49] And I do have a favorite theory, but it's not [02:31:53] just the forager theory, actually. [02:31:54] It's a variation. [02:31:56] But I mean, I do have to admit the forager correlation does at least predict the sign of the effect. [02:32:01] Okay. [02:32:02] Well, please expand on it. [02:32:03] Okay. [02:32:04] So the idea is that over the last 10,000 years, we lived in these larger groups, which often had things like kings and queens.
[02:32:13] I mean, like, king and queen of a thousand people isn't quite the same thing, but still, you had somebody at the top of a hierarchy who had much better long term success for having grandchildren, say. [02:32:24] So over the last 10,000 years, and the prior 50,000, say, as people were adapting to this, we evolved to ask, should I take a shot at being king or queen? [02:32:37] That's a new strategy option that didn't exist before. [02:32:39] Foragers didn't really have kings or queens. [02:32:42] But now that we have kings or queens, you might ask, well, should I go for that? [02:32:47] So I could go for that, or I could have my kids go for that, right? [02:32:50] Should I try to raise kids in such a way that they might be king or queen? [02:32:53] But only certain people would have that option, only people who had it in their lineage. [02:32:58] No. [02:32:58] So, well, the question is, what are the conditions for having a shot at being king or queen? [02:33:04] So, evolution, cultural evolution and biological evolution, had to find a way to code that. [02:33:08] That is, you know, there's a lot of variety of groups and they have different practices and everything. [02:33:13] What would be a robust indicator that you had a good shot at being king or queen? And the hypothesis is relative wealth would be a good indicator. [02:33:23] That is, in almost all societies, if you were higher status, you would have more wealth. [02:33:29] So, seeing that you had more wealth than others would be an indication that you had a shot at being king or queen. [02:33:36] You lived in a bigger place, you had more food, more clothing, more leisure, just all sorts of indications of wealth would be a robust indicator that you had a better shot than most of being king or queen. [02:33:48] So, first let's ask, what would you do if you had a shot at being a king or queen?
[02:33:52] So, in most societies, there's an elite culture and then there's signs of being a member of the elite culture. [02:33:58] And what you're trying to do is acquire those signs that show that you deserve to be a member of the elite culture. [02:34:04] So, you might do embroidery or poetry or learn to dance at balls, or, like, different societies have different things, but almost all of them will have a set of cultural practices that elites have, or rich elites have, that others don't, that mark you as a valid member of elites, such that you are a valid candidate for being king or queen. [02:34:26] Right. [02:34:26] Right. [02:34:27] So now the story is, if I or my child has a shot at being king or queen, then I want to invest in them in developing these cultural markers. [02:34:35] I want to teach them poetry, to dance, to embroider, to sing, play the piano, whatever it is that in any one society is a marker of the elites, maybe read more fiction, know how to, you know, et cetera. [02:34:48] But it will take time to invest in that, right? [02:34:50] And so the idea is that if I have a shot at being king or queen, a better shot than most for me or my child, then for me or my child, I want to invest more in these cultural markers. [02:35:00] And the key point is that will come at the cost of fertility. [02:35:04] That is, it'll take time away from other things. [02:35:07] And if I want to invest more in each child, I can't have as many children, right? [02:35:11] Okay. [02:35:12] So the idea is if I had a shot at being king or queen, I would have fewer children, or they would have fewer children. [02:35:19] Because they have a better shot at being king or queen. [02:35:22] And if they do, then that will more than compensate for the current fertility loss, because a king or queen will have lots and lots of grandkids. [02:35:30] So now the question was, what would be the indicator that said I did have a shot?
[02:35:35] The indicator would be relative wealth. [02:35:37] That would work. [02:35:37] So that's the simple story, but that story doesn't work yet. [02:35:42] Because in fact, over the last few centuries, we haven't had an increase in relative wealth, we've had an increase in absolute wealth. [02:35:50] So, the strategy I've just described says look around yourself at your relative wealth compared to the people around you. [02:35:56] And if you have a high relative wealth, then you have a shot at being king or queen. [02:35:59] And then you should lower your fertility now, invest more in the signs and cultural markers that would make you an elite, qualified to be a king or queen. [02:36:06] But the story doesn't work yet. [02:36:08] So I have to add one last mistake to my story. [02:36:11] I have to say, well, up until the last few centuries, the average wealth never changed. [02:36:20] Societies pretty much always had the same average wealth, which was pretty near subsistence. [02:36:25] People varied within a society in their relative wealth compared to the average, but the average almost never changed. [02:36:31] So, this heuristic of looking at my relative wealth was perfectly interchangeable with the heuristic of looking at my absolute wealth, because my absolute wealth always told me my relative wealth, until the last few centuries. [02:36:40] So, why bother to do this relative thing? [02:36:43] Just look at absolute. [02:36:44] So, the story is evolution, culturally and biologically, encoded in us this habit of looking at our absolute wealth as an indicator of whether we had a shot at being king or queen. [02:36:55] And if we did, we would lower our fertility in order to invest in the cultural markers of eliteness, such that we would then be a valid candidate for king or queen. [02:37:05] And then the problem is, in the last few centuries, all of a sudden, absolute wealth did vary. [02:37:11] We got rich consistently, the whole world, not just temporarily, but persistently.
[02:37:16] And that threw off this heuristic. [02:37:18] So now everybody thinks they have a shot at king or queen, and everybody's cutting back their fertility to collect cultural markers of eliteness so that they can all be king or queen, which is not working very well because they can't all be king or queen. [02:37:32] What is happening, it seems like, right now in society, especially with young people, is that people are investing more and more in their education or their careers. [02:37:42] People my age in particular, especially young men and women, they are putting way more value into their finances, their education, their careers. [02:37:54] And I would say these are markers of eliteness. [02:37:56] That is, the things they're collecting instead of kids are markers of status in our society. [02:38:02] Right. [02:38:03] And this is part of the reason that fertility is going down. [02:38:09] Because we place less value on producing more children and more value on gaining eliteness for ourselves. [02:38:17] And my explanation is that's executing this strategy of trying to have your kids or you be king or queen. [02:38:26] Right. [02:38:26] So by the time you do have kids, you have developed this wealth, or you have accumulated enough. [02:38:32] You can train them to have high cultural markers so that they have a shot at being king or queen. [02:38:37] Except none of you have a shot at being king or queen anymore. [02:38:41] I mean, in the past you would have, but now you don't. [02:38:44] So, why did we end up here? [02:38:47] How did it get like this? [02:38:49] So, the key idea is that evolution evolves strategies that are heuristic and that use the cues available in a crude way. [02:39:00] Right. [02:39:01] So, evolution can't look at the world you're actually in and tell you exactly what to do in your world. [02:39:07] Right.
[02:39:07] It can only find markers of the world you're in and then find correlates between things you should do when those markers exist. [02:39:15] That's how evolution tells you what to do. [02:39:17] It can't look at more detail, right? [02:39:19] So it evolves this connection between the cues you see and the behaviors you should do to respond. [02:39:25] And the way it encodes those is going to be opportunistic; it's going to be based on what happens to be stable or not. [02:39:32] So, for example, imagine there were things like water, except they weren't water. [02:39:40] Okay. [02:39:40] Right? [02:39:40] So imagine there's something that tastes like water but isn't water, or something else that smells like water but isn't water. [02:39:45] Right now, in our world, everything that smells and tastes like water is water. [02:39:49] It's not a problem, right? [02:39:50] So, in our world, evolution could have taught you when you're thirsty to get water based on the taste or the smell. [02:39:55] Either one, it would work fine, or just the way it flops when it moves or something, right? [02:40:00] When evolution wants you to go get water, it has a whole bunch of cues to work with that all go together in our world. [02:40:05] So it doesn't that much matter, right? [02:40:08] So, you know, maybe you cued off the smell of water, and then that smells good, and now we're going to drink water, right? [02:40:14] So now imagine a world where, okay, there's a thing that smells like water but isn't water and isn't actually healthy for you, right? [02:40:21] Well, now if you appeared in that world all of a sudden, where you had been trained in the prior world, then you would start drinking the stuff that smells like water, and that would hurt you because it's not actually water. [02:40:31] Right. [02:40:32] But that's because you moved into a new world, right?
[02:40:36] You moved from a world where all the different cues about water all went together. [02:40:40] So you didn't actually need to use them all. [02:40:41] You could just pick any one of them. [02:40:42] You moved into a world where all of a sudden it makes a difference. [02:40:45] And now you'd be making a mistake. [02:40:47] So the key idea is evolution has this general task of figuring out what cues in the environment should trigger what behavior. [02:40:54] But it's only going to bother to look at the actual variation in the world around you, not all possible variation, in order to figure out what to get you to do. [02:41:05] So the story is that for our ancestors, for a million years, our ancestors lived in societies whose average wealth never varied. [02:41:16] So there was no point in evolution coding you with this way of checking your relative wealth compared to other people. [02:41:23] It was much simpler to have you just check your absolute wealth. [02:41:25] Right. [02:41:26] That was a perfectly fine cue, up until recently, to decide whether you had a shot at being king or queen. [02:41:34] But all of a sudden, the world changes. [02:41:36] And now, this heuristic that worked fine in the old world doesn't work so well in the new one. [02:41:41] And that's why fertility is declining. [02:41:43] We're just executing a heuristic that is misfiring because it's in a new world. [02:41:51] And there are projections that show where the birth rate is going. [02:41:54] And I think Elon Musk is one of the most vocal people about this. [02:42:00] So, which is why he's having, like... [02:42:01] So, the question is how will this problem get fixed? [02:42:05] And you might think, okay, eventually, say, evolution would fix it. [02:42:09] But now we have to ask, well, how exactly would evolution fix it? [02:42:13] So I think the key question is how much cultural conformity pressure is involved here?
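[Editor's note] Hanson's "absolute wealth as a proxy for relative wealth" heuristic can be sketched as a toy simulation. This is my construction, not Hanson's; the 2.0 threshold, the log-normal wealth distribution, and the tenfold growth factor are all illustrative assumptions.

```python
import random

# Toy sketch of the misfiring-heuristic story: evolution encoded
# "high ABSOLUTE wealth => you have a shot at being king or queen,"
# which worked as long as average wealth was pinned near subsistence.
random.seed(0)

THRESHOLD = 2.0  # assumed absolute-wealth cue for "elite candidate"

def share_above(wealths, threshold):
    """Fraction of the population the heuristic flags as elite candidates."""
    return sum(w > threshold for w in wealths) / len(wealths)

# Pre-modern world: average wealth near subsistence (median ~1.0),
# with some inequality around it.
old_world = [random.lognormvariate(0, 0.5) for _ in range(10_000)]

# Modern world: the same inequality shape, but everyone ~10x richer.
new_world = [w * 10 for w in old_world]

# In the old world the cue flags only the relatively rich; in the new
# world nearly everyone clears the old absolute threshold, so nearly
# everyone "thinks they have a shot" and cuts fertility accordingly.
print(share_above(old_world, THRESHOLD))  # small minority
print(share_above(new_world, THRESHOLD))  # nearly everyone
```

The point of the sketch is only that a fixed absolute cue, calibrated in a world where the average never moved, misclassifies almost the whole population once the entire distribution shifts up.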
[02:42:20] That is, you know, so for example, we've seen societies like, you know, the Mormons, say, or Orthodox Jews, who have an unusual culture that promotes fertility. [02:42:31] Right. [02:42:32] But nevertheless, their fertility has also been falling at a similar rate, just at a delayed time. [02:42:38] And even those societies apparently are failing to resist the larger cultural pressure to adopt the larger cultural attitudes toward fertility. [02:42:52] Right. [02:42:52] So, that is, your attitude toward fertility is a combination of, like, your genetic, unique attitude toward fertility plus the cultural attitude toward fertility around you that you're being taught. [02:43:02] Right. [02:43:04] So, biology can only fix this if that biological attitude got so strong as to overwhelm this cultural attitude. [02:43:14] And that doesn't seem to be true of most people today; most people are pretty culturally influenced by the people around them. [02:43:19] Some people might have more of a biological inclination toward fertility, but they all seem to be pretty influenced culturally by the people around them. [02:43:29] So, if you think about it, I think I did the numbers: in like 1,500 years, humanity would go extinct if the population went down by half every two generations, or something like that. [02:43:39] So that's kind of the timescale we have to work with. [02:43:42] Wow. [02:43:43] So the question is, well, first of all, can biology create humans who resist cultural pressures enough over that timescale? [02:43:51] Which I'm doubtful of. [02:43:53] The more likely scenario: can culture create a new culture that resists outside cultural influence over that timescale? [02:44:01] And that seems more plausible, as we have seen cultures that are somewhat insular in the past, right? [02:44:06] Cultures that train people in that culture to follow that culture and to ignore outside cultures. [02:44:11] Say, forget all of them.
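[Editor's note] Hanson's back-of-envelope extinction number above can be checked quickly. The starting population of 8 billion and ~25-year generations (so one halving every ~50 years) are my assumptions, not figures from the conversation:

```python
import math

# Rough check of "humanity would go extinct in like 1,500 years if the
# population went down by half every two generations."
# Assumptions (mine): ~8 billion people today; a generation is ~25 years,
# so one halving happens every ~50 years.
population = 8_000_000_000
years_per_halving = 50

# Number of halvings until the population drops below one person.
halvings = math.ceil(math.log2(population))
years_to_extinction = halvings * years_per_halving

print(halvings)             # 33
print(years_to_extinction)  # 1650 -- same ballpark as "like 1,500 years"
```

With slightly shorter generations the figure lands even closer to 1,500 years, so the order of magnitude Hanson quotes is right.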
[02:44:13] They're evil. [02:44:13] They're wrong. [02:44:14] Just listen to us. [02:44:15] Right. [02:44:16] So then the win scenario, it seems to me, is that we end up with an especially insular, fertile culture. [02:44:25] That is, a culture arises which has a strong value of fertility and insularity. [02:44:31] That is, it teaches people to not listen to outsiders and to stay away from outsiders and to just listen to people in the culture. [02:44:39] This has to be more insular, as I said, than the Mormons are now, because they are failing. [02:44:44] They are, in fact, you know, responding enough to the outside cultural influence that they are on track to fall with everybody else. [02:44:51] Wow. [02:44:52] So, this new insular culture is going to have to be pretty damn insular, and that's going to cause a lot of costs, right? [02:45:01] That is, you know, insular cultures have caused wars and a lot of things, and, you know, especially they're going to limit innovation, right? [02:45:09] This insularity is probably going to resist other things from the outside culture, not just fertility attitudes, right? [02:45:14] It's going to resist technology, maybe all sorts of things it might resist, right? [02:45:18] And so, who knows what it'll be, but unless we can, like, engineer it, it'll be kind of random, right? [02:45:24] Which insular culture appears that manages to persist. [02:45:29] And of course, it'll have to not only arise as an insular fertile culture, but also resist the outside world trying to crush it. [02:45:35] Right. [02:45:36] Because the outside world may be very offended by their insular fertile culture. [02:45:41] I'm less assured of biology solving the problem than culture. [02:45:45] That is, I'm less assured that some humans will just evolve a biological inclination to just want to have a lot of kids regardless of cultural pressures. [02:45:53] Or we can start genetically making our own kids in test tubes.
[02:45:57] I'm not sure. [02:45:58] I don't think that helps. [02:45:59] That is, it's about, again, how much you invest in each child, not so much about how mechanically you make the child in the first place, right? [02:46:04] Getting pretty fun. [02:46:10] Most of the cost of raising children isn't the first year or the pregnancy. [02:46:16] The cost is all that time it takes to raise the child to be an elite cultured child, and the time it takes to put into them and to teach them things, right? [02:46:26] So, you'd have to imagine a future where people are willing to, like, hand all that off to robots or something, right? [02:46:30] Well, people are selfish too. [02:46:32] Like, I feel like right now people are... they're inclined to produce more kids, but it's the time they're willing to spend on the kids, because you're taking time away from yourself and putting it towards this other human. [02:46:49] And you have to have enough faith and enough belief that you're going to... [02:46:52] Right. [02:46:53] So you have to imagine something like, you know, where the elites send their kids off to boarding school or something, right? [02:46:59] Where now the elites aren't spending the time, the boarding school does. [02:47:02] Now you have to imagine a very efficient, cheap boarding school, right? [02:47:05] Right. [02:47:06] Because somehow it's going to be able to raise these elite kids at a low cost to the parents so that they're willing to have a lot of them. [02:47:14] Right. [02:47:15] That seems a tough game. [02:47:16] I would more predict the insular fertile culture, whatever it is, and faster than that. [02:47:21] Well, Robin, we've hit three hours. === The Hanson Website (00:36) === [02:47:24] Thank you for doing this, man. [02:47:25] It's been fun. [02:47:26] Thanks for letting me spout and ramble.
[02:47:28] I'm going to have to watch this over again so I can understand it better. [02:47:32] But it's been quite educational. [02:47:33] I appreciate it very much. [02:47:35] Where can people watching and listening find more of your work, find your books, find you on the internet? [02:47:40] Well, I have a website, hanson.gmu.edu. [02:47:44] I'm on Twitter, at Robin Hanson. [02:47:47] I have these two books, The Age of Em and The Elephant in the Brain. [02:47:51] And just Google my name. [02:47:53] Of course, you'll find a lot, or even videos on YouTube. [02:47:56] Google my name. [02:47:57] Well, thank you very much again. [02:47:59] It's been a pleasure. [02:48:00] I very much enjoyed it.