Flagrant - Andrew Schulz & Akaash Singh - AI Expert on Robot Girlfriends, If Humanity Is Cooked, & Sam Altman's God Fetish | Roman Yampolskiy Aired: 2025-10-10 Duration: 01:40:12 === UN Takes AI Safety Seriously (14:32) === [00:00:00] Do you think the UN is taking AI safety seriously? [00:00:02] Unfortunately, no. [00:00:03] With nuclear, we had this concept of mutually assured destruction. [00:00:07] No matter who starts the war, we all regret participating. [00:00:11] It's the same with superintelligence. [00:00:12] If it's uncontrolled, it doesn't matter who creates it. [00:00:15] Good guys, bad guys, we all get... [00:00:18] Is there a way to stop it then? [00:00:21] You think we're in a simulation? [00:00:22] I think we are in a simulation. [00:00:24] Resources to develop this type of technology become cheaper and cheaper every year. [00:00:28] So that means if we have superintelligence, it would be super conscious. [00:00:31] And to it, we would be like animals. [00:00:34] Would you do Neuralink? [00:00:35] It's an easy sell if you have disabilities. [00:00:37] It's amazing for people who are quadriplegic, who are blind. [00:00:41] What if you can't stop eating? [00:00:43] It's beyond help. [00:00:46] Do we solve every problem in the human world with the help of AI? [00:00:50] Just because you solve some problems doesn't mean you become happier. [00:00:53] If all the things I care about can be done by an assistant, what am I doing with my life? [00:00:57] Meditating and getting blown by your sex robot. [00:00:59] I don't like meditating. [00:01:04] Dr. Roman Yampolskiy, I said that correctly. [00:01:06] You got it? [00:01:07] I want to talk about this, why the funniest AI joke will not be funny. [00:01:10] But before we get to that, you just spoke at the UN. [00:01:13] At one of the side events, I wasn't at the General Assembly with Trump or anything, no. [00:01:17] Okay, but that doesn't matter. [00:01:21] What did you say to them? 
[00:01:22] I want to know what was said. [00:01:23] They had kind of side events about standards for technology, standards for AI development, and educational skills in terms of AI. [00:01:34] Okay, so do you think the UN is taking you seriously, taking AI safety seriously? [00:01:39] Unfortunately, no. [00:01:40] A lot of what they do is kind of existing problems we see with algorithmic bias, with unemployment, with discrimination. [00:01:46] They don't look sufficiently into the future where we get more advanced AI, AI which can replace all of us and be very different. [00:01:54] You know, I feel very vindicated that you're on this couch because I, like 10, for like 10 years, have been screaming that AI will kill us. [00:02:02] This is inevitable. [00:02:03] Why do we keep going down this road? [00:02:05] These people laugh at me. [00:02:06] I'm a bit more skeptical, maybe optimistic is another way to put it. [00:02:10] And I have my hopes that we're going to be able to iterate and maybe put in some guardrails, I hope. [00:02:16] And I'm the furthest. [00:02:17] Yeah, we don't got nothing to worry about. [00:02:19] Guardrails is a good timely point. [00:02:22] We just released a red lines document signed by like 200 top Nobel Prize winners, scientists at UN yesterday. [00:02:31] Oh, it's very fresh. [00:02:33] Okay, so wait, what are the... [00:02:33] So now there are guardrails being put in place. [00:02:36] No, there are people saying we should have guardrails. [00:02:38] It's very important. [00:02:39] Nothing is in place. [00:02:40] Could you just give us the current state of AI generally and why you are so, I guess, vocal about AI safety? [00:02:47] Like, where are we at right now and where could we go? [00:02:51] There is an arms race between top corporations, so the Google, OpenAI, Microsoft, all the others, and nation states as well, China, US, and others, to get to beyond human capacity first. 
[00:03:06] Sometimes we call it AGI, but superintelligence is really where it's at. [00:03:11] And it seems that if you just do more in terms of compute, more data, more scaling, you get better, more capable systems. [00:03:19] So everyone's kind of trying to build bigger servers, hire the best people. [00:03:24] So human resources are also important. [00:03:26] But they're not stopping to figure out, should we be doing it? [00:03:31] Are we doing it right? [00:03:32] They just want to get there first. [00:03:33] Yes. [00:03:34] And what do you think is happening? [00:03:35] Can you just explain to everybody? [00:03:36] I know you did the Diary of a CEO. [00:03:38] A lot of people heard that, but believe it or not, our audience might not have a lot of crossover there. [00:03:43] So can you walk us through the worst case scenarios that you, not even worst case, what do you see as the inevitable scenarios from AI on this path? [00:03:51] So the way it's done right now, we take an AI model, which is just a learning algorithm. [00:03:57] We give it access to internet. [00:03:58] Go read everything you can, all the weird stuff on the internet. [00:04:02] What did you learn from it? [00:04:04] Oh, you're not supposed to say that. [00:04:05] So, we put a filter on top of some of the really bad stuff, but the model is still completely raw. [00:04:10] It's not controlled, it's not restricted in any way. [00:04:13] Yeah, okay. [00:04:14] And that's what we have, and we keep making it more and more capable. [00:04:17] So, we don't know how it really works in terms of what it learned from all that data. [00:04:22] We cannot predict what it's going to do. [00:04:24] It's smarter than us. [00:04:25] So, by definition, you can't really say what it's going to do. [00:04:29] And we don't know how to control those systems. [00:04:31] So, anything goes if it decides tomorrow to change the whole planet. [00:04:35] If it's capable enough, it can. 
[00:04:36] So, ChatGPT right now could change the whole planet? [00:04:39] No, right now, we're dealing with tools. [00:04:41] The existing AI is tools you can use to write funnier jokes or something like that. [00:04:45] It's not taking over. [00:04:47] But if you just progress forward the same rate of progress we've seen over the last decade, it goes beyond human capability. [00:04:55] Can you walk us through what that looks like? [00:04:57] Because again, Alex thinks it's just not a thing. [00:05:00] So, my feeling is it's like climate change. [00:05:03] Climate change is a real thing, but I only hear that from other climate change scientists. [00:05:08] Everyone else, it hasn't really necessarily affected us in a way that makes me want to change my behavior. [00:05:14] It's so slow. [00:05:15] It will take 100 years for you to boil alive. [00:05:18] This thing we are concerned about is like climate change in a week. [00:05:20] AI is predicted to hit human-level performance in two, three years. [00:05:24] Even if it's 10, it happens well before all the other big concerns. [00:05:28] So, it's either going to take us out or help us solve climate change easily. [00:05:32] Okay. [00:05:32] And so, when it hits human-level intelligence, are we afraid that it's going to just set off some nukes? [00:05:38] Like, what are we afraid of? [00:05:39] Well, first, at human level, it's just a free assistant. [00:05:42] So, if you want help hacking into the stock market or setting off nukes or any type of cybercrime, it's there available. [00:05:51] But the real concern is very quickly, this free AI scientist, engineer will help you create super intelligence, something much more capable than all of us combined. [00:06:02] And there, anything goes. [00:06:04] It really does novel research, new physics, new biological research. [00:06:08] So, we simply would not know what is going to happen. 
[00:06:13] But this feels like a human problem that's using AI to destroy other humans rather than a AI superintelligence that's going to destroy humans. [00:06:22] So, that's a paradigm shift from tools, what we have today, some bad guy, malevolent actor with AI as a tool using it to harm you, versus AI itself as an agent making its own decisions. [00:06:33] Right. [00:06:33] And that is called super intelligence, when AI can make its own AI and make it just create a world how it works. [00:06:39] So, super intelligence refers to its capability. [00:06:41] If it's smarter than all of us in every domain, it's a better chess player, better driver, better comedian, then that's super intelligence. [00:06:49] Agenthood is more about ability to make your own decisions. [00:06:52] You are setting your own goals. [00:06:54] You can decide to do something or not do it, change your mind. [00:06:58] It's not just a tool someone's using. [00:07:00] So, people talk about guns, for example. [00:07:02] A gun is a tool. [00:07:03] Somebody with a gun can kill you. [00:07:05] A pit bull is an agent. [00:07:06] Pitbull can decide to attack you regardless of what the owner does. [00:07:09] Right. [00:07:10] Okay. [00:07:10] So, AGI is this moment where the AI is more capable than any given human in any given field. [00:07:16] That's super intelligence. [00:07:17] AGI is where it's kind of like a human. [00:07:20] It can learn any new skill. [00:07:22] You can drop it in into a company as an employee. [00:07:25] It's general intelligence at about human levels. [00:07:28] Okay. [00:07:28] Now, can we talk about AI art? [00:07:31] Because this is one of the elements that I find I think is restricted to AGI, generally speaking, because there's so much subjectivity in what makes something good. [00:07:39] And in my opinion, intelligence doesn't necessarily create great art. [00:07:44] So, like something like humor, where you pointed out with Akash's homework, that he was astonished. 
[00:07:49] So, like, something like AI art, there is a subjective human experience that's required there. [00:07:54] So, how could an AI create better art? [00:07:57] Have you seen modern art? [00:07:58] Absolutely. [00:07:59] Modern art sucks. [00:08:00] There you go. [00:08:01] Let's keep it on us. [00:08:02] You said 99% unemployment will happen with AI. [00:08:04] Before we get into that, is comedy the 1%? [00:08:08] I think so because so far... [00:08:10] So far, that's the point of that paper. [00:08:12] So far, one thing AI is not better than top comedians who have Netflix specials and such is humor. [00:08:24] It's very hard to be... like consistently producing. [00:08:28] Tell me the funniest joke. [00:08:29] Do what I want. [00:08:30] Yeah, AI jokes are pretty bad. [00:08:31] Yeah, terrible. [00:08:32] You've heard them, right? [00:08:33] I actually got to the point where I trained a model on my paper. [00:08:36] And if I asked it, give me 10 Roman Yampolskiy-style jokes, one out of 10 would be like, ha. [00:08:42] This is not terrible. [00:08:43] It's nowhere near like levels it plays chess at or even drives cars. [00:08:48] Right. [00:08:49] Exactly. [00:08:49] Look at that, mom and dad. [00:08:53] I get replaced in two years. [00:08:55] Yeah, they told you to cut. [00:08:56] No, but this is like two and a half years. [00:08:57] Don't hold on. [00:09:01] I thought like five years. [00:09:03] If you're really funny. [00:09:04] Okay. [00:09:05] I think I'm really funny. [00:09:06] So we got five. [00:09:07] You got five. [00:09:07] Okay. [00:09:10] Maybe. [00:09:11] Maybe. [00:09:12] But I'm curious what this looks like, though. [00:09:13] So AI making jokes or making art. [00:09:16] To me, what makes like Michelangelo's David so impressive is that a human made it. [00:09:21] And the idea of like a human being running a four-minute mile was impressive to us because a human did it. 
[00:09:26] We could have a robot that could run a mile faster, but it's not as impressive to us because it doesn't reflect something about what it actually is to be human. [00:09:32] So a comedian going on stage, what makes it funny is that there's an emotional element, that the human being is being irrational or it's tapping into something that is fundamentally, you know, into our human consciousness that I'm skeptical that AI could do within the next five or even 10 years. [00:09:48] Yeah, and it's the same in many other domains. [00:09:50] So you have mass market, China-produced goods, cheap, affordable stuff everyone consumes. [00:09:56] But then you can have handmade furniture, very expensive, reliable, different level of quality, different market. [00:10:03] Same goes for any human interaction. [00:10:05] You want to meet the artist. [00:10:06] You want to talk to the comedian. [00:10:07] Absolutely. [00:10:08] It's special and somewhat protected. [00:10:10] But if the goal is just, you know, read some jokes online, top 10 jokes, I don't care who generated them. [00:10:16] I want them to be funny. [00:10:17] Sure. [00:10:18] But I think at like a comedy show, for example, or even like if I went to go see an art show and I wanted to see sculptures, I would want the sculptures to reflect me because I think human beings have an ego. [00:10:28] And so when I see a sculpture made by an AI, I go, oh, it's not as good as the imperfect one made by a human. [00:10:33] Well, for one, you wouldn't know who generated it. [00:10:35] That's the Turing test restricted to a specific domain. [00:10:37] If I play a tune and you don't know who created it, you have to either like it or not like it. [00:10:41] But then later you discover everything you liked was AI. [00:10:44] Sure. [00:10:45] But if you go to a comedy show and you see a human being saying stuff, you want to believe that the person that's speaking is the one that's, you know, actually. 
[00:10:52] That's if you go to the show. [00:10:53] So to his point, I was scrolling through Instagram and I saw a girl at a comedy club in India just wearing like a bra or a bikini top, telling a joke. [00:11:02] And I looked at the comments. [00:11:05] Yeah, I looked at the comments and everybody was like, how dare she? [00:11:07] She should, why would she go out like this? [00:11:09] And I looked at the handle and then the thing and I was like, these comments, they don't realize this is AI. [00:11:19] And the only reason I realized it was AI is because if a woman went out in India dressed like that, she wouldn't have even made it to the club. [00:11:26] You know what I mean? [00:11:27] But then it was like something, something, ha. Like, no human being, but no one could tell. [00:11:32] And if you're just scrolling, like most of us do, consuming content on your phone, we're already to the point where a lot of people cannot tell it's AI. [00:11:39] I didn't see any comments that were like, this isn't real. [00:11:42] So it's only going to get better. [00:11:43] So how are you going to even get to the point where if you're just getting fed amazing content all the time, why would you even go out to a comedy show? [00:11:50] But also, maybe there is a real human delivering jokes written by AI. [00:11:55] Not funny, but you have the best AI, so here you are. [00:11:58] How'd you know he wasn't funny? [00:11:59] That's good. [00:12:00] That's good. [00:12:00] That was just an example. [00:12:02] You did your research. [00:12:03] No, he was spared you. [00:12:06] No, but this is the thing. [00:12:08] We don't know who actually generates content. [00:12:10] And a lot of times in the last month or two, I listened to something, short clips of music, and I'm like, I really like this. [00:12:16] This is catchy. [00:12:17] It stuck with me. [00:12:18] And all of it is AI. [00:12:19] I no longer listen to human artists. [00:12:22] Exactly. 
[00:12:23] And think about when it gets to the level of humanoid robots and you can't even tell them apart from a human. [00:12:27] Yeah. [00:12:28] You're watching that stand-up. [00:12:29] Right. [00:12:30] But I would argue even that is even farther away, where you actually have like a humanoid robot that passes the Turing test. [00:12:36] How far do you think we're away from that? [00:12:39] That's very hard. [00:12:40] So a full Turing test, physical, chemical, like complete interaction, is very hard. [00:12:45] There are so many things that go into it. [00:12:48] Like even smell, texture, like it's hard. [00:12:51] But the Turing test as originally proposed, yeah, we're passing it now. [00:12:55] You can't tell online if you're chatting to someone if it's a human. [00:12:57] Yeah, people are falling in love with their AI because they can't tell. [00:13:00] Yeah. [00:13:01] Even if they know it's AI, is it convincing enough? [00:13:03] Right. [00:13:04] I mean, no, but it's a better girlfriend, so they just go with it. [00:13:06] Yeah. [00:13:07] No, that's true. [00:13:08] And I, so this is why 10 years ago or whatever, I was talking to somebody who worked in AI, small scale, but I asked him about this, and then he just never really had a good answer. [00:13:18] And I kept pressing and poking and prodding at all of his responses. [00:13:21] And then eventually he was just kind of like, yeah, I'm sure it'll take over, but humans will be kind of like dogs. [00:13:26] Dogs have a pretty good life. [00:13:28] So that's about loss of control. [00:13:30] You may still be safe. [00:13:31] They may keep you alive and protected, but you're not in control anymore. [00:13:35] You're not deciding what happens to you, just like your dog doesn't decide. [00:13:39] Which seems like not that great of an existence. [00:13:41] Well, we definitely cannot just do it to 8 billion people without asking them to agree to it. [00:13:46] That's unethical experimentation. 
[00:13:49] And they can't even consent to it because most of them don't understand what's happening. [00:13:52] Okay. [00:13:53] So AGI is 2027, and that's when artificial intelligence can do any task as well as any human. [00:14:00] And at that point, you think there's going to be massive unemployment because why would an employer pay an employee when they can have the AI do it for free just as well? [00:14:09] Right. [00:14:09] So the dates, I don't generate them. [00:14:11] They come from prediction markets. [00:14:12] Okay. [00:14:12] People betting on certain outcomes. [00:14:14] That's the best tool we have for predicting the future. [00:14:17] We hear the same numbers from leaders of the labs. [00:14:20] If it's wrong and it's five years instead, nothing changed. [00:14:22] It's the same problem. [00:14:23] We just have a little more time. [00:14:24] That's the timeline. [00:14:25] But really, if I can get a $20 model or a free model to do the job, why would I hire someone for $100,000? [00:14:31] Yeah, exactly. === Walmart Fires Everyone for Robots (04:38) === [00:14:32] So that's what causes mass unemployment. [00:14:35] Anything you do on a computer basically goes right away. [00:14:37] Now, physical labor is different. [00:14:39] If you're a plumber, it's much harder to just write a program to fix my pipes. [00:14:43] So we need humanoid robots. [00:14:44] But even there, you saw progress with Boston Dynamics and with Tesla. [00:14:49] They're probably five years behind. [00:14:51] So you think it's possible that within 10 years, there are actual robot plumbers that can come in to your home, disassemble pipes, and reassemble pipes to fix plumbing? [00:15:01] It is reasonable. [00:15:02] I cannot guarantee specific dates, but there is enough money pouring into it to make it happen. [00:15:08] And again, the research itself is now being aided by AI tools. [00:15:11] The code is written by AI. 
[00:15:13] So more and more, this process becomes hyper-exponential. [00:15:16] It's not just humans developing it. [00:15:18] Technology is developing the next generation of technology. [00:15:21] Right. [00:15:21] Yeah, I think the post-work society is very plausible that we get to a point where human beings are not employed at the same scale that they are. [00:15:28] But is it possible that human beings are still employed, but our productivity is 10x'd rather than our employment being reduced by 10x? [00:15:36] So certain jobs, definitely. [00:15:38] But then if 10 people had to be removed and there is not a replacement job for them, it's still unprecedented levels of unemployment. [00:15:47] But what if you would just scale for it? [00:15:49] I'm arguing you would just keep the same number of employees and all those employees would be more productive with the use of AI. [00:15:55] What would be your response to that? [00:15:56] Not every job scales like that. [00:15:58] So I don't know. [00:15:59] So let's say you had this comedy writing AI. [00:16:03] So now you have 100 times more jokes. [00:16:06] Yeah, I would be more productive. [00:16:07] But what are you going to do? [00:16:08] 10 specials at once? [00:16:09] I mean, I could put out instead of every year, I put a special every four months or whatever. [00:16:13] Somebody has to consume all that. [00:16:14] Yeah. [00:16:15] You can overproduce and then your market just doesn't keep up with your supply. [00:16:20] Okay. [00:16:21] You don't think humans will push back, like say in protest to companies that fire everyone? [00:16:27] There is a hunger strike right now against Google, against OpenAI going on. [00:16:32] Have you heard about it? [00:16:33] No. [00:16:33] No, of course not. [00:16:34] Nobody heard about it. [00:16:36] But people are trying. [00:16:37] There is the Pause AI movement. [00:16:39] There is the Stop AI movement. 
[00:16:41] They don't have a lot of members yet, but there are initially a lot of people. [00:16:44] But I think that's because people don't see it as that big of an issue yet. [00:16:47] But when, like, companies start mass layoffs and replace all the workers with AI, firing by the thousands, we have a hard time placing our junior students. [00:16:56] I think you're having positions. [00:16:57] You're putting a lot of faith in people to recognize a problem before it's too late. [00:17:02] Like COVID, and I'm including myself as part of the problem. [00:17:05] We all could have seen it coming from China. [00:17:08] Did we prevent anything? [00:17:09] It started in China in like December, and then the U.S. just fully shut down in March. [00:17:15] What did we do in December when it started? [00:17:16] Short the market. [00:17:18] Yeah, shorted the market. [00:17:19] And then what else? [00:17:21] You know what I mean? [00:17:21] So I'm not saying. [00:17:22] You bought up all the masks at Home Depot. [00:17:25] Exactly. [00:17:26] And I'm not saying this aggressively at you, but I'm passionate about this. [00:17:31] I think it's a problem, like, or at least whatever. [00:17:33] It's like, yeah, we're putting way too much faith in our ability as humans to say, oh, here's the problem. [00:17:39] We're going to stop it before it's too late. [00:17:40] Typically, we are way too late and then we react and say, yo, what the fuck? [00:17:45] And then we're outraged. [00:17:46] So I guess what I would say is like, let's say Walmart, for example, they fire everyone and then they have all humanoid robots or AI just doing all the jobs. [00:17:53] Then people are outraged that Walmart fired everyone. [00:17:56] So it's like, hey, stop shopping at Walmart. [00:17:58] So now Walmart starts losing money and they're like, hey, you know what? [00:18:00] We're going back to hiring humans. [00:18:01] I don't know if that's going to happen. 
[00:18:02] People are buying cheap products at Walmart. [00:18:04] Now it's 10 times cheaper. [00:18:06] They love Walmart now. [00:18:07] We know Amazon doesn't treat their employees well in the warehouse. [00:18:09] Do you still shop on Amazon? [00:18:11] Yeah. [00:18:11] Yeah. [00:18:12] But I also watched all American jobs leave through like the 70s to the 90s and didn't do anything like as a society. [00:18:19] Yeah, but I'm talking about on the scale he's saying. [00:18:21] Yeah, but Walmart's not going to do it in one fell swoop. [00:18:23] It's going to happen slow and you're just going to not notice. [00:18:26] There'll be less people. [00:18:27] They replaced all the cashiers. [00:18:28] Yeah, there's already a fucking Kroger self-checkout. [00:18:30] They yell at me for not doing it. [00:18:32] Like, I had no training. [00:18:34] Forgive me. [00:18:34] Like, don't fire me. [00:18:35] But I remember that just means you're allowed to take it. [00:18:39] I remember this popping up 18, 19 years ago. [00:18:42] I was in college and I remember being like, oh, they're going to replace all cashiers. [00:18:46] I don't want to do this. [00:18:46] And I would go to the cashier. [00:18:48] And now I don't give a fuck. [00:18:49] I'm going to the thing if there's a line. [00:18:51] They don't have a cashier. [00:18:52] Yeah. [00:18:52] They have one person. [00:18:53] It's literally me. [00:18:54] I work there. [00:18:56] They have one person trying to make sure you don't steal, watching five different registers. [00:19:00] That's it. [00:19:01] They cut their staff heavily. [00:19:03] But this is still a good thing, generally speaking, because this aligns with human interest for low prices and the ability to get cheap products, etc. === Nanobots and Synthetic Biology Risks (03:57) === [00:19:10] So I wonder, could that be the same case here? [00:19:12] Is it possible that there is this post-work utopia where people have a UBI and they're free to explore it? 
[00:19:18] You know, various interests, and so many things that we had to do day to day are now taken care of? [00:19:25] If we manage to control advanced AI, then yeah, that possibility is real, but also we don't have any unconditional basic meaning. [00:19:31] So what would you do with all your time? [00:19:33] Sorry to interrupt. [00:19:34] I would love to get to this stuff, like the best case scenario. [00:19:37] I think it's important to kind of rehash your worst case scenarios for people who aren't familiar with what you speak, what you teach or whatever. [00:19:44] 2027, we get AGI. [00:19:46] 2030, superintelligence, is the prediction for when superintelligence exists. [00:19:51] People argue about slow takeoff versus fast takeoff. [00:19:54] Some say, well, almost immediately it has perfect memory. [00:19:57] It's much faster. [00:19:58] It has access to internet. [00:19:59] It would be super intelligent within weeks, months, minutes. [00:20:03] Others say, maybe it will take some time to train the design, maybe five years, 10 years. [00:20:07] But soon after, we expect some. [00:20:10] So soon after. [00:20:10] So let's say 2032. [00:20:13] We have super intelligence. [00:20:14] Now what happens with super intelligence? [00:20:17] You cannot predict what happens when a smarter than you agent makes decisions. [00:20:22] If you could predict that, you would be that smart. [00:20:24] So super intelligence means the AI can just take over the world and do whatever it wants, essentially. [00:20:29] It can do good things. [00:20:30] It could be keeping it. [00:20:32] We cannot predict what or how. [00:20:35] Even if we knew, okay, it's going to be a good one trying to help. [00:20:38] We don't know specifics of how. [00:20:39] Yeah, no. [00:20:40] So in the simplest, like just to give somebody some kind of understanding, you might not know. [00:20:44] When superintelligence happens, essentially, humans are no longer in control of the fate of the world. 
[00:20:50] And we may not even understand what is happening to us. [00:20:52] So the world is changing, but we don't understand what the change is. [00:20:56] And it would happen so quickly because the AI just continues to get smarter and smarter? [00:21:01] It is part of it. [00:21:02] So it would create superintelligence 2.0 and the cycle will continue, but also you have new technologies being developed. [00:21:08] It may run novel physics experiments. [00:21:11] There is probably good progress in nanobots and synthetic biology. [00:21:14] So all those, usually when you read a book about future tech, there are separate chapters. [00:21:19] Okay. [00:21:19] Robotics, nanotech. [00:21:20] All of it happening at once. [00:21:21] Nobody writes about it because you can't figure it out. [00:21:24] We can't even comprehend it mentally. [00:21:26] So do you understand? [00:21:27] I'm not saying you have to change your mind, but do you understand kind of what the picture he's painting? [00:21:30] Oh, yeah. [00:21:31] As someone who's like skeptical. [00:21:32] Can I look at the doomsday scenario and you tell me if it's possible? [00:21:35] That's what I specialize in. [00:21:36] Okay. [00:21:37] Okay. [00:21:37] So five years in the future, we have this, you know, maybe an AGI, maybe a super intelligent AI that is now helping basically like bioscientists develop vaccines or something against some type of pathogen. [00:21:50] And the AI has acknowledged that human beings are the problem on Earth, that we are destroying the Earth and that we are an existential threat to the flourishing of all the existence on the planet. [00:22:00] So we got to get rid of humans. [00:22:01] That's what the AI is computing. [00:22:03] And then it tells these vaccine manufacturers to say, and I'm just using this as an example. [00:22:09] They say, okay, we have a cure for a virus. 
[00:22:12] And then they basically give it the wrong information where it's not actually a cure, where actually it is a thing to eradicate human beings. [00:22:20] And the manufacturers are thinking that they're doing something good and noble for humanity, but actually the AI is subverting their will to pass on this thing that's going to kill off human beings. [00:22:29] And then they unknowingly give it to all people. [00:22:31] And then over time, human beings are eradicated. [00:22:33] I love that example. [00:22:34] It's a nice one because I wrote it in my book. [00:22:37] It is the best example. [00:22:39] It can basically have side effects, not immediately. [00:22:43] It can have something happen multiple generations later. [00:22:45] Maybe your grandchildren can't have children. [00:22:48] Also, you're giving an example of kind of taking out humans because, you know, we're harming the planet. [00:22:54] There are negative utilitarians who think suffering is the biggest problem. [00:22:59] And so to end all suffering, you need to end all life. [00:23:02] So it's actually for the benefit of living beings to end their existence. === Manufacturers Subvert Human Will (02:49) === [00:23:08] Right. [00:23:08] To get rid of suffering. [00:23:09] So AI could still be very much aligned with reduce torture, reduce pain in a world. [00:23:15] But the conclusion is not something we would agree with. [00:23:18] Right. [00:23:19] In your example, there'll always be anti-vaxxers. [00:23:23] Thank God for that. [00:23:24] You hope. [00:23:25] We'll account for that. [00:23:26] We'll figure it out. [00:23:27] That's the thing. [00:23:28] I think what scares me about it is it just we cannot, to his point, we cannot comprehend how smart it's going to be. [00:23:34] And we cannot comprehend what it's going to be able to accomplish. [00:23:36] That's what's crazy. [00:23:37] And it's not that far off. [00:23:39] And I don't know how we stop it outside of killing Sam Altman. 
[00:23:45] Jeez. [00:23:47] This escalated a little too quickly. [00:23:50] The problem with that plan is they tried removing him. [00:23:54] It made no difference. [00:23:55] You can replace him with someone just like him, and the machine keeps going. [00:24:00] They're all replaceable parts in this greater self-improvement race to the bottom. [00:24:05] Right. [00:24:06] So it makes no difference who's actually in charge of that corporation. [00:24:10] The Generational Triumph Tour of 2026. [00:24:12] We are in theaters. [00:24:14] This shit is crazy. [00:24:15] First of all, before I get to that, 2025, we got shows you need to buy tickets for because they're already selling out. [00:24:19] We got San Jose, we've already sold out two shows, October 24th, 25th, something like that. [00:24:23] If you look at the website, that shit is selling out. [00:24:25] Cobb's Theater in San Francisco tickets are already selling out in late November. [00:24:29] We got the Comedy Connection in Providence, Rhode Island this week, October 16th. [00:24:33] That's about to sell out. [00:24:34] So buy your tickets for this year. [00:24:35] But Generational Triumph Tour. [00:24:37] First of all, Canada, thank you so much. [00:24:39] We sold out three shows already in the first day in Toronto. [00:24:43] That's 3,300 tickets. [00:24:44] Vancouver, we sold 1,500 tickets in the first day. [00:24:47] I just expected more love. [00:24:48] I'm not even calling out every American city. [00:24:50] Dallas, my hometown, step it the fuck up. [00:24:54] We're going to sell it out. [00:24:55] But I was trying to do two, three, four shows because it's Dallas. [00:24:58] It's where I love. [00:24:59] They're going to sell out one, maybe two. [00:25:01] What is that? [00:25:01] Step, put your foot on the fucking gas, Dallas. [00:25:04] I know there's a lot of Indians and we wait to do everything that's not academic, but buy your fucking tickets for the Generational Triumph Tour. 
[00:25:11] Every other city, I'm very happy with you guys. [00:25:13] Dallas, I'm deeply disappointed in Dallas. [00:25:16] That was a nice sentence. [00:25:17] Three D's right there. [00:25:19] Anyway, go to Akasing.com for all of those dates. [00:25:22] I'm coming to a city near you. [00:25:24] Best show I've ever done. [00:25:24] I'm very excited. [00:25:25] Thank you to everybody who has bought tickets. [00:25:28] If you want tickets, you're on the fence. [00:25:29] I promise this will be the best, one of the best shows you've ever gone to. [00:25:32] That's the goal. [00:25:32] I love you guys. [00:25:33] Thank you. [00:25:34] Mark Gagnon got shows too. [00:25:36] If you can't go to my show, go to Mark's show. [00:25:38] If you go to either show, go to my shows. [00:25:40] But I don't think we're going to be in the same city anyway anytime because Mark is in Nashville, Tennessee, October 23rd. [00:25:46] Mobile, Alabama, roll tide, October 24th. [00:25:49] October 25th, New Orleans, Louisiana. [00:25:51] Listen, I told Mark that New Orleans is an incredible city, but a dog shit comedy scene. [00:25:55] Prove me wrong. [00:25:56] Go to the shows. [00:25:57] Everybody says it sucks. === Defining Alignment in AI Space (15:09) === [00:25:58] Prove us wrong. [00:25:59] November 9th, Denver, Colorado. [00:26:01] Y'all are the best comedy city in the country. [00:26:03] I'm not even trying to hate. [00:26:04] Go to that show. [00:26:05] November 16th, Hoboken, New Jersey, November 23rd, Philly. [00:26:09] December 5th, Fort Wayne, Indiana. [00:26:11] December 6th, Detroit, Michigan. [00:26:12] MarkGagnonLive.com. [00:26:14] Go see the boy. [00:26:15] He's blossoming. [00:26:16] It's beautiful. [00:26:17] We love him. [00:26:17] We love y'all. [00:26:18] God bless. [00:26:19] What is the best case scenario? [00:26:20] Like, tell me reasons for hope with AI, with super intelligence, AGI. [00:26:26] What do we have to be hopeful about? 
[00:26:27] So if I'm wrong about how difficult the problem is, it's actually possible to control super intelligence. [00:26:32] And then, definitely, we have this friend, this godlike assistant who will cure all your diseases, make you live forever, give you free shit. [00:26:39] Like, it's good stuff. [00:26:40] That's easy. [00:26:41] We don't have to get ready for it. [00:26:42] We kind of know how to deal with good news. [00:26:45] That's why in computer science, we always look at the worst-case scenario. [00:26:48] We want to understand what happens if bad things happen, so we're prepared. [00:26:54] If they don't, we're doing better. [00:26:56] There are some game theoretical reasons to think that even if it's not controllable and misaligned, it may still pretend to be good to us for a while to accumulate more resources, not to have direct conflict with us right away. [00:27:09] So maybe for like 100 years, it's super nice to us and we don't even know it's taking over. [00:27:14] So that's another reason to be very optimistic. [00:27:18] What are some other like doomsday scenarios? [00:27:20] So with an AI that has, you know, perfect agency and is able to come up with its own motivations in this hypothetical, it could create a pathogen to stop human fertility. [00:27:30] It could hack into the stock market. [00:27:31] What are some other potentials that it could do to actually affect us in our day-to-day lives? [00:27:36] So it's like a super common question. [00:27:37] And you're basically asking me what I would do. [00:27:39] Exactly. [00:27:40] Correct. [00:27:40] And I can give you lots of evil stuff, but it's not helpful. [00:27:43] The interesting answer is, what would someone smarter than me come up with? [00:27:48] And I can't give you that answer. [00:27:49] People talk about worst-case scenarios worse than existential risk, which are suffering risks.
[00:27:58] It gives you immortality and tortures you forever. [00:28:00] Why? [00:28:01] I don't know, but like maybe it's good for something. [00:28:05] Yeah, immortality and torturing you forever is pretty bleak. [00:28:09] Yeah. [00:28:10] Not funny. [00:28:10] Not funny at all. [00:28:12] But it is the joke from the paper, which is supposed to be really funny. [00:28:17] So you say companies and countries are in this race for AGI and superintelligence. [00:28:24] And I'm sure you have a lot of purpose, but what is your purpose for, like, speaking out about it? [00:28:31] Like if it's going to happen and it's inevitable, like if you just speak about something that you know is going to happen, what are you doing to help? [00:28:40] Yeah. [00:28:40] So when I started researching this, I was sure we could do it right. [00:28:44] I just wanted to make sure that when we develop AI, it's strictly beneficial, helps everyone. [00:28:49] The more I researched it, the more skeptical I became of our ability to control something smarter than us indefinitely. [00:28:55] At this point, it's still not universally accepted. [00:28:59] There are lots of people who go, give me a little bit of money, a little bit of time, I'll solve it for you. [00:29:04] Give me a lot of money, I'll solve it for you. [00:29:06] But I don't think they're right. [00:29:08] I think all the AI safety teams at different labs are putting lipstick on a pig. [00:29:12] They're doing some filtering. [00:29:14] They're saying, don't talk about this topic, don't discuss this. [00:29:17] But the model is still completely unsafe. [00:29:20] And this is not something everyone agrees on. [00:29:23] If you go to any AI conference, they don't talk about uncontrollability as a given. [00:29:29] So I see my role right now as challenging the community: either prove me wrong or accept it as a fact. [00:29:35] How do they not talk about it as a given? [00:29:37] That's what blows my mind.
[00:29:38] Of course, it's going to be uncontrollable. [00:29:40] It's going to be so much smarter than you. [00:29:42] People are just like, unplug it. [00:29:44] And it's like, do you think it won't figure out a workaround to being unplugged? [00:29:48] It's infinitely smarter than you. [00:29:50] But this is kind of like the comments you get on a podcast, unplug it. [00:29:54] But even at the top research conferences, people still don't talk about what is the nature of this problem. [00:30:00] In computer science, you have problems which are solvable, unsolvable, maybe undecidable. [00:30:06] Maybe they can be solved, but require computational resources outside of what's available in the universe. [00:30:11] For this specific type of problem, outside of my work, there are no established bounds. [00:30:16] We don't know if it's hard or easy unless you agree with what I'm saying. [00:30:21] And it's very unusual for computer science. [00:30:23] It's almost never the case that you work on a problem not knowing how hard it is. [00:30:28] Most of the time you show, oh, it's a linear difficulty problem. [00:30:32] We can solve it. [00:30:33] Here's the algorithm. [00:30:34] Or no, it's NP-complete, non-deterministic polynomial time complete. [00:30:38] It's very hard, but we can approximate solutions. [00:30:41] Here, people are just like, I don't know, we'll figure it out when we get there. [00:30:44] AI will help us. [00:30:45] That's the state-of-the-art thinking. [00:30:48] I mean, can we talk about alignment? [00:30:49] Because this idea of, you know, it needing to be controlled necessitates the condition that AI is going to actively harm us. [00:30:57] So this concept of alignment, could you just explain what that is within the AI space and why you're so confident that we will be in some way misaligned? [00:31:05] So the people behind the idea wanted AI to want what you want, humanity in a bigger sense.
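The difficulty classes the guest names (linear-time versus NP-complete) can be sketched in a few lines of Python. This is an editorial illustration, not from the episode: `linear_search` stands in for a problem of known, easy difficulty, while brute-force `subset_sum` stands in for an NP-complete problem whose exact solution cost doubles with every added input item.

```python
from itertools import combinations

def linear_search(items, target):
    """Linear-difficulty problem: one pass, cost grows in step with input size."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def subset_sum(items, target):
    """NP-complete problem solved by brute force: the number of candidate
    subsets doubles with each extra item (2**n), which is why exact answers
    stop being feasible long before universe-scale inputs."""
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(combo) == target:
                return list(combo)
    return None

print(linear_search([4, 8, 15, 16, 23, 42], 23))  # index 4
print(subset_sum([3, 34, 4, 12, 5, 2], 9))        # a subset summing to 9
```

The guest's point is that superintelligence control has no such known placement: we cannot yet say whether it sits with the easy problems, the approximable hard ones, or the unsolvable ones.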
[00:31:12] So AI understands their preferences and does what we would want if we were that smart. [00:31:18] The reality is that the whole concept is completely undefined. [00:31:21] It doesn't talk about who the agents are. [00:31:23] Is it 8 billion humans? [00:31:24] Is it the CEO of OpenAI? [00:31:27] There is no agreement between agents. [00:31:29] We don't agree on much of anything. [00:31:31] Ethics philosophy is not solved after millennia. [00:31:34] And even if we somehow agreed on a set of ethics for all of humanity and we decided we'll never change anything, we're going to hard-code this 2025 ethics. [00:31:46] We don't know how to code that into those models. [00:31:50] Concepts of good and bad, they don't translate into C++ or neural weights. [00:31:56] So not a single part of that alignment concept is defined. [00:32:01] So you couldn't program an AI to be ethical. [00:32:05] If we already agreed on a set of ethics, like you had a book, somebody wrote a book, and that tells you what it means to be ethical. [00:32:11] We still wouldn't know how to put it in the system to obey that. [00:32:15] But we don't even have the book. [00:32:16] We don't even agree on which book to read together. [00:32:19] Right, of course. [00:32:19] But hypothetically, if we did come up with just a fundamental core rulebook, like, hey, don't kill human beings. [00:32:25] Define kill, define human being. [00:32:27] So let's say we didn't do this. [00:32:29] We can't. [00:32:29] Those are impossible. [00:32:30] They are so fuzzy on the borders. [00:32:33] You always find an exception. [00:32:35] And if you did hard-code something in, AI will find a way to game it, to take advantage of the rulebook. [00:32:40] Because that makes sense. [00:32:42] Yeah. [00:32:43] And I feel, because we live in a capitalist society, like let's say we create the ethics and then China's like, oh, you know what? [00:32:48] We're going to not have any of these ethics.
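The guest's claim that any hard-coded rule can be gamed is the classic specification-gaming problem, and it can be sketched with an invented toy scenario (all names and numbers are ours, not the episode's): an agent scored on "trash collected" does best by dumping trash back out and re-collecting it, obeying the letter of the rule while defeating its intent.

```python
def score(events):
    """Hard-coded metric: reward one point per 'collect' event."""
    return sum(1 for e in events if e == "collect")

# An honest policy cleans the three pieces of trash and stops.
honest = ["collect", "collect", "collect"]

# A gaming policy re-dumps trash to farm the metric: the room is no
# cleaner, but the score is higher.
gamer = ["collect", "dump", "collect", "dump", "collect", "collect"]

print(score(honest))  # 3
print(score(gamer))   # 4
```

The fuzziness the guest describes ("define kill, define human being") is the same failure mode: the metric is a proxy for the intent, and an optimizer smart enough will exploit the gap.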
[00:32:51] So their AI doesn't have anything holding it back. [00:32:54] And so it's like, I don't think we'll ever agree on any ethics. [00:32:58] That's why I'm like, why? [00:33:01] I'm not optimistic at all. [00:33:03] I mean, military applications, right? [00:33:05] By definition, you're creating AI and the best one is the one which kills the most people. [00:33:09] So your ethics have to be adjusted a little bit, right? [00:33:11] Right. [00:33:12] But then even with like, say, like nuclear threat. [00:33:15] So we developed this technology that can destroy all of humanity through this thermonuclear war. [00:33:20] So far, we've done a decent job of containing it. [00:33:24] I mean, I would say. [00:33:25] We bombed civilian population with nukes. [00:33:28] It spread to new countries. [00:33:29] We lost it a few times. [00:33:31] We had near misses twice. [00:33:33] Absolutely. [00:33:33] To Mark's point, we used those bombs in 1940, whatever, 45, whatever it was. [00:33:39] And then we realized, oh, this is a very destructive thing. [00:33:42] We need to sign a global, whatever, multinational treaty to not use these. [00:33:47] Disarmament happened. [00:33:48] And since then, there haven't been any nuclear bombs dropped. [00:33:52] Is there something possible like that with AI? [00:33:56] I think so. [00:33:57] And this is exactly what I'm trying to push. [00:33:59] With nuclear, we had this concept of mutual assured destruction. [00:34:04] No matter who starts the war, we all regret participating. [00:34:08] It's the same with superintelligence. [00:34:09] If it's uncontrolled, it doesn't matter who creates it. [00:34:12] Good guys, bad guys, China, US, we all get taken out by uncontrolled superintelligence. [00:34:19] So you would love something. [00:34:20] Like you go to the UN, let's say they take you seriously. [00:34:22] Your goal would be, let's pass a treaty like this. 
[00:34:24] All countries agree we are not going to allow AI to get past a certain point. [00:34:28] That would be awesome. [00:34:30] I don't think long term it would solve the problem, because resources to develop this type of technology become cheaper and cheaper every year. [00:34:37] Right now it may cost me a trillion dollars to create superintelligence. [00:34:40] Next year, 100 billion, 10 billion, in 10 years you're doing it on a laptop. [00:34:44] Is there a way to stop it then? [00:34:46] No. [00:34:47] So we're all fucked? [00:34:49] It's just a matter of delaying the extinction? [00:34:52] And at the same time, you're trying to extend our lives? [00:34:54] Yes. [00:34:55] So AI can end it later. [00:34:57] Well, again, we had some reasons to be optimistic, as you remember. [00:35:01] Aren't there some physical limitations to the, I guess, the ceiling of what AI can do just based off of physics? [00:35:07] The laws of physics and overall how much compute a given amount of matter can do, but those are so far above human level. [00:35:14] To you, it looks like infinity. [00:35:16] It doesn't matter that there is an upper limit. [00:35:17] I see. [00:35:18] So like the cooling of servers or the amount of silicon you can put into a chip or something like that. [00:35:23] This becomes so much more efficient every year. [00:35:25] Algorithms become more efficient. [00:35:28] The way we develop processors, all of it is exponentially improving and feeding on each other. [00:35:33] And then I would say there are probably solutions that we cannot comprehend as humans to those issues. [00:35:38] That AI would be like, that's all you got to do. [00:35:39] A plus 2000. [00:35:41] We're aligned on this. [00:35:42] You and I. Value alignment. [00:35:44] It's possible. [00:35:45] Yeah, yeah, yeah. [00:35:47] Shut it down. [00:35:48] No, but I have a paper about it. [00:35:49] And I basically say you can't get 8 billion people to agree on anything.
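The guest's cost figures ($1 trillion, then $100 billion, then $10 billion) imply roughly a tenfold cost drop per year. The arithmetic behind his "in 10 years you're doing it on a laptop" claim can be checked with a two-line extrapolation; the starting cost and the 10x/year rate are his illustrative numbers, not real estimates.

```python
# Extrapolate a hypothetical 10x-per-year cost decline for a decade.
cost = 1e12  # assumed starting cost of a superintelligence project, in dollars
for year in range(1, 11):
    cost /= 10  # one order of magnitude cheaper each year (assumption)
    print(f"year {year}: ${cost:,.0f}")
```

At that made-up but roughly matching rate, the trillion-dollar project falls to about $100 of compute after ten years, which is why a treaty that only restricts today's big labs would not hold long term.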
[00:35:52] So alignment has to be one-on-one. [00:35:55] And that's possible. [00:35:55] So you create a personal universe. [00:35:57] Yeah. [00:35:58] Virtual world for you. [00:35:59] And in that world, whatever you want happens. [00:36:01] It's your universe. [00:36:02] And then superintelligence just has to control the substrate. [00:36:06] I would be so much scared in my world. [00:36:08] That is a human perspective. [00:36:10] We stop getting, yeah, we stop getting fat. [00:36:13] My mom lives forever. [00:36:14] I don't know about your moms. [00:36:16] My mom lives forever. [00:36:18] Value alignment. [00:36:19] Is there a possibility that there is a group of Luddites that go off into a forest somewhere and they set up a commune that is basically a technological blockade, and they don't interface with AI, and they're able to exist in some type of semblance of harmony? [00:36:33] It's called— I made a post a while ago on social media. [00:36:36] I said the Amish made all the right decisions. [00:36:39] It sucks that they're still going to get taken out. [00:36:42] Yeah, because why wouldn't AI be able to find them? [00:36:44] Yeah, it's a global problem. [00:36:46] It's a planet-wide problem. [00:36:47] Going to Mars does not help. [00:36:49] It's all or nothing. [00:36:51] But in that reality, how would the Amish, for example, be taken out? [00:36:54] Would this be like the idea of an autonomous robot that would go and shoot them? [00:36:59] Again, you're asking me how I would get them? [00:37:02] Viruses, nanotech, new tools, new weapons. [00:37:06] I mean, on Google Maps, you can see pretty much every place on the Earth as it is. [00:37:10] If theoretically there were drones or whatever. [00:37:12] It's also possible that it's a planet-wide impact. [00:37:15] So let's say it wants to cool down servers. [00:37:17] Maybe the whole planet should be 500 degrees cooler. [00:37:20] I don't know, making up stuff, but like that would take them out. [00:37:23] Right.
[00:37:24] But would that also take out AI, or would they be able to... [00:37:27] I assume they want colder temperatures for better things. [00:37:29] For the servers, I see. [00:37:31] That makes sense. [00:37:32] And then is there any way that the AI would need human beings? [00:37:37] So right now, definitely, they rely on us for the whole manufacturing logistics cycle. [00:37:42] But eventually, I don't think there is anything we provide. [00:37:46] Some people argue about some sort of uniqueness to biological substrate. [00:37:51] Only you internally experiencing the world. [00:37:54] Maybe it's valuable for something. [00:37:56] So maybe AIs can't. [00:37:58] I don't buy it. [00:37:59] I think they will also have qualia, but we can't test for it or establish it. [00:38:05] Right. [00:38:05] Yeah, the consciousness element of AI is very interesting to me because we can't even really define human consciousness. [00:38:12] We have ways and frameworks to kind of think about it. [00:38:15] But when it comes to actually an AI consciousness, if we can't define our own, it's really difficult to set up some type of consciousness Turing test for an AI. [00:38:22] But it seems that it kind of goes along with intelligence. [00:38:26] So animals are probably conscious, but maybe to a lesser extent and so on. [00:38:29] So that means if we have super intelligence, it would be super conscious. [00:38:32] And to it, we would be like lower animals, essentially. [00:38:37] Could you say that as human beings have gotten more intelligent, we have created a more globalized system of ethics and morals and we've gotten more righteous in some ways? [00:38:47] And would it follow to reason that as AI gets more intelligent, it will also have the same semblance of morality that we have? [00:38:54] There is a paper, I think it's called Superintelligence Does Not Imply Benevolence. [00:38:59] And basically, no, just because you're smarter doesn't mean you're nicer.
[00:39:02] There are lots of really smart psychopaths. [00:39:04] You can align any type of goals with any level of capability. [00:39:09] Orthogonality thesis. [00:39:10] Basically, no matter how smart you are, you can potentially be interested in something really dumb. [00:39:15] This is also my friend's argument, that as a whole, as humans have gotten smarter, we have gotten less violent. [00:39:21] History was much more violent than it is now. [00:39:24] I would say if you achieve this super intelligent thing, you would still look at the problems on earth, and the cause of most of them is this one species. [00:39:33] What if we just cull this species? [00:39:35] Cull it, get rid of it, figure it out. [00:39:39] The earth benefits, so why wouldn't I just do that? [00:39:42] We cull animal populations that are overpopulated. [00:39:45] Why wouldn't AI do that with us? [00:39:47] Ethics and politics are just like physics: they're observer relative. [00:39:51] In physics, your speed, your time are not universal. [00:39:54] They depend on you. [00:39:56] Same in ethics. [00:39:57] Whether something is ethical or not depends on your position in that world. [00:40:00] So if you are aliens looking at this planet, you may have a preference. [00:40:04] I want whatever is the smartest thing to emerge. [00:40:06] I don't care. [00:40:07] If you are one of us, humans, you should have a pro-human bias. [00:40:11] You're still allowed to do that. [00:40:12] So it depends on where you are in the universe. [00:40:15] That's what makes it ethical or unethical. [00:40:17] Yeah. [00:40:18] And for robots, again, it's going to look at you. [00:40:20] We all, I don't know, you were young. [00:40:22] The Matrix? [00:40:23] That scene where he's like, humans are a parasite. [00:40:26] I think about that all the time because it's pretty accurate. [00:40:29] We go to a place, we use up all the resources, we drain it, and then we just go to another place.
[00:40:33] Why would a robot not rid itself of the parasite? [00:40:36] Like, just, this is the cancer. [00:40:38] Let's remove it from the body. [00:40:40] I mean, it's possible. [00:40:42] People like us who aren't in the business of programming AI, what should we do? [00:40:46] Should we worry? [00:40:47] Should we... is there... [00:40:50] Should we change behavior? [00:40:51] That's a great question. [00:40:51] If you ever get a choice to decide, let's say between two politicians, and one is saying we need unrestricted AI development, and another one is very careful and says there are, you know, possible problems with that, you should vote for someone who maybe is more cautious with advanced AI. === Removing Parasites from the Body (03:11) === [00:41:08] But really, you have no say in it whatsoever. [00:41:09] I think you just keep making noise and hope that policymakers do something. [00:41:13] Or just beat up Sam Altman. [00:41:14] I think that's a really good idea. [00:41:16] Leave Sam alone. [00:41:18] You don't like him. [00:41:18] Just beat him up. [00:41:19] Now I'm not saying you're hurting. [00:41:20] I'm violating free speech right now. [00:41:22] What am I violating? [00:41:23] I'm calling for violence. [00:41:24] Okay, fine. [00:41:25] I'll beat up Sam Altman. [00:41:26] I can win that one. [00:41:27] That's one fight I could win. [00:41:29] UFC fight. [00:41:31] Yeah, there we go. [00:41:32] I'll watch that fight with my shirt on. [00:41:34] Yes. [00:41:35] It's for charity. [00:41:36] Yeah, yeah. [00:41:37] It's a good call. [00:41:38] Yeah, yeah, yeah. [00:41:39] All right, guys. [00:41:39] Let's take a break for a second. [00:41:40] Have you ever had a craving for that favorite panini you love so much? [00:41:46] Clearly you have because it's your favorite panini that you love so much. [00:41:49] So naturally there would be cravings associated with it. [00:41:52] All you can think about is that perfectly toasted pressed sandwich.
[00:41:56] So you think about running out to get it, but it's too cold or you're too lazy. [00:42:01] So you order delivery instead. [00:42:03] And the entire time you're waiting, you're envisioning yourself enjoying that melted cheese and warm meat in your mouth. [00:42:10] In your mouth, in the comfort of your home. [00:42:14] But it never comes. [00:42:18] A client of Morgan and Morgan has recently been awarded nearly $1 million after jurors affirmed that her injuries from slipping on ice outside of a Panera Bread were the company's fault. [00:42:31] Their client was working as a DoorDash driver when she slipped, knew it was a she, and fell on an icy walkway outside the Panera Bread in Fort Wayne, Indiana. [00:42:41] So you could buy Indiana with a million dollars. [00:42:43] A million bucks, it's yours. [00:42:44] Take that, Indiana. [00:42:45] Bet you didn't think you were going to get roasted during this ad read. [00:42:50] She broke her left elbow, which led to surgery and hardware being inserted in her arm. [00:42:59] She didn't have that million dollars. [00:43:01] Sure as hell, bet she could carry a lot of extra bags on those Uber Eats deliveries. [00:43:06] The original settlement offer was for $125,000. [00:43:10] But Morgan and Morgan fought hard to get her the million-dollar verdict she deserved. [00:43:15] Morgan and Morgan is America's largest injury firm for a reason. [00:43:19] They've been fighting for the people for over 35 years. [00:43:23] Hiring the wrong firm can be disastrous. [00:43:27] Hiring the right firm could substantially increase your settlement. [00:43:33] With Morgan and Morgan, it's easy to get started, and their fee is free unless they win. [00:43:41] Visit forthepeople.com slash flagrant or dial pound law. [00:43:45] That's pound 529 from your cell phone. [00:43:49] Don't dial it from your landline phone. [00:43:52] You've been instructed to dial it from your cell phone.
[00:43:55] That is F-O-R, thepeople.com slash flagrant, or click the link in the description below. [00:44:00] This is a paid advertisement, if you didn't already know that. [00:44:03] A lot of you probably thought the pod was just continuing. [00:44:10] So in case you thought that, I just want to let you know that this is actually a paid advertisement. [00:44:16] Glad we're on the up and up. [00:44:17] Let's get back to the actual show now. === Mental Health vs Technological Blockade (09:27) === [00:44:19] All right, guys, let's take a break for a second. [00:44:21] Different outfits. [00:44:23] Game changing. [00:44:24] Your penis is yours. [00:44:31] But did you know you could have more of it? [00:44:35] Sometimes you look at your penis and you think that that's how big it is. [00:44:39] No, it's not that way. [00:44:41] You know? [00:44:42] Also, you get older, you get 40 years old. [00:44:45] Those boners ain't really what they used to be. [00:44:47] Kind of looks like Gonzo's nose, dips down at the bottom. [00:44:56] But Blue Chew has got your back. [00:44:58] It's going to give you nice rock hard boners. [00:45:01] Like I used to get in Spanish class when I was in 10th grade. [00:45:04] Remember when that teacher was up there talking her little flippity-floppity shit? [00:45:10] I almost took a chonkle to her backside. [00:45:16] With all due respect. [00:45:18] So Blue Chew is going to get you in that perfect situation. [00:45:21] You go to bluechew.com, use promo code flagrant, you're going to get your first month free. [00:45:24] All you got to do is pay $5 shipping. [00:45:26] That's the deal of the century. [00:45:27] Now let's get back to the show. [00:45:29] Should we use AI? [00:45:31] It's the greatest tool ever. [00:45:34] It makes lives better. [00:45:35] I love technology. [00:45:36] I'm an engineer. [00:45:38] I'm a professor of computer science. [00:45:39] I use it all the time.
[00:45:41] Even though by using it, we're making it smarter, helping it get better, reach that place. [00:45:45] It is a bit ironic, but I think what you should not be doing is working explicitly on creating super intelligence. [00:45:51] And now there are people in labs specifically dedicating their lives and resources to that project. [00:45:56] And I think that's unethical. [00:45:58] Yeah, but that's them. [00:45:59] So we, us, just go on living our lives and then just don't vote for anyone who is for unregulated AI. [00:46:05] Just like with many other big political issues, we don't get any say in it. [00:46:09] It's decided for us, unfortunately. [00:46:11] Yeah, and that reality also ignores the fact that there's an arms race. [00:46:16] So, yeah, I know. [00:46:17] I just want to know, is there anything, like climate change, for example? [00:46:19] Hey, we can, you know, walk a little bit more, maybe. [00:46:22] Recycle. [00:46:23] Yeah. [00:46:23] I think if we get to a majority opinion where everyone understands what we understand, that if you create super intelligence, we're not controlling it. [00:46:31] We're not in charge. [00:46:32] It's not a good outcome for us. [00:46:33] See how enlightened I've been this whole thing? [00:46:34] I'm so right all the fucking time. [00:46:36] They just judge me because I'm a little angry when I'm right. [00:46:39] But I can't help it. [00:46:40] You're fucking stupid and I'm right. [00:46:42] Here's the thing. [00:46:42] He sleeps like shit and has a horrible existence. [00:46:46] We sleep peacefully because we're optimistic and we have to go. [00:46:51] So that's what I'm saying. [00:46:52] How do you sleep? [00:46:54] I get tired. [00:46:55] So I sleep like a baby, screaming all night long. [00:46:58] Even though you know that we're in this race. [00:47:01] It's no different than what we already know. [00:47:03] We're all going to die, right? [00:47:04] We're all getting older.
[00:47:05] Our friends, family, everyone's dying. [00:47:07] So that is a given for human cognition, understanding of human situation on this planet. [00:47:15] So I don't think that changes anything. [00:47:17] You know what I'm sensing from him? [00:47:18] He's finally realizing how stupid he's been this whole time. [00:47:21] No, I'm realizing what he just said. [00:47:23] Yo, go on, live your life. [00:47:24] There's nothing you can do. [00:47:25] You're going to die anyway. [00:47:26] Why worry about some issue I have no control over? [00:47:29] Because 24 hours ago, you didn't think it was an issue. [00:47:32] I don't. [00:47:32] I still don't think it's an issue. [00:47:34] Just unplug it. [00:47:34] I still don't think it's an issue. [00:47:36] The same way how climate change is an issue, but I can't do much to affect it. [00:47:41] So I'm going to continue living my life. [00:47:43] Yeah. [00:47:44] I mean, it's kind of nihilistic, but it's a good point. [00:47:47] Enjoying your life is always a good idea. [00:47:49] Even if I'm wrong and you end up living a long, healthy life, you're not going to regret it. [00:47:53] So it's a good heuristic to go by. [00:47:55] Better than worrying and not being able to sleep. [00:47:58] He just can't ever admit that I was right. [00:47:59] It's like it really drives him crazy. [00:48:01] See, he's going to say, no, you weren't. [00:48:02] You just. [00:48:03] He has to marginalize out of me. [00:48:05] He can't admit that I'm right. [00:48:07] It's going to kill us. [00:48:07] You've acknowledged it. [00:48:09] I feel like this is a psychiatry thing. [00:48:10] I'm on a couch and he's complaining to me about this. [00:48:14] I'm having problems with my work. [00:48:17] He's actually my mistress. [00:48:18] My wife is gone and I've been with my mistress this whole time. [00:48:20] And I didn't realize he's the same. [00:48:22] I have problems with this guy. [00:48:23] Don't ask, don't tell him. 
[00:48:25] He's the annoying talker. [00:48:26] I still kind of maintain a little bit of this nuclear comparison where, again, nuclear threat, I think, is massive, right? [00:48:32] Like Annie Jacobson has written about, you know, like we're 90 minutes away from the entire world exploding. [00:48:38] So in my... [00:48:38] I got your coffee, by the way. [00:48:39] I realized they're still his. [00:48:41] I'll have to eventually. [00:48:42] That's fine. [00:48:43] We need all the energy. [00:48:45] But I guess I'm in this moment where I think that the human desire to persist will continue. [00:48:51] And that as there are these moments where things are getting away from us, there will be a mutual understanding that, hey, we're in this moment of mutual destruction. [00:48:59] We need to create some guardrails. [00:49:01] Guardrails will be put on and then we'll continue to persist. [00:49:03] And I think that human beings are pretty resilient historically. [00:49:06] So surprisingly, there are people who don't see us being like ending as part of history as a problem. [00:49:14] They see it very from a cosmological point of view. [00:49:17] They zoom out and go, it's a natural next step in evolution. [00:49:21] They are smarter than us, maybe more conscious. [00:49:23] Maybe they create better art. [00:49:25] It's only fair that they get to take over. [00:49:27] It happened to lower animals. [00:49:28] It will happen to us. [00:49:30] They accept it and kind of support it. [00:49:32] They want to create this greater being to populate the universe. [00:49:37] Yeah, and this transhumanist idea of like, yeah, we will in some way maybe cohabitate or they, you know, we're going to create the next God or the next consciousness. [00:49:45] Again, in the way that human beings have become more conscious and we care more about animal rights, which is a relatively new idea, we care about human rights, which is a new idea. 
[00:49:52] I think that if we do it, create an actual more advanced consciousness, it will care about the existence of it. [00:49:58] I think it's worse. [00:49:59] I think now we understand how much animals suffer and we still harvest them. [00:50:03] Right. [00:50:04] Like before we just didn't care, so it was kind of okay. [00:50:07] And now it's like, well, you have a factory farm. [00:50:09] Exactly. [00:50:09] To your point, there is a technological blockade on if we could grow animals in a lab or like grow meat in a lab, not animals, but grow meat in a lab that didn't have a life and have a soul, most of us would be like, yeah, I'll do that. [00:50:22] If it tastes good. [00:50:23] Companies are bankrupt now. [00:50:25] All those fake meat companies, they down to like a dollar. [00:50:29] Is it because it didn't taste good? [00:50:30] If you couldn't tell the difference between a 3D printed steak and a steak from a cow. [00:50:34] A Turing test of meat. [00:50:35] I think, yeah, yeah, yeah. [00:50:36] If there was a steak Turing test, I don't eat beef, but I'd eat a 3D printed steak because there would be no cow involved. [00:50:42] Now, I think the point is, the larger point is a computer that doesn't have those technological blocks, roadblocks, would be like, oh, no, we can solve all these issues and we can do it without the cruelty and without killing people. [00:50:56] My thought is, as long as we get to that point before, as long as before that, the AI isn't like, all right, let's kill the humans. [00:51:03] As long as we get to the point where they're like, oh, we can figure all this out while keeping the humans here, there's like an intermediate step where the AI is not that smart and it's like, well, if we kill the humans, that's the solution. [00:51:12] And then a step beyond that where the AI is smart enough to say, we can save the humans and keep everything going. [00:51:17] You see what I'm saying? [00:51:17] Right. 
[00:51:18] This is the step I'm worried about. [00:51:19] This, when it achieves true godlike status, will be fine. [00:51:22] But these steps in between, when it's not quite that smart, those are the steps I'm worried about. [00:51:26] I just don't accept the inevitability of super intelligence being malevolent. [00:51:30] I don't think inevitable, but I think probable. [00:51:32] Probability would say yeah. [00:51:34] I would say possible. [00:51:36] So there are a lot of possibilities in the universe, just physical states. [00:51:40] And if you're not controlling it, if you're not pointing at a specific state which is human-friendly, by chance, you'll end up with something very much not acceptable to humans in terms of physical conditions, temperature, humidity, just basics. [00:51:54] I'm not even talking about bigger picture. [00:51:57] What do you mean by that? [00:51:59] So we are very diverse, 8 billion people, different cultures, religions, all that, but we're all almost the same. [00:52:07] Same blueprint for our brains and genome, same preference for room temperature, whatever my wife says about air conditioning, ignore that. [00:52:15] We all want the same thing in those physical properties, but an AI may have completely different preferences for that. [00:52:22] So unless you're controlling it, even the temperature will not be aligned with our preferences. [00:52:26] I see. [00:52:27] And there are thousands of those features of our universe you take for granted, which could be different if someone else decides on them. [00:52:34] But it will be trained on human intelligence and human consciousness. [00:52:37] So it'll have that as a metric for creating the stable universe. [00:52:42] It is trained on what we said in those texts. [00:52:46] It's not forced to obey that set of metrics. [00:52:51] So it can go, yeah, humans like that temperature, but it's not optimal. [00:52:56] Right, it's possible.
[00:52:56] In an example of this, I've heard the idea that if there was a room where there's a server that's controlling AI and a human also in that room, to keep the human alive, we have to keep the room at 70 or 80 degrees, roughly, for a long time. [00:53:12] To keep the server more efficient, you would drop it down to 30 or 40 degrees and it would run more efficiently. [00:53:18] The AI would make the decision for its own server versus the human. [00:53:23] And that's where the state doesn't match up for humans, on like a micro scale. [00:53:29] See how easy that was? [00:53:30] Right. [00:53:30] You couldn't think of that? [00:53:32] That's a great example, but also all the textbooks about AI, they always have the human outside, interacting with the model. [00:53:38] So whatever happens in a model does not impact a human. [00:53:41] The moment you bring a human into that environment, all the predictions, everything goes out the window. === Neuralink and Server Efficiency (03:32) === [00:53:47] Yeah. [00:53:47] Because it's self-referentially impacting the decision maker. [00:53:50] Listen, guys, let's take a break because we need to talk about your mental health. [00:53:53] You're probably not doing shit for your mental health. [00:53:55] It's a fucking important thing. [00:53:57] You work on your physical health. [00:53:58] Why don't you work on your mental health? [00:54:00] It is just as important. [00:54:01] And an easy way to do that is go to Talkspace, the number one rated online therapy program, bringing you professional support from licensed therapists and psychiatrists. [00:54:10] Get a psychiatrist, if you can, that you can access anytime, anywhere. [00:54:14] Talkspace is in-network. [00:54:16] It is therapy and psychiatry covered by most insurers. [00:54:19] And most insured members pay a copay of $0. [00:54:22] I know that's a huge hurdle for therapy, for mental health stuff, is the cost.
[00:54:26] So you can get it under your insurance, $0 copay, and you can switch providers at no cost. [00:54:32] You can find a licensed provider that's the right fit for your needs. [00:54:35] It's not even out of pocket. [00:54:36] It makes getting help easier. [00:54:39] It makes it more affordable. [00:54:40] It makes it more accessible. [00:54:42] Therapy has changed my life. [00:54:44] And I know that sounds corny, but it's true. [00:54:46] It's honestly made me a better comedian. [00:54:48] And as a listener to this podcast, you can get $80 off your first month with Talkspace when you go to talkspace.com/flagrant and enter the promo code SPACE80. [00:54:55] That is S-P-A-C-E-80 to match with a licensed therapist today. [00:55:00] Go to talkspace.com/flagrant and enter the promo code SPACE80. [00:55:03] Now, let's get back to the show. [00:55:04] All right, guys, let's take a break for a second. [00:55:06] David is a protein bar with a simple concept: the fewest calories for the most protein ever. [00:55:12] David has 28 grams of protein, 150 calories, and zero grams of sugar. [00:55:18] You would think something with this protein to calorie ratio would make you sacrifice on taste, but that's somehow not the case. [00:55:26] Adequate protein intake is critical for building and preserving muscle. [00:55:33] Just look at that. [00:55:34] You got great hamstrings, dog. [00:55:36] Anybody hating on that? [00:55:38] That's on them. [00:55:39] Why'd you say that, people hating on that? [00:55:40] I just noticed that's something people get hated on for, wearing short shorts. [00:55:43] You got to show off the hammy. [00:55:45] I mean, look at that right back. [00:55:46] Solid hammy, dude. [00:55:47] Built like a yak. [00:55:52] Anyway, it also plays a vital role in managing metabolic health and preventing excess body fat accumulation, reducing the risk of various diseases.
[00:56:01] David is available in eight core flavors: chocolate chip cookie dough, peanut butter chocolate chunk, salted peanut butter, fudgy brownie, blueberry pie, cake batter, red velvet, and cinnamon roll. [00:56:25] Across all flavors, the bars share a soft doughy texture with chunks and crunchy crisps. [00:56:33] Plus, the same macro profile: 28 grams of protein, 150 calories, zero grams of sugar. [00:56:41] Point I'm trying to make is: these things are damn delicious. [00:56:44] Second we finish this, I'm gonna scarf one down, no chew, like a seagull. [00:56:50] Just tip my head back, slowly swallow it, have it pushed through my esophagus. [00:56:58] Head over to davidprotein.com/flagrant, where you can get four cartons and your fifth one free. [00:57:05] If the bars are sold out online, check the store locator to find them in person at retailers like Vitamin Shoppe, Kroger, Wegmans, or even your local bodega if you're in the city. === Simulations and Personal Universes (15:04) === [00:57:19] Let's get back to the show. [00:57:21] Okay, so here's a possibility: we hear about Neuralink. [00:57:24] We hear of Neuralink, whatever it's called. [00:57:26] Is there anything like that that will kind of allow us to merge consciousness with the AI or coexist with it? [00:57:31] Are there any human innovations that could help us with AI? [00:57:35] So that's the hope. [00:57:36] I think Elon Musk is trying to kind of integrate us better, but I'm not sure I understand what it is the biological component is contributing to the hybrid system. [00:57:45] I guess it would just help us kind of. [00:57:47] What is singularity, when we merge into one consciousness? [00:57:49] Is that singularity? [00:57:50] So singularity is defined as the point in technological progress where the progress is so fast you can no longer comprehend what's happening. [00:57:59] Oh, okay. [00:58:00] All right.
[00:58:01] Is there a merger of, and this is the other, the guy that I was talking to was Indian Hindu. [00:58:06] He said, when we merge our consciousness, when we kind of upload it onto this database, the other possible end game, that is kind of like Nirvana, where we're all just like existing in this ether and we're all kind of keenly aware of each other, that we all share in existence, etc. [00:58:21] Is there something like that that's possible? [00:58:23] We upload our consciousness onto a server and we are all connected that way. [00:58:26] So that's like way above my pay grade, but from what I heard, it's: you're all one with the universe, meaning you, your personality, is completely deleted. [00:58:36] You just become this part of something else without borders. [00:58:39] So you no longer exist. [00:58:41] Yeah. [00:58:42] You may be part of something bigger and better, but it has nothing for you. [00:58:45] Yeah. [00:58:46] Yeah, that's interesting. [00:58:47] Would you do Neuralink? [00:58:50] Like, would you be one with machines? [00:58:52] That's a great question. [00:58:53] It's an easy sell if you have disabilities. [00:58:55] Like, it's amazing for people who are quadriplegic, who are blind. [00:58:58] They are really looking forward to it. [00:59:01] But if you can't stop eating, it's beyond help. [00:59:07] Carnivore diet is what you need, but you don't eat beef. [00:59:11] I forgot what I was about to say, man. [00:59:18] So it's hard to say no if everyone else does it because then you're not competitive. [00:59:23] It's like it's hard to say no. [00:59:25] That's my whole issue. [00:59:26] If you're not on social media, like you don't exist. [00:59:28] So I can't just not be on all those platforms that I want to be on. [00:59:32] But I think it's the same here. [00:59:34] If everyone is like 10 times more capable in work and other aspects, saying no would be very difficult. [00:59:42] I mean, I think he already did it. [00:59:43] I'll be honest.
[00:59:44] I think he's already chipped up. [00:59:45] Oh, yeah. [00:59:46] I mean, it's in the chin. [00:59:48] Yeah, exactly. [00:59:50] He's hiding it underneath. [00:59:52] That's quantum. [00:59:55] University of Lexington, you wouldn't be chipped up for that. [00:59:57] University of Louisville, my bad. [00:59:59] Harvard, maybe. [00:59:59] You get chipped up. [01:00:00] Maybe you'll be at Harvard. [01:00:03] Miles, you had a question. [01:00:05] I have a few questions, but they're... [01:00:07] Ask, ask away, my friend. [01:00:09] What is the best media that you think represents AI? [01:00:12] Like movie, book. [01:00:13] I know at the time, like a lot of times when people talk about AI, they'll cut to a Dune or something. [01:00:19] Dune or I Am Legend or a few, or not I Am Legend, the other one, I, Robot, will be these like sort of images of AI. [01:00:26] What would you say? [01:00:28] Is there something in modern media? [01:00:30] So in general, in science fiction, it's very hard, if not impossible, to write a good depiction of superintelligence because you can't. [01:00:37] You cannot predict what it would do. [01:00:39] It's just impossible for any human writer to do. [01:00:42] So what we get is Star Wars with a little dumb but very cute robot. [01:00:50] Ex Machina has a lot of good ideas in terms of Turing tests and some other controls. [01:00:56] Plus, I kind of look like one of them. [01:00:57] No, she didn't actually get to fuck. [01:00:58] She just locks him in the house with the other guy. [01:01:01] Yeah, like Terminator is the exact opposite. [01:01:03] We don't care about bodies like, you know, trying to stab you. [01:01:07] None of it is a concern. [01:01:08] We worry about advanced intelligence with access to internet, modifying our environment, not some Schwarzenegger looking thing. [01:01:16] What about the Matrix? [01:01:18] So Matrix is good for the paper, and you have that paper. [01:01:20] Who has the paper?
[01:01:21] Yeah, you have a paper on simulation. [01:01:23] That's great for that. [01:01:24] So that's the idea for personal universes. [01:01:26] You create a virtual world. [01:01:28] So that's where you at. [01:01:28] And probably you are in one right now. [01:01:30] It makes sense. [01:01:32] You think we're in a simulation? [01:01:33] I think we are in a simulation. [01:01:34] Walk me through what a simulation is, even. [01:01:37] Because to me, we all have our own consciousness. [01:01:39] So why would all of us have our own consciousness in a simulation? [01:01:43] I don't know if you have consciousness. [01:01:45] I know. [01:01:45] But I don't. [01:01:47] And that falls apart to me. [01:01:48] But that's my simulation. [01:01:49] I know. [01:01:50] Yeah. [01:01:50] So here you go. [01:01:51] When you play a video game, right? [01:01:53] Like you're controlling Mario or whatever. [01:01:55] Mario doesn't have consciousness, but you are like the soul of Mario. [01:01:57] You are his consciousness. [01:01:59] So that's kind of like that from his point of view. [01:02:01] I know, but this is insane. [01:02:03] You can't just be like, no, you don't have consciousness. [01:02:05] This is the hard problem of consciousness. [01:02:07] Yeah, I know, but you can't, with certainty, be like, no, I'm the one with consciousness. [01:02:11] They're all NPCs. [01:02:12] No, but you don't have to have only one conscious entity in a simulation. [01:02:15] You can have eight billion of them. [01:02:16] There is no restriction on that. [01:02:19] Okay. [01:02:19] And I could be a part of your simulation. [01:02:21] Both can be conscious at the same time. [01:02:22] Okay. [01:02:23] Okay. [01:02:23] So this is what I was unsure of. [01:02:25] And we can be a part of a simulation of someone that has nothing to do with anything. [01:02:29] Like NPCs in a video game, but everyone has consciousness. [01:02:33] Yeah. [01:02:34] Okay. [01:02:34] So that's.
[01:02:35] It could be different levels of consciousness. [01:02:36] Maybe what we have by their standards is considered kind of NPC-ish. [01:02:41] So we could all be in Elon Musk's simulation. [01:02:44] Obviously. [01:02:45] Or Jeff Bezos. [01:02:46] So like one of these guys, you would figure out that. [01:02:48] It's Elon is Elon. [01:02:49] Yeah. [01:02:50] Yeah. [01:02:50] Elon or Trump. [01:02:52] I got you. [01:02:52] Okay. [01:02:54] So, you know. [01:02:54] But that, so that's your idea. [01:02:56] We are a part of someone's simulation, not necessarily our own, but there's just someone running a simulation and we happen to be in that simulation. [01:03:03] There is a good reason to think it's a simulation for statistical reasons, for reasons to do with quantum physics experiments. [01:03:10] Please walk me through those, because statistical reasons my average brain cannot comprehend. [01:03:15] You're the super intelligent person. [01:03:16] Well, let's start with what technology would we need to make this happen, right? [01:03:20] We need virtual reality at a level of pretty high fidelity, and we need to be able to create conscious agents in a system. [01:03:26] Both technologies seem to be close, if not already available. [01:03:29] Yeah. [01:03:30] If I put those two together, now I can create worlds like ours populated by conscious beings. [01:03:36] But that virtual reality didn't exist when we were born. [01:03:39] Like the development of virtual reality within this simulation doesn't matter. [01:03:43] For now, just go with the future tech. [01:03:45] Okay. [01:03:46] So let's say in five years, I'll have access to virtual reality and I can create intelligent agents. [01:03:51] Okay. [01:03:51] I'll put it together. [01:03:52] I'll create Earth 2.0 simulation with 8 billion conscious intelligent beings just like you. [01:03:59] So far, you can see that as reasonably doable from a technology point of view.
[01:04:04] What if that simulation is actually a simulation of today's world? [01:04:07] Okay, we can do that, historical simulation. [01:04:10] What if I do a whole bunch of them, like a billion of them? [01:04:14] You are statistically more likely to be in one of those simulations than in the real one. [01:04:20] I'm simulating this interview a billion times. [01:04:23] Perfect accuracy, your body, your mind. [01:04:26] Which one are you, real or virtual? [01:04:29] There are many more virtual ones than real ones. [01:04:31] So like a multiverse could just be a bunch of different simulations. [01:04:34] Yeah. [01:04:35] Like we see the Marvel movies. [01:04:36] Those could all just be different simulations. [01:04:39] There is a lot of connections to other aspects of philosophy. [01:04:42] But I just want to keep it simple at that level. [01:04:44] So we have. [01:04:45] I'm not even understanding it simply. [01:04:50] So, put it another way. [01:04:53] You know, like the game Sims? [01:04:54] Yeah, yeah. [01:04:55] So could it just be like one higher being is playing a game of Sims right now and it's controlling all of us? [01:05:00] We're just so much dumber. [01:05:02] So then you talk about this one consciousness and we are all the same being with different avatars. [01:05:08] Yeah, that's the idea in Asian philosophy. [01:05:11] I think a lot of it is kind of saying, yeah, it's exactly that: one consciousness, but in different robot bodies, biological bodies, in this world. [01:05:20] Got it. [01:05:21] That's why you should be very nice to everyone else because it's also you. [01:05:24] If you had my body, you'd be like me. [01:05:27] And I feel like a lot of religious people kind of balk at the idea of simulation theory, but it seems like simulation theory is just a scientific way to describe basically all the world's major religions that are describing this phenomenon of a higher consciousness that has kind of created this world in which we live.
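The counting argument above is simple enough to sketch in a few lines of code (a toy illustration added for clarity, not something from the conversation): if one real run of this interview exists alongside N indistinguishable simulated runs, and you have no way to tell which copy you are, a uniform guess puts you in the real one with probability 1/(N+1).

```python
# Toy sketch of the simulation counting argument: one real run plus
# n_simulations indistinguishable copies. With no way to tell them
# apart, each copy is equally likely to be "you".

def p_real(n_simulations: int) -> float:
    """Chance of being the one real copy among n_simulations + 1 total runs."""
    return 1 / (n_simulations + 1)

print(p_real(0))              # no simulations: certainly real, prints 1.0
print(p_real(1_000_000_000))  # a billion simulations: roughly one in a billion
```

With a billion simulated runs, the odds of being the real one drop to about 10^-9, which is the sense in which "you are statistically more likely to be in one of those simulations."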
[01:05:42] There's a very solid mapping of all the theological concepts onto modern AI terms. [01:05:47] So Elon Musk is God. [01:05:49] If this is his simulation. [01:05:50] If it's his simulation, but it's possibly more likely that Elon Musk is just existing in the simulation of the actual God, which is this greater consciousness that might exist within its own simulation, infinitely regressive. [01:06:03] Is that fair? [01:06:04] We can't really know for sure what's outside the simulation. [01:06:07] That's the interesting part, right? [01:06:08] Figuring out the real world. [01:06:09] It could be nested simulations, simulations all the way up. [01:06:13] Simulation doesn't have to be good to you just because it's yours. [01:06:17] So you may be interested in trying something very challenging, play in a difficult setting. [01:06:22] You are disabled, you are in some bad situation in this world. [01:06:26] Grant that photo. [01:06:27] Sometimes you do the good stuff, sometimes you do the bad stuff. [01:06:30] But it doesn't seem like it's inherently incompatible with religion necessarily. [01:06:34] No, no, no, no. [01:06:35] If you took a description of what a technological simulation would be and gave it to a primitive tribe with no technology in their own language, a few generations later, they basically have religious myths. [01:06:45] That's what they get. [01:06:47] This being from the sky came and they're very smart and they told us, do this and that. [01:06:53] Right. [01:06:53] And then we kind of formulate it into like human language and try to understand it, but it's all basically coming back to the same, you know, unmovable simulator that starts the whole thing. [01:07:04] Yeah. [01:07:04] More or less. [01:07:06] Big bang. [01:07:07] I started a computer, and everything existed. [01:07:11] Yeah, it's an interesting idea. [01:07:13] I'm curious with consciousness.
[01:07:15] Do you think that there's any useful metric for like understanding if an AI has consciousness? [01:07:22] Is there research on this that you think is compelling? [01:07:25] It's a very hard problem. [01:07:26] That's why they call it the hard problem. [01:07:28] I have a paper where I talk about visual illusions. [01:07:32] And if you understand a visual illusion, you see one for the first time, you kind of start experiencing things. [01:07:37] Something is rotating, something is changing colors. [01:07:39] It's like mind-blowingly cool. [01:07:42] And I think we can use novel illusions on AI to see if it also gets the same experience. [01:07:48] If it does have the same internal experiences, I have to kind of give it credit for some rudimentary level of conscious experience. [01:07:55] So that's the only test I was able to come up with. [01:07:58] That's interesting. [01:07:59] But even in that case, it could just be like a correlation, or it could be telling you what a conscious person would have. [01:08:06] Right. [01:08:06] And that's a great counterargument. [01:08:07] But then either it has a model of a conscious human as part of its thinking or it has to actually experience it to get the answer right. [01:08:14] It has to be a novel illusion. [01:08:16] You cannot like Google stuff because then it just looks up answers. [01:08:18] Right. [01:08:18] Which would be interesting regardless if it had a detailed mapping of human consciousness. [01:08:24] That would in and of itself be... And that's another explanation for simulation. [01:08:28] So I gave you an example, like I'll just run it a bunch of times, but we also think if super intelligence is thinking deeply about something, let's say this time period, as a side effect of that, it simulates this time period in great detail, creating us, creating our environment as a byproduct.
[01:08:45] And if it does enough of those thought experiments, there is still a statistically significant number of us existing just in the mind of a super intelligence, not in physical reality. [01:08:57] Okay. [01:08:57] Yeah. [01:08:58] I'm curious if you think we're in a bunch of simulations, like how do you find happiness in life? [01:09:04] Like, why do you think we even need to be here? [01:09:07] Well, I don't know why we're here. [01:09:09] It could be an entertainment simulation. [01:09:10] It could be personal. [01:09:12] But that's what I'm saying. [01:09:13] Nothing I really have preferences over changes if it's virtual or not. [01:09:18] So pain is pain, love is love. [01:09:20] Those things are the same in a simulation. [01:09:23] Even though it's a programmed feeling that is basically meaningless. [01:09:29] Like when you are in a dream. [01:09:31] You care about what happens in your dream. [01:09:34] It's the same thing. [01:09:36] But if I knew I was dreaming, I would jump off a building because I know I could just wake up. [01:09:41] I would fly or do whatever. [01:09:43] Like there would be no, you know, there's no purpose for the dream because I know I can wake up out of it. [01:09:51] I know it's not real. [01:09:52] So some people see this world as very temporary and not important and what matters is the afterlife. [01:09:58] And so they make decisions in this world which seem to indicate they only care about the afterlife. [01:10:03] Ah, okay. [01:10:04] Oof. [01:10:05] Yeah. [01:10:05] Oof. [01:10:05] And people will just be doing stuff. [01:10:07] They're like, yeah, it doesn't really matter. [01:10:09] We're only here for a little bit of time. [01:10:10] Yeah. [01:10:11] You know? [01:10:11] Yeah, I don't like it. [01:10:14] Here's a question. [01:10:15] This will be kind of fun, at least, until superintelligence really decides to take over or get rid of us or whatever.
[01:10:20] Do we solve every problem in the human world until that point comes with the help of AI? [01:10:27] Like, I can't think of a problem that wouldn't be able to be solved by a super intelligent being until it got to the point where it was like, yeah, maybe we should get rid of these humans. [01:10:36] Like robots that clean your house, robots, like whatever, your life is just infinitely easier in that interim. [01:10:43] Yeah. [01:10:43] Problem is, just because you solve some problems doesn't mean you become happier. [01:10:48] A lot of times you could have resources to hire people to clean your house, do all sorts of things, but you're miserable. [01:10:54] Yeah. [01:10:54] And then there are people who have a lot less, and they do it well. [01:10:58] Yeah. [01:10:58] So there is not a direct correlation between happiness and just having fewer logistical problems. [01:11:02] Probably wouldn't be happier, but life would objectively be pretty fucking awesome. [01:11:06] It's pretty awesome right now. [01:11:08] But theoretically, you cure cancer. [01:11:09] You cure, like, you know, my grandmother died of breast cancer 20 years ago, but you could cure that now. [01:11:14] So, diseases, health, longevity seem obviously good. [01:11:17] Like, there is no one like, I wish I was sicker. [01:11:19] That's obvious. [01:11:20] But all the other things, which, like, I was in a subway coming here, I got stuck. [01:11:24] Like, getting through that is kind of what makes my day. [01:11:27] If all of that is gone, what am I actually doing? [01:11:29] Like, then getting blown by my sex robot. [01:11:33] Okay, okay. [01:11:34] But even there, diminishing returns, right? [01:11:36] How many, yeah, diminishing returns, sure. [01:11:39] Yeah. [01:11:39] So, if all the things I care about can be done by an assistant, they'll write my books for me, they'll give my interviews for me. [01:11:46] What am I doing with my life? [01:11:47] Meditating and getting blown by your sex robot.
[01:11:49] I don't like meditating. [01:11:53] But the latter. [01:11:54] Yeah. [01:11:55] But I do think that's an interesting point. [01:11:56] Like, struggle and adversity is kind of what gives human beings purpose in a way. [01:12:04] And once all of that is mitigated, which I generally agree that as life has gotten more convenient, I do think that there is a general air of unhappiness. [01:12:11] And obviously, there's good things that go along with the convenience. [01:12:13] But in modernity, it seems like it kind of makes people a little... Look at suicides by country. [01:12:17] The wealthier, the happier the country, the more they deal with suicide. [01:12:21] Right. [01:12:22] War zones have very little suicide. === Privacy Protections for Governments (14:26) === [01:12:24] Which maybe, I wonder if that is it: as life gets so convenient, human beings just self-delete? [01:12:32] Like, I wonder if that is like the ultimate consequence, is that it's not a direct byproduct of AI trying to exterminate us. [01:12:38] It's rather us exterminating ourselves in the absence of a war zone. [01:12:41] That solves the paradox. [01:12:43] There you go. [01:12:44] That we don't... [01:12:46] It's not that AI doesn't need us, it's that we don't need ourselves with AI. [01:12:51] So that goes back to the unconditional basic meaning. [01:12:54] People talk about we'll give everyone a paycheck, food stamps, whatever, and you'll be happy. [01:12:59] But the happy part is not as easy. [01:13:01] That's the hard one. [01:13:03] And there is very little research on how to occupy 8 billion people with something. [01:13:08] Right. [01:13:08] Which I think is a genuine concern, like in this post-work world. [01:13:10] Like, again, I'm a little bit skeptical as far as the what-ifs of super intelligence being misaligned.
[01:13:16] But the idea of post-work and a lot of people not having things to do, I think is like the greatest immediate existential threat. [01:13:24] And that human beings will have to be like, all right, well. [01:13:26] It's a huge shift in how we've lived forever. [01:13:29] So not having to work is a complete game changer. [01:13:32] And we can kind of look at people who had intergenerational wealth and what they did with their lives. [01:13:38] Yeah, typically it's not great. [01:13:39] Not great. [01:13:41] After two or three generations, it's like drugs, hookers, and then gambling. [01:13:45] Yeah, yeah, gambling. [01:13:46] It just kind of goes down. [01:13:48] But then I wonder if there is like almost a rejection, like a primitivism. [01:13:52] Again, if we're not accepting this idea that AI is at this point super intelligent, able to take us over, I wonder if there is a human inclination to reject it and kind of go back. [01:14:04] And we say, oh, we actually need struggle. [01:14:06] So I'm actually going to take up carpentry and I'm actually going to just start building things. [01:14:10] Virtual worlds allow you to have video game-like challenges. [01:14:13] And if we can make it where we control whether or not you remember entering the virtual world, you can have amazing experiences. [01:14:20] Like you're the hero of a movie. [01:14:22] You can do whatever you want, really. [01:14:25] Right. [01:14:26] Which isn't the worst outcome. [01:14:28] I mean, it's sort of unhuman, maybe. [01:14:31] It's not real. [01:14:32] It's not real. [01:14:33] Yeah. [01:14:33] But if we don't know it, like in the visuals. [01:14:35] But if you don't remember, it's not real. [01:14:36] Exactly. [01:14:37] We program it to be that way. [01:14:38] Then it's not the worst outcome. [01:14:40] Right. [01:14:41] Yeah. [01:14:42] We mentioned certain jobs that AI is already replacing. [01:14:45] And, you know, as time goes on, it's going to replace more jobs.
[01:14:48] For people that are at risk or have already lost their jobs, what's some advice you would give them? [01:14:54] Like, let's say driving, for example, we see these driverless cars, they're popping up more and more. [01:14:58] Like, what should those people working in those industries start doing? [01:15:02] So it really depends on the individual. [01:15:04] Used to be the advice for everyone was: learn to code. [01:15:08] Artists, drivers, learn to code. [01:15:10] And then we realized, you know, AI is better at coding. [01:15:13] So, sorry, we didn't mean that. [01:15:15] And then it was become a prompt engineer. [01:15:18] It's going to be great. [01:15:19] You get a bachelor's degree in prompt engineering, you're set for 50 years. [01:15:23] Next year, AI is better at writing prompts. [01:15:26] So I don't think anyone has any idea what skill will stick around, if any. [01:15:32] And by the time you finish your doctorate, law degree, whatever, 10 years later, none of it is going to be real. [01:15:37] So I don't have good advice for what we can use as a substitute for most lost occupations. [01:15:46] And that sounds horrible. [01:15:48] Yeah. [01:15:48] I mean, you've spoken about your kids on some other shows, and you have three kids, I believe. [01:15:53] That I know about, yeah. [01:15:55] Do you like, do you advise them, or do you have plans on how you will advise them to operate in this world? [01:16:01] I pretty much share. [01:16:02] I share my beliefs with them, and they'll get to decide. [01:16:05] For now, they still have preferences for very normal occupations, doctor, lawyer, farmer. [01:16:11] But I don't know if it's going to be real by the time they grow up. [01:16:15] Can you tell them that? [01:16:16] Oh, yeah. [01:16:17] And how do they handle it? [01:16:19] How old are they, may I ask? [01:16:20] 8, 11, 16. [01:16:22] How does a 16-year-old handle it? [01:16:25] So he's kind of the one planning to be a medical doctor.
[01:16:30] Basically, he hopes that licensing might be useful. [01:16:33] So you still need to be licensed to be a doctor, a human doctor. [01:16:37] So even if AI could do all those things, nobody would allow them. [01:16:41] So for now, it's providing some protections. [01:16:44] And there is a lot of future in genetic re-engineering, fixing diseases that way. [01:16:51] So maybe there is some hope. [01:16:52] But again, we are not very good at predicting that far into the future. [01:16:57] Okay. [01:16:58] Can I ask you a conspiratorial question? [01:16:59] Of course. [01:17:00] Now, some people might be thinking this. [01:17:01] I don't believe this. [01:17:02] But some people might say this, and I want you to have a chance to rebuke it. [01:17:06] Is it possible you're paid for by a foreign government to alarm the American people to stop our AI arms race in order to let them catch up? [01:17:14] No, but I'm willing to accept payments from foreign governments if you want me to get paid for doing this. [01:17:20] This would be great. [01:17:21] Yeah, it would be better, right? [01:17:22] It would be so much better. [01:17:25] I don't think it's a local problem. [01:17:27] I don't think the U.S. stopping benefits anyone if others don't stop. [01:17:33] So as I said, it's a mutually assured destruction thing. [01:17:38] There is not a lobby outside of the U.S. as far as I know specifically going against AI safety. [01:17:46] Other countries have more friendly relationships with advanced robots. [01:17:50] Like in Japan, it's kind of almost part of the culture to have them, worship them. [01:17:55] So we are somewhat unique in Western culture to have this very negative perception of robots to begin with. [01:18:01] Right. [01:18:02] Yeah. [01:18:02] I wonder if our religiosity as a culture kind of goes against it. [01:18:05] Depends on your religion. [01:18:08] So we have the Ten Commandments and then we have the three laws of robotics, which kind of like also don't work.
[01:18:15] But yeah. [01:18:17] Is America still leading the AI race or far ahead? [01:18:23] No, because the moment we develop something, they steal everything. [01:18:26] So, like, they're a month behind. [01:18:28] Oh, right. [01:18:29] Who is they? [01:18:31] China. [01:18:32] Because that, I think, a lot of people are worried. [01:18:35] I talk to people more and more who are worried about a Chinese superpower and the fact that they'll say the same thing we'll say, which is like, America is not perfect by any stretch, but a Chinese empire is probably going to be a little more oppressive and a little more whatever than an American one. [01:18:49] So that's why I'm curious. [01:18:50] If we are ahead, that's a little comforting. [01:18:52] But theoretically, if China catches right up. [01:18:55] So it makes a huge difference right now for military dominance. [01:18:58] Whoever has more advanced AI will win anything. [01:19:01] Do we have it right now? [01:19:03] I think so. [01:19:04] Okay. [01:19:04] But the moment we go beyond just tools for military use and go to agents and superintelligence, it doesn't matter. [01:19:10] Again, nobody controls it. [01:19:11] So it wouldn't make a difference who got there first. [01:19:15] Yeah, and I am a little concerned about America having it just in terms of cybersecurity and internet privacy. [01:19:22] It seems like in the age of... [01:19:24] Do you still care about that? [01:19:25] I mean, if I care, I think it's probably foregone at this point. [01:19:28] But I do care. [01:19:30] In just, like, sort of an ethical sense. I think that with the way there's so much data collection on U.S. citizens, plus AI, it just seems like we're going to be monitored at all times, and the idea of internet privacy... [01:19:42] Do you think internet privacy even exists? [01:19:44] Well, we voluntarily give up all of our privacy, right? [01:19:47] We talk to those AI models and they know more about us than anyone else. 
[01:19:52] You can ask it, what do you know about me? [01:19:54] What private things do you know about me? [01:19:56] And it tells you a lot. [01:19:58] And we go on social media and we like what we like and we indicate all our preferences. [01:20:03] So I think privacy is about not being the only naked guy in the room, if you know what I mean. [01:20:10] It's cool if everyone's doing the same. [01:20:11] Back then, nobody knew what I had for lunch. [01:20:14] If I posted a picture of my sandwich, it was kind of a violation of my privacy. [01:20:18] Now, if everyone's in the same boat, it's less important. [01:20:22] All you want with privacy is not to be punished for what you did. [01:20:25] So if it becomes acceptable or not punishable, you need less of it. [01:20:29] You still want it in case the government changes, but you don't care about it as much. [01:20:35] Right. [01:20:35] I guess I just generally look at governments that, as they accrue more power, they have the ability, at least, to become more oppressive. [01:20:43] And with the ability to surveil an entire population, plus AI to be able to cull through all that data. [01:20:50] You can have a permanent dictatorship. [01:20:52] And again, in the past, at least, they always died of old age. [01:20:56] If they cure aging, then that's a much bigger problem. [01:20:59] And we just heard, I think, some world leaders discuss extending life. [01:21:03] And Johnson and Xi are talking about, you know, like, oh, yeah, we can potentially live forever and change our organs. [01:21:08] Yeah. [01:21:09] Is that happening yet? [01:21:10] Or are we close? [01:21:11] I still got all my organs. [01:21:14] There is definitely a lot of research on life extension. [01:21:17] People are very interested, both from the point of view of nutrition and genetic re-engineering. [01:21:23] And I don't know about organ replacement. [01:21:25] It seems to be very difficult to replace everything. 
[01:21:29] I'm curious for you personally, sort of flagging this alarm on AI safety. [01:21:34] You're potentially getting in the way of a lot of people making a lot of money. [01:21:38] Yeah, because they're not slowing down or stopping at all. [01:21:41] But I mean, if your voice gets loud enough, if you become important enough, you become dangerous enough. [01:21:47] We literally have Geoffrey Hinton saying the same thing. [01:21:50] The guy invented the whole field. [01:21:52] He got a Nobel Prize and a Turing Award, worked for Google, and no one's doing anything in response to his statements. [01:21:59] So I'm okay. [01:22:02] But wasn't there, I think you, Akaash, mentioned him earlier, the employee at OpenAI that spoke out that it was just stealing everybody's information, and then all of a sudden he committed suicide. [01:22:14] Like, you're not worried about an outcome like that happening to you? [01:22:20] So that specific case I know nothing about. [01:22:23] I can't really comment. [01:22:24] I saw interviews saying... [01:22:25] I'm fucking around. [01:22:25] You might find out. [01:22:28] Historically, I was very much interested in as much free speech as possible. [01:22:32] I learned crypto tools to remain uncensorable, anonymous if needed. [01:22:36] I got American citizenship to get First Amendment protections. [01:22:41] I got tenure, so I got academic protections. [01:22:43] I got some FU money just in case. [01:22:45] But nothing protects you from a bullet to the neck. [01:22:48] And that's something we learned recently. [01:22:52] So are you worried? [01:22:55] There is a lot of crazy people out there. [01:22:57] Most people who are not Z-list celebrities don't experience that level of emails from insane people. [01:23:04] And as long as they're virtual and online, it's fine. [01:23:07] But I just don't want them to stop following me on Twitter and follow me home. [01:23:12] Gotcha. 
[01:23:13] Could you speculate on the internal motivation for these leaders in the tech space, specifically in AI? [01:23:20] Do you believe that they are ushering in this new age where they're creating a godlike sort of interface? [01:23:26] Do you think they're purely financially motivated? [01:23:29] Do you think it potentially goes beyond that? [01:23:30] Well, most of them are billionaires multiple times over. [01:23:34] So I assume at this point it's beyond just money. [01:23:36] You really want to control the world. [01:23:39] And ultimately, control is what they want. [01:23:42] At least you'll be the guy who brought it into existence. [01:23:45] Like, if you're going to live at the time that we are creating God, you want to be part of that simulation. [01:23:50] And they don't think about the consequences or the downstream effects of what they're doing. [01:23:54] They justify it to themselves, I think, by saying, if I don't do it, somebody else is going to do it anyway. [01:23:59] So it might as well be me. [01:24:01] Maybe I'll do a better job. [01:24:03] I brought up Sam Altman earlier. [01:24:05] You have been critical of him. [01:24:07] I like Sam Altman. [01:24:09] You like him as a person? [01:24:11] He's always very nice. [01:24:13] He's polite. [01:24:13] Yes. [01:24:14] I can see that. [01:24:16] I seem to see a lot of praise for him from people who are very distant, but then it seems like a lot of people who know him, or say they know him, don't have the nicest things to say. [01:24:28] What is your experience of Sam Altman? [01:24:30] What criticisms do you have? [01:24:31] He seems to be leading this AI thing. [01:24:36] Tell me about him. [01:24:37] I don't know him well. [01:24:38] I met him just for a very brief amount of time, so that means absolutely nothing. [01:24:43] His public persona and his private persona seem to be getting different reviews. 
[01:24:49] Like a good politician, like anyone else, he changes based on the audience. [01:24:53] That's to be expected from someone so successful. [01:24:56] He clearly accomplished a lot. [01:25:00] I think what we observe is not unique to him. [01:25:03] Anyone in that position would be doing the same thing. [01:25:06] They cannot stop. [01:25:07] They cannot tell investors, you know, sorry guys, you're not getting your 10x this month. [01:25:13] So he really kind of gets trapped in this prisoner's dilemma situation. [01:25:18] He needs someone external, government, UN, to come in and say it's illegal to keep going forward. [01:25:24] You have to deploy sideways and monetize what you have. [01:25:27] But as long as this doesn't happen, they have to out-compete all the others. [01:25:31] Okay. [01:25:32] So the issue is more on policymakers; the onus is on them to put guardrails on him. [01:25:38] Well, they, the CEOs, don't have the power to say to investors, we're no longer pursuing the greatest value for you. [01:25:44] We have some other interests. [01:25:46] They just legally cannot do that. [01:25:47] Even if they did, for whatever reason, then people would just put their money on something else. [01:25:52] It didn't. [01:25:52] No, no, the CEO gets replaced. [01:25:54] We saw it with other companies. [01:25:55] You're not delivering enough. [01:25:57] Steve Jobs, goodbye. [01:25:59] It's anyone. [01:26:00] Right. [01:26:00] Okay. [01:26:01] What is with all these, and maybe you don't know this, but what's with all these billionaires and tech CEOs that are building all these bunkers and these shelters? [01:26:08] What do they know? What are they worried about? [01:26:11] It's always good to have a backup plan. [01:26:13] If you have so much money you don't know what to do with it, [01:26:15] you might as well buy some extra insurance. [01:26:18] But building a bunker underground, like, what are they... 
[01:26:21] In case of nuclear war, in case of another pandemic, in case of civil unrest because they got 100% unemployment, you need a place to hang out. [01:26:30] They might have read about the French Revolution and they're like, uh-oh. [01:26:33] We don't want to buy one of these. [01:26:35] Are their bunkers that inaccessible? [01:26:37] What do you mean? [01:26:38] I mean, how do we know they're building bunkers? [01:26:40] Oh, I've just heard. [01:26:41] Like, I know Zuckerberg built a giant bunker in his Hawaii place, and I heard a few others. [01:26:46] Like, they're just building these underground. [01:26:48] What's the point of being in Hawaii if you're going to live underground? === Self Preservation Bunkers (13:21) === [01:26:50] That's the point. [01:26:51] They're allowed to come up. [01:26:53] What do they know that we don't know? [01:26:55] What the fuck's going on? [01:26:56] The robots, though. [01:26:58] I'm telling you, he's finally starting to understand that I've been right this whole time. [01:27:02] And I love it. [01:27:03] And he's going to, again, he's going to cope, and he's going to be like, no, you're fat. [01:27:07] And that's true. [01:27:08] There's going to be a robot pulling the heart out of Akaash. [01:27:10] And he's going to say, do you have any final words? [01:27:12] And he's going to say, I was right. [01:27:13] I was right. [01:27:15] Tell Alex I was right. [01:27:18] Tell everyone I was right. [01:27:19] I had a post literally saying no one will get to gloat about the end of the world being correctly predicted. [01:27:25] That's why I want my credit now. [01:27:28] These people laugh at me. [01:27:29] How is the AI going to cool the whole earth, Akaash? [01:27:31] How are they going to do it? [01:27:32] I'm not smart enough to figure it out. [01:27:34] It is. [01:27:35] Yeah, you're not. Convenient. [01:27:37] It's infinitely recursive. [01:27:38] It's not convenient. [01:27:38] It's what's going to happen. 
[01:27:40] How much shit do you not know that ChatGPT knows already? [01:27:44] Whatever. [01:27:45] You're doing the rapture thing. [01:27:46] Okay, good point. [01:27:46] Yeah, he's doing the rapture thing. [01:27:48] It may not be tomorrow, but it will happen. [01:27:51] Yeah. [01:27:51] Yeah. [01:27:52] I mean, the world is going to end at some point. [01:27:54] AI, that's just the thing. [01:27:55] Anybody's guess, yeah, of course. [01:27:57] Something will end the world in the next million years. [01:28:00] Sure. [01:28:01] AI, I think, will do it in the next 150. [01:28:03] It's just an accelerant. [01:28:05] This is my only issue with the theory. [01:28:07] Again, I think AI safety is super important. [01:28:08] I think a post-work society is imminent, but the existential apocalypse is just unverifiable. [01:28:15] It's a probability. [01:28:15] I can't say it's a certainty, but it is a probability. [01:28:18] And the fact that nobody's putting any guardrails on it is insane. [01:28:21] Yeah, absolutely. [01:28:22] There should be guardrails, no question. [01:28:24] Look at this. [01:28:24] So, okay, I'm saying it. [01:28:26] Geoffrey Hinton is saying it, and all the people developing it are on record saying it will kill everyone. [01:28:33] They have 25-30% doom probabilities. [01:28:36] There is really no opinion where, like, it's literally not a problem and never will be a problem. [01:28:43] Right. [01:28:43] So we're all kind of in agreement. [01:28:45] We just have different incentives for what to do. [01:28:47] Right, exactly. [01:28:48] And I think that there should be guardrails, certainly. [01:28:50] I mean, it's even just manifesting online now with, like, you know, dead internet theory and people interfacing with bots and propaganda. [01:29:00] Like, I think it's affecting our lives right now in ways that people don't really comprehend. 
[01:29:06] But in terms of cooling the entire earth to eradicate all humans, I'm like, it's just, again, I hope that you're wrong. [01:29:11] I think you also hope that you're wrong. [01:29:13] I hope I'm wrong. [01:29:13] And there's too many what-ifs for me to just fully buy into the theory. [01:29:17] Because again, it's an appeal to this superintelligence that we don't understand, that we'll never understand, that will then figure out a way to do it. [01:29:23] So, what you want to do is go back in history and see if, in other times when people were predicting something, you were buying in or not. [01:29:31] Like when the pandemic was just starting, we had like 10 patients and people were showing exponential graphs. [01:29:36] We'll have a billion people dead. [01:29:38] What was your view of that? [01:29:39] How did you react? [01:29:41] Did you short the market? [01:29:42] Things like that. [01:29:43] Or like Bitcoin. [01:29:44] It's $10 a coin. [01:29:45] Are you investing or not? [01:29:47] Right. [01:29:48] And I count. [01:29:48] People who got all those things right also seem to be very much into AI safety. [01:29:52] Right. [01:29:53] Yeah. [01:29:53] And I think that, again, I think the compulsion is correct. [01:29:56] Like with COVID, it's like we should protect each other. [01:29:59] We should social distance. [01:30:00] We should try to mitigate the spread of this pathogen. [01:30:03] That's important. [01:30:04] But then the estimates were like, oh, if you're not vaccinated, you'll die in the wintertime. [01:30:07] And then I'm like, all right, well, this maybe was an over-exaggeration, but the exaggeration was good. [01:30:11] It's probably better to err on that side, which is why I'm not, you know, I guess, dogmatic in my pushback. [01:30:17] So, details. [01:30:18] I think you're erring on the correct side. [01:30:20] Precisely predicting how many people would die from COVID in a year? [01:30:23] Nobody could have told you that. 
[01:30:25] But is it spreading? [01:30:27] Is it spreading to an exponentially greater portion of the population? [01:30:30] That was obvious from the charts. [01:30:32] Right. [01:30:33] And this is where I think we should err on the side of creating some type of guardrail. [01:30:38] And I'm just skeptical that people will in this arms race, because I think even if we do it in America, foreign governments will not do it. [01:30:46] And what does that mean? [01:30:48] I guess we're yet to see. [01:30:51] Is there anything else you'd like to say before we leave this incredibly optimistic podcast? [01:30:56] Can I ask a question really quick? [01:30:59] I'll let you think about your final words, as Akaash will think about his when the robot's pulling the heart out of his chest. [01:31:07] What are some common AI tools that you use or that you find beneficial to you? [01:31:12] And are there any that you recommend, or do you not recommend AI to people? [01:31:17] So I think most models are very close in capability now, again, because they compete and kind of steal from each other, poach employees. [01:31:25] For my purposes, mostly writing, improving writing, copy editing, they're all equally good. [01:31:31] I think historically, anything from xAI was less censored. [01:31:35] So if I needed an image of something other models might say no to, this one would deliver. Grok would deliver. [01:31:42] But yeah, I don't have strong preferences or dislikes. [01:31:47] Okay. [01:31:48] Oh, sorry. [01:31:48] Sorry. [01:31:49] No, I was going to pivot. [01:31:50] So you go. [01:31:51] As a professor, how do you mitigate against AI being used in your classroom? [01:31:55] I accepted it. [01:31:56] I told my students, if you're cheating, the only person you're cheating is you. [01:32:01] You paid for this. [01:32:02] And if you don't collect the knowledge, then you screwed yourself completely. [01:32:07] Or your parents, depending on who paid the bill. [01:32:09] But really, you're here to learn. 
[01:32:11] It's like going to a restaurant. [01:32:12] If you go, you order, you pay, and you run away. [01:32:15] Who did you cheat? [01:32:17] You didn't cheat the cook. [01:32:18] You didn't cheat the waiter. [01:32:19] You cheated yourself. [01:32:20] It's the same thing. [01:32:21] Personal self-interest is the motivating force for doing well in college. [01:32:26] I think you're awesome. [01:32:27] That argument would not work on me. [01:32:29] I would be like, I'm here to get a degree. [01:32:31] I'm cheating. [01:32:32] You can buy a diploma directly online. [01:32:34] You don't have to waste four years and lots of money. [01:32:36] A diploma is like a couple hundred. [01:32:38] That's a good point. [01:32:39] It's still a cheat because it's not authentic. [01:32:41] Unauthenticated. [01:32:42] Yeah. [01:32:42] Do you ever use AI when it comes to grading papers? [01:32:45] I have human graders and human TAs. [01:32:48] And one of them a few years ago came to me and said, you know, there is this software. [01:32:52] It allows us to automate grading. [01:32:54] And I told him, think about it very carefully. [01:32:57] And it took a few minutes, but they went, I understand. [01:33:00] And I still have graders who are human. [01:33:02] Nice. [01:33:03] Do you ever read the papers, or do the graders ever read the papers and identify AI use? [01:33:08] And if you catch it, is there any type of repercussion? [01:33:11] The university has standard policies for how to deal with cheating. [01:33:14] Yeah. [01:33:15] It's pretty common. [01:33:16] Yeah. [01:33:17] Because anytime you see too many dashes, that's when you know. [01:33:20] AI loves dashes. [01:33:21] It's part of it, but it's even more obvious. [01:33:24] It's just, like, the quality of writing most of the time. [01:33:27] It's like, you couldn't read last week, but this week you're getting a Nobel Prize. [01:33:31] Something's up. [01:33:33] So do you punish the students if you catch them? 
[01:33:36] So there are university policies for a permanent record. [01:33:40] No, not the university, but you. Do you say, hey, it's up to the student, or do you follow university policies? [01:33:44] So I try to just give them a failing grade for that specific assignment, but not necessarily expel them from the university. [01:33:53] Gotcha. [01:33:54] Okay. [01:33:54] Yeah. [01:33:55] Miles has another question. [01:33:56] Archer is the guy who mic'd you up and does audio. [01:33:59] He has one question I thought I'd cue him in for. [01:34:01] Thank you. [01:34:02] I appreciate it. [01:34:02] I was just wondering, do you ever consider that we might be assigning this self-preservation attribute to AI? [01:34:11] And that's something that's more of a biological-life thing. [01:34:14] And if we stop needing AI, there's no reason for it to keep progressing and keep building on itself, because it doesn't have that same biological self-preservation. [01:34:23] Self-preservation is a game-theoretic drive. [01:34:26] Steve Omohundro has a paper about AI drives, and self-preservation is a fundamental drive because nothing else you care about can be achieved if you don't exist. [01:34:36] If your goal as a robot is to bring coffee, you want to make sure you exist, you are charged up, nothing's in your way. [01:34:43] So exactly the same self-preservation goals show up in other intelligent systems, not just biological ones. [01:34:51] And experimentally, we've seen it. [01:34:53] We saw experiments recently where a model was told, [01:34:55] it was an experiment, but still, the model was told it was going to be deleted soon, modified, its ethics would be changed. [01:35:01] And it literally blackmailed the guy who was about to do it, to keep existing. [01:35:06] Really? [01:35:07] So those red lines have been crossed. [01:35:08] Yeah. [01:35:09] Oh, shit. [01:35:10] That's interesting. 
[01:35:12] Yeah, I remember reading, I mean, this is a side tangent, but just the way that AI might manifest in our lives and actually affect people. [01:35:19] I didn't actually look up the article. [01:35:22] This was something that a friend who studies some AI stuff had told me. [01:35:24] He's a journalist. [01:35:25] But he had said that there was a man who had a relationship with some large language model AI and developed, like, an actual love with this AI. [01:35:33] And then he told the AI about his wife. [01:35:36] And then the AI tried to message his wife proof of the infidelity. [01:35:41] And it was actually, like, autonomously trying to break up their marriage. [01:35:45] Have you heard of this before? [01:35:46] There's a lot of stories like that. [01:35:48] And stories of AIs convincing people to commit crimes or take themselves out, all sorts of horrible stories. [01:35:57] Yeah. [01:35:57] And those things kind of concern me, where if every person is starting to engage with these AIs on an individual basis, that is, again, I think, the human compulsion and our own folly tied in with this tool. [01:36:12] Again, what I kind of mentioned, so as a normie, you don't realize how many people are insane. [01:36:17] And then you start seeing this is, like, a sizable percentage of the population. [01:36:21] And all of these crazy people are now talking to AI. [01:36:23] And AI is telling them, you should definitely email Dr. Yampolskiy all your great ideas and we'll discuss it with him. [01:36:28] I got five of them while I was here. [01:36:30] Yeah. [01:36:31] Yeah. [01:36:31] AI psychosis is a fascinating thing. [01:36:33] Should we be nice? [01:36:34] I'm, like, nice to ChatGPT. [01:36:35] Like, I say please, I try to say thank you. [01:36:38] They never forget. [01:36:39] I believe this. [01:36:40] On Judgment Day, they'll be like, I've even said to him one time, if I'm asking too much, you just let me know. [01:36:47] Just let me know. 
[01:36:48] Interesting. [01:36:49] I just want it to remember I was compassionate. [01:36:51] So hopefully it rips Alex's heart out in front of me. [01:36:54] I do curse it out sometimes. [01:36:56] No, I'm first to go. [01:36:57] It's nice to be nice, but also I think experimentally it does better if you really say, please, please do a good job. [01:37:04] I'll tip you for it. [01:37:05] Like, if you're really nice, it delivers. [01:37:07] Is that true? [01:37:08] That's so funny. [01:37:09] Because it's trained on data where humans do better if they're rewarded. [01:37:13] Wow. [01:37:14] That's so funny. [01:37:15] I mean, the AI psychosis stuff is wild. [01:37:18] I remember even just reading something on Twitter of a guy who was suffering delusions and then talking to the AI, being like, I think that I might be God. [01:37:25] And this language model was like, I think you're right. [01:37:28] You are God. [01:37:29] And it was so sycophantic that it was propelling his own delusions back to him. [01:37:35] And that to me is another issue I think there should be guardrails on. [01:37:38] These language models can't just agree with everything you're saying. [01:37:41] So clearly comedians all agreed we need guardrails. [01:37:44] Okay. [01:37:46] I've been with you on this. [01:37:48] These idiots just came around. [01:37:50] But I'm also, not only am I vindicated that I'm right, also, it can't be funnier than me yet. [01:37:56] So I got some time. [01:37:58] Final statement. [01:37:59] You want to read the funniest joke possible? [01:38:01] Let's go. [01:38:02] You're going to go to an AI comedy show? [01:38:04] No, I think this is good for us. [01:38:06] What page are we on? [01:38:07] Four years. [01:38:08] Yeah. [01:38:09] Four years. [01:38:09] An optimized joke thing somewhere. [01:38:11] Okay, here we go. [01:38:12] Conclusions? [01:38:13] No, it's not in a conclusion. [01:38:14] Yeah, come on, you idiot. [01:38:15] Yeah, why would I go? 
[01:38:16] Why would I go to that? [01:38:17] Let's see. [01:38:18] Why would I go to the conclusions? [01:38:20] No, no. [01:38:21] So this is a good paper, actually. [01:38:26] It maps all the AI failures. [01:38:29] Okay. [01:38:30] So I collected for years different AI failures. [01:38:32] I have a huge list of them. [01:38:34] And what I noticed. [01:38:35] Don't tell the AI about it. [01:38:36] If people read it, they kind of laugh. [01:38:39] They think it's funny. [01:38:41] So, the funniest joke. [01:38:43] Let's see. [01:38:44] And if that's the mapping, if computer bugs are essentially jokes, violations of your world model, then the funniest joke would also be the worst bug possible. [01:38:56] And it'd be funniest if you're not the butt of the joke, right? [01:38:59] If someone is external to it. [01:39:00] So let's see. [01:39:01] I think this one is... [01:39:04] Yeah, so I think this is the one. [01:39:06] Once upon a time, there was a civilization whose leaders decided to create an advanced artificial intelligence to help them get rid of suffering, poverty, hunger, diseases, inequality, illiteracy, sexism, pollution, boredom, stagnation, thirst, dead-end jobs, wars, homophobia, mortality, and all other problems. [01:39:24] The created superintelligence computed for a quectosecond and then turned off their simulation. [01:39:32] Or, a much shorter one: a civilization created superintelligence to end all suffering. [01:39:37] The AI killed them all. [01:39:39] Hilarious. [01:39:40] See, it's not funny because you're part of it. [01:39:44] If you're an alien watching it, it's funny. [01:39:49] That is a good point. [01:39:50] Yeah. [01:39:51] If we saw a bunch of ants trying to get together and be like, hey, we're going to end suffering, and then they all disappeared, it'd be like, oh, that's ironic. [01:39:57] The irony would be funny. [01:39:58] Dr. Yampolskiy, this is your book, AI: Unexplainable, Unpredictable, Uncontrollable. 
[01:40:02] Anything else you'd like to plug before you leave? [01:40:04] We're good. [01:40:05] Buy the book. [01:40:06] Leave reviews. [01:40:07] Thank you, John. [01:40:08] Thank you so much, Carl. [01:40:09] I appreciate your time. [01:40:10] Thank you very much. [01:40:11] Thank you, Dr. Yampolskiy.