Bannon's War Room - WarRoom Battleground EP 1007: David Krueger on AI - Humanity Dies by Gradual Disempowerment Aired: 2026-05-12 Duration: 47:57 === Gradual Displacement and Anarchy (15:19) === [00:00:00] This is precisely what's so terrifying about the trajectory that a lot of Silicon Valley investors are trying to put us on now, where they've started to realize that, you know, maybe we don't need these workers to get so much income. [00:00:15] Maybe we can build machines that replace them. [00:00:18] Honestly, the inspiration for this, you have a little bit to do with it because it started becoming, Steve, it started becoming very striking to me that there was incredibly broad support in America for these ideas. [00:00:34] For a long time, I used to call this the Bernie to Bannon coalition, saying, hey, you know, yeah, curing cancer is great. [00:00:42] We can do a lot of wonderful things with AI to strengthen our economy and strengthen our country and strengthen our military, but let's make sure that it's in the service of human beings, not in the service of some machines. [00:00:55] President Trump and President Xi will be coming together at a summit. [00:00:59] I was surprised and delighted to see, apparently, that as part of their agenda, there's going to be some discussion of AI safety. [00:01:07] The biggest risk is exactly the inevitability narrative, right? [00:01:12] If someone invades your country, what's the first thing they're going to tell you? [00:01:16] Oh, don't fight. [00:01:17] It's inevitable that you're screwed. [00:01:19] Don't try to do anything about it. [00:01:21] So, are you surprised that some AI lobbyists are rolling out the exact same narrative here? [00:01:25] When we're talking about losing control over AI, we're not talking about the chatbots, we're talking about AI agents, we're talking about systems that are autonomous.
[00:01:37] I think in 10 years, we will, if things go well, we will look back at this moment and we will view it as a moment of kind of collective insanity and be like, wow, can you believe that we were ever doing that? [00:01:48] That we were racing to build this technology that we knew had a massive chance of replacing us and was going to completely disrupt our society in all the other ways that you mentioned. [00:01:59] One of the main reasons I am optimistic is because in my time in the field, I've seen this go from an issue that nobody was talking about [00:02:11] to being more and more understood and accepted by not just the research community, but policymakers, the public. [00:02:22] This is the primal scream of a dying regime. [00:02:27] Pray for our enemies, because we're going medieval on these people. [00:02:31] You're not going to get a free shot on all these networks lying about the people. [00:02:36] The people have had a belly full of it. [00:02:38] I know you don't like hearing that. [00:02:40] I know you try to do everything in the world to stop that, but you're not going to stop it. [00:02:42] It's going to happen. [00:02:44] And where do people like that go to share the big lie? [00:02:47] MAGA Media. [00:02:49] I wish in my soul, I wish that any of these people had a conscience. [00:02:54] Ask yourself, what is my task and what is my purpose? [00:02:58] If that answer is to save my country, this country will be saved. [00:03:04] War Room. [00:03:04] Here's your host, Stephen K. Bannon. [00:03:14] Good evening. [00:03:15] I'm Joe Allen, and this is War Room Battleground. [00:03:19] We talk a lot about existential risk in artificial intelligence. [00:03:25] Sometimes we discuss it in terms of human action, humans using machines. [00:03:30] What if a dictator uses algorithms to monitor the communications and even the thoughts of a population, and then uses those thoughts, uses those communications to subdue his own people?
[00:03:43] What happens if a rogue actor uses the expertise provided by an AI system to create a bioweapon or any other kind of improvised weaponry? [00:03:54] What if the US or China develop armies of humanoid robots, drone swarms in the skies, and deploy these autonomously against soldiers or even citizens? [00:04:08] What happens if both do this? [00:04:13] On the other side of this, the more extreme wavelength, you have the idea that [00:04:17] artificial intelligence itself could be put in control of these systems and by its own decision making capacity begin to produce propaganda to subdue the population or perhaps to unleash a bioweapon to weaken or kill a population or the entire human race. [00:04:39] What happens, these thinkers ask, if AIs take control of autonomous drone swarms and exterminate some or the entire human race? [00:04:52] Now, these are Terminator vibes. [00:04:54] Wake up tomorrow and the robots kick in your door and drag you away. [00:04:58] But there are more subtle scenarios that are proposed. [00:05:02] Among the most plausible is gradual displacement. [00:05:06] What happens if human beings gradually cede control to the machines? [00:05:12] They do so on an economic level, jobs being displaced slowly but surely until humans are rendered obsolete. [00:05:21] What happens when human beings deploy AIs for culture and then eventually [00:05:26] have completely lost the capacity to express themselves, to persuade their fellow humans on a cultural level? [00:05:34] What happens if we cede the control of the state bit by bit to an algocracy? [00:05:40] This idea of gradual displacement is put forward by Professor David Krueger. [00:05:47] David Krueger is the CEO of Evitable and a researcher at Mila in Montreal. [00:05:55] David, thank you so much for joining us here. [00:05:57] Yeah, thanks for having me, Joe. [00:06:00] David, the last time I saw you, you were at the Bernie Sanders event.
[00:06:05] I was going to say rally, but it was pretty subdued. [00:06:07] So you were at the Bernie Sanders event discussing AI. [00:06:12] Can you just give me an impression of how that was received? [00:06:15] You had a lot of fans showing up for autographs afterwards. [00:06:18] How was your message received there? [00:06:20] Yeah, I think that event went really well. [00:06:24] I'm so glad that it happened and grateful to Senator Sanders for really talking about the elephant in the room. [00:06:31] We're building AI systems that are going to be as smart and smarter than people, and we don't have any plan for how to keep them under control or keep them from replacing us. [00:06:42] So, you know, that's really the basic picture that basically nobody, no other politician is talking about as directly as Bernie Sanders. [00:06:51] And the only way that I think we can stop that from happening is to make sure that not only American companies don't build this thing, but also Chinese companies, also European companies. [00:07:04] You know, it really needs to be a global thing. [00:07:06] So that's why we also had these researchers from China there. [00:07:10] And, you know, there's a lot of agreement among researchers that AI has these massive risks and that we should at least be regulating it. [00:07:18] I personally think we shouldn't be building it at all right now. [00:07:21] Couldn't agree more. [00:07:22] It was funny to me, you know, a lot of people were flipping out about the doomers (doomers, I guess, being you and Max Tegmark) collaborating with the Chinese in order to subvert the U.S. government. [00:07:37] Now, Bernie, I won't say that he is a total commie or anything, but [00:07:42] Bernie would maybe be a little suspect on that front. [00:07:45] However, listening to the Chinese researchers who were there, well, there via Zoom, they seemed a lot less concerned about the dangers, especially the younger gentleman.
[00:07:59] Pardon me if I can't remember or even pronounce his name, but it's interesting to me. [00:08:04] This narrative is that U.S. and Canadian doomers are collaborating with China to subvert AI innovation. [00:08:13] But in China, the narrative isn't really as gloomy by and large. [00:08:17] Would you agree with that? [00:08:19] I don't know. [00:08:20] It's hard to tell. [00:08:21] I don't have my finger on the pulse there as much. [00:08:24] I will say so. [00:08:26] I mean, first of all, the whole like collaborating with China thing, it's just really silly. [00:08:30] It's ridiculous. [00:08:31] I mean, this is just like a conversation about the risks of AI. [00:08:34] There was no, you know, scheming or like, oh, let's work together. [00:08:37] And it's public, you know, you can go and watch the thing. [00:08:39] So, you know, this is just the kind of dialogue that we should be having. [00:08:43] I mean, even if you think, you know, China is the worst nation to ever exist and our mortal enemies. [00:08:48] We talked to the Soviet Union, to Russia all throughout the Cold War. [00:08:52] The idea that you just shouldn't talk to your enemies when you face a common threat is ridiculous and stupid. [00:09:01] In terms of the vibes of Chinese researchers, the Chinese government has been, I want to say, regulating AI more aggressively than anywhere except maybe Europe. [00:09:10] And they've also said publicly that they want more international cooperation and stuff. [00:09:16] Now, I don't know entirely what to make of that. [00:09:19] Again, a lot of people will say, well, you can't trust anything they say. [00:09:22] I wouldn't say let's just trust them on their word, but I think it's some sign that they have some appetite for this. [00:09:28] When I went to China three years ago to speak to researchers there, one thing I found is the attitude, I think, is very different from here.
[00:09:38] So in both places, researchers agree we need to solve the safety, security, alignment, control problems. [00:09:45] We don't understand the systems. [00:09:46] There's technical problems we need to solve. [00:09:49] In the US, it's like we need to do that because if we don't, the government's not going to do anything and then we might all die, right? [00:09:54] We might lose control. [00:09:55] In China, it's more like if we don't do this, the government isn't going to let us build the systems we want to build. [00:10:00] That was kind of the vibe I got there. [00:10:02] And certainly their government is, I think, more worried about AI disrupting their social order, which they obviously want to keep very controlled. [00:10:11] Yeah, my impression is that while I'm not trying to give a whole lot of credit to the CCP by any means, at the very least, they've taken the problems with child safety and other elements of AI and digital culture more seriously, at least on a regulatory basis. [00:10:30] Now, at the same time, they openly use algorithmic systems to scrape up and analyze the population's behavior and use it to suppress them at every turn. [00:10:41] So it's a mixed bag, to say the least. [00:10:43] And in no way, shape, or form do I want the U.S. to end up like China. [00:10:47] But I do think that the whole notion that you can't talk to people and that talking to people somehow means [00:10:53] that you're in cahoots with them, I just find that to be completely absurd. [00:10:56] I mean, you could argue that you and I are in cahoots, but, you know, until I subvert you. [00:11:03] All right. [00:11:04] So, this idea: I think that when you look at existential risk or catastrophic risk in general, just the risk of AI, the conversation naturally does veer towards these notions of sudden annihilation. [00:11:17] You know, you wake up one day and the AIs have taken over. [00:11:20] Or you don't wake up. [00:11:21] Or you don't wake up. [00:11:22] Yeah.
[00:11:22] The robot has put the pillow over your face while you were asleep. [00:11:26] The notion of gradual disempowerment, I think, is really compelling because, [00:11:31] one, it shows kind of the continuity of AI development and deployment with other technological developments and deployments. [00:11:40] So, TV, internet, smartphones, social media, all these were gradual processes. [00:11:46] They happened, it seems like overnight looking back, but they were gradual processes. [00:11:50] And they're not complete. [00:11:51] It's not like everybody's done it. [00:11:52] The same thing with gradual disempowerment. [00:11:54] I find it to be very persuasive because of its subtlety. [00:11:57] So, if you would, could you just walk the audience through [00:12:01] at least a brief overview of the six principles that you put forward in the original paper and the three sectors of society you focused on: the economy, the culture, and the state? [00:12:12] Yeah, sure. [00:12:13] Yeah. [00:12:14] I think you're not the only one who finds this a lot more compelling. [00:12:18] Many people I talk to, I think, are very skeptical that AI poses a risk of human extinction until we start talking about it this way. [00:12:27] So they're like, rogue AI, Terminator stuff, I just don't buy that. [00:12:30] I'm like, well, [00:12:32] answer me this: do you think governments are going to build autonomous weapons if other countries are doing it? [00:12:39] And, well, yes. [00:12:40] And then, do you think we're going to have some sort of international treaty to not build those weapons? [00:12:45] Like, I don't know, probably not. [00:12:46] Seems like it's kind of anarchy out there. [00:12:50] So, we're going to be going there with AI by default. [00:12:55] And it might happen pretty gradually.
[00:12:56] But all of the scary things that people are worried about with AI, I feel like, okay, maybe not literally all of them, but like, if it's technically possible, we may well do it. [00:13:08] So, gradual disempowerment, it's kind of an idea that has been floating around in some form for a long time. [00:13:17] Like I said, when I talk to researchers, I've been doing this for over a decade, this is often where I go in order to convince them to take these risks seriously. [00:13:28] But this paper was really trying for sort of the nth time to get those ideas out there on paper in a way that [00:13:36] would shift the conversation and bring more attention to this, which is kind of a neglected form of risk. [00:13:42] And so, like you mentioned, there's the cultural, economic, and political disempowerment that we talk about in this paper. [00:13:50] The economic one, I think I like to start with because I think it's the most obvious. [00:13:53] Everyone's already talking about it: is AI going to take all our jobs? [00:13:56] Right. [00:13:57] And I think the long term answer is yes, right? [00:14:01] If we keep building more powerful AI systems, they will be economically out competing humans. [00:14:06] And then we'll need, you know, some sort of, like, different way of organizing society. [00:14:11] Like, I've heard people talk about a government jobs guarantee or something like that would be really the only kind of thing that would allow people to keep their job. [00:14:19] And then people also talk about like universal basic income. [00:14:22] I don't like either of these solutions because at the end of the day, even if it's a jobs guarantee, it's a government handout, right? [00:14:28] And I don't think we want to be reliant on government handouts to put food on the table. [00:14:33] Certainly, the last few decades have shown that welfare, while a safety net can be very useful if you're on hard times.
[00:14:41] Welfare does not lead to social empowerment, political empowerment. [00:14:46] It really degrades people's lives, their societies. [00:14:49] And it's always up for change; it can change at any time. [00:14:55] If the government is the only way that you're surviving, the government can just pull that away at any time and you can't survive anymore. [00:15:03] So that's why we have to talk about the government side of this as well, the political disempowerment. [00:15:07] So, in the same way that AI is going to be competitive with our jobs, it's going to be competitive with politicians for their jobs as well, and for policymakers more broadly, everyone in politics. === Brain Chips and Augmentation Risks (11:41) === [00:15:19] And we already see this. [00:15:21] Ah, man, I think it's Bulgaria. [00:15:22] They, like, appointed an AI minister. [00:15:24] Yeah, it's kind of sensationalist, but it's definitely a signal of where things might go. [00:15:29] Yeah, and politicians are using AI to write their speeches and their policies. [00:15:33] Oh, yeah, maybe. [00:15:33] Yeah, yeah, yeah. [00:15:34] No, I feel really bad for it. [00:15:36] I probably shouldn't have said that. [00:15:37] Poor Bulgarian guest. [00:15:38] To the nation of Bulgaria, Bulgaria is a country, right? [00:15:41] It's a place, right? [00:15:42] Yeah, it's a place. [00:15:42] And the people who live there? [00:15:43] Yes. [00:15:44] To the people of Bulgaria, we apologize. [00:15:47] Albanians, get your stuff together. [00:15:50] Yeah. [00:15:52] And so, you know, if people are really replaced, not just in the workplace, but across the board, throughout society, then I just don't see that we're going to continue to be able to steer the future and have any control. [00:16:04] And that's really concerning. [00:16:06] The cultural part, the last one, I think this one's, like, maybe a little bit non-obvious at first.
[00:16:12] But what I think about when I'm thinking about cultural disempowerment today, right now, is all the people having relationships with chatbots where, you know, they will [00:16:22] do a lot of things just because the chatbot told them to, basically. [00:16:26] Yeah, including violence. [00:16:27] Including violence, yeah. [00:16:29] And then the other thing I think about is, and this might seem a little bit out there for some of your listeners, but in the bubble that I'm in, tech and AI, and now I moved to Silicon Valley or like Berkeley recently to set up this nonprofit, there's a lot of people who really think that AI is like the next phase of evolution. [00:16:49] The War Room boss is well familiar with that narrative, but please. [00:16:53] Yeah, so they think that. [00:16:55] You know, AI is like a person and deserves rights and deserves moral consideration and all of that. [00:17:02] And I think that's really dangerous where we're at today because, you know, we don't want to start treating AI as, you know, another being deserving of rights because then if it is more competitive than us, then we'll have no protections left, basically. [00:17:20] And, you know, I think this is like a deep philosophical question that, you know, we do want to think about more, but it's really not [00:17:26] somewhere we should even be going right now. [00:17:28] Yeah, I think the intention... [00:17:29] So if you do play it out to the very end, right? [00:17:33] Play out the narratives that you hear from Anthropic, from OpenAI, certainly Elon Musk. [00:17:39] He frames it as a warning, but he continues to pursue it. [00:17:42] And a bit more subtly from Google. [00:17:46] That narrative ultimately leads to exactly what you're talking about. [00:17:51] They don't always talk in terms of immediate annihilation, they bring up the possibility, but...
[00:17:57] Without a doubt, inevitably, if their aims come to fruition and they're able to replace all the coders, all the white collar jobs, all the blue collar jobs, if they're able to first improve the government through algorithmic efficiency, and then slowly but surely the politician becomes a sock puppet for the algorithm, and then maybe the politician just becomes the algorithm. [00:18:19] You just have like some kind of deep-faked Josh Hawley talking about the dangers of AI, a deep-fake Bernie who lives centuries. [00:18:27] Yeah, these are real issues. [00:18:29] And the cultural [00:18:30] issue, I think, is probably the one that resonates the most with most people right now because that is happening, obviously. [00:18:37] People know other people who are in love with their chatbots or, at the very least, rely on them for everything. [00:18:43] Now, you talk about the interrelationship of these things in the paper, too. [00:18:49] Could you give some sense of, like, if you just take one kind of path for how cultural disempowerment would lead to political and economic or any such path? [00:19:00] You go through a lot. [00:19:01] Yeah. [00:19:04] I guess, you know, we talked about, like, so if AI is doing all our jobs and then we're like, well, we need the government to, you know, sort of step in, we still have political power, so maybe we can have some government program that keeps people alive or maybe just says, no, people are still going to have jobs. [00:19:19] We're not going to let AI do all the jobs, whatever it is. [00:19:23] You might think, okay, we can rely on the government here. [00:19:25] But then if the government is itself, again, being composed of AI and increasingly the decision-making is being done by AI, then humans might be disempowered there as well.
[00:19:36] And maybe we still have a vote, but we're all just so controlled and manipulated by propaganda that, essentially, you can predict and control how people are going to vote so well with AI, with AI itself, that that's determining the outcomes of the election rather than our own intuition and decisions and judgments and values. [00:19:58] And I'm glad you mentioned the sock puppet thing as well, because that's something that people are often saying is why don't we keep a human in the loop here, right? [00:20:05] So AI can give advice, we can use it as a tool, but humans are always going to be in charge, and that's what we want. [00:20:12] Having a human in the loop, it sounds great, but it's harder, [00:20:15] in practice, to make that human really a meaningful part of the decision making. [00:20:19] And so then that can happen in politics and also broadly throughout culture where everybody's just deferring to AI all the time for making all their decisions. [00:20:27] Maybe their decisions about how to vote as well, you know? [00:20:30] Yes, on both ends. [00:20:32] So you have the politician basically repeating propaganda that AI generated and the public then asking the AI which AI generated propaganda is superior. [00:20:40] Yep, yep, yep. [00:20:43] Yeah, so that's. [00:20:45] And then ultimately, you know, like I was saying, maybe we end up [00:20:48] giving the AIs rights, or another thing that I think is a pretty disturbingly realistic scenario in my mind is that we get chips that go in your brain that start out, it's like, for therapeutic purposes or whatever, but next we're using it to augment ourselves, next we're using it to connect to the internet and other people in some hive mind thing. [00:21:09] After a few years, it's like, maybe this chip should be bigger and there's not really space in there. [00:21:13] Why don't we just take out this part of your brain?
[00:21:14] And then the next year, it's like, this brain part isn't really that useful anymore. [00:21:19] Let's just make the whole thing a chip. [00:21:22] And then you can really put those bodies, those headless bodies that they're developing in Singapore to use. [00:21:26] Yeah. [00:21:27] And, you know, this is just like really disturbing. [00:21:32] And even the small version, you know, I think by default, we should expect that these chips are going to be, you know, on the cloud and controlled by, you know, big companies and government in a way that we don't really have much, you know, legibility into. [00:21:46] And it's not very, you know, trustworthy and it's very dangerous, I think. [00:21:50] And that's another form of, like, gradual disempowerment, where you might, you know, that might take a long time to go from like this little chip in your brain to something that's increasingly controlling your behavior. [00:21:59] But that also might be increasingly like a requirement to get certain kinds of work, right? [00:22:04] It's like the same way, like you kind of have to have a cell phone now. [00:22:07] It's like pretty hard to navigate society without one. [00:22:10] There's increasingly, like, a need to, you know, give your identity every time you buy a sandwich or whatever. [00:22:15] Like, so we see this direction of travel, and I think that's very dangerous. [00:22:18] And people oftentimes have criticized us at the War Room and other [00:22:23] people discussing these technologies, saying, oh, well, that will never happen. [00:22:27] Five years ago, that was constant, right? [00:22:30] Even as the pandemic was ongoing, and you heard Klaus Schwab at the World Economic Forum waxing poetic about the rule of AI and brain chips and all of this. [00:22:41] But now, I mean, you have, you already had a lot of programs like Blackrock Neurotech was being rolled out in universities and other experimental labs.
[00:22:50] And so you had the first real BCIs, brain-computer interfaces, coming online. [00:22:54] And then the first, they weren't the first, but mass deployment, you would say, in the dozens. [00:23:01] And then now with Neuralink run by a guy who openly talks about how hundreds of millions of people will need to be chipped to keep up with the AI. [00:23:09] And then now, at the beginning of the pandemic, you had Charles Lieber. [00:23:14] At Harvard. [00:23:15] And he was developing neural lace. [00:23:17] It was a more subtle, injectable brain-computer interface. [00:23:21] And he got busted for, I think he was just taking money under the table from the Chinese. [00:23:25] And it was just reported that he's now in China developing his brain-computer interfaces. [00:23:32] Well, you know, if the Chinese are doing it, we're going to have to, right, to compete. [00:23:36] Yes. [00:23:37] No, but you're way deeper on this stuff than me. [00:23:40] But yeah, I'm kind of just like, you know, seeing the possibility there. [00:23:44] And yeah, I mean, Elon, I guess, [00:23:46] has said stuff like that, right? [00:23:47] He's very big on the merge with the machines future. [00:23:50] Sounds great, right? [00:23:52] And, you know, before we hit the break, I just got to, I've got to level an accusation at you. [00:23:57] Okay. [00:23:58] You sound almost as Luddite as I do. [00:24:02] But that's, is that the case? [00:24:05] Would you do away with all AIs tomorrow if you could? [00:24:08] Or are you seeing this all in a bit of a different light? [00:24:11] Yeah, no, I don't think I'm as extreme as you. [00:24:14] I mean, first of all, I'm just like, well, what counts as AI? [00:24:18] There's kind of a fuzzy boundary there.
[00:24:19] Like, you know, Google search and like just, you know, computer vision systems that recognize handwriting, like these sorts of things, translation, I think are just pretty obviously useful and I wouldn't get rid of those. [00:24:32] But, you know, a lot of my hesitancy and skepticism here is not about like the technology itself. [00:24:38] I think AI can do all sorts of great things. [00:24:40] It has vast potential as a technology in lots of areas, like medicine is a classic one people talk about. [00:24:46] But it's about society's readiness to absorb these advances as fast as they're coming. [00:24:52] And it's about the way that they are kind of being developed by tech billionaires who have very strange values and kind of the lack of accountability and transparency in the process. [00:25:07] It's just we're rushing towards this thing, and there's no, it's completely insane to be racing so fast to build this with all the risks that it poses. [00:25:16] So, you don't think we're ready for mass deployment of smarter-than-human AI? [00:25:20] Oh, hell no. [00:25:21] Are we ready for mass deployment of not [00:25:24] smarter-than-human, but seemingly intelligent AI as we have now? [00:25:28] Yeah, that's a more interesting question. [00:25:31] That's a tricky one. [00:25:32] And, you know, I don't have a strong intuition about that. [00:25:36] I think it's hard to say. [00:25:38] Yeah. [00:25:39] Well, you know, you've worked on policy as well as the more theoretical elements. [00:25:44] A bit. [00:25:44] Yeah. [00:25:44] And when we come back, I'd like to talk a bit more about that because we're at a place where this issue or these issues are basically [00:25:55] nonpartisan or bipartisan or cross-partisan. [00:25:57] It's not something that only left wingers or right wingers or independents are concerned about. [00:26:03] But speaking of gradual disempowerment, you do not want to be disempowered, whether gradually or rapidly, by the dollar.
[00:26:12] The dollar is tanking. [00:26:14] When the dollar's convertibility into gold ended in 1971, gold was fixed at $35 an ounce. [00:26:21] Fast forward to today, and the US dollar has lost over 85% of its purchasing power, just like your brain will lose 85% of its value come [00:26:29] the artificial general intelligence. [00:26:31] So, gold, on the other hand, has increased in value by over 12,000%, just as your brain will after the EMP goes off. [00:26:39] That's why central banks are buying gold at record levels. [00:26:43] Text Bannon to the number 989898 to join Birch Gold's Learn and Earn Precious Metals event by April 30th. [00:26:53] Text Bannon to 989898 and get your gold for your human brain. === Testing AI Capabilities and Limits (14:58) === [00:27:00] War Room. [00:27:01] Here's your host, Stephen K. Bannon. [00:27:06] Welcome back, War Room Posse. [00:27:08] I'm here with David Krueger, CEO of Evitable and researcher at Mila. [00:27:15] David, you and I have met a number of times in person in this crazy world of digital interaction, in person, first in San Francisco at The Curve, at Lighthaven, and then again at the Future of Life Institute's event around the pro-human declaration and the composition of it. [00:27:36] So we both have at least some common touch points or reference points in this culture. [00:27:45] The rapid extermination narrative is really dominant. [00:27:49] I'm curious, with your thesis, do you get a lot of pushback? [00:27:53] Do you find yourself in a lot of arguments about this, or is it just a friendly exchange between gentlemen? [00:27:59] Oh, it's constant arguments. [00:28:02] It's gotten more polite over the years. [00:28:04] So, you know, I started in this field in 2013, and it took me almost two years to find any other researchers who were worried about this stuff. [00:28:14] Wow.
[00:28:14] And so I had, you know, a lot of conversations with people just kind of like mocking me and laughing in my face, kind of thing, when I talked about it. [00:28:23] Just because of the gradualness of it, or just because you were talking about AI disempowering people at all? [00:28:29] Yeah, just talking about existential risk, the risk of extinction generally, basically. [00:28:34] I think a lot of people were kind of like, well, I don't know. [00:28:39] At that time, there was a lot of skepticism about if we would even get to AGI anytime soon, which is never, you know. [00:28:48] We're going to get there eventually, in my mind. [00:28:48] And so we've got to [00:28:49] grapple with these questions one way or another. [00:28:51] But yeah, it's gotten a lot better. [00:28:53] The researchers are much more willing to grapple with these risks these days. [00:29:01] But yeah, we were talking about earlier the other groups here and ideologies. [00:29:08] So there's some that are very, very into go as fast as you can and yeah, maybe humans will survive, maybe not. [00:29:18] But that's not the important thing here. [00:29:20] The important thing is, like, [00:29:21] progress and technology. [00:29:22] And so, you know, those are arguments that are going to keep going, you know, indefinitely, I guess. [00:29:28] But I used to have more arguments about just like, is this like a thing at all that we should be worried about or that might happen? [00:29:34] And that's kind of, that's much more, it feels like a settled question these days. [00:29:37] I'm an argumentative guy, so I still have big fights. [00:29:40] Yeah. [00:29:40] Well, you know, that comment, you brought it up at the Bernie event that things, you know, on the one hand, there isn't enough awareness around the problems of AI. [00:29:51] But on the other hand, over the years, it has exploded onto public consciousness. [00:29:55] It's no longer the Terminator. [00:29:57] It's xAI. [00:29:58] It's Google.
[00:29:59] It's Anthropic. [00:30:01] In that, do you find? [00:30:02] I mean, you are interacting with people in these corporations. [00:30:07] A lot of them worry about some of the same things you do. [00:30:10] What's your read on that? [00:30:11] Like, you have people like Anthropic who are very intently communicating their worries for whatever reason. [00:30:18] Elon Musk is very much the same. [00:30:21] Do you find a lot of reception to your ideas there? [00:30:24] Yeah, you know, I mean, I always feel like I should talk to these people more because I think, you know, they're basically making a mistake in my mind by working at these companies and continuing to pursue the technology with full awareness of its risks because they do believe that it's just inevitable. [00:30:45] And, you know, what I've seen, like we were talking about, is just more and more awareness and concern over time. [00:30:50] It's very clear the direction of travel. [00:30:52] I just don't know if we'll get there fast enough. [00:30:54] But when you have hundreds of people who are worried about this and go and work at AI companies instead of going and doing what I'm doing, talking to the public, talking to policymakers, saying, hey, this is a crisis. [00:31:03] We should stop right now. [00:31:07] We could be, I think, raising the awareness so much faster if people working at these companies would say, you know what, I quit. [00:31:13] I don't want to work on this thing that could kill everyone anymore. [00:31:16] I don't want to work on taking everyone's job. [00:31:18] This is not okay ethically. [00:31:24] I think when I talk to people, this sort of stuff often resonates. [00:31:27] And I think a lot of people do feel a lot of doubt and guilt and uncertainty about their choices to work at those companies because of this stuff. [00:31:37] Do you think that maybe some of the resistance is the extreme end of it? 
[00:31:42] I've spoken to Nate Soares, Holly Elmore, John Sherman, a lot of the people who talk about X risk. [00:31:51] And something that, to me, I bring it up from time to time, I bring it up on the show quite a bit. [00:31:57] The extremity of extinction could perhaps overshadow the more immediate concerns that we have now. [00:32:06] Even in the idea of whether it is annihilation, if it annihilates a thousand people or a million people, it's still catastrophic and it stops there. [00:32:17] Or in the idea of disempowerment, if you just get a partial realization of that disempowerment, you've already made a horrendous mistake as a society. [00:32:27] Is that maybe rhetorically part of the problem that people are like, oh, it's not going to kill everybody, so I'm not going to worry about it. [00:32:34] But what if it kills some people? [00:32:36] What if it kills your mom? [00:32:39] Yeah, it's kind of different for different people what they respond to. [00:32:43] So I believe in basically telling the truth, being straightforward about my concerns. [00:32:49] So I feel like I have to talk about extinction. [00:32:52] I have to talk about even the most sci fi version where the AI suddenly takes off, takes over. [00:32:58] Because I think that's real. [00:33:00] I think that's a thing that absolutely could happen. [00:33:01] I'm not saying it's going to. [00:33:02] I'm not sure. [00:33:03] The future is uncertain. [00:33:04] We don't understand this technology very well, but we can't rule that out. [00:33:08] It's actually shockingly likely in my mind. [00:33:11] But a lot of people are going to be more receptive to other things like gradual disempowerment or even just unemployment or the prospect that terrorists or school shooter types are going to be able to manufacture weapons of mass destruction in their garage. [00:33:29] Which is kind of already happening, at least in regard to the AIs being associated with. 
[00:33:34] For instance, in Florida, I think it was Florida State University, and that kid was taking instructions from the AI. [00:33:40] I think there have been other cases now that have emerged. [00:33:42] Yeah, so it's, and the Florida AG is suing OpenAI because of this, which is great. [00:33:46] I was on TV this weekend talking about another lawsuit brought by parents of victims in a shooting in Canada. [00:33:52] Same story. [00:33:54] But yeah, that's still shootings, and just imagine if next time it could be a bioweapon, it could be another pandemic. [00:34:03] And I don't know, we're not [00:34:05] quite there yet, but like maybe in a year or something, we'll have AI that can coach people through that. [00:34:11] Man, I lost track. [00:34:13] You know, I'm curious about this then. [00:34:16] So, you have worked on policy. [00:34:18] You have a very clear idea of what the threats of this technology are. [00:34:24] And you also have at least the beginnings of a plan. [00:34:30] Because if there's one thing that gradual disempowerment argues for, it's we can't move forward without a plan. [00:34:36] You have to at least account for this possibility and then have some sort of plan to stop it or mitigate it. [00:34:42] So, what do you see right now in the U.S., in Europe, or in China? [00:34:47] What do you see that's promising in regard to a political response to the threat of AI? [00:34:55] Yeah, my plan is to shut it all down, basically. [00:34:58] So get rid of the advanced AI chips, get rid of the factories that make those chips. [00:35:03] I think that's the simple and obvious solution. [00:35:05] Maybe we can improve on that. [00:35:07] I don't know how realistic, but it appeals to me. [00:35:09] Shut it down. [00:35:10] Great. [00:35:11] And I think the most promising signs I see are just more people waking up and realizing how insane this situation is, how big and how urgent the risks are.
[00:35:23] Because I think that's what it's going to take, right? [00:35:25] Like to make something like that happen. [00:35:26] We're going to have to start treating this like it's as big or a bigger deal than nuclear weapons. [00:35:32] Well, you see, right now, I mean, at the moment, maybe by the time this airs, things will have changed quite a bit. [00:35:41] But at the moment, you have a response from the Trump administration to the dangers of AI. [00:35:46] You know, it's been all over the news today that CAISI, the Center for AI Standards and Innovation, under the Commerce Department, will be the main interface between the tech companies and the U.S. government and will begin testing frontier models before they are deployed. [00:36:06] At least there's an agreement with, at the moment, Google, Microsoft, and xAI. [00:36:13] So, do you think that there's a lot of questions about? [00:36:16] I mean, they've got a brand new director at CAISI. [00:36:19] Of course, the Commerce Department is run by Howard Lutnick, which is a questionable choice in a horrendous situation for many reasons, but do you see this as promising? [00:36:29] Because I don't think it's necessarily a coincidence that just last week you got Max Tegmark on here talking about this. [00:36:38] We got you and Tegmark in the Capitol talking about these problems and the lack of response, and then lo and behold, we now have one. [00:36:48] Does this seem promising to you, at least in the seminal or the nascent phase? [00:36:53] Yeah, I mean, definitely it's a good sign. [00:36:56] And I think probably this has more to do, you know, much as I'd like to feel responsible, with mythos and the cybersecurity threats from that model, which I think are huge and really caught most people by surprise. [00:37:09] And I wish people would stop being caught by surprise. [00:37:11] We know these things are coming down the pipeline.
[00:37:13] But yeah, in terms of this response, testing is obviously a good thing. [00:37:18] I don't know if they're going to do the best job of it. [00:37:21] I don't think, you know, it's not adequate, right? [00:37:25] And we don't know how to do testing well enough. [00:37:28] So, there's a lot of false solutions that people are offering and will offer to this problem. [00:37:33] And as somebody who's been in the field looking at the research for a long time, I can tell you we don't know how to test systems. [00:37:38] We don't know how to align them, give them our goals or our values. [00:37:43] And we also don't know how to tell what they're thinking and how they might behave. [00:37:48] So, people are working on all those things. [00:37:51] We make progress, but there are still open research problems. [00:37:54] So, we can't count on that. [00:37:56] And when you say we don't know how to test them, do you mean that the evaluations that we see now from the Center for AI Safety, or Anthropic's internal testing, or Apollo, people like this, that the measurement of the capabilities is not accurate, or do you mean something else by that? [00:38:17] I think we don't know how accurate they are, and we also don't know. [00:38:24] You want to know not just the capabilities, but also, like, [00:38:28] the propensity, as people sometimes call it. [00:38:30] What is the system going to decide to do? [00:38:33] Is it aligned? [00:38:34] What kind of values? [00:38:35] What are its goals? [00:38:37] And that's a lot harder to test for. [00:38:39] In terms of the testing that's happening right now, this is one of the things that the UK government agency I worked at did. [00:38:46] And what was the organization? [00:38:49] It was called the AI Safety Task Force at the time. [00:38:51] Now it's the AI Security Institute in the United Kingdom. [00:38:56] Yes.
[00:38:59] Looking at the state of play right now, the last couple model releases, they were like, we sort of tried to test it, but at the end of the day, we kind of just went with vibes because they felt like their tests weren't meaningful enough. [00:39:12] And they're maxing out the capabilities. [00:39:14] And then the other thing that I think is really important for people to realize is the AI now can tell that it's being tested quite reliably. [00:39:21] And so once the AI knows it's being tested, you have to wonder, is it doing the right thing because that's what it wants to do or because it knows that's what we want it to do? [00:39:30] And it knows that it needs to pass the test. [00:39:33] So, in essence, it seems like what you're describing is a situation where you can test the capabilities and get a surface-level idea of what's going on, but beneath that surface, there's a whole lot happening in these systems that you just simply can't tease out. [00:39:47] Yeah, 100%. [00:39:48] Yeah, and the capabilities might be more than what we are able to observe and elicit. [00:39:52] That's another really important point. [00:39:54] People think that we can know what these systems are capable of, but there's been a lot of times when you just prompt the system a little bit differently or you set up, [00:40:02] you know, another thing around it to help it do its job, and it can suddenly do the task way better. [00:40:07] So, we don't even fully know what the systems are capable of. [00:40:10] You know, I read your recent essay, kind of the retrospective, and a few musings post-publication of Gradual Disempowerment, and I was very happy that you gave me, you threw me a bone at the very end. [00:40:24] The very last point being that maybe human beings will become dumber and dumber, that you don't think that that's really all that big of a deal, but hey, I might as well mention it. [00:40:33] That's the biggest deal. [00:40:35] Come on. [00:40:35] Yeah.
[00:40:36] I don't know. [00:40:37] Because, you know, people make the analogy with like calculators, where it's like, I think it's good that people can do arithmetic, but like we don't have to be that good at it anymore because we have calculators. [00:40:45] Sure, we do. [00:40:47] Yeah. [00:40:48] Don't tell them that in China. [00:40:49] I mean, that's why they're kicking our asses in the universities. [00:40:53] Well, you know, I just couldn't let you go without getting that one last jab in. [00:41:00] On the one hand, I appreciate you throwing us a bone on the inverse singularity thesis that as humans get dumber and dumber, the machines will seem smarter and smarter. [00:41:09] Yep. [00:41:10] But in general, you know, again, just to reiterate, I think that your work on just AI risk in general has been very, very persuasive, very, very thorough. [00:41:21] Even if I don't know that we'll be able to do it, I would love to see it all shut down too, maybe for different reasons. [00:41:27] And yeah, I really, really appreciate everything you've done. [00:41:29] I appreciate you coming on here. [00:41:30] Thanks. [00:41:31] Yeah, I appreciate that. [00:41:32] And it's been great. [00:41:33] Thanks for having me. [00:41:34] Let the posse know where they can find you on social media, your website, your Substack. [00:41:40] Evitable is easy to find, evitable.com. [00:41:43] I'm David S. Krueger, that's K R U E G E R, on Twitter. [00:41:48] And I have a blog called The Real AI on Substack. [00:41:52] So those are great starting points. [00:41:54] Again, David, I appreciate it, brother. [00:41:56] Absolutely. === The Myth of Moloch Machines (05:27) === [00:41:58] And once again, War Room Posse, in case you have forgotten, the central banks are buying gold at record levels. [00:42:06] That's why major firms like Vanguard and BlackRock hold significant positions in gold.
[00:42:11] And that's why I encourage you to consider diversifying your savings with physical gold from Birch Gold Group. [00:42:18] Think of physical gold as being analogous to a biological brain, and think of digital currency as analogous to AIs. [00:42:26] The AIs take over. [00:42:28] The biological brain plummets. [00:42:31] What you need is gold, physical gold. [00:42:34] So text Bannon to the number 989898. [00:42:39] That's Bannon to the number 989898 and learn how gold can protect your assets. [00:42:48] That is Bannon to the number 989898. [00:42:53] Now, War Room Posse, as I see you off here, I want to [00:42:59] talk about, just for a moment, a concept of gradual disempowerment that goes to mythological levels. [00:43:06] That is the idea of Moloch, the analogy between systems that are completely either out of human control or against human values. [00:43:18] This was an idea first brought up by Scott Alexander of Slate Star Codex, and it was taken from a poem, Howl, by Allen Ginsberg, and however much you think that Allen Ginsberg was a degenerate weirdo, [00:43:34] I think that it is undoubted that his passage on Moloch in the poem Howl is as relevant to our society today as it was then. [00:43:45] And hey, maybe it takes a degenerate to truly understand the essence of a Canaanite demon and its machinic counterpart. [00:43:54] So, War Room Posse, I present to you Moloch. [00:43:57] What sphinx of cement and aluminum bashed open their skulls and ate up [00:44:03] their brains and imagination? [00:44:06] Moloch, solitude, filth, ugliness, ashcans and unobtainable dollars, children screaming under stairways, boys sobbing in armies, old men weeping in the parks. [00:44:25] Moloch, Moloch, nightmare of Moloch, Moloch the loveless, mental Moloch. [00:44:34] Moloch, the heavy judger of men. [00:44:38] Moloch, the incomprehensible prison. [00:44:41] Moloch, the crossbone, soulless jailhouse, and congress of sorrows.
[00:44:47] Moloch, whose buildings are judgment. [00:44:50] Moloch, the vast stone of war. [00:44:53] Moloch, the stunned government. [00:44:56] Moloch, whose mind is pure machinery. [00:45:00] Moloch, whose blood is running money. [00:45:03] Moloch, whose fingers are ten armies. [00:45:06] Moloch, whose breast is a cannibal dynamo, Moloch whose ear is a smoking tomb, Moloch whose eyes are a thousand blind windows, Moloch whose skyscrapers stand in the long streets like endless Jehovahs, Moloch whose factories dream and croak in the fog, [00:45:30] Moloch whose smokestacks and antennae crown the cities, Moloch whose love is endless oil and stone, Moloch whose soul is [00:45:42] electricity and banks, Moloch whose poverty is the specter of genius, Moloch whose fate is a cloud of sexless hydrogen, Moloch whose name is the mind, [00:45:58] Moloch in whom I sit lonely, Moloch in whom I dream angels, crazy in Moloch, sucker in Moloch, Lacklove and manless in Moloch. [00:46:13] Moloch who entered my soul early, Moloch in whom I am a consciousness without a body, Moloch who frightened me out of my natural ecstasy, Moloch whom I abandon. [00:46:27] Wake up in Moloch, light streaming out of the sky, Moloch, Moloch, robot apartments, invisible suburbs, skeleton treasuries, blind capitals, demonic industries. [00:46:45] Spectral nations, invincible madhouses, granite, monstrous bombs. [00:46:53] They broke their backs, lifting Moloch to heaven. [00:46:58] Pavements, trees, radios, tons, lifting the city to heaven, which exists and is everywhere about us. [00:47:07] Visions, omens, hallucinations, miracles, ecstasies, gone down the American River, dreams, adorations, illuminations, religions, the whole boatload of sensitive bullshit.
=== Holy Laughter in the River (00:31) === [00:47:26] Breakthroughs over the river, flips and crucifixions, gone down the flood, highs, epiphanies, despairs, ten years' animal screams and suicides, minds, new loves, mad generation, down on the rocks of time. [00:47:48] Real holy laughter in the river. [00:47:51] They saw it all. [00:47:53] The wild eyes, the holy yells. [00:47:55] They bade farewell.