Bannon's War Room - WarRoom Battleground EP 991: HOLLY ELMORE: Save the Human Race! Pause AI Aired: 2026-04-21 Duration: 47:59 === Primal Scream of Dying Regime (14:47) === [00:00:03] This is the primal scream of a dying regime. [00:00:07] Pray for our enemies, because we're going to go medieval on these people. [00:00:12] You're not going to get a free shot on all these networks lying about the people. [00:00:17] The people have had a belly full of it. [00:00:19] I know you don't like hearing that. [00:00:20] I know you try to do everything in the world to stop that, but you're not going to stop it. [00:00:23] It's going to happen. [00:00:24] And where do people like that go to share the big lie? [00:00:27] MAGA Media. [00:00:29] I wish in my soul, I wish that any of these people had a conscience. [00:00:34] Ask yourself. [00:00:35] What is my task and what is my purpose? [00:00:38] If that answer is to save my country, this country will be saved. [00:00:45] War Room. [00:00:45] Here's your host, Stephen K. Bannon. [00:00:54] Good evening. [00:00:54] I am Joe Allen, and this is War Room Battleground. [00:00:58] For the last year and a half, artificial intelligence has exploded onto American politics. [00:01:04] On the one side, you have accelerationists who are hell-bent on developing and deploying AI, and there seems to be absolutely no end to their reckless thirst to alter the entire trajectory of the human race. [00:01:21] On the other side, you have those, maybe you would consider me to be in that camp, those who would prefer to see all of it [00:01:27] stopped. [00:01:28] If you could push a button right now and turn off the entire AI industry, I would be all for it. [00:01:35] But of course, these are perhaps unrealistic dreams. [00:01:38] And in the spectrum between these extremes, you have every position imaginable: people who are mildly concerned about artificial intelligence, the expansion of data centers, child protection, deepfakes, and people who are extremely concerned, but not to the point that they would give up the entire industry. [00:02:01] In the AI safety community, you have concerns such as bioweapons being developed by amateurs or even rogue states, concerns like AI going rogue. [00:02:12] What happens if you create a superhuman artificial intelligence that cannot be controlled? [00:02:19] One of the more sane arguments, I would say, is to simply pause this race. [00:02:24] The race dynamic between American corporations, and American corporations against China, means that no one [00:02:33] is incentivized to pause. [00:02:37] And yet it is ultimately up to human beings to make these decisions. [00:02:42] Here to talk about pausing AI is Holly Elmore, Executive Director of Pause AI. [00:02:49] Holly, thank you very much for joining us here. [00:02:51] That's Pause AI US, just to be clear. [00:02:53] Pause AI US as opposed to Pause AI Mars. [00:02:56] That's a positive. [00:02:56] Pause AI, the whole movement, you know, it's a worldwide thing. [00:02:59] Yes. [00:03:00] So, Holly, if you would, maybe just give us a sense of what Pause AI is as an organization, what your goal is, and what your tactics are to achieve that goal. [00:03:11] So, we're a grassroots organization. [00:03:13] We're focused on using the democratic process to connect the already 70%, looking at different polls, sometimes the number is even higher, who want something like a pause. [00:03:22] They want a slowdown, they want regulation on AI. [00:03:26] That's already what the people want.
[00:03:27] So, we're connecting them to their representatives [00:03:30] to hear that message, and then demonstrating in other ways that just let people know [00:03:34] there are a lot of people out there who wish they could pause. [00:03:36] We hear very commonly, like, oh, wouldn't that be nice? [00:03:38] You know, kumbaya, we could pause. [00:03:40] But we really can. [00:03:40] We've done things like this before. [00:03:42] This is why we're not dead from nuclear weapons: because there have been international treaties to control the proliferation and the use of nuclear weapons. [00:03:50] When you say the democratic process, you've been here in D.C. for a while. [00:03:53] You're talking to politicians. [00:03:55] You actually had a protest out in front of the White House, right? [00:03:57] Or a demonstration. [00:03:58] The Capitol, yes. [00:03:59] The Capitol, yeah. [00:04:00] So tell me a little bit about your experience here in D.C. and what you hope to achieve here. [00:04:05] Honestly, I hadn't done anything like what we did this week. On Monday and Tuesday, [00:04:08] we met with 75 congressional offices, including 25 Senate offices, so that's 25% of the Senate. [00:04:14] And we just brought constituents who are concerned about this to talk to their representatives. [00:04:20] And that went so much better than I even thought it was going to go. [00:04:24] It was really impressive how much the staffers of these offices, and sometimes the members themselves, wanted to know more about what we were saying, about what was possible. [00:04:32] They wanted to see these polls. [00:04:34] You know, there were a lot of things. [00:04:35] They're busy, they haven't heard of a lot of things. [00:04:37] You really can make a difference by bringing it [00:04:38] to their attention. [00:04:39] And an in-person meeting makes a big point. [00:04:42] And then the Capitol demonstration goes even further. [00:04:45] It's a big conscientious test. [00:04:47] It's so hard to put on a lawful demonstration on Capitol Hill because the security is high. [00:04:52] But it shows that, look, there are this many people. [00:04:55] We had something like 80 people who support pausing AI. [00:04:58] They've got lobbyists in their ear all day from the AI industry who are telling them, oh, you can't, it's too hard, [00:05:04] and this is so unpopular. [00:05:06] The people love AI. [00:05:07] And the people really don't love AI. [00:05:09] And so if you get that message to them, that makes a difference. [00:05:12] So, here in America, you've got four major frontier companies who are pushing this race forward. [00:05:18] You've got Google, OpenAI, xAI, and Anthropic. [00:05:23] I oftentimes joke that Meta is an upstart, just jogging along behind them. [00:05:30] I think it was our colleague Jeffrey Ladish's joke that Mark Zuckerberg should get the Nobel Peace Prize for slowing down the race to superintelligence. [00:05:40] Not by buying up all the GPUs? [00:05:41] No, I think, if I took the joke correctly, just because they suck. [00:05:45] But at any rate, this race is driven by the notion that if we, Google, or we, xAI, or we, Anthropic, don't create AGI first, don't arrive at the finish line first, then the bad guys will. [00:06:00] People who are less responsible, less trustworthy. [00:06:03] When you look at that incentive structure, how do you see pausing AI being possible? [00:06:10] How would it even really occur?
[00:06:12] Would it require some sort of governmental intervention, or is it possible to let these guys [00:06:18] basically determine our fates on their own? [00:06:21] I think it has to require the people of the world cooperating. [00:06:25] And the legal way we do that is through governments. [00:06:28] We have infrastructure for governments to come together. [00:06:30] Again, we already have treaties on other destructive technologies, run by the UN, for example, or by other intergovernmental organizations. [00:06:40] So, with a pause, it's a very broad ask. [00:06:44] It's just the idea of what we want. [00:06:45] There are actually so many ways to realize a pause. [00:06:48] Also, a pause could happen without us doing it on purpose. [00:06:51] If, for instance, there is some problem with the compute supply chain, [00:06:55] that would put in place a de facto pause, because I think what a lot of people don't realize is that there's a lot of effort going into making each new model. [00:07:02] It seems like it's happening all the time, and it is, but every new model, like Mythos, Claude Mythos, that just was sort of released, takes exponentially more compute every time, more huge resources. [00:07:14] That's why these big data centers are being built. [00:07:17] And so if something got in the way of the compute supply chain, if something got in the way of the price of energy that this is going to take, there would be lots of ways that that project could be stopped in its tracks. [00:07:28] It's a huge project. [00:07:29] It's, by many lights, the largest human project ever. [00:07:33] And it's not just happening by itself. [00:07:34] So there are lots of ways to stop this. [00:07:36] What I would prefer is that our governments stop it on purpose, because it's dangerous. [00:07:41] It's a national security issue, it's an international security issue. [00:07:45] But there are lots of ways. [00:07:47] What we want is a pause. [00:07:49] And I'm fond of saying that a pause is the next right step. [00:07:52] It's the next right step for any correct solution. [00:07:55] For any solution, we need time where we're not making the problem worse. [00:07:59] We're not building more and more dense neural networks that we don't understand and that we can't control. [00:08:05] We need time to catch up on technical safety and research, but mainly on how we're going to govern it, how we're going to make sure who gets to call the shots, whose values, all of these questions. [00:08:14] We need time.
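[Editor's note: Elmore's point that each new model "takes exponentially more compute every time" can be made concrete with a rough heuristic from the scaling-law literature, which estimates training compute at about 6 × parameters × training tokens in floating-point operations. The sketch below is illustrative only; the model sizes and token counts are assumptions invented for the example, not figures from the episode. Each assumed generation here jumps by two or more orders of magnitude.]

```python
# Hedged sketch: a common scaling-law heuristic puts training compute
# at roughly C ~ 6 * N * D FLOPs, where N is parameter count and D is
# training tokens. All sizes below are illustrative assumptions.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute in FLOPs (C ~ 6 * N * D)."""
    return 6.0 * n_params * n_tokens

generations = [
    ("small model", 1e9, 2e10),      # 1B params, 20B tokens (assumed)
    ("mid model", 7e10, 1.4e12),     # 70B params, 1.4T tokens (assumed)
    ("frontier model", 1e12, 2e13),  # 1T params, 20T tokens (assumed)
]
for name, n, d in generations:
    print(f"{name}: ~{training_flops(n, d):.1e} FLOPs")
```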
[00:08:15] You know, for the most part, this argument has been about theoretical dangers, ideas like bioweapons or cyber attacks, things like this. [00:08:25] But I think that the at least completion, if not release, of Claude Mythos from Anthropic shows that there are practical concerns that need to be homed in on. [00:08:36] You've got, with Mythos, the capability to identify and exploit vulnerabilities in operating systems, browsers, security systems. [00:08:46] And even if we don't have a really clear view of the details, because they've kept it behind closed doors, we at least, I think, can trust that what they developed is actually dangerous. [00:08:59] They say it is dangerous. [00:09:01] People have called it a sales pitch, and I can see why you might think that, but I don't think that all these other corporations that are involved in it [00:09:09] are conspiring to boost Anthropic's stock value. [00:09:13] So, on the note of danger, what are some of the dangers that you see with artificial intelligence? [00:09:19] I mean, we've talked a few times [00:09:20] about this, that, you know, there are the more kind of mundane dangers of, you know, mass AI psychosis, but also the more extraordinary dangers of artificial general intelligence out of control, or artificial superintelligence. [00:09:35] So for you, Holly Elmore, what are the major dangers you're concerned about? [00:09:40] I mean, for me, truly, when I set the priorities of Pause AI, I meant it. [00:09:44] I think the entire spectrum of dangers that are caused by development that's out of control and unregulated are [00:09:51] important, and they're all connected, they're all connected to the externalities of this out-of-control development. [00:09:56] Um, but I mean, the biggest thing, you know, imaginable to me is that the human race goes extinct. [00:10:02] I mean, it's something that serious: we either lose control, or it empowers a bad actor or a dictator to do something that wipes out our civilization. [00:10:12] I really do think that's on the table, and that will sometimes sound very histrionic to people, but, you know, in my old life I used to be an evolutionary biologist, and [00:10:22] 99% of all species that have ever lived are extinct now. [00:10:26] And that's the normal thing that happens. [00:10:28] And a lot of times you can see in the fossil record what happens when one species gains something, like eyes: you know, they become better predators. [00:10:35] They just wipe out a lot of species. [00:10:37] There's no natural law that says that we cannot go extinct. [00:10:41] And when we destabilize our society, all these things add up, too. [00:10:44] So, you know, if we destabilize society through, you know, deepfakes, we can't trust what we see, we don't know what's going on, then we're not going to be as resilient to big threats like bioweapons, like [00:10:57] possibly AI having its own ideas and usurping power. [00:11:01] So I think the whole range of these threats is real. [00:11:06] There are probably many we haven't thought of, and those are going to be the real sleepers. [00:11:09] I mean, that's the nature of this danger: it's intelligence, it's the ability to figure out ways to get to a goal. [00:11:15] And if it's smarter than us, it's going to find ways to do what it wants. [00:11:18] We might not know why it wants what it wants. [00:11:21] And it's going to be very hard for us to just anticipate. [00:11:25] We can't deal with it like that. [00:11:26] That's why I really think we have to stop now [00:11:29] and get really serious about figuring out, without advancing capabilities, how we can look forward and know how to make sure that what we're doing is safe. [00:11:37] You know, speaking to both laymen and experts, there's some resistance to the notion of artificial intelligence being smart or being intelligent. [00:11:47] The direct comparison with human intelligence, I think, is a real blocker to seeing AI as a real cognitive system. [00:11:56] An example that I think exposes how it is that an AI is quote-unquote smart would be in gaming. [00:12:04] Google's DeepMind has created a number of different AIs, AlphaZero being maybe one of the more impressive, [00:12:11] that are able to figure out how to play games, chess, Go. [00:12:16] I think StarCraft is another one that DeepMind has mastered, and these agents excel at them.
[00:12:22] And it's not that they, with AlphaGo, they trained the system on previous Go games. [00:12:29] With AlphaZero, it's learning on its own, and very quickly it becomes superhuman. [00:12:35] It may not be smarter than human beings in reading, it can't even read. [00:12:39] It may not be more perceptive than humans, it can't really see. [00:12:43] But when it comes to the rules of that game, it exceeds all human capabilities, and it teaches itself. [00:12:49] And I think when you extrapolate that out to things like drone piloting, things like target acquisition, any other system that might be able to recognize patterns at a superhuman level, you then run into the danger of it exceeding human capabilities.
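[Editor's note: the self-play idea Allen describes, a system becoming strong with no human examples, can be shown in miniature. The sketch below is a hedged toy, not DeepMind's method: tabular value learning on the simple take-away game of Nim, rather than a neural network with tree search on Go.]

```python
# Hedged toy of self-play learning (not DeepMind's code): tabular
# values, the take-away game Nim (take 1-3 from a pile; whoever takes
# the last object wins) instead of Go, and no neural net or search.
import random

values = {}  # pile size -> estimated value for the player to move

def best_move(pile, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < explore:
        return random.choice(moves)  # occasional random exploration
    # A good move leaves the *opponent* the worst-valued position.
    return min(moves, key=lambda m: values.get(pile - m, 0.0))

def self_play_game(start=21, lr=0.2):
    pile, history = start, []
    while pile > 0:
        history.append(pile)
        pile -= best_move(pile)
    # Whoever moved last took the final object and won. Walk backwards,
    # alternating winner/loser, nudging each position's value.
    outcome = 1.0
    for p in reversed(history):
        values[p] = values.get(p, 0.0) + lr * (outcome - values.get(p, 0.0))
        outcome = -outcome

for _ in range(20000):  # the agent is its own only opponent
    self_play_game()

# Pile sizes divisible by 4 should drift toward -1 (losing for the
# player to move), matching the game's known optimal theory -- learned
# here with no human examples at all.
print({p: round(values.get(p, 0.0), 2) for p in range(1, 13)})
```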
[00:13:07] So it may not be at that large-scale theoretical sort of place of, say, AGI yet, [00:13:14] artificial general intelligence, or artificial superintelligence. [00:13:17] But just with what we have now, you have a lot of potential dangers. [00:13:21] And so, I guess if I could tease out some of your more theoretical ideas on this before we move back to Earth: if you could, give us your definition of artificial general intelligence and artificial superintelligence, and if you have a definite timeline, what is it? [00:13:41] So, artificial general intelligence, this is a very problematic term at this point, because I think the frontier models, all of them, are greatly exceeding human abilities in many, many ways. [00:13:53] You know, there are some abilities they don't have at all. [00:13:56] Like, they have very limited, like, sensory abilities or things like that. [00:14:00] But I would say AGI should mean roughly, like, human-level ability. [00:14:07] And it turns out that, like, just that ability is, as you were saying, kind of spiky. [00:14:11] In some ways, it's not fully [00:14:13] up to human level, but it is in a lot of ways that are enough to be dangerous, certainly. [00:14:16] And I think we should have risk in mind when we make these kinds of definitions. [00:14:21] I think we're. [00:14:22] For the audience, when you say spiky or jagged, what do you mean by that? [00:14:25] So, some abilities. [00:14:26] So, none of us can read 10 novels in one second and write up a summary of what happened, but that's something that all of these frontier models can do, even if then later they lie to you about having a timer or something. [00:14:38] Like, they don't know things about themselves. [00:14:40] So it's very uneven what their abilities are. [00:14:44] Hence the strawberry thing. [00:14:46] The strawberry thing, yeah. [00:14:48] Strawberry, I'm seeing now. === Unexpected Openness in AI Ecosystem (10:12) === [00:14:50] Yeah. [00:14:51] And I think the AI companies make a big deal of that to kind of make us feel safe. [00:14:54] Like, oh, well, it's not quite human. [00:14:56] And they'll get us used to that idea. But just because something's not strictly better than humans at every single thing doesn't mean that it doesn't have worryingly better capabilities, [00:15:05] doesn't mean that it can't do your job for almost no money compared to you. [00:15:09] So that's where I put AGI. [00:15:12] I'm mainly today talking about superintelligence and the risk of just [00:15:18] capabilities, any kind of capability that greatly exceeds human ability. [00:15:22] So it could even be a narrow intelligence, maybe not something as broad as the dreamt-of superintelligence, I'm sorry, general intelligence, that encompasses most human abilities. [00:15:33] And I think we'll continue discovering abilities that we haven't really thought of as, like, cognitive abilities, but that are going to be a source of immense power to a superintelligence. [00:15:43] So, from that perspective, it's kind of a philosophical question, like, is it human level, is it the same? [00:15:49] Yes. [00:15:49] The question is, is it a threat? [00:15:51] And I think we're definitely at human-level threat as far as cognitive abilities from our current models. [00:15:58] And I'm really worried about raising that any higher. [00:16:02] And just to bring it down to specifics: by threats, do you mean things like the creation of bioweapons, cyber attacks, these sorts of things, weapon systems, control systems? [00:16:11] Yes. [00:16:11] That sort of thing? [00:16:12] Yes, absolutely. [00:16:13] I mean, this is, so Claude Mythos claimed that it found a number of zero-day exploits in [00:16:20] all of the operating systems. [00:16:23] That's pretty serious. [00:16:24] That's the kind of thing where just one of those would be a major human hacking campaign. [00:16:30] And increasingly, there are benchmarks for virology knowledge, and AIs are scoring above virologists, human virologists who take these tests, with knowledge, and kind of a lot of implicit knowledge, about how to make things in the lab and stuff. [00:16:45] So we have a lot of indication that there could be this danger if for some reason [00:16:51] a bad actor wanted to use an AI that way, or for some reason an AI had the desire to do it itself. [00:16:58] So, on that note, when you think about the social opposition to this, I mean, you have a huge backlash to AI right now. [00:17:07] And as you noted earlier, public sentiment has absolutely turned against AI for a variety of reasons. [00:17:15] And as with any kind of social mood, right, like a sense of discontent or malaise that's spread across the population, you're going to have, to put it in a colloquial way, you're going to have psychopaths [00:17:29] who freak out, and they turn to violence. [00:17:33] They justify their violence for all sorts of different reasons. [00:17:37] And this has been a problem in America especially for decades. [00:17:43] Here recently, we had two incidents where Sam Altman's home was hit first, I think, with a Molotov cocktail and then shot at. [00:17:51] And then there was an Indianapolis councilman who was advocating for building data centers, and his home was shot at, with a manifesto referencing the data centers. [00:18:03] You hear then, in response, this sort of blame being put on anyone critical of artificial intelligence companies or of the building of data centers. [00:18:15] All of this backlash is basically being scapegoated for the actions of psychopaths. [00:18:20] And you were interviewed recently, I believe it was in Fortune magazine, about that sort of blame coming your way. [00:18:28] You have always advocated for nonviolent tactics, correct? [00:18:33] I've always been extremely strict, you know, making people sign our volunteer agreement. [00:18:37] The number one thing in the code of conduct is nonviolence. [00:18:40] Yes, extremely, extremely strict.
[00:18:43] To the point where, at first, everybody thinks, like, oh, you know, come on. I don't let people even make jokes. [00:18:48] You know, when people make protest signs, I screen the signs. [00:18:52] And if somebody has, like, blood dripping, like, I know they don't mean anything by it, but the person you're speaking to, and then the people who are watching, you know, don't necessarily know that. [00:19:00] So we're really, really, really strict. You know, this is about influencing people morally and through democratic means. [00:19:07] And you have a real positive element to your message, too, correct? [00:19:11] I mean, I think so. [00:19:13] Maybe not you personally, but Pause AI US. [00:19:16] I think so. [00:19:17] Give us a little light before we get on to the commercial break. [00:19:21] Well, I think what we're about is that the world is really good and we want to protect it. [00:19:25] And we think that there is a way to protect it. [00:19:27] We're not talking about anything that hasn't happened before. [00:19:29] We have nuclear nonproliferation treaties. [00:19:31] We have the New START treaty. [00:19:33] We just want this for AI. [00:19:35] And then we can enjoy whatever benefits are safe from the AI. [00:19:39] That's really the best of all possible worlds. [00:19:42] I also enjoy, frankly, it's bracing to be part of the democratic process and discussion. [00:19:48] And I find that very fun. [00:19:51] Speaking of life, your study as an evolutionary biologist, you studied mushrooms, correct? [00:19:56] That was one of your specialties? [00:19:57] That was the species I worked on in grad school, yeah. [00:19:59] The species, the phylum, the kingdom. [00:20:03] Did you spend a lot of time out in nature doing this? [00:20:05] I did. [00:20:06] I did field work where I would collect. [00:20:08] Actually, I worked on the deadliest mushrooms. [00:20:10] We'd collect those and be very careful. [00:20:12] Sorry, I got morbid again. [00:20:16] Let's hear a little bit about that, though, because I'd like to connect it to the way you see artificial intelligence, to some extent. [00:20:22] I think I do have a connection. [00:20:24] I used to say, well, I started doing Pause AI because it was necessary morally, and that it wasn't connected to my old work. [00:20:30] But more and more, I think it really is. [00:20:32] I think the way that I see things, definitely, machine learning is the same process as natural selection, gradient descent. [00:20:41] I think I have an intuition for what works. [00:20:43] And I wanted to spend my life just understanding life and doing cool stuff like that. [00:20:48] And I thought, okay, I've been called to duty now and I just have to do this instead. [00:20:54] But it's [00:20:55] really, it's an interesting challenge. [00:20:59] I understand people who are really interested in AI and really want to be close to it and study it, because it is fascinating, but it's also dangerous. [00:21:05] And I kind of think I've got the right remove. [00:21:08] And then it's also just a really cool challenge to figure out a new social movement. [00:21:12] And now here I am talking to you. [00:21:14] I'm having a good time.
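[Editor's note: a minimal sketch of the analogy Elmore draws between gradient descent and natural selection: both are iterative searches that keep whatever scores better on some objective. The quadratic loss below is an arbitrary stand-in chosen for the example.]

```python
# Hedged sketch of the analogy: gradient descent and mutation-plus-
# selection are both iterative searches that keep what scores better.
import random

def loss(x):
    return (x - 3.0) ** 2  # minimum at x = 3.0; purely illustrative

# Gradient descent: follow the slope of the loss downhill.
x = 0.0
for _ in range(100):
    grad = 2.0 * (x - 3.0)      # analytic derivative of the loss
    x -= 0.1 * grad             # step against the gradient

# Natural-selection style: random mutation, keep the fitter variant.
y = 0.0
for _ in range(1000):
    mutant = y + random.gauss(0.0, 0.1)
    if loss(mutant) < loss(y):  # the selection step
        y = mutant

# Both searches end up near the optimum at 3.0.
print(f"gradient descent -> {x:.3f}, mutation + selection -> {y:.3f}")
```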
[00:21:15] Yeah, it's interesting. Someone like you, I mean, I've never gotten a clear sense of your political leanings, but I would say quite a bit further left than mine. [00:21:24] Who knows? [00:21:25] We're a bipartisan group. [00:21:27] Yeah. [00:21:28] I think that this [00:21:29] moment is really fascinating and heartening, because this issue is sort of like pollution or unhealthy foods or drugs in schools. [00:21:40] It's not something that only one particular political persuasion is going to be concerned about. [00:21:46] If the soil is poisoned and getting into the crops, it affects everyone, and it affects everyone in the community. [00:21:51] That is a beautiful thing about working on Pause AI. [00:21:55] We have people really coming together under the same banner, under this single issue. [00:21:59] Really, it's everything but working for the AI companies that unites us. [00:22:04] I mean, everything else, like being parents. [00:22:06] And at this Hill Day, we had parents, engineers, people who had been in the AI industry, teachers, just all kinds of people who can agree this is dangerous. [00:22:17] What are we doing? [00:22:18] Why are we making this problem worse before we have any idea of what to do about the dangers? [00:22:22] Yeah, I think that I've met a number of people. [00:22:25] Like, the AI ecosystem has opened up to me in the last year in ways that were really unexpected. [00:22:30] I mean, there are some people maybe that I didn't get along with [00:22:33] totally, but for the most part, we're talking about people from very, very different walks of life, all of whom share the same concern. [00:22:41] The Future of Life Institute, that's been a really, really important resource, not only to connect to different people, but also just to learn more from people who are expert in this about what artificial intelligence systems are, what the real effects are on the human mind. [00:22:57] Another example would be, say, Nate Soares and Eliezer Yudkowsky from the Machine Intelligence Research Institute. [00:23:05] Jeffrey Ladish and his colleagues at Palisade Research. [00:23:09] So it's been really amazing. [00:23:11] How do you see that ecosystem functioning now? [00:23:15] I mean, of those institutions I just mentioned, or any others, when you look out across the landscape, how do you see it fitting together? [00:23:25] Where do you see the real strength in this movement? [00:23:28] I think the strength, so what we're aiming at is the public, engaging the public. [00:23:33] And I think that's going to be our real source of strength. [00:23:36] And I think that's where the organizations you mentioned are getting their strength: they're working publicly, openly. [00:23:43] So the old AI safety ecosystem used to be very closed, and it was very in-groupy. [00:23:49] And it got to the point where, unfortunately, people saw people working at AI companies as closer to them, as a safety person, than the public [00:23:59] and their interests. [00:24:01] And I think our power, those groups are branching out into just being more open and being more involved in the democratic process. [00:24:08] And that's going to be the way forward. [00:24:10] And it's already growing, but we have the public already. [00:24:16] They just need to understand and be helped a little bit in this baffling moment. [00:24:19] Everybody's baffled by how much is happening, and how fast, in the AI ecosystem. [00:24:24] But this is, I think, where the power is: just harnessing that, focusing that, shepherding that. [00:24:31] And you come from San Francisco, right? [00:24:34] You've been in San Francisco for many years, but you've been all over the country talking to different people.
[00:24:40] Do you see a certain sort of personality type or certain cultural type that's more open to the critiques of this technology, or is it kind of like my experience? [00:24:50] Is it just really across the board? [00:24:52] It's everybody. [00:24:53] The only people who are closed to it are the people in San Francisco, really. [00:24:57] Pretty much everybody thinks, like, well, we don't need this. [00:25:01] I'm perfectly happy with my life. === Gold as Wealth Against Robots (02:29) === [00:25:03] Why would I risk it all? [00:25:04] Why would I take a dice [00:25:05] roll on my species going extinct, for what? [00:25:09] To be able to write faster, or to have sort of the temptation that my children never learn anything and cheat their way through college? [00:25:18] That's most people's view. [00:25:19] And there are so many angles on that. [00:25:22] There are so many ways in which people are dissatisfied and scared about what's happening with AI. [00:25:26] Well, another kind of common human need is to pay bills and to store wealth. [00:25:33] And if there is one way to store your wealth that you don't have to worry about robots coming and scooping up [00:25:39] everything you own, it is gold, especially gold provided by Birch Gold. [00:25:45] When the dollar's convertibility into gold ended in 1971, gold was fixed at $35 an ounce. [00:25:51] Fast forward to today, and the U.S. dollar has lost over 85% of its purchasing power. [00:25:57] 85%! [00:25:58] Gold, on the other hand, has increased in value by over 12,000%. [00:26:03] That's why central banks are buying gold at record levels. [00:26:07] That's why major firms like Vanguard and BlackRock hold significant positions in gold. [00:26:12] And that's why I encourage you to discover diversifying your savings with physical gold from Birch Gold Group. [00:26:21] But it starts with education, not education by bots, not education by going on to Wikipedia, education from Phillip Patrick. [00:26:31] Birch Gold just announced their Learn and Earn Precious Metals event. [00:26:35] This free online event rewards you for learning the basics of investing in precious metals. [00:26:39] You must act now. [00:26:41] This special event only runs through April 30th. [00:26:43] The dollar lost its anchor in 1971. [00:26:45] You don't want to lose yours. [00:26:48] Text Bannon to the number 989898 to join Birch Gold's Learn and Earn Precious Metals event. [00:26:54] Bannon to 989898. [00:27:00] War Room. [00:27:01] Here's your host, Stephen K. Bannon. [00:27:08] Welcome back, War Room Posse. [00:27:13] The American healthcare system is broken. [00:27:16] And for most Americans, nothing changes. [00:27:20] There are still delays, denials, high costs, insurance roadblocks. [00:27:29] So when I find people doing things differently, I talk about it. === Elite Schools and ChatGPT Dependency (15:43) === [00:27:32] All Family Pharmacy is not your typical big-chain pharmacy. [00:27:36] This is an independent, family-owned pharmacy that gives you [00:27:40] access to over 400 medications delivered straight to your door, not by drones, but by a smiling human being who just wants to see you well. [00:27:49] They've got ivermectin, mebendazole, antibiotics, antivirals, NAD, even your daily maintenance medications, and a whole lot more. [00:28:00] If you already have a prescription, your doctor can send it directly. [00:28:04] If you don't, their doctors handle it at All Family Pharmacy.
[00:28:09] As long as there is a medical necessity, they'll take care of it for you. [00:28:13] And I'll tell you this. [00:28:14] The feedback from people listening to this show has been extremely strong. [00:28:19] People are using it, it's working for them, and they're sticking with it. [00:28:22] That's because it cuts out the delays, the middlemen, and all the usual nonsense. [00:28:27] This is about being ready before you need it. [00:28:30] Go to allfamilypharmacy.com/Bannon. [00:28:34] That's allfamilypharmacy.com/Bannon, and use code Bannon10, that's B-A-N-N-O-N, numeral 1, numeral 0, to save 10%. [00:28:48] The healthcare system is broken. [00:28:50] Your pharmacy doesn't have to be. [00:28:57] And now that I am activated by my meds, I am back with Holly Elmore of Pause AI US. [00:29:06] Holly, we left off talking about cognitive surrender to AIs in schools, and you studied at Vanderbilt? [00:29:17] Vanderbilt and then Harvard. [00:29:18] And then Harvard. [00:29:19] Yeah, basically, you've one-upped me on both. [00:29:22] I went to the University of Tennessee, Knoxville, and then, of course, I was across the river at Boston University, staring out the window over at Harvard, wondering what it must taste like over there. [00:29:31] What's it smell like at Harvard? [00:29:34] So, you know the educational system. [00:29:38] Do you have faith in academia as an institution? [00:29:42] Was it a satisfying experience for you? [00:29:47] You know, back then, before AI, I did. I've been pretty discouraged by what I've seen with AI in education today. [00:29:54] So we have university-level organizers, and I was talking to them the other day, and I told them, I was like, you know, I was scrupulous, never cheated, never did. [00:30:02] I would stay up till, you know, five, six a.m. to do essays all the time. [00:30:05] But I never had this, where you just go to one link and, with one click of a button practically, have the whole thing done. [00:30:13] I mean, how can you stand up to that? [00:30:15] And then one of my organizers told me it's actually even worse than that. [00:30:18] Like, I have a school-provided laptop that's a Lenovo with Microsoft Office. [00:30:22] And when I write in Microsoft Office, it constantly prompts me to have Copilot rewrite it. [00:30:28] Right. [00:30:28] It's like the software, the computers, everybody's telling you to cheat. [00:30:32] Like, how can you keep your integrity in that environment when it's like everybody's telling you to do it? [00:30:38] And then on top of that, you know, OpenAI especially is making all these deals with campuses. [00:30:42] So my mom teaches at a small but reputable Christian college, and they have taken it on, they're a ChatGPT college. [00:30:50] And my mom was a composition teacher. [00:30:52] She's like, this is unacceptable. [00:30:53] We can't be operating like this. [00:30:55] And then you know what they added? [00:30:57] They added ChatGPT College. [00:30:59] Campus. [00:30:59] Yeah, that's it. [00:31:00] But what's even worse, they have ChatGPT Shepherd for clergy, to tell the clergy how to do their job. [00:31:08] Like Christ GPT. [00:31:09] It's called ChatGPT Shepherd. [00:31:11] Yeah. [00:31:11] Which I thought was bad enough. [00:31:12] Yeah. [00:31:13] So, like, that's just, it's like with everything with the AI industry: they're able to flood us so fast. [00:31:20] They go so much faster than our legal system, they go so much faster than our news cycle.
[00:31:24] They go, like, people can't keep up. [00:31:25] And I think that people have the right, I think our hearts are in the right place, and we would correct. [00:31:30] And even our institutions, like academia, will want to correct. [00:31:33] But are they going to be able to, with how fast they're being inundated with this, and how quickly, you know, if a whole generation grows up, first they went through COVID and they missed their high school, and now they're missing college, basically, because they cheated through everything. [00:31:50] We just have, like, an uneducated generation. [00:31:52] I think it's a pretty serious concern. [00:31:54] Absolutely. [00:31:55] It's the kind of concern I'm much more affected by, though I do think seriously about AGI and superintelligence, and I don't discount it at all. [00:32:06] Although I'm pretty skeptical of its imminence, you know, that it's immediate. [00:32:11] I hope you're right. [00:32:12] Yeah. [00:32:12] Well, you know, as I oftentimes say, if I'm wrong about extinction, you can call me out. [00:32:17] I'll admit it. [00:32:18] That would be great. [00:32:20] I'll be the first to admit I was wrong. [00:32:22] If I'm wrong, I would jump for joy. [00:32:23] I just think we've got to act on, you know, the worst-case scenario and make sure that doesn't happen. [00:32:28] But man, I have no pride in the idea that, like, oh, it's all going to end and I called it right, you know? [00:32:33] Sure. [00:32:33] Well, you know, I suppose in the afterlife, we'll sort it out, right? [00:32:37] But there won't be anyone around to talk about it. [00:32:41] You know, but the more practical concern: let's say we keep going as a species, decades, centuries, millennia, millions of years. In this period, well, as you say, we'll see a completely stunted generation, a generation that was demoralized. [00:32:57] They were told that the AI would do all of their jobs, that any vocation they chose would ultimately be done by a machine. [00:33:04] The best-case scenario would be that they'd be AI babysitters who got to command their AIs around and get super rich off of it. [00:33:10] But by and large, the messaging to them is adapt or die, use the AI. [00:33:15] And as you say, these are young people, most of whom are extraordinarily hungry for knowledge, right? [00:33:23] At that age, you want to learn. [00:33:25] You want to expand, you want to grow, you want to socialize, all these things. [00:33:28] And they're having screens shoved in their faces. [00:33:31] It would be like if you could go down to the school nurse and get OxyContin on demand. [00:33:39] Most kids, you would hope, wouldn't do it, but many would, and a growing number would. [00:33:43] And if it's basically, okay, yeah, the nurse will give it to you, [00:33:46] and this is a ChatGPT campus, and oh, you're supposed to use it to help your development, [00:33:51] like it's on an honor system. [00:33:53] It does feel like they're being told to use it and become dependent. [00:33:59] Yeah, it's spotty. [00:34:01] So there are a lot of professors who are pushing back on this in their classrooms, to the administrations. [00:34:07] There are a few schools, Brown University being one of them, where overall [00:34:12] the general attitude of the administration is against all of this. [00:34:17] A lot of professors are going back to the old blue books with the pencils, which, you know, I'm old enough to remember when blue books were a thing.
[00:34:24] I used a blue book, yeah. [00:34:26] I'm old enough to remember when there were no laptops in classrooms. [00:34:30] When I taught briefly, I refused to have any laptops in my classroom, and the kids adapted just fine; there really wasn't a problem. [00:34:37] But increasingly, I hear from professors that, due to COVID and the lockdowns and the lost education, and also just the kind of general digital culture, the kids coming in aren't really prepared for college. [00:34:51] Some are, but most really aren't. [00:34:53] And it is nightmarish, the idea that human beings survive, artificial intelligence doesn't create radical abundance, and we're stuck with this global village of the damned, in which all the children are digitized and have offloaded their cognition to the machine, [00:35:10] and in the case of ChatGPT Shepherd, offloaded their spirituality to the machine. [00:35:16] It's terrifying, more terrifying to me than the idea of going extinct. [00:35:20] Going extinct would be kind of a relief in comparison to that. [00:35:24] I mean, I'd beg to differ, but I do think they're all very serious. [00:35:27] And I think really all these threats that are caused by these unchecked externalities of development are threats to our way of life. [00:35:34] And then one is, like, a final threat to our way of life. [00:35:37] But some are more survivable, some are not. [00:35:40] But, like, we also don't know. [00:35:43] We shouldn't mess around with the fabric of our society. [00:35:45] Like, we just don't know what's the important load-bearing part. [00:35:48] Yeah. [00:35:50] Yeah. [00:35:50] And I feel very demoralized, especially since it goes beyond just, like, getting the grade for kids in school. [00:35:57] It's like they're becoming less confident in their ability to think for themselves. [00:36:02] Like, they don't like to sort of just represent their own thoughts. [00:36:05] This is one of the things I always get complimented on, being outspoken, but more and more people are like, ooh, I could never do that without having AI check it. [00:36:14] Or, like, why? [00:36:16] You can never just, what, think your own thoughts, have your own ideas? [00:36:19] One thing that my university organizers were complaining about was that, like, even to just answer trivial questions, or even questions about their own preferences, people would be like, ask chat, ask chat. [00:36:30] It's like an addiction; they can't even stand the uncertainty of, like, working it out themselves. [00:36:35] You know, it's hard enough to stay physically fit in today's world, but imagine you have this with thoughts. [00:36:40] Like, you encounter a little difficulty, and there's this answer that's very soothing and easy and quick, and it feels valid, and like it's from a neutral source, right at your fingertips. [00:36:49] Think of the potential for manipulation: if, you know, some person in control, some industry in control, wanted people to think a certain thing, they would be able to do it. [00:37:01] Yeah, increasingly they are. [00:37:03] And, you know, there was a problem from the television forward, you could say from the telegraph forward, but this is a whole other level. [00:37:10] You know, when I was in grad school, my main area of study was evolutionary and cognitive science as applied to religion, but my real interest, my master's thesis, was based on altruism.
[00:37:25] The question: if evolution, if Darwinian evolution, is so harsh, why would human beings be so kind? [00:37:32] Why would the ants be so helpful to one another, the termites, the bees, all of this? [00:37:37] And recently, I've been accused of being an effective altruist. [00:37:43] Now, I'm kind of an altruist. [00:37:46] I'm mostly an ineffective altruist. [00:37:48] I'm not usually nice to many people, and not for very long. [00:37:50] But I'm definitely not an effective altruist. [00:37:54] Now, you have a lot of experience in and around this group. [00:37:58] Can you give the War Room posse some idea of who the effective altruists are, what their goals and tactics are? [00:38:07] So, disclaimer: I used to be kind of a big personality in effective altruism, and I first got introduced to it when I started grad school at Harvard. [00:38:16] They were kind of at a lot of elite schools. [00:38:18] I ended up organizing Harvard EA for six years. [00:38:22] Back then, the idea was mainly, like, yes, there's the possibility of helping others. [00:38:27] And the big insight was, like, people who are wealthy in the West can do a lot more for people elsewhere, or we can even rank our causes in terms of what's the actual impact, instead of which cause you like on vibes. [00:38:42] And you could take the same amount of money, the same amount of our personal power to do anything, and have a much bigger positive impact for people in the world. [00:38:49] I still believe this is great. [00:38:51] But always lurking in the back, there was also this AI safety cause. I remember, seriously, I was already vegetarian, [00:38:59] I was already into giving to the poor. [00:39:03] I was really excited for a way for that to go further. [00:39:05] And the only thing I didn't like about EA was AI safety, because I just, I couldn't put my finger on it. [00:39:13] And that's the way a lot of people feel hearing any argument that [00:39:16] a computer can become powerful and out of control, and it could be a problem. [00:39:19] Right. [00:39:19] But I realized over time that it was sort of the culture I didn't like. [00:39:25] And that culture continues to be very strong. [00:39:28] It is very complete. [00:39:29] I know your listeners will be familiar with kind of transhumanist ideas. [00:39:33] Sure. [00:39:33] It's very much in that space, wanting to. [00:39:35] It's kind of a descendant, an intellectual descendant, of transhumanism, would you say? [00:39:39] Yeah. [00:39:39] Yeah. [00:39:40] Well, and so a lot of the reason, the core group that ever got this to be a big idea, not everybody who's into it today, of course, really knows why it became a topic, but the interest was to use AI to become immortal. [00:39:55] And then, like, to reach the singularity and become immortal. [00:39:58] And then, of course, everything else would also be fixed. [00:40:02] And so, within the mindset of effective altruism, this is kind of like an argument for everything. [00:40:08] Like, if the AI would do everything the best, you have to try to get the AI and apply it to whatever you're trying to do. [00:40:16] And the whole project, the version of AI safety that they worked on, is called alignment. [00:40:22] And it was about, in various different flavors, finding the true values that the AI should have.
[00:40:29] And then, like, letting it become more and more powerful, but guided by those values, so it'll just do the right thing by humanity and ideally provide, uh, like, a paradise, you know, where people get to do whatever they want. A kinder, gentler digital god, yeah, of sorts. To sort of make a digital god that would, like, nanny paradise. And, uh, I never thought of this as, like, a scientific idea, [00:40:59] there was always something that kind of repulsed me about it. But as, um, the capabilities of AI got worse, [00:41:06] I thought, like, oh, these people are definitely onto something about the power of it, for sure. [00:41:11] And then ChatGPT came out. I just, I really hadn't thought about it any harder up until then, because it seemed like it really could be hundreds of years off before we're dealing with artificial intelligence anything close to human level. [00:41:25] And when I saw ChatGPT talk like a human, like, I knew computers could not do that before. [00:41:32] Based on my, you know, knowledge of linguistics and stuff, most linguists argued we would never see it in our lifetime. [00:41:38] Yes, yes. [00:41:38] And so the guy, [00:41:40] David Deutsch, the futurist David Deutsch, he argued this famously in The Beginning of Infinity. [00:41:45] And most of that book is quite accurate, but not on the LLMs. [00:41:49] Yeah. [00:41:49] And, I mean, to get a little nerdy, the thing that this kind of AI is good at is what we thought were, like, human skills. [00:41:56] So it's, like, associative, creative writing. [00:42:00] We thought that artificial intelligence would be more, like, mathematical, that that would be its ability. [00:42:05] But actually, as we were talking about, that's kind of where it makes mistakes. [00:42:08] It's not [00:42:09] precise. [00:42:11] It's kind of like the creative parts of our brains. [00:42:14] And the one thing that was even scarier about ChatGPT was that it was created just by a process of [00:42:23] searching. You could describe this whole thing as a process where the more compute resources you have, the more combinations of parameters, they're called model weights, [00:42:36] kind of similar to neurons and synapses, you can try. [00:42:39] You're searching, like, design space for brains, and the more compute you have, the more you can search, and the better you can search it and find, like, those really powerful options. [00:42:48] And it did that. [00:42:51] This process was done without learning anything new or special about how the brain works. [00:42:55] Like, we don't know, [00:42:56] you know, how it's doing it. [00:42:58] It's just a process for finding a way to do it that's described in these model weights, and we don't know what it's doing. [00:43:06] And so, once that happened, it seemed pretty obvious that if you put more compute on it, you would get an even bigger, a more powerful model. === Final Boss: Anthropic's Frontier (04:43) === [00:43:16] And there was nothing standing in the way. [00:43:18] You know, the only thing standing in the way is acquiring these compute resources.
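[Editor's note: a hedged toy version of Elmore's "searching design space for brains" picture: random restarts plus mutation over the nine weights of a tiny fixed network until it computes XOR. The architecture and settings are assumptions invented for the illustration; the point is that the winning weights work without the raw numbers explaining how.]

```python
# Hedged toy of "search over model weights": no gradient math, just
# random restarts and mutation, keeping whatever scores better. More
# iterations ("compute") means more of weight space gets searched.
import math
import random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # Fixed shape: 2 inputs -> 2 tanh hidden units -> 1 output.
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return w[6] * h1 + w[7] * h2 + w[8]

def error(w):
    return sum((forward(w, x) - y) ** 2 for x, y in XOR)

best, best_err = None, float("inf")
for _ in range(20):                       # random restarts
    w = [random.uniform(-2, 2) for _ in range(9)]
    for _ in range(5000):                 # local mutation search
        cand = [wi + random.gauss(0, 0.3) for wi in w]
        if error(cand) < error(w):        # keep whatever scores better
            w = cand
    if error(w) < best_err:
        best, best_err = w, error(w)

# This usually lands near zero error. The found weights compute XOR,
# but the raw numbers themselves don't explain how they do it.
print("error:", round(best_err, 4))
print("weights:", [round(wi, 2) for wi in best])
```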
[00:43:23] Okay, so we're talking about effective altruism, we're talking about these massive projects, huge data centers [00:43:30] full of GPUs screaming, developing these kinds of virtual brains in the mathematical space of possibilities. [00:43:39] It all reminds me of Anthropic. [00:43:43] Anthropic, by and large, is, my understanding, [00:43:46] this is what the word on the street is, [00:43:48] largely staffed at the upper levels with people who are very friendly to effective altruism, [00:43:54] perhaps even effective altruists themselves. [00:43:56] Definitely. [00:43:57] And they are in that strange sort of mode that a lot of these tech companies and the CEOs are in. [00:44:04] This technology, they say, could kill everyone, but we have to build it. [00:44:09] Right. [00:44:09] Or else someone else will build one to kill everyone. [00:44:13] You know, a lot of people have mixed emotions about Anthropic, but you've been very clear on your position. [00:44:18] We don't have a whole lot of time left, but if you could, I'd love to just hear your side of the story on the Anthropic question. [00:44:25] I'm just going to preface this by saying this is all my opinion. [00:44:28] Right. [00:44:28] Not my opinion. [00:44:29] We don't want to hear from Anthropic's lawyers. [00:44:32] But my experience, being there as this company was formed, is it was definitely founded by EAs, with EA values. [00:44:39] It was founded by EA, effective altruists. [00:44:41] Effective altruists. [00:44:42] Yes. [00:44:44] It was a breakoff from OpenAI because of losing confidence in Sam Altman's leadership and commitment to those values, which is something Sam Altman did talk up early on, because EAs were the people with the technical ability to do this. [00:44:57] So it's always been that from the beginning, despite what they told The Atlantic, which was a lie, about EA involvement. [00:45:04] So, I'm just going to say, my personal opinion: Anthropic's the one I hate the most. [00:45:09] I think it's Anthropic, final boss. [00:45:12] Anthropic, final boss, meaning? [00:45:14] Meaning I think that they're the one that's going to be left. [00:45:16] The others are going to do something. [00:45:18] I mean, OpenAI has kind of shown its hand, especially with Sam Altman's duplicity. [00:45:24] Anthropic is really successfully cultivating this group of loyalists and serving [00:45:31] their interests, and I try to break their ranks. [00:45:33] I call out Anthropic employees all the time for how they're betraying what I know were the values they went into it with. [00:45:40] But they don't break ranks. [00:45:43] But they're doing the same thing as all of the other AI companies. [00:45:46] And now they're at the edge. [00:45:48] And they went from saying they weren't going to push the frontier, they were just going to study this to help with safety. [00:45:53] I remember. [00:45:54] They broke that promise. [00:45:55] Now they're at the frontier, and they're talking about how they can release this model that knows all these zero-day exploits [00:46:04] for all our operating systems. [00:46:05] The ultimate cyber weapon. [00:46:07] But they're better at creating this beneficent image and kind of playing on it, letting people believe, like, oh, you don't have to do anything. [00:46:13] Like, we'll handle it. [00:46:14] Like, the world's going to be great and it's no problem. [00:46:17] But we do have to handle it. [00:46:18] We can't be lulled into a false sense of security. [00:46:20] We can't think, oh, well, Anthropic will basically do what I want. [00:46:23] It has to be: we, the people, make known what we want, and we have democratic control over what happens with this AI.
[00:46:30] Well, on that note, Holly, if you would, tell the audience where they can find resources on your mission, where they can find information about Pause AI, and give them some sense of where you're going from here. [00:46:43] How can they follow you? [00:46:44] Okay, so you can go to pauseaius.org, and our website will branch out to everything else. [00:46:50] You can find out how to join a local group, you can donate there. [00:46:54] Where we're going: we're trying to really scale up on helping our constituents, the constituents who identify with the pause position, reach their representatives. [00:47:04] And we really want [00:47:06] to help people get through all of the confusing, you know, it feels like a 12-hour news cycle on AI, help them focus, make their voices amplified, make their voices unified. [00:47:17] So you can find out more on pauseaius.org. [00:47:21] And I really hope to see you all there. [00:47:25] We really need you. [00:47:26] Absolutely. [00:47:27] This is a huge fight, and you guys are fighting with all your might. [00:47:30] I really appreciate it. [00:47:32] Where can they find you personally on Twitter? [00:47:34] If they want to hear your scathing remarks about Dario Amodei and Anthropic, where can they find you? [00:47:40] Well, there's Pause AI US social media, and then my personal social media is at ilex underscore ulmus. [00:47:46] You can just search Holly Elmore on Twitter, where I say more of my personal beliefs about the situation. [00:47:51] And yes, they are spicy. [00:47:53] Well, Holly, you're a fighter to the end. [00:47:55] I really appreciate you coming on. [00:47:56] Thank you so much. [00:47:57] Thank you, Joe. [00:47:57] Thanks for the opportunity. [00:47:58] Appreciate it.