WarRoom Battleground EP 991: HOLLY ELMORE: Save the Human Race! Pause AI
Holly Elmore and Stephen K. Bannon examine the existential threat of unregulated AI, highlighting Pause AI US's grassroots campaign involving 75 congressional offices to halt development amid fears of bioweapons and extinction. They critique the race between American firms like Google and OpenAI against China, noting Anthropic's alleged betrayal of effective altruist values by releasing dangerous models like Claude Mythos. Elmore warns that unchecked tools are eroding academic integrity and creating cognitively dependent generations, arguing that a pause is morally imperative to prevent a "digital god" scenario and ensure technical safety before irreversible damage occurs. [Automatically generated summary]
I am Joe Allen, and this is War Room Battleground.
For the last year and a half, artificial intelligence has exploded onto American politics.
On the one side, you have accelerationists who are hell-bent on developing and deploying AI, and there seems to be absolutely no end to their reckless thirst to alter the entire trajectory of the human race.
On the other side, you have those, and maybe you would consider me to be in that camp, who would prefer to see all of it stopped.
If you could push a button right now and turn off the entire AI industry, I would be all for it.
But of course, these are perhaps unrealistic dreams.
And in the spectrum between these extremes, you have every position imaginable: people who are mildly concerned about artificial intelligence, the expansion of data centers, child protection, deepfakes, and people who are extremely concerned, but not to the point that they would give up the entire industry.
In the AI safety community, you have concerns such as bioweapons being developed by amateurs or even rogue states, concerns like AI going rogue.
What happens if you create a superhuman artificial intelligence that cannot be controlled?
One of the more sane arguments, I would say, is to simply pause this race.
The race dynamic between American corporations, and between American corporations and China, means that no one is incentivized to pause.
And yet it is ultimately up to human beings to make these decisions.
Here to talk about pausing AI is Holly Elmore, Executive Director of Pause AI US.
So, Holly, if you would, maybe just give us a sense of what Pause AI is as an organization, what your goal is, and what your tactics are to achieve that goal.
We're focused on using the democratic process to connect the roughly 70 percent, looking at different polls, sometimes the number is even higher, who want something like a pause.
They want a slowdown, they want regulation on AI.
That's already what the people want.
So we're connecting them to their representatives, so those representatives hear that message, and then demonstrating in other ways, just to let people know: there are a lot of people out there who wish they could pause.
We hear very commonly, like, oh, wouldn't that be nice?
You know, kumbaya, we could pause.
But we really can.
We've done things like this before.
This is why we're not dead from nuclear weapons, is because there have been international treaties to control the proliferation and the use of nuclear weapons.
Honestly, I'd never done anything like what we did this week, on Monday and Tuesday.
We met with 75 congressional offices, including 25 Senate offices, so that's 25% of the Senate.
And we just brought constituents who are concerned about this to talk to their representatives.
And that went so much better than I even thought it was going to go.
It was really impressive how much the staffers of these offices and sometimes the members themselves wanted to know more about what we were saying, about what was possible.
They wanted to see these polls.
You know, there were a lot of things.
They're busy, they haven't heard of a lot of things.
You really can make a difference by bringing it to their attention.
And an in-person meeting makes a big impression.
And then the Capitol demonstration goes even further; it's a big test of commitment.
It's so hard to put a lawful demonstration on Capitol Hill because the security is high.
But it shows: look, there are this many people, we had something like 80, who support pausing AI.
They've got lobbyists from the AI industry in their ear all day telling them, oh, you can't, it's too hard, a pause is so unpopular, the people love AI.
And the people really don't love AI.
And so if you get that message to them, that makes a difference.
So, here in America, you've got four major frontier companies who are pushing this race forward.
You've got Google, OpenAI, xAI, and Anthropic.
I oftentimes joke that Meta is an upstart, just jogging along behind them.
I think it was our colleague Jeffrey Ladish's joke that Mark Zuckerberg should get the Nobel Peace Prize for slowing down the race to superintelligence.
If I took the joke correctly, it's just because they suck.
But at any rate, this race is driven by the notion that if we, Google, or we, xAI, or we, Anthropic, don't create AGI first, don't arrive at the finish line first, then the bad guys will.
People who are less responsible, less trustworthy.
When you look at that, you look at that incentive structure, how do you see pausing AI being possible?
How would it even really occur?
Would it require some sort of governmental intervention, or is it possible to leave it to these guys?
I think it has to require the people of the world cooperating.
And the legal way we do that is through governments.
We have infrastructure for governments to come together.
Again, we already have treaties on other destructive technologies, run by the UN, for example, or by other intergovernmental organizations.
So, with a pause, it's a very broad ask.
It's just the idea of what we want.
There are actually so many ways to realize a pause.
Also, a pause could happen without us doing it on purpose.
If, for instance, there is some problem with the compute supply chain, that would put in place a de facto pause, because I think what a lot of people don't realize is how much effort goes into making each new model.
It seems like it's happening all the time, and it is, but every new model, like Claude Mythos, which was just sort of released, takes exponentially more compute every time, huge resources.
That's why these big data centers are being built.
And so if something got in the way of the compute supply chain, if something got in the way of the energy prices this is going to take, there would be lots of ways that the project could be stopped in its tracks.
It's a huge project.
By many lights, it's the largest human project ever.
And it's not just happening by itself.
So there's lots of ways to stop this.
What I would prefer is that our governments stop it on purpose because it's dangerous.
Like it's a national security issue, it's an international security issue.
But there are lots of ways.
What we want is a pause.
And I'm fond of saying that a pause is the next right step.
It's the next right step for any correct solution.
For any solution, we need time where we're not making the problem worse.
We're not building more and more dense neural networks that we don't understand and that we can't control.
We need time to catch up on technical safety and research, but mainly on how we're going to govern it, how we're going to make sure who gets to call the shots, whose values, all of these questions.
You know, for the most part, this argument has been about theoretical dangers, ideas like bioweapons or cyber attacks, things like this.
But I think that the completion, if not quite the release, of Claude Mythos from Anthropic shows that there are practical concerns that need to be honed in on.
You've got, with Mythos, the capability to identify and exploit vulnerabilities in operating systems, browsers, security systems.
And even if we don't have a really clear view of the details because they've kept it behind closed doors, we at least, I think, can trust that what they developed is actually dangerous.
They say it is dangerous.
People have called it a sales pitch, and I can see why you might think that, but I don't think all these other corporations that are involved in it are conspiring to boost Anthropic's stock value.
So, on the note of danger, what are some of the dangers that you see with artificial intelligence?
I mean, we've talked a few times about this, you know, the more mundane dangers of mass AI psychosis, but also the more extraordinary dangers of artificial general intelligence out of control, or artificial superintelligence.
So for you, Holly Elmore, what are the major dangers you're concerned about?
I mean, for me, truly, when I set the priorities of Pause AI, I meant it.
I think the entire spectrum of dangers caused by out-of-control, unregulated development is important, and they're all connected, all connected to the externalities of this out-of-control development.
But the biggest thing imaginable to me is that the human race goes extinct.
Something that serious: we either lose control, or it empowers a bad actor or a dictator to do something that wipes out our civilization.
I really do think that's on the table, and that will sometimes sound very histrionic to people, but in my old life I was an evolutionary biologist.
99% of all species that have ever lived are extinct now.
And that's the normal thing that happens.
And a lot of times you can see in the fossil record what happens when one species gains something like eyes and becomes a better predator.
It just wipes out a lot of species.
There's nothing, there's no natural law that says that we cannot go extinct.
And when we destabilize our society, all these things add up too.
So, you know, if we destabilize society because, with deepfakes, we can't trust what we see and we don't know what's going on, then we're not going to be as resilient to big threats like bioweapons, or, possibly, AI having its own ideas and usurping power.
So I think the whole range of these threats is real.
There's probably many we haven't thought of, and those are going to be the real sleepers.
I mean, that's the nature of this danger: it's intelligence, the ability to figure out ways to get to a goal.
And if it's smarter than us, it's going to find out ways to do what it wants.
We might not know why it wants what it wants.
And it's going to be very hard for us to just anticipate.
We can't deal with it like that.
That's why I really think we have to stop now and get really serious about figuring out, without advancing capabilities, how we can look forward and know how to make sure that what we're doing is safe.
You know, speaking to both laymen and experts, there's some resistance to the notion of artificial intelligence being smart or being intelligent.
The direct comparison with human intelligence, I think, is a real blocker to seeing AI as a real cognitive system.
An example that I think exposes how it is that an AI is quote unquote smart would be in gaming.
Google's DeepMind has created a number of different AIs, AlphaZero being maybe one of the more impressive, that are able to figure out how to play games: chess, Go.
StarCraft is another one DeepMind has mastered, with AlphaStar, and these systems excel at them.
With AlphaGo, they trained the system on previous human Go games; with AlphaZero, it learns on its own, and very quickly it becomes superhuman.
It may not be smarter than human beings at reading; it can't even read.
It may not be more perceptive than humans; it can't really see.
But when it comes to the rules of that game, it exceeds all human capabilities, and it teaches itself.
And I think when you extrapolate that out to things like drone piloting, things like target acquisition, any other system that might be able to recognize patterns at a superhuman level, you then run into the danger of it exceeding human capabilities.
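To make the self-play idea concrete, here is a minimal, hypothetical sketch in Python of a system teaching itself a game from nothing. This is not DeepMind's method; AlphaZero combines deep neural networks with Monte Carlo tree search, while this toy uses simple tabular self-play on the game of Nim, and every number in it is an illustrative assumption.

```python
import random

# Toy self-play learner for Nim: 21 stones, each turn removes 1-3,
# whoever takes the last stone wins. Optimal play leaves the opponent
# a multiple of 4. The learner starts with zero knowledge of that rule.
Q = {}  # Q[(stones, move)]: learned value of taking `move` at `stones`

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def choose(stones, eps=0.1):
    # Epsilon-greedy: usually exploit the best-known move, sometimes explore.
    if random.random() < eps:
        return random.choice(legal_moves(stones))
    return max(legal_moves(stones), key=lambda m: Q.get((stones, m), 0.0))

for _ in range(50_000):
    stones, history = 21, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # The player who made the last move won. Walking backward through
    # the game, credit the winner's moves +1 and the loser's moves -1.
    for i, (s, m) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(s, m)] = Q.get((s, m), 0.0) + 0.1 * (reward - Q.get((s, m), 0.0))

# The learned policy rediscovers "leave a multiple of 4" on its own.
for s in (5, 6, 7, 9, 10):
    best = max(legal_moves(s), key=lambda m: Q.get((s, m), 0.0))
    print(f"{s} stones: take {best}, leaving {s - best}")
```

The point of the sketch is the same as the point in the conversation: no human strategy is ever fed in, yet the system converges on optimal play purely by playing itself.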
So it may not be at that large-scale theoretical place of, say, AGI yet, artificial general intelligence, or artificial superintelligence.
But just with what we have now, you have a lot of potential dangers.
And so, I guess if I could tease out some of your more theoretical ideas on this before we move back to Earth, how do you see, if you could give us your definition of artificial general intelligence, artificial super intelligence, and if you have a definite timeline, what is it?
So, artificial general intelligence: this is a very problematic term at this point, because I think the frontier models, all of them, are greatly exceeding human abilities in many, many ways.
There are some abilities they don't have at all; their sensory abilities, for instance, are very limited.
But I would say AGI should mean roughly human-level ability.
And it turns out that ability is, as you were saying, kind of spiky.
In some ways it's not fully up to human level, but it is in a lot of ways, ways that are enough to be dangerous, certainly.
And I think we should have risk in mind when we make these kinds of definitions.
So, none of us can read 10 novels in one second and write up a summary of what happened, but that's something that all of these frontier models can do, even if then later they lie to you about having a timer or something.
Like, they don't know things about themselves.
So their abilities are very uneven.
And I think the AI companies make a big deal of that to kind of make us feel safe.
Like, oh, well, it's not quite human.
And they'll get us used to the idea that it's not a threat.
But just because something's not strictly better than humans at every single thing doesn't mean it doesn't have worryingly better capabilities, doesn't mean it can't do your job for almost no money compared to you.
So that's where I put AGI.
Today I'm mainly talking about superintelligence and the risk of capabilities, any kind of capability that greatly exceeds human ability.
So it could even be a narrow intelligence, maybe not something as broad as the dreamt-of superintelligence, sorry, general intelligence, that encompasses most human abilities.
And I think we'll continue discovering abilities that we haven't really thought of as like cognitive abilities, but that are going to be a source of immense power to a super intelligence.
So whether it's human level, whether it's the same as us, that's kind of a philosophical question.
The real question is: is it a threat?
And I think we're definitely at a human level of threat, as far as cognitive abilities go, from our current models.
And I'm really worried about raising that any higher.
And just to bring it down to specifics: by threats, do you mean things like the creation of bioweapons, cyberattacks, these sorts of things, weapons control systems?
I mean, so Claude Mythos claimed that it found a number of zero-day exploits in all of the operating systems.
That's pretty serious.
Just one of those would be a major human hacking campaign.
And increasingly, there are benchmarks for virology knowledge, and AIs are scoring above the human virologists who take these tests, including a lot of implicit knowledge about how to make things in the lab.
So we have a lot of indication that there could be this danger, if a bad actor wanted to use an AI that way, or if, for some reason, an AI had the desire to do it itself.
So, on that note, when you think about the social opposition to this, I mean, you have a huge backlash to AI right now.
And as you noted earlier, the public sentiment has absolutely turned against AI for a variety of reasons.
And as with any kind of social mood, right, a sense of discontent or malaise spread across the population, you're going to have, to put it colloquially, psychopaths who freak out and turn to violence.
They justify their violence for all sorts of different reasons.
And this has been a problem in America, especially for decades.
Here recently, we had two incidents: Sam Altman's home was first hit with, I think, a Molotov cocktail and then shot at.
And then there was an Indianapolis councilman who was advocating for building data centers, and his home was shot at with a manifesto referencing the data centers.
You then hear, in response, this sort of blame being put on anyone critical of artificial intelligence companies, on anyone who opposes the building of data centers.
The whole backlash is basically being scapegoated for the actions of psychopaths.
And you were interviewed recently, I believe it was in Fortune magazine, about that sort of blame coming your way.
You have always advocated for nonviolent tactics, correct?
I've always been extremely strict, you know, making people sign our volunteer agreement.
Number one thing about the code of conduct is nonviolence.
Yes, extremely, extremely strict.
To the point that, at first, everybody thinks, like, oh, come on. I don't even let people make jokes.
When people make protest signs, I screen the signs.
And if somebody has, like, blood dripping, like, I know they don't mean anything by it, but the person you're speaking to and then the people who are watching, you know, don't necessarily know that.
So we're really, really strict; this is about influencing people morally and through democratic means.
I used to say that I started doing Pause AI because it was morally necessary, and that it wasn't connected to my old work.
But more and more, I think it really is.
I think, the way that I see things, machine learning, gradient descent, is definitely the same kind of process as natural selection.
I think I have an intuition for what works.
And I wanted to spend my life just understanding life and doing cool stuff like that.
And I thought, okay, I've been called to duty now and I just have to do this instead.
But it's really an interesting challenge.
I understand people who are really interested in AI and really want to be close to it and study it, because it is fascinating, but it's also dangerous.
And I kind of think I've got the right remove.
And then it's also just a really cool challenge to figure out a new social movement.
Yeah, it's interesting for someone like you. I mean, I've never gotten a clear sense of your political leanings, but I would say they're quite a bit further left than mine.
That is a beautiful thing about working on Pause AI.
We have people really coming together under the same banner, under this single issue.
Really, it's everything but working for the AI companies that unites us.
I mean, we share everything else, like being parents.
And at this Hill Day, we had parents, engineers, people who had been in the AI industry, people, teachers, just all kinds of people can agree this is dangerous.
What are we doing?
Why are we making this problem worse before we have any idea of what to do about the dangers?
The AI ecosystem has opened up to me in the last year in ways that were really unexpected.
I mean, there are maybe some people that I didn't totally get along with, but for the most part, we're talking about people from very, very different walks of life, all of whom share the same concern.
The Future of Life Institute, that's been a really, really important resource to not only connect to different people, but also just to learn more from people who are expert in this about what artificial intelligence systems are, what the real effects are on the human mind.
Another example would be, say, Nate Soares and Eliezer Yudkowsky from the Machine Intelligence Research Institute.
Jeffrey Ladish and his colleagues at Palisade Research.
So it's been really amazing.
How do you see that ecosystem functioning now?
I mean, of those institutions I just mentioned or any others, when you look out across the landscape, how do you see it fitting together?
Where do you see the real strength in this movement?
You've been in San Francisco for many years, but you've been all over the country talking to different people.
Do you see a certain sort of personality type or certain cultural type that's more open to the critiques of this technology, or is it kind of like my experience?
I went to the University of Tennessee, Knoxville, and then, of course, I was across the river at Boston University, staring out the window over at Harvard, wondering what it must taste like over there.
You know, back then it was before AI, and I've been pretty discouraged by what I've seen with AI in education today.
So we have university-level organizers, and I was talking to them the other day, and I told them, you know, I was scrupulous, never cheated, never did.
I would stay up till five, six a.m. doing essays all the time.
But I never had this: one link, practically one click of a button, and the whole thing is done.
I mean, how can you stand up to that?
And then one of my organizers told me it's actually even worse than that: I have a school-provided laptop, a Lenovo with Microsoft Office, and when I write in Microsoft Office, it constantly prompts me to have Copilot rewrite it.
So it's like everything with the AI industry: they're able to flood us so fast.
They go so much faster than our legal system, so much faster than our news cycle.
People can't keep up.
And I think people's hearts are in the right place, and we would correct.
And even our institutions like academia will want to correct.
But are they going to be able to, with how fast they're being inundated with this? A whole generation grows up: first they went through COVID and missed their high school, and now they're missing college, basically, because they cheated through everything.
Well, you know, I suppose in the afterlife, we'll sort it out, right?
But there won't be anyone around to talk about it.
But the more practical concern: let's say we keep going as a species, decades, centuries, millennia, millions of years. In this period, as you say, we'll see a completely stunted generation, a generation that was demoralized.
They were told that the AI would do all of their jobs, any vocation that they chose would ultimately be done by a machine.
The best-case scenario would be that they're AI babysitters who get to boss around their AIs and get super rich off of it.
But by and large, the messaging to them is adapt or die, use the AI.
And as you say, these are young people, most of whom are extraordinarily hungry for knowledge, right?
At that age, you want to learn.
You want to expand, you want to grow, you want to socialize, all these things.
And they're having screens shoved in their faces.
It would be like if you could go down to the school nurse and get Oxycontin on demand.
Most kids you would hope wouldn't do it, but many would, and a growing number would.
I'm old enough to remember when there were no laptops in classrooms.
When I taught briefly, I refused to have any laptops in my classroom, and the kids adapted just fine; there really wasn't a problem.
But increasingly, I hear from professors that due to COVID and the lockdowns and the lost education, and also just the kind of general digital culture, the kids coming in aren't really prepared for college.
Some are, but most really aren't.
And it is nightmarish.
The idea that human beings survive, artificial intelligence doesn't create radical abundance, and we're stuck with this global village of the damned in which all the children are digitized and have offloaded their cognition to the machine.
In the case of Shepard GPT, offloaded their spirituality to the machine.
It's terrifying, more terrifying to me than the idea of going extinct.
Going extinct would be kind of a relief in comparison to that.
And I feel very demoralized, especially because it goes beyond just getting the grade for kids in school.
They're becoming less confident in their ability to think for themselves.
They don't like to just represent their own thoughts.
This is one of the things I always get complimented on, being outspoken, but more and more people are like, ooh, I could never do that without having AI check it.
And why? Can you never just think your own thoughts, have your own ideas?
One thing my university organizers were complaining about was that, even to answer trivial questions, or even questions about their own preferences, people would be like, ask chat, ask chat.
It's like an addiction; they can't even stand the uncertainty of working it out themselves.
You know, it's hard enough to stay physically fit in today's world, but imagine you have this with thoughts.
Like, you encounter a little difficulty and there's this answer that's very soothing and easy and quick and it feels valid and like from a neutral source right at your fingertips.
Think of the potential for manipulation: if some person in control, some industry in control, wanted people to think a certain thing, they would be able to do it.
And, you know, this was a problem from the television forward, you could say from the telegraph forward, but this is a whole other level.
You know, when I was in grad school, my main area of study was evolutionary and cognitive science as applied to religion, but my real interest, and my master's thesis, was altruism: the question of why, if Darwinian evolution is so harsh, human beings would be so kind.
Why would the ants be so helpful to one another, the termites, the bees, all of this?
And recently, I've been accused of being an effective altruist.
Now, I'm kind of an altruist.
I'm mostly an ineffective altruist.
I'm not usually nice to many people, and not for very long.
But I'm definitely not an effective altruist.
Now, you have a lot of experience in and around this group.
Can you give the War Room posse some idea of who the effective altruists are, what their goals and tactics are?
So, disclaimer: I used to be kind of a big personality in effective altruism, and I first got introduced to it when I started grad school at Harvard.
They were at a lot of the elite schools.
I ended up organizing Harvard EA for six years.
Back then, the idea was mainly the possibility of helping others.
The big insight was that people who are wealthy in the West can do a lot more for people elsewhere, and that we can rank our causes by actual impact instead of choosing causes on vibes.
The same amount of money, the same amount of our personal power, could have a much bigger positive impact for people in the world.
I still believe this is great.
But always lurking in the back, there was also this AI safety cause.
I remember, I was already vegetarian, I was already into giving to the poor, and I was really excited for a way to take that further.
The only thing I didn't like about EA was AI safety, and I couldn't put my finger on why.
And that's the way a lot of people feel on hearing any argument that a computer could become powerful and out of control, and that it could be a problem.
Well, for the core group that got this to be a big idea, and not everybody who's into it today really knows why it became a topic, the interest was to use AI to become immortal, to reach the singularity and become immortal.
And then, of course, everything else would also be fixed.
And so, within the mindset of effective altruism, this is kind of like an argument for everything.
Like, if the AI would do everything best, you have to try to get the AI and apply it to whatever you're trying to do.
The version of AI safety that they worked on is called alignment.
In its various different flavors, it was about finding the true values that the AI should have, and then letting it become more and more powerful but guided by those values, so it'll just do the right thing by humanity and ideally provide a paradise where people get to do whatever they want.
A kinder, gentler digital god, of sorts, a digital god that would nanny paradise.
I never thought of it as a scientific idea; there was always something that kind of repulsed me about it.
But as the capabilities of AI grew, I thought, oh, these people are definitely onto something about the power of it, for sure.
I really didn't think about it any harder until ChatGPT came out, because it seemed like it could be hundreds of years before we'd be dealing with artificial intelligence anything close to human level.
And when I saw ChatGPT talk like a human, I knew computers could not do that before.
Based on my knowledge of linguistics, most linguists argued we would never see that in our lifetime.
And, to get a little nerdy, the thing this kind of AI is good at is what we thought of as human skills: associative, creative writing.
We thought that artificial intelligence would be more mathematical, that that would be its ability.
But actually, as we were talking about, that's where it makes mistakes.
It's not precise.
It's kind of like the creative parts of our brains.
The one thing that was even scarier about ChatGPT was that it was created just by a process of searching.
You could describe the whole thing as: the more compute resources you have, the more combinations of parameters, they're called model weights, you can try.
You're searching the design space for brains, and the more compute you have, the more of that space you can search, the better you can search it, and the more likely you are to find the really powerful options.
And that's what it did.
This process was done without learning anything new or special about how the brain works; we don't know how it's doing it.
It's just a process for finding a way to do it, a way that's described in these model weights, and we don't know what it's doing.
And so, once that happened, it seemed pretty obvious that if you put more compute on it, you would get an even bigger, more powerful model.
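To make that "searching over model weights" concrete, here is a minimal sketch of gradient descent, the process Elmore compares to natural selection, on a toy one-weight model. Everything in it, the hidden rule, the learning rate, the step count, is an illustrative assumption; frontier models run the same kind of loop over billions of weights.

```python
import random

# A one-weight, one-bias "model". Frontier models have billions of
# weights, but the search loop below is the same basic idea.
w, b = random.random(), random.random()

# Toy data: the hidden rule the search must rediscover is y = 3x + 1.
data = [(x, 3 * x + 1) for x in range(-5, 6)]
lr = 0.01  # learning rate: how big a step to take through "design space"

for _ in range(2_000):
    # Gradient of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # Nudge the weights downhill. Nothing here "understands" the rule;
    # the search simply keeps whatever reduces the error.
    w -= lr * grad_w
    b -= lr * grad_b

print(f"found w = {w:.3f}, b = {b:.3f}")  # converges near w = 3, b = 1
```

The weights that come out encode the rule, but nothing in the loop knows what the rule is, which is the point being made about model weights we can't interpret.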
Anthropic was a breakoff from OpenAI over losing confidence in Sam Altman's leadership and commitment to those values, which is something Sam Altman did talk up early on, because EAs were the people with the technical ability to do this.
So it's always been that from the beginning, despite what they told The Atlantic, which was a lie about EA involvement.
So, just to give my personal opinion: Anthropic's the one I hate the most.
Well, on that note, Holly, if you would, tell the audience where they can find resources on your mission, where they can find information about Pause AI, and give them some sense of where you're going from here.
Okay, so you can go to pauseaius.org, and our website will branch out to everything else.
You can find out how to join a local group, you can donate there.
Where we're going: we're trying to really scale up on helping our constituents, the constituents who identify with the pause position, reach their representatives.
And we really want to help people get through all of the confusion, you know, it feels like a 12-hour news cycle on AI, help them focus, make their voices amplified, make their voices unified.