Stephen K. Bannon frames AI as a global "brain," predicting AGI within three to five years despite public concern and violent backlash against data centers. Experts Daniel Cochran and Brendan Steinhauser warn of 100,000 lost jobs and existential risks, urging federal oversight via Marsha Blackburn's "Trump America AI Act." While Ray Kurzweil envisions conscious digital entities, the segment concludes that unchecked consolidation threatens democracy, necessitating immediate regulation to prevent societal collapse. [Automatically generated summary]
It'll be personal, adaptable, it'll be easy to use, it'll give people incredible superpowers that were sort of science fiction only a couple of years ago.
I think an even more meaningful impact in our lives is going to come from.
Everyone having a personal super intelligence that helps you achieve your goals, create what you want to see in the world, be a better friend, and grow to become the person that you aspire to be.
One way to say this is that within three to five years, we'll have what is called general intelligence, AGI, which can be defined as a system that is as smart as the smartest mathematician, physicist, artist, writer, thinker, politician.
I think that personal devices like glasses that can see what we see, hear what we hear, and interact with us throughout the day are going to become our main computing devices.
When the end of that occurs, then they have reached a condition of having overcome human behavior, human thinking, human desires, desiring only to be in the kingdom of heaven.
In the evolutionary level above human, this is a kind of brain for the world.
It'll be personal, adaptable, it'll be easy to use, it'll give people incredible superpowers that were sort of science fiction only a couple of years ago.
It is Wednesday, April 15th in the year of our Lord, 2026.
I am Joe Allen sitting in for Stephen K. Bannon.
I've been going over a lot of the polling in the recent weeks, polling from Gallup, polling from Pew, and the Stanford AI Index.
What they are finding, to no one's surprise, is that the public is not excited about the AI revolution, a revolution, as we have just heard, intended to first replicate the human mind and then replace as many human beings as possible with these non-human minds.
The public, according to polling from Pew: a mere 24% of people are in any way excited about AI.
50%, on the other hand, are quite concerned about artificial intelligence.
And 50% are convinced, and I think rightly so, that artificial intelligence will in fact worsen creativity in children, in adults.
It will in fact worsen human relationships.
Insofar as the younger generation is concerned, a recent Gallup poll found that among Zoomers, Gen Z kids from ages 16 to 29, 27% last year thought that AI would bring about a better world.
They were hopeful about AI.
That is down to 17% now.
It's oftentimes said that the reason boomers, the older generation, aren't in any way excited about AI and are quite a bit concerned is that they don't really use it.
They don't have experience with it.
Well, Gen Z is more than immersed in AI.
They've, for most of their conscious or early adult life, used AI continuously.
It's been around them.
The internet was something that was taken for granted their entire lives. Even though, or perhaps because of, their use of artificial intelligence, they're coming away completely dismal about the possibilities for the future.
Why would Gen Z, why would anyone look to artificial intelligence with a suspicious eye?
I think first and foremost it comes down from the tech CEOs themselves and what they have said. They've stated explicitly what their mission is, which is, again, to create non-human minds, built from data extracted from the public, that will replicate every economically viable activity that human beings can do: coding, white collar work, and eventually, with the advancement of robotics, blue collar work.
Their intention is to replace you.
In the meantime, we're seeing stock prices soar.
We're seeing data centers imposed on communities around the country.
And we're seeing the culture watered down with AI slop.
The data center backlash is among the most heartening sorts of movements that we're seeing right now against AI.
In Port Washington, Wisconsin, the city, with a referendum, has decided to pause all construction on data centers that draw more than 20 gigawatts.
In Festus, Missouri, they fired half of their city council for trying to build data centers in their small town.
And in Maine, we have a moratorium, again, for a year and a half on the building of any data centers above 20 gigawatts.
This is just the tip of the iceberg insofar as local resistance goes, not only to the cultural effects, the economic effects of artificial intelligence, but to the on-the-ground imposition of massive, noisy data centers that strain the infrastructure, that use up the electricity and, in many cases, the water.
And people are becoming more and more conscious of what the ultimate goal of these data centers is.
Now, this backlash has led to some unfortunate instances of violence.
No one has been harmed up to this point, but in Indianapolis we had a city councilman who had pushed for a data center to be built in the local municipality.
His home was shot up.
And a manifesto talking about the sort of downsides of AI was found.
Sam Altman's home was hit with a Molotov cocktail and then a few days later shot up.
This is not the norm, nor is it anything that is being advocated for by the people pushing against the AI industry.
We're not calling for violence.
You see violence associated with any movement, whether it's racial grievance or economic downturns.
You see violence associated with police resistance.
You see violence everywhere.
It is not characterizing the AI movement, and we can't allow it to characterize the anti AI movement.
What we are about is having people on the ground, on the local level, being able to determine whether or not their town is defined by a loud data center that's sucking up all of the electricity.
What we are about is people at the state level being able to protect their children from the predatory bots that have been unleashed in schools, in their homes, and on the internet in general.
What we are about is federal-level control: not only accountability for the damage being done by these companies and demands for transparency as to what's going on inside the labs, but also the institution of some outside agency that will watch over these companies and determine whether or not the technology they're producing is destructive, whether it is dangerous.
That may be the Department of Energy, that may be the Center for AI Standards and Innovation.
That may be another agency such as DHS, or it may be a combination of all of them.
But at some point, we're going to have to look at artificial intelligence the same way we've looked at nuclear weapons, or on a more mundane level, the same way we've looked at junk food, the same way we've looked at medicines.
No industry goes unregulated.
And the push to leave the future of artificial intelligence solely up to the companies that are building it and deploying it recklessly would be just as insane as allowing Pfizer to regulate themselves with vaccines or pain medication or psychiatric meds.
This is a minimal ask.
We simply want, first and foremost, people to be empowered, and second, for the government to be responsive to the demands of their constituencies.
You are not alone in this resistance.
This is nationwide.
This is worldwide.
And as we push on the local level, on the state level, and eventually the federal level, you're going to have to keep your backbones.
You're going to have to stay salty.
You're going to have to look to the future, not a future of doom in which AI has taken over everything.
There are no jobs, and we are simply pets that are being fed by robots.
We are not looking towards a future in which robots are kicking in your door and dragging you out to become biofuel for the next data center.
We're looking for a future that continues our traditions.
We're looking for a future in which everything that our parents, our grandparents, and all the generations that have brought us up to this point are vindicated in their efforts, that all of their sacrifices were not for nothing, that this movement of human life from the primordial origins up to this point and onward continues, remains human, that we are not handing ourselves over to the machines, and most importantly, that we're not handing ourselves over to the people who are building and deploying these machines.
Because if there's one thing that you really have to keep in mind, it's that part of the anti-AI backlash is being steered towards looking at AI as some sort of abstract entity, as if AI itself, this non-human digital mind, were coming down from the heavens, or perhaps up from the hells, of its own accord, imposing itself on you and on your children and destroying the culture that we have around us.
No, this is not happening because of some abstract non human mind.
This is happening because of people like Sam Altman of OpenAI, Demis Hassabis of Google, Elon Musk of xAI, and Dario Amodei of Anthropic, and we'll even throw in Mark Zuckerberg of Meta. It's because human beings have made the choice to build replacements for other human beings and impose their technologies on us.
We're going to resist it at the local level.
We're going to resist it at the state level.
We're going to resist it at the federal level.
And most importantly, we're going to reject it in our own homes, in our own lives.
Stay tuned, War Room Posse.
We'll be right back with Daniel Cochran of the Institute for Family Studies.
Welcome back, War Room Posse.
I am proud to introduce Daniel Cochran, a senior fellow with the Family First Tech Initiative at the Institute for Family Studies.
Daniel comes from a long and storied background in policy with a number of think tanks and institutions.
Daniel, thank you very much for joining me.
Daniel, if you would, just tell me a little bit about how you got into AI policy and what you see the future of AI policy being in America.
To begin with, I think we should look at the social media era, because here, in talking about AI, past is prologue. At the beginning, we were told social media would radically transform the world.
And in one way, it did.
It did indeed transform the world, but not in the ways we were promised.
We were promised greater democracy, more self government, more decentralization.
Social media was promised as sort of our savior, the savior of the common man from the big institutions, think the big media, right?
All the things that this show has long talked about and critiqued.
But instead, it created a new type of information gatekeeper. And those are the companies we're familiar with today: Meta, TikTok, all of these big companies, Google, which owns YouTube and other social media platforms.
These platforms became new information gatekeepers, and they now have the power not just to influence our elections and public discourse, but as we've learned very tragically, to literally rewire the minds of an entire generation.
So when I say past is prologue, I think we need to apply that lesson of history now to AI.
What are the AI guys telling us?
What is Sam Altman telling us?
What is Mark Zuckerberg telling us?
What is Marc Andreessen telling us?
They're all telling us the same thing.
This is going to be the savior of humanity, literally in some cases.
This is going to decentralize.
That's Marc Andreessen's big point.
He talks over and over again, and others as well, about how AI is going to create an opportunity for smaller enterprises to compete with large businesses, break the media monopoly, et cetera.
But what are we seeing in reality?
Well, as you noted in the opening segment, we're seeing these big tech companies, many of which were involved in the social media era, like Meta, reconsolidate control over AI and, in fact, use their existing distribution networks, like Meta's Instagram or TikTok or others, not only to peddle their AI products, but to create a new generation addicted to digital narcotics.
And this time, it's not just an algorithm that addicts you or pulls you in; it's now an algorithm that you believe has a relationship with you, that you believe loves you.
AI companions, AI lovers, et cetera.
So I think what we ought to do is, I think we have to have a healthy degree of cynicism, but not to the extent of desiring to destroy the tech companies.
That's not what we should desire.
I mean, we should desire to destroy the evil that's there, but we should desire to have a vision.
We should chart a vision for how AI can serve the American public.
And I know we're here to talk about some of the important legislative efforts and so forth, but I think it's important to lead with skepticism, but then provide the vision.
And that's what I think Republicans and conservatives around the nation need to do.
Yeah, it's very important to look at that direct connection between social media and the digital milieu in general and AI.
As you noted, people have been primed to communicate via these networks.
And especially with social media, there's a culture now in which people are basically uploading their souls and have been doing so since the early 2000s.
You're taking your personality, you're putting it in this digital form, you're interacting with other people whose personalities are put into a digital form.
You literally have this kind of simulacrum that becomes, in many ways, at least socially speaking, more real than real life, at least for some people.
We all know those people whose social media persona is more important to them than the actual flesh and blood relationships around them.
And AI steps in to that pre existing digital culture.
And instead of now interacting with digitized humans, you're interacting with a digital mind that is built from previously digitized humans, and it has a kind of life of its own.
I'm curious: how do you see the lack of regulation of social media playing out as we move forward into, most likely, some pretty harsh regulation of artificial intelligence, or at least I hope so?
Well, look, I think you're seeing the backlash already around the country.
I mean, even Gen Z is saying, you know, and polling is showing this more and more, Gen Z is rejecting to some extent social media and smartphones.
You know, you talk about these clubs on college campuses.
I met very recently, I met a young man who started a club at a public university.
And the whole point of this club is low tech, no tech, kind of digital living, combined with policy advocacy to get a handle on this stuff.
So that's really interesting.
It's not just about get rid of the smartphone in your own life, it's about arresting the powers that have kind of taken over our homes and our lives from the grassroots, from the state level, local level, federal level, which you sort of alluded to earlier.
So, you have a paper coming out, or your institute has a paper coming out addressing Marsha Blackburn's Trump America AI Act.
If you would, could you just break down what your position is, maybe even give us a little bit of a review on what Marsha Blackburn's bill is actually calling for?
And I think it's important to connect this with the president's AI framework, the legislative recommendations the White House recently made.
And in there, I think it's useful to say: look, those are high-level principles, but the document delegates most of, frankly, the hard work to Congress, which is the way it should be.
Congress needs to legislate on this issue.
They've needed to for years, they promised to for years, and they failed to for years.
Now, when we talk about artificial intelligence and creating a national framework, where I think we often get off track is that we lose sight of what we're actually trying to do.
And the first principle of an effective national AI framework must be guardrails.
Strong guardrails that actually have teeth.
And Marsha Blackburn's Trump America AI Act would do just that.
It contains a lot of provisions that are very common sense and widely bipartisan, like the Kids Online Safety Act and the GUARD Act introduced by Senator Hawley.
One of the more interesting facets to me is the notion of existential risk with artificial intelligence, or any kind of catastrophic risk. This hasn't really been much of a discussion, other than the kind of sensationalist claims or predictions, which may in fact prove to be true.
No one's saying that they're not, but I don't really focus on them all that much.
On the other hand, you have these tech execs, Elon Musk probably the most notable, but we've also had many statements from Dario Amodei for sure, and Sam Altman, saying that artificial intelligence poses catastrophic risk to the public, either because of loss of control or because rogue actors use AI to create bioweapons or any other kind of improvised weapon.
Marsha Blackburn's Trump America AI Act addresses that head on.
The proposition seems to be that some combination of federal agencies would oversee these companies.
She specifically points to the Department of Homeland Security in the case of rogue actors; for the more general threat, that responsibility would fall on the Department of Energy, which, of course, has a long history of surveilling and regulating nuclear materials.
So I'm curious have you thought much about that aspect?
Do you find existential risk to be a persuasive argument, that we actually should be concerned that AI could go rogue, or that rogue actors could use it to inflict horrible damage on society?
Well, look, I think the devil's in the details on how you define it, but let's look at an example from just this past week: Anthropic's Claude being used to find critical security vulnerabilities, in a whole variety of critical infrastructure and also operating systems, that we didn't know existed for years.
So, that in one sense is a kind of existential risk because at its base, what is AI?
It's pattern recognition and prediction.
So, it's finding patterns that no human or prior computer systems have found, and that creates a lot of vulnerabilities.
I mean, if you think about it, if a hacker had the ability to do what Claude can do right now, right?
That would be really bad from a national security standpoint, from an economic security standpoint, from almost any standpoint.
So I think it's quite important for bills like the Trump America AI Act, which you just referred to, to delegate some responsibility to the Department of Homeland Security and other federal agencies to ensure that we're studying this stuff at the very least, and that we create protocols for reporting back to the government on what the actual abilities of these models are.
Anthropic's response to this, the way they've moved forward, I have a lot of mixed emotions about, because, on the one hand, they're creating this software, these models, to write better code and perform a number of other functions.
It's dual use, and so by being able to write code and analyze code, it's able to find these vulnerabilities.
If they are going to create a monster like that, they have to be responsible for it.
And so they are, to some extent, taking responsibility.
They're saying we should be regulated.
They did not release the model.
They offered the model to Amazon Web Services, Microsoft, a number of corporations so that they could strengthen, they could bolster their own cybersecurity using the model.
So it's a mixed bag because, on the one hand, they're driving forward with this AI race.
They are actively creating something that could wreak havoc.
Maybe not this model, maybe not Mythos, but the next one, or the one after that, could wreak havoc on the digital infrastructure.
Knowing that this danger exists, they somehow feel it's worth it so they can write better code.
It's as if Dr. Frankenstein asked the local sheriff to stand guard to make sure that the monster doesn't go out of control.
I think that the problem is really the desire to create such a monster.
But, you know, Daniel, you've always been much more positive about this than I have in our discussions.
You've always had much more optimism.
And when we get back at the other end of the break, I want to hear more about your optimism and I want you to make me feel good about the artificial intelligence revolution.
We're trying to reverse them, but we're playing catch up.
The cement on AI is still wet, and we still have an opportunity as a country to make decisions for the direction that technology is going to travel.
And the lesson we ought to apply to this round is we need guardrails and regulations.
And as we mentioned, we were talking about Senator Blackburn's Trump America AI Act.
That's a great step in the right direction.
So you ask, well, what would it look like to have guardrails?
That's what I think a starting pass looks like.
It looks like protecting kids online with Kids Online Safety Act.
It means that we impose regulations on chatbots.
We tell companies: look, you're not allowed to sell or make AI companions available to children, especially AI companions that engage in sexually promiscuous conversations.
And to your point earlier, it ensures that we create some kind of framework to monitor the risks that are emerging from these models: the ability of these models to find vulnerabilities, but also to act in unpredictable ways that are going to harm our society at a massive scale.
The way I look at it, if I had my own way about it, I would just simply turn back the clock.
I'm not sure to where, probably somewhere in the 90s, the best decade.
But obviously, no one's asking me for my opinion on how all of society should go.
And it's probably best I don't have my way.
So we're stuck.
These technologies have been created.
They will continue to develop.
We have to have some way of steering it, or kind of the way I look at it, to put up blast shields against the worst effects of the technology.
And while I'm not a huge fan of government imposition, I'm also not a fan of corporate imposition.
And I think in this case, the corporate imposition is far more dangerous than anything the government's going to do.
And these guys are talking about, oh, our rights as corporations are being trampled upon.
Well, they've trampled on our privacy.
They've really trampled on our sanity.
And certainly they've trampled on the sense of safety that people can and should have regarding their children.
So, you know, I'm all for regulating this.
If the government is our only real mechanism to get a handle on this, you know, sometimes you've got to make a deal with the devil to go against Lucifer, right?
I think it's important to recognize, too, this is why states are critical in figuring out how to strike the balance.
Like, we have a federalist system here in the United States, and one of the benefits of federalism is we get to figure out what works and what doesn't.
You know, I'm from California.
California's done some really crazy stuff.
And you look at California, like, I don't want to do that.
And other states are like, we're going the opposite direction.
You got states like Texas that are doing some really great stuff on education, on tax policy.
Florida's the same.
Tennessee's the same.
And especially on technology, when it comes to AI, look at what Texas is doing, look at what Utah's doing, look at what Tennessee's doing.
You know, Tennessee, the home of the country music industry and a lot of creatives, they enacted the Elvis Act.
What does that do?
It says you can't use AI to distribute or you can't distribute digital copies of someone's visual likeness or their voice.
Very common sense.
And that came not from Congress, not from some technocratic agency here in Washington; that came from a state.
Look at Utah.
Utah put common sense guardrails on chatbots.
They said, look, if you're using a chatbot for mental health therapy, which personally I oppose, but okay, if you're going to do that, the company can't use your data and target you with ads while you're using the chatbot.
Very common sense.
That came from a state, not from Washington, D.C. And Florida, for years, has been on the front lines of trying to address the effects of social media.
They passed laws that put restraints and guardrails on social media to address some of its harmful effects on kids, like requiring parental consent for social media access.
But more recently, they have also enacted a law to push back on big tech censorship.
You know, the case that went to the Supreme Court just a couple of years ago dealt with laws in Florida and Texas that would have restricted the ability of these companies to censor speech.
And I know that's something that War Room has experienced firsthand.
And we know, I mean, this is stuff that the states are doing.
The states are the American people's first and best line of defense against big tech and against big AI, and that's why we've got to keep them in the race.
Well, it's great to be back with you, Joe, and great always to be with the War Room posse.
You know, we were sitting around the office one day, and one of the team members had an idea: look, we're seeing these reports of job losses due to AI.
What if we started to track them in real time?
What if we kept it updated on a daily basis?
What if we kind of scoured news reports and looked at what's happening in the boardrooms around America? We started to actually count these job losses, going back to January of last year, January of 2025.
And so the team put together this great tool, jobloss.ai, where we can actually track what's going on.
It's based on what the companies are saying themselves when they announce job cuts due to AI.
It's also based on media reports because sometimes people don't want to admit they're cutting jobs due to AI.
And so oftentimes you have a whistleblower in the company that talks to a journalist, and now we know the real story.
But that's kind of the methodology there.
We try and be aware of some companies might say it's due to AI, but really it might be due to something else.
We try and dig in a little bit to figure out, okay, what's the most likely plausible situation here?
It's a moving target, but I think we're really happy with the product as it is, the tool that people can use and track.
But the downside, the problem is that this is happening and it's happening really fast.
And policymakers are behind the eight ball here and they need to be talking about this more.
They need to be thinking about it more.
And right now, the American worker is at grave risk.
Coding seems to be the most vulnerable occupation at the moment.
Can you break down a bit what you're finding as far as the kinds of job titles or job roles that are really under threat, where people are being replaced? And if you could give us a sense: how much of the job loss is a lack of opportunity, where lower-level people who would have come in and trained don't have a job to come in to and apprentice in?
And how much is just people getting wiped out, like the massive layoffs at Oracle?
Yeah, there's a lot of activity at these big companies, Oracle being one of them, Salesforce, Dell, and a bunch of others.
They've announced massive cuts.
So, a lot of those numbers we're tracking actually are due to real jobs lost, real people who are left unemployed.
And so that number is well over 100,000 now in the US and growing.
And Goldman Sachs estimates it's going to be something like 15,000 or 16,000 jobs lost per month. So these numbers are a moving target, but they're basically saying that's what they see happening.
But right now, yeah, it's a combination of sort of your computer programmers, your coders, people who AI is essentially replacing in real time.
So if you're in that software development business, you're probably seeing this at your company.
You're probably seeing this if that's your job description, even if you work at a different company.
So a lot of that impact is happening to those folks.
But also, what we're seeing is kind of your white collar workers, your middle managers, people that have some experience, who aren't necessarily entry level and aren't the most senior person at the company. Sort of mid-level management is getting hit really hard.
And you think about that, that could be at any type of company.
Those can be tech companies or retail or construction or any company that has that sort of role of people managing people, especially through the use of computers, through the use of spreadsheets and email and phone calls and that sort of thing.
So, what's happening is these AI agents that are coming on the scene are starting to replace those white collar workers.
And that's why one of the big tech CEOs admitted this about six months ago: he said there could be a white collar bloodbath, and you could have incredibly high unemployment numbers coupled with hiring freezes and young people not being able to get good jobs.
They're asking questions about what's going on, what big tech is building.
I've literally not met a single person in my travels who supports being replaced and given a UBI.
There's been a lot of opposition to building a super intelligence.
People are very worried about that.
We have traveled all over the country.
My team and I have done everything from attending CPAC in Dallas, a huge gathering of conservatives there, to going to the Sundance Film Festival in Utah, or South by Southwest in Austin, traveling down to Palm Beach, Florida, talking to voters, talking to regular people.
It is incredible.
We have an issue where more than 80% of Americans agree we have to have safeguards on AI.
They don't want accelerationism, they don't want runaway AI.
And it's really interesting and encouraging because it's kind of a bipartisan thing.
You meet people who are pretty far left, pretty far right, people in the middle who hate politics.
They all agree on this and they say, I don't trust what big tech is doing.
I'm really concerned about where we're going.
And I want to see some safeguards and protections.
And so that's heartening to me.
And I think that my message to all Americans is that you have agency, you have a voice in this, you get a vote in what happens to us in our future.
And I really want to encourage people to continue that fight.
You've taught me a lot about how to deal with politics, and maybe one day I'll actually learn.
All right, that's jobloss.ai, jobloss.ai.
I really encourage you to go there and see what the impacts are, the real world impacts are of artificial intelligence on the economy and on people's jobs, on people's lives.
You aren't the first person to say that's a long URL, Joe.
So I will keep that in mind for the next time.
But it is warroomvarally, all one word, dot eventbrite.com.
And if you go on War Room social media pages, there is a post from last night, and we will post it again with the link so you're able to click right from there as well.
You know, Mo, working with you, working with Steve, working with everyone here at the War Room, the thing that really has given me the most hope about American politics is this grassroots effort.
You know, we're not some kind of top down organization.
This is all about the people, and the War Room has given the people a voice.
Well, War Room Posse, I think that it would be good to end on a bright note.
We've talked a lot about Ray Kurzweil, Ray Kurzweil's idea of the singularity, Ray Kurzweil and the merger of man with machine, and Ray Kurzweil and his idea of spiritual machines.
So as we go out, let's just watch this doddering old man describe what the future of artificial intelligence really is.