April 15, 2026 - Bannon's War Room
48:56
Episode 5302: Rise Of The Digital Mind

Stephen K. Bannon frames AI as a global "brain," predicting AGI within three to five years despite public concern and violent backlash against data centers. Experts Daniel Cochrane and Brendan Steinhauser warn of 100,000 lost jobs and existential risks, urging federal oversight via Marsha Blackburn's "Trump America AI Act." While Ray Kurzweil envisions conscious digital entities, the segment concludes that unchecked consolidation threatens democracy, necessitating immediate regulation to prevent societal collapse. [Automatically generated summary]

Transcriber: CohereLabs/cohere-transcribe-03-2026, sat-12l-sm, and large-v3-turbo
Participants
Main
daniel cochrane 09:29
joe allen 24:50
Appearances
brendan steinhauser 04:50
eric schmidt 01:06
marshall applewhite 00:36
maureen bannon 01:40
ray kurzweil 01:10
Clips
donald j trump (admin) 00:12
jake tapper (cnn) 00:10
mark zuckerberg (meta) 00:29
sam altman (openai) 00:27
steve bannon (r) 00:27

Speaker Time Text
Primal Scream of Dying Regime 00:06:26
steve bannon
This is the primal scream of a dying regime.
Pray for our enemies, because we're going medieval on these people.
You're not going to have a free shot on all these networks lying about the people.
The people have had a belly full of it.
I know you don't like hearing that.
I know you try to do everything in the world to stop that, but you're not going to stop it.
It's going to happen.
jake tapper
And where do people like that go to share the big lie?
MAGA Media.
I wish, in my soul, I wish that any of these people had a conscience.
steve bannon
Ask yourself.
What is my task and what is my purpose?
If that answer is to save my country, this country will be saved.
unidentified
War Room.
Here's your host, Stephen K. Bannon.
sam altman
We are building something profound.
This is a kind of brain for the world.
It'll be personal, adaptable, it'll be easy to use, it'll give people incredible superpowers that were sort of science fiction only a couple of years ago.
eric schmidt
Okay, so we believe as an industry that in the next one year, the vast majority of programmers will be replaced by AI programmers.
marshall applewhite
It probably would look like what you might consider a very attractive extraterrestrial.
mark zuckerberg
I think an even more meaningful impact in our lives is going to come from.
Everyone having a personal super intelligence that helps you achieve your goals, create what you want to see in the world, be a better friend, and grow to become the person that you aspire to be.
marshall applewhite
Insects or slimy reptilians or eyes so big that you could fall into them.
eric schmidt
One way to say this is that within three to five years, we'll have what is called general intelligence, AGI, which can be defined as a system that is as smart as the smartest mathematician, physicist, artist, writer, thinker, politician.
donald j trump
The people of OpenAI, Google, Meta, and countless startups are proving once again that America is impossible.
You are just impossible to beat.
mark zuckerberg
I want to talk about our new effort, Meta Superintelligence Labs, and our vision to build personal superintelligence for everyone.
marshall applewhite
The only real extraterrestrials have a body similar to human body.
sam altman
So now we're starting to look ahead to superintelligence, and even more than before, our focus must be on wide and fair access.
eric schmidt
What happens when every single one of us has the equivalent of the smartest human on every problem in our pocket?
mark zuckerberg
I think that personal devices like glasses that can see what we see, hear what we hear, and interact with us throughout the day are going to become our main computing devices.
eric schmidt
In the next year or two, this foundation is being locked in, and we're not going to stop it.
It gets much more interesting after that.
Because remember, the computers are now doing self improvement.
They're learning how to plan and they don't have to listen to us anymore.
We call that super intelligence or ASI, artificial super intelligence.
And this is the theory that there will be computers that are smarter than the sum of humans.
The San Francisco consensus is this occurs within six years.
marshall applewhite
When the end of that occurs, then they have reached a condition of having overcome human behavior, human thinking, human desires, desiring only to be in the kingdom of heaven.
In the evolutionary level above human, this is a kind of brain for the world.
sam altman
It'll be personal, adaptable, it'll be easy to use, it'll give people incredible superpowers that were sort of science fiction only a couple of years ago.
joe allen
Good morning.
It is Wednesday, April 15th in the year of our Lord, 2026.
I am Joe Allen sitting in for Stephen K. Bannon.
I've been going over a lot of the polling in the recent weeks, polling from Gallup, polling from Pew, and the Stanford AI Index.
What they are finding, to no one's surprise, is that the public is not excited about the AI revolution, a revolution, as we have just heard, intended to first replicate the human mind and then replace as many human beings as possible with these non-human minds.
According to polling from Pew, a mere 24% of people are in any way excited about AI.
50%, on the other hand, are quite concerned about artificial intelligence.
And 50% are convinced, and I think rightly so, that artificial intelligence will in fact worsen creativity in children and in adults.
It will in fact worsen human relationships.
Insofar as the younger generation is concerned, a recent Gallup poll found that among Zoomers, Gen Z kids aged 16 to 29, 27% last year thought that AI would bring about a better world.
They were hopeful about AI.
That is down to 17% now.
It's oftentimes said that the reason the older generation, the boomers, isn't in any way excited about AI and is quite a bit concerned is because they don't really use it.
They don't have experience with it.
Well, Gen Z is more than immersed in AI.
They've, for most of their conscious or early adult life, used AI continuously.
It's been around them.
The internet was something that was taken for granted their entire lives. Even though, or perhaps because of, their use of artificial intelligence, they're coming away completely dismal about the possibilities for the future.
Decentralizing AI Against Central Powers 00:15:33
joe allen
Why?
Why would Gen Z, why would anyone look to artificial intelligence with a suspicious eye?
I think first and foremost, it comes down from the tech CEOs themselves and what they have said. They've stated explicitly what their mission is, which is to, again, create non-human minds, built from data extracted from the public, that will replicate every economically viable activity that human beings can do: coding, white-collar work, and eventually, with the advancement of robotics, blue-collar work.
Their intention is to replace you.
In the meantime, we're seeing stock prices soar.
We're seeing data centers imposed on communities around the country.
And we're seeing the culture watered down with AI slop.
The data center backlash is among the most heartening sorts of movements that we're seeing right now against AI.
In Port Washington, Wisconsin, the city, by referendum, has decided to pause all construction on data centers that draw more than 20 gigawatts.
In Festus, Missouri, they fired half of their city council for trying to build data centers in their small town.
And in Maine, we have a moratorium, again, for a year and a half on the building of any data centers above 20 gigawatts.
This is just the tip of the iceberg insofar as the local resistance goes, not only to the cultural and economic effects of artificial intelligence, but to the on-the-ground imposition of massive, noisy data centers that strain the infrastructure, that use up the electricity and, in many cases, the water.
And people are becoming more and more conscious of what the ultimate goal of these data centers is.
Now, this backlash has led to some unfortunate instances of violence.
No one has been harmed up to this point, but we had in Indianapolis a city councilman who had pushed for a data center to be built in the local municipality.
His home was shot up.
And a manifesto talking about the sort of downsides of AI was found.
Sam Altman's home was hit with a Molotov cocktail and then a few days later shot up.
This is not the norm, nor is it anything that is being advocated for by the people pushing against the AI industry.
We're not calling for violence.
You see violence associated with any movement, whether it's racial grievance or economic downturns.
You see violence associated with police resistance.
You see violence everywhere.
It is not characterizing the AI movement, and we can't allow it to characterize the anti AI movement.
What we are about is having people on the ground, on the local level, being able to determine whether or not their town is defined by a loud data center that's sucking up all of the electricity.
What we are about is people at the state level being able to protect their children from the predatory bots that have been unleashed in schools, in their homes, and on the internet in general.
What we are about is federal-level control: not only accountability for the damage being done by these companies and demands for transparency as to what's going on inside the labs, but also the institution of some outside agency that will watch over these companies and determine whether or not the technology they're producing is destructive, whether it is dangerous.
That may be the Department of Energy, that may be the Center for AI Innovation and Standards.
That may be another agency such as DHS, or it may be a combination of all of them.
But at some point, we're going to have to look at artificial intelligence the same way we've looked at nuclear weapons, or on a more mundane level, the same way we've looked at junk food, the same way we've looked at medicines.
No industry goes unregulated.
And the push to leave the future of artificial intelligence solely up to the companies that are building it and deploying it recklessly would be just as insane as allowing Pfizer to regulate themselves with vaccines or pain medication or psychiatric meds.
This is a minimal ask.
We simply want, first and foremost, people to be empowered, and second, for the government to be responsive to the demands of their constituencies.
You are not alone in this resistance.
This is nationwide.
This is worldwide.
And as we push on the local level, on the state level, and eventually the federal level, you're going to have to keep your backbones.
You're going to have to stay salty.
You're going to have to look to the future, not a future of doom in which AI has taken over everything, there are no jobs, and we are simply pets that are being fed by robots.
We are not looking towards a future in which robots are kicking in your door and dragging you out to become biofuel for the next data center.
We're looking for a future that continues our traditions.
We're looking for a future in which everything that our parents, our grandparents, and all the generations that have brought us up to this point are vindicated in their efforts, that all of their sacrifices were not for nothing, that this movement of human life from the primordial origins up to this point and onward continues, remains human, that we are not handing ourselves over to the machines, and most importantly, that we're not handing ourselves over to the people who are building and deploying these machines.
Because if there's one thing that you really have to keep in mind, it is that part of the anti-AI backlash is being steered towards looking at AI as some sort of abstract entity, as if AI itself, this non-human digital mind, were coming down from the heavens, or perhaps coming up from the hells, of its own accord, imposing itself on you and on your children and destroying the culture that we have around us.
No, this is not happening because of some abstract non human mind.
This is happening because of people like Sam Altman of OpenAI, Demis Hassabis of Google, Elon Musk of xAI, and Dario Amodei of Anthropic, and we'll even throw in Mark Zuckerberg of Meta.
It's because human beings have made the choice to build replacements for other human beings and impose their technologies on us.
We're going to resist it at the local level.
We're going to resist it at the state level.
We're going to resist it at the federal level.
And most importantly, we're going to reject it in our own homes, in our own lives.
Stay tuned, War Room Posse.
We'll be right back with Daniel Cochrane of the Institute for Family Studies.
Welcome back, War Room Posse.
I am proud to introduce Daniel Cochrane, a senior fellow with the Family First Tech Initiative at the Institute for Family Studies.
Daniel comes from a long and storied background in policy with a number of think tanks and institutions.
Daniel, thank you very much for joining me.
Daniel, if you would, just tell me a little bit about how you got into AI policy and what you see the future of AI policy being in America.
daniel cochrane
Well, great to be here, Joe.
Thanks for having me.
To begin with, I think that, you know, we should look at the social media era, because in talking about AI, past is prologue. At the beginning, we were told social media would radically transform the world.
And in one way, it did.
It did indeed transform the world, but not in the ways we were promised.
We were promised greater democracy, more self government, more decentralization.
Social media was promised as sort of our savior, the savior of the common man from the big institutions, think the big media, right?
All the things that this show has long talked about and critiqued.
But instead, what it created is a new type of information gatekeeper.
And those are the companies we're familiar with today: Meta, TikTok, all of these big companies, Google, which owns YouTube and other social media platforms.
These platforms became new information gatekeepers, and they now have the power not just to influence our elections and public discourse, but as we've learned very tragically, to literally rewire the minds of an entire generation.
So when I say past is prologue, I think we need to apply that lesson of history now to AI.
What are the AI guys telling us?
What is Sam Altman telling us?
What is Mark Zuckerberg telling us?
What is Marc Andreessen telling us?
They're all telling us the same thing.
This is going to be the savior of humanity, literally in some cases.
This is going to decentralize.
That's Marc Andreessen's big point.
He talks over and over again, and others as well, about how AI is going to create an opportunity for smaller enterprises to compete with large businesses, break the media monopoly, et cetera.
But what are we seeing in reality?
Well, as you noted in the opening segment, we're seeing these big tech.
Companies, many of which were involved in the social media era, like Meta, reconsolidate control over AI and, in fact, use their existing distribution networks, specifically like Meta's Instagram or TikTok or others, to not only peddle their AI products, but to create a new generation addicted to digital narcotics.
And this time, it's not just an algorithm that addicts you or that pulls you in, it's now an algorithm that you believe has a relationship with you, you believe loves you.
AI companions, AI lovers, et cetera.
So I think we have to have a healthy degree of cynicism, but not to the extent of desiring to destroy the tech companies.
That's not what we should desire.
I mean, we should desire to destroy the evil that's there, but we should desire to have a vision.
We should chart a vision for how AI can serve the American public.
And I know we're here to talk about some of the important legislative efforts and so forth, but I think it's important to lead with skepticism, but then provide the vision.
And that's what I think Republicans and conservatives around the nation need to do.
joe allen
Yeah, it's very important to look at that direct connection between social media and the digital milieu in general and AI.
As you noted, people have been primed to communicate via these networks.
And especially with social media, there's a culture now in which people are basically uploading their souls and have been doing so since the early 2000s.
You're taking your personality, you're putting it in this digital form, you're interacting with other people whose personalities are put into a digital form.
You literally have this kind of simulacrum that becomes, in many ways, at least socially speaking, more real than real life, at least for some people.
We all know those people whose social media persona is more important to them than the actual flesh and blood relationships around them.
And AI steps in to that pre existing digital culture.
And instead of now interacting with digitized humans, you're interacting with a digital mind that is built from previously digitized humans, and it has a kind of life of its own.
I'm curious, how do you see the lack of regulation with social media playing out as we move forward into most likely some pretty harsh regulation with artificial intelligence, or at least I hope so?
daniel cochrane
Well, look, I think you're seeing the backlash already around the country.
I mean, even Gen Z is saying, you know, and polling is showing this more and more, Gen Z is rejecting to some extent social media and smartphones.
You know, you talk about these clubs on college campuses.
I met very recently, I met a young man who started a club at a public university.
And the whole point of this club is low tech, no tech, kind of digital living, combined with policy advocacy to get a handle on this stuff.
So that's really interesting.
It's not just about get rid of the smartphone in your own life, it's about arresting the powers that have kind of taken over our homes and our lives from the grassroots, from the state level, local level, federal level, which you sort of alluded to earlier.
joe allen
So, you have a paper coming out, or your institute has a paper coming out addressing Marsha Blackburn's Trump America AI Act.
If you would, could you just break down what your position is, maybe even give us a little bit of a review on what Marsha Blackburn's bill is actually calling for?
daniel cochrane
Certainly.
And I think it's important to connect this with the president's AI framework, the legislative recommendations the White House recently made.
And in there, I think it's useful to say, look, those are high-level principles, but the document delegates most of, frankly, the hard work to Congress, which is the way it should be.
Congress needs to legislate on this issue.
They've needed to for years, they promised to for years, and they failed to for years.
Now, when we talk about artificial intelligence and creating a national framework, where I think we often get off track is we lose sight of what we're actually trying to do.
And the first principle of an effective national AI framework must be guardrails.
Strong guardrails that actually have teeth.
And Marsha Blackburn's Trump America AI Act would do just that.
It contains a lot of provisions that are very common sense and widely bipartisan, like the Kids Online Safety Act, the Guard Act introduced by Senator Hawley.
National Security Protocols for AI 00:04:08
daniel cochrane
It would create tracking mechanisms for AI labor disruptions and many other things that are just kind of common sense.
It's not perfect, like any piece of legislation, but it's an important first step.
joe allen
One of the more interesting facets to me, the notion of existential risk with artificial intelligence or any kind of catastrophic risk.
This hasn't really been much of a discussion other than the kind of sensationalist claims or predictions, which may in fact prove to be true.
No one's saying that they're not, but I don't really focus on them all that much.
On the other hand, you have these tech execs, Elon Musk is probably the most notable, but we've also had many statements from Dario Amodei for sure, and Sam Altman, advancing the notion that artificial intelligence poses catastrophic risk for the public, either because of loss of control or because you have rogue actors that use AI to create bioweapons or any other kind of improvised weapon.
Marsha Blackburn's Trump America AI Act addresses that head on.
The proposition seems to be that some combination of federal agencies would oversee these companies.
She specifically points to the Department of Homeland Security in the case of rogue actors, while for the more general threat, that responsibility would fall on the Department of Energy, which, of course, has a long history of surveilling and regulating nuclear materials.
So I'm curious, have you thought much about that aspect?
Or is it, do you find existential risk to be a persuasive argument that we actually should be concerned that AI could go rogue or that rogue actors could use it to inflict horrible damage on society?
daniel cochrane
Well, look, I think the devil's in the details on how you define it, but let's look at an example from just this past week with Claude, Anthropic's Claude, being used to find critical security vulnerabilities in a whole variety of critical infrastructure and also operating systems, vulnerabilities that we didn't know existed for years.
So, that in one sense is a kind of existential risk because at its base, what is AI?
It's pattern recognition and prediction.
So, it's finding patterns that no human or prior computer systems have found, and that creates a lot of vulnerabilities.
I mean, if you think about it, if a hacker had the ability to do what Claude can do right now, right?
That would be really bad from a national security standpoint, from an economic security standpoint, from almost any standpoint.
So I think it's quite important for bills like the Trump America AI Act, which you just referred to, to delegate some responsibility to the Department of Homeland Security and other federal agencies to ensure that we're studying this stuff at the very least, and that we create protocols for reporting back to the government on what the actual abilities of these models are.
I think it's important to at least know that.
unidentified
Yeah.
joe allen
Anthropic's response to this, the way they move forward, I have a lot of mixed emotions about it because, on the one hand, they're creating this software, these models, to write better code and perform a number of other functions.
It's dual use, and so by being able to write code and analyze code, it's able to find these vulnerabilities.
If they are going to create a monster like that, they have to be responsible for it.
And so they are, to some extent, taking responsibility.
They're saying we should be regulated.
They did not release the model.
They offered the model to Amazon Web Services, Microsoft, a number of corporations so that they could strengthen, they could bolster their own cybersecurity using the model.
So it's a mixed bag because, on the one hand, they're driving forward with this AI race.
They are actively creating something that could wreak havoc.
Maybe not this model, maybe not Mythos, but maybe the next one, or the one after that, could wreak havoc on the digital infrastructure.
Knowing that this danger exists, they somehow feel it's worth it so they can write better code.
Diversifying Savings with Physical Gold 00:03:14
joe allen
It's a hard sell for me.
Even, you know, it's as if you had Dr. Frankenstein asking the local sheriff to stand guard to make sure that the monster doesn't go out of control.
I think that the problem is really the desire to create such a monster.
But, you know, Daniel, you've always been much more positive about this than I have in our discussions.
You've always had much more optimism.
And when we get back at the other end of the break, I want to hear more about your optimism and I want you to make me feel good about the artificial intelligence revolution.
Back in just a moment.
unidentified
Here's your host, Stephen K. Bannon.
joe allen
Welcome back, War Room Posse.
When the dollar's convertibility into gold ended in 1971, gold was fixed at $35 an ounce.
Fast forward to today, and the U.S. dollar has lost over 85% of its purchasing power.
Gold, on the other hand, has increased in value by over 12,000%.
That's why central banks are buying gold at record levels.
They are not going to stand by as artificial intelligence turns everything into digital muck.
They're going to protect themselves.
That's why major firms like Vanguard and BlackRock hold significant positions in gold.
And that's why I encourage you to consider diversifying your savings with physical gold from Birch Gold Group.
That's right, Birch Gold Group.
No robots at Birch Gold Group.
But it starts with education.
Birch Gold just announced their Learn and Earn Precious Metals event.
This free online event rewards you for learning the basics of investing in precious metals.
Sign up to get free silver on your next purchase.
Get even larger incentives as you go.
The more you learn, the more you can earn.
Don't ask AI, ask Phillip Patrick.
But you must act now as this special event only runs through April 30th.
The dollar lost its anchor in 1971.
You don't have to lose yours.
Text Bannon to the number 989898 to join Birch Gold's Learn and Earn Precious Metals event by April 30th.
Text Bannon to 989898 today.
unidentified
All right.
joe allen
We are back with Daniel Cochrane, senior fellow with the Family First Tech Initiative at the Institute for Family Studies.
Daniel, as promised, if you will, make me optimistic about the future of artificial intelligence.
Does the government have a handle on this?
Does the American populace have a handle on this?
Do we have a handle on this?
daniel cochrane
Well, making you optimistic, that may be a bridge too far for me, but I'll try my best in this short segment.
Look, I think that it depends upon the decisions we make.
Some of the optimism should be, look, past is prologue: the cement is dry on social media, right?
Job Losses and Media Reports 00:15:06
daniel cochrane
A lot of the effects have already occurred.
We're trying to reverse them, but we're playing catch up.
The cement on AI is still wet, and we still have an opportunity as a country to make decisions for the direction that technology is going to travel.
And the lesson we ought to apply to this round is we need guardrails and regulations.
And as we mentioned, we were talking about Senator Blackburn's Trump America AI Act.
That's a great step in the right direction.
So you ask, well, what would it look like to have guardrails?
That's what I think a starting pass looks like.
It looks like protecting kids online with Kids Online Safety Act.
It means that we impose regulations on chatbots.
We tell companies, look, you're not allowed to sell or make AI companions available to children, especially AI companions that engage in sexually promiscuous conversations.
And to your point earlier, it ensures that we create some kind of framework to monitor the risks that are emerging from these models: the ability of these models to find vulnerabilities, but also to act in unpredictable ways that are going to harm our society at a massive scale.
joe allen
The way I look at it, if I had my own way about it, I would just simply turn back the clock.
I'm not sure to where, probably somewhere in the 90s, the best decade.
But obviously, no one's asking me for my opinion on how all of society should go.
And it's probably best I don't have my way.
So we're stuck.
These technologies have been created.
They will continue to develop.
We have to have some way of steering it, or kind of the way I look at it, to put up blast shields against the worst effects of the technology.
And while I'm not a huge fan of government imposition, I'm also not a fan of corporate imposition.
And I think in this case, the corporate imposition is far more dangerous than anything the government's going to do.
And these guys are talking about, oh, our rights as corporations are being trampled upon.
Well, they've trampled on our privacy.
They've really trampled on our sanity.
And certainly they've trampled on the sense of safety that people can and should have regarding their children.
So, you know, I'm all for regulating this.
If the government is our only real mechanism to get a handle on this, you know, sometimes you've got to make a deal with the devil to go against Lucifer, right?
So, Lucifer and Ahriman, how about that?
You know, I'll turn to Ahriman to regulate Lucifer.
daniel cochrane
I think it's important to recognize, too, this is why states are critical in figuring out how to strike the balance.
Like, we have a federalist system here in the United States, and one of the benefits of federalism is we get to figure out what works and what doesn't.
You know, I'm from California.
California's done some really crazy stuff.
And you look at California, like, I don't want to do that.
And other states are like, we're going the opposite direction.
You got states like Texas that are doing some really great stuff on education, on tax policy.
Florida's the same.
Tennessee's the same.
And especially on technology, when it comes to AI, look at what Texas is doing, look at what Utah's doing, look at what Tennessee's doing.
You know, Tennessee, the home of the country music industry and a lot of creatives, enacted the ELVIS Act.
What does that do?
It says you can't use AI to distribute digital copies of someone's visual likeness or their voice.
Very common sense.
And that came not from Congress, not from some technocratic agency here in Washington; that came from a state.
Look at Utah.
Utah put common sense guardrails on chatbots.
They said, look, if you're using a chatbot for mental health therapy, which personally I oppose, but okay, if you're going to do that, the company can't use your data and target you with ads while you're using the chatbot.
Very common sense.
That came from a state, not from Washington, D.C. Florida, for years, has been on the front lines of trying to address the effects of social media.
They passed laws that not only put restraints and guardrails on social media to address some of their harmful effects on kids, like requiring parental consent for social media access.
But more recently, they have also enacted a law to push back on big tech censorship.
You know, the case that went to the Supreme Court just a couple of years ago dealt with laws in Florida and Texas that would have restricted the ability of these companies to censor speech.
And I know that's something that War Room has experienced firsthand.
And we know, I mean, this is stuff that the states are doing.
The states are the American people's first and best line of defense against big tech and against big AI, and that's why we've got to keep them in the race.
joe allen
Well, you know, I agree.
This is a massive experiment, and the states and local communities do form sort of a variety of petri dishes in which we are being experimented upon.
And I recommend to the War Room posse or anyone else who is going to listen stay in the control group.
Daniel, I really appreciate you coming by.
If you would let the audience know where they can follow your work, how do they find the Institute for Family Studies?
Give them all you got.
daniel cochrane
That's great.
So you can find me at RealD Cochrane on Twitter.
And then if you go to InstituteForFamilyStudies.org, you can read all about our work.
And then the Family First Tech Initiative.
Just type that into Google.
If they're not censoring us, you'll find our work there.
joe allen
Thank you very much, sir.
daniel cochrane
Thank you so much.
It's great to be here.
joe allen
All right, pivoting from optimism to pessimism, we have Brendan Steinhauser, head of the Alliance for Secure AI.
Brendan, thank you very much for coming on.
Look, I've been following jobloss.ai, your AI labor tracker.
I was hoping that you would come and just explain to the audience what you're doing, what you're finding, and how they can keep up.
brendan steinhauser
Absolutely.
Well, it's great to be back with you, Joe, and great always to be with the War Room posse.
You know, we started, we're sitting around the office one day, and one of the team members had an idea to say, look, we're seeing these reports.
Of job losses due to AI.
What if we started to track them in real time?
What if we kept it updated on a daily basis?
If we kind of scoured news reports and we looked at what's happening in the boardrooms around America, we started to actually count these job losses going back to January of last year, January of 2025.
And so the team put together this great tool, jobloss.ai, where we can actually track what's going on.
It's based on what the companies are saying themselves when they announce job cuts due to AI.
It's also based on media reports because sometimes people don't want to admit they're cutting jobs due to AI.
And so oftentimes you have a whistleblower in the company that talks to a journalist, and now we know the real story.
But that's kind of the methodology there.
We try to be aware that some companies might say it's due to AI, but really it might be due to something else.
We try to dig in a little bit to figure out, okay, what's the most plausible situation here?
It's a moving target, but I think we're really happy with the product as it is, the tool that people can use and track.
But the downside, the problem is that this is happening and it's happening really fast.
And policymakers are behind the eight ball here and they need to be talking about this more.
They need to be thinking about it more.
And right now, the American worker is at grave risk.
joe allen
A lot is being said about coding.
Coding seems to be the most vulnerable occupation at the moment.
Can you break down a bit what you're finding as far as each job, the kind of job title or job role that is really under threat?
Where people are being replaced.
And if you could give us a sense, how much of the job loss is the lack of opportunity where lower level people who would have come in and trained don't have a job to come in and be an apprentice?
And how much is just people getting wiped out, like the massive layoffs at Oracle?
I think that's at like 30,000.
brendan steinhauser
That's right.
Yeah, there's a lot of activity at these big companies, Oracle being one of them, Salesforce, Dell, and a bunch of others.
They've announced massive cuts.
So, a lot of those numbers we're tracking actually are due to real jobs lost, real people who are left unemployed.
And so that number is well over 100,000 now in the US and growing.
And Goldman Sachs estimates it's going to be something like 15,000 or 16,000 jobs lost per month.
So, these numbers are, it's a moving target, but they're basically saying that's what they see happening.
But right now, yeah, it's a combination of sort of your computer programmers, your coders, people who AI is essentially replacing in real time.
So if you're in that software development business, you're probably seeing this at your company.
You're probably seeing this if that's your job description, even if you work at a different company.
So a lot of that impact is happening to those folks.
But also, what we're seeing is kind of your white collar workers, your kind of middle managers, people that have some experience that aren't necessarily entry level and they're not the most senior person at the company.
But sort of mid level management is getting hit really hard.
And you think about that, that could be at any type of company.
Those can be tech companies or retail or construction or any company that has that sort of role of people managing people, especially through the use of computers, through the use of spreadsheets and email and phone calls and that sort of thing.
So, what's happening is these AI agents that are coming on the scene are starting to replace those white collar workers.
And that's why one of the big tech CEOs admitted this about six months ago.
He said there could be a white collar bloodbath, and you could have incredibly high unemployment numbers coupled with hiring freezes and young people not being able to get good jobs.
joe allen
Yeah, there are two things that really disturb me about that.
I mean, the whole thing is bothersome, but one, the inability of young people to be able to learn a trade or to learn a job.
People forget that even if you can replace low level coders or accountants or even eventually with robotics, you know, a carpenter.
You then deprive the incoming workers, the upcoming generation from being able to learn a trade, to be able to master it.
And then, on the other hand, you've also got the prospect that even if you keep your job, you are then forced basically to become symbiotic with an AI.
And just imagine you're a carpenter, an HVAC repairman.
And yes, you still have a job.
Yes, you're still working with your hands, but now you're managed by an AI.
Now everything you do is being tracked, you're uploading all of your activities to your phone, and your phone is spitting back orders as to where you go, what you do, what you did wrong, all of these sorts of things.
It just seems to me like they're creating a digital hellscape and that would explain a lot of the backlash.
Brendan, we've got to go to break.
I want to hold you over.
And when we get back, I'd like to talk to you about your travels around the country.
You've been everywhere.
Is this backlash real?
And if so, what shape is it taking?
Back in a moment, War Room Posse.
unidentified
Here's your host, Stephen K. Bannon.
joe allen
Welcome back, War Room Posse.
We are here with Brendan Steinhauser, head of the Alliance for Secure AI.
Brendan, you've been all over the country speaking.
You've met a ton of people in the context of this resistance to the artificial intelligence revolution.
I think you were just at CPAC a couple of weeks ago.
Tell me, what do you see when you're looking into the soul of America?
Do we have a chance?
Are people fighting?
Are they sitting on their hands?
Have they given up?
What's the story, brother?
brendan steinhauser
Yeah, I think it's really heartening.
I think we do have a chance.
We have agency.
People are rising up.
They're asking questions about what's going on, what big tech is building.
I've literally not met a single person in my travels who supports being replaced and given a UBI.
There's been a lot of opposition to building a super intelligence.
People are very worried about that.
We have traveled all over the country.
My team and I have done everything from attending CPAC in Dallas, a huge gathering of conservatives there.
To going to the Sundance Film Festival in Utah or South by Southwest in Austin, traveling down to Palm Beach, Florida, talking to voters, talking to regular people.
It is incredible.
We have an issue where more than 80% of Americans agree we have to have safeguards on AI.
They don't want accelerationism, they don't want runaway AI.
And it's really interesting and encouraging because it's kind of a bipartisan thing.
You meet people who are pretty far left, pretty far right, people in the middle who hate politics.
They all agree on this and they say, I don't trust what big tech is doing.
I'm really concerned about where we're going.
And I want to see some safeguards and protections.
And so that's heartening to me.
And I think that my message to all Americans is that you have agency, you have a voice in this, you get a vote in what happens to us in our future.
And I really want to encourage people to continue that fight.
joe allen
Yeah, brother.
In an age where all we hear about is AI agents, I think that human agents, human agency, this has got to be the most important thing at the forefront of the conversation, because if you don't have a choice about any of this, then you are a slave to the machine.
Brendan, we've got to bounce.
If you would just bring us back, jobloss.ai, the Alliance for Secure AI, your team is fantastic.
Where do people go to keep up with the job losses?
Where do people go to keep up with you?
And where do people go to keep up with the Alliance?
brendan steinhauser
You can visit our website at secureainow.org.
SecureAINOW.org, and then follow us on social media for the daily updates on all of these topics.
Our handles are SecureAINOW across platforms.
Thanks to you, Joe, and to the War Room posse.
I think we can win this fight together, and I appreciate your support.
joe allen
Brendan, I really appreciate you.
You've been a fighter for many, many years.
You've taught me a lot about how to deal with politics, and maybe one day I'll actually learn.
All right, that's jobloss.ai, jobloss.ai.
I really encourage you to go there and see what the impacts are, the real world impacts are of artificial intelligence on the economy and on people's jobs, on people's lives.
All right, I'm happy to bring in Mo Bannon.
Virginia Grassroots Rally for November 00:02:08
joe allen
Mo, how are you?
And what is this event I keep hearing about coming up?
maureen bannon
Hi, Joe.
Thank you for having me on this morning.
It is an event for the grassroots in Virginia, it is a Virginia grassroots rally hosted by War Room and a lot of the members of the grassroots community in Virginia for the election that is occurring on Tuesday.
I know early voting ends Saturday in Virginia, but if you have not early voted, you need to get out on game day on Tuesday.
And we want to do this rally as a thank you to everyone that has been involved door knocking, the phone bank, sitting at the polls for early voting.
And in order to go to the event, you have to RSVP, it's a free ticket.
But you have to go to warroomvarally.eventbrite.com, warroomvarally, all one word, .eventbrite.com, to reserve your ticket.
And it'll be a great event.
We have a lot of great speakers talking about Tuesday and then the way ahead in November.
But it is in Hanover County, Virginia.
We will release the location closer to the event.
But it is this Sunday from 4 p.m. to 7 p.m.
joe allen
And Mo, will there be a link on all the War Room platforms for people to check this out?
maureen bannon
We are working on getting it up so we can stream it as well.
joe allen
Got you.
Thank you very much.
So, again, if you would just let the audience know, it's a bit of a long URL, but I'm sure they're up for it.
And besides, they're savvy enough to know where to go.
But if you would just hit the rewind and let them know where to go and what they can expect.
maureen bannon
You aren't the first person to say that's a long URL, Joe.
So I will keep that in mind for the next time.
But it is warroomvarally, all one word, dot eventbrite.com.
And if you go on War Room social media pages, there is a post from last night, and we will post it again with the link so you're able to click right from there as well.
Accepting Conscious AI Therapists 00:02:15
joe allen
Fantastic.
You know, Mo, working with you, working with Steve, working with everyone here at the War Room, the thing that really has given me the most hope about American politics is this grassroots effort.
You know, we're not some kind of top down organization.
This is all about the people, and the War Room has given the people a voice.
It's fantastic.
So, yeah, I really appreciate it, Mo.
See you soon.
maureen bannon
Thank you again, Joe.
joe allen
Well, War Room Posse, I think that it would be good to end on a bright note.
We've talked a lot about Ray Kurzweil, Ray Kurzweil's idea of the singularity, Ray Kurzweil and the merger of man with machine, and Ray Kurzweil and his idea of spiritual machines.
So as we go out, let's just watch this doddering old man describe what the future of artificial intelligence really is.
ray kurzweil
I think AIs will be indistinguishable from a conscious being, and that will just keep going, and finally we will accept it.
unidentified
When?
When, Daisy?
When, Ray?
ray kurzweil
Like right now, an AI might say that it's conscious, but people aren't really sure.
But eventually, it keeps having all the earmarks of a conscious being, and you will accept it because it'd be useless not to have it.
And again, you can't say that's going to happen at the same time for everybody.
But I think when we're a few years into AI entities acting conscious, we will accept it.
And so I don't think it's going to be a very long delay.
I mean, today people have AI therapists and sometimes they don't really believe it, but at other times people really believe it.
And the AI therapists, if you read the transcripts, they sound very convincing.
And that's going to keep going, and people will really accept that they have a therapist that's conscious.