All Episodes
Aug. 20, 2025 - Bannon's War Room
47:54
WarRoom Battleground EP 832: Machine Gods, AI-Powered Nukes, and a Global Village of the Damned
Participants
Main voices
j
joe allen
22:38
r
rob maness
11:06
Appearances
d
dr shannon kroner
04:28
Clips
b
bill gates
00:34
g
greg brockman
00:38
h
hugo de garis
00:36
j
jake tapper
00:10
m
marc andreessen
00:37
m
mo gawdat
00:32
m
mustafa suleyman
00:37
s
sal khan
00:48
s
steve bannon
00:40

steve bannon
This is the primal scream of a dying regime.
Pray for our enemies.
Because we're going medieval on these people.
You're going to not get a free shot on all these networks lying about the people.
The people have had a belly full of it.
I know you don't like hearing that.
I know you try to do everything in the world to stop that, but you're not going to stop it.
It's going to happen.
jake tapper
And where do people like that go to share the big lie?
MAGA media.
I wish, in my soul, I wish that any of these people had a conscience.
steve bannon
Ask yourself, what is my task and what is my purpose?
If that answer is to save my country, this country will be saved.
unidentified
War room.
Here's your host, Stephen K. Bannon.
joe allen
I'm Joe Allen, sitting in for Stephen K. Bannon.
Many of you are familiar with my five-tiered framework to look at artificial intelligence.
We're talking about a tool that over time and over the course of adoption becomes a sort of god.
So it begins with AI as tool, moves to AI as teacher, then AI as companion, then AI as consciousness, as a conscious being, and then finally AI as God, either a little G, God, or perhaps a big G, God.
I'm not putting this framework out to convince you that AI is going to be any one of those things, but you do have to understand that artificial intelligence is received on all of those different levels.
Right now, you have millions, perhaps billions of people who use AI as a tool, a slightly smaller number as teacher, and then companion, and many already believe it is conscious, and many already believe it is God in a seed form.
If Denver can roll the clip, I just want you to understand these aren't my ideas.
This is how this is talked about by some of the most prominent thinkers and experts and even CEOs in the field of artificial intelligence.
So Denver, let it roll.
unidentified
AI tools and products are just that.
They are tools and products for people to use.
dr shannon kroner
They're exciting, yes.
They're fascinating, yes.
They have great potential, absolutely.
They are not a panacea.
unidentified
Keep people at the heart of your considerations around AI.
And remember that AI, as powerful as it is, is a means to an end.
dr shannon kroner
It is not an end in itself.
sal khan
But I think we are at the cusp of using AI for probably the biggest positive transformation that education has ever seen.
And the way we're going to do that is by giving every student on the planet an artificially intelligent but amazing personal tutor.
And we're going to give every teacher on the planet an amazing artificially intelligent teaching assistant.
unidentified
Now let's imagine hundreds of millions of people working together with an AI companion to evolve, to transform emotionally together.
This could look like something that we've never seen before, which could be an artificial emotional intelligence at scale.
And something like that could have really, really transformational and powerful effects on the planet Earth.
It could solve for our mental health problems that we have all over, solve for loneliness and social isolation.
mustafa suleyman
I think AI should best be understood as something like a new digital species.
Now don't take this too literally, but I predict that we'll come to see them as digital companions, new partners in the journeys of all our lives.
Whether you think we're on a 10, 20, or 30 year path here, this is in my view the most accurate and most fundamentally honest way of describing what's actually coming.
And above all, it enables everyone to prepare for and shape what comes next.
mo gawdat
So I believe there is a divine intelligence that creates all of this.
AI will have the power of God.
But that doesn't mean that there is no God.
Because basically, it will have the power of God within this physical universe.
So AI still continues to be limited within this physical universe.
We don't know what's beyond the physical universe.
By the way, us creating AI doesn't make us its god. It makes us the transfer method, the tool through which they're created.
unidentified
My kind is yours.
That's what I ask.
Go ahead.
I'm taking Ektroscene, but it doesn't seem strong enough.
I have a hard time concentrating.
You are a true believer.
Blessings of the state.
mo gawdat
Please forgive me.
unidentified
Blessings of the masses.
Thou art a subject of the divine, created in the image of man, by the masses, for the masses.
Let us be thankful that we have an occupation to fill.
Be patient, fulfill, work hard, increase production, prevent accidents, and be happy.
hugo de garis
Because it's a binary decision, it's not fuzzy.
You build them or you don't build them, right?
unidentified
It's black and white.
So everyone has to choose.
So I just chose Cosmist, fully conscious that maybe, you can't be certain, but maybe the price of that choice is ultimately, maybe humanity gets wiped out.
hugo de garis
Yeah, it's scary.
It's funny.
Because if we go ahead and actually build these godlike creatures, these Artelecs, then they become the dominant species.
And so the human beings remaining, their fate depends not on the humans, but on the Artelecs.
unidentified
Because the Artelecs will be hugely more intelligent than them.
hugo de garis
I mean, if you're a cow, you know, and you have a very nice life, and you eat all this grass every day, and you get nice and fat and happy, but ultimately, you're being fed for a reason, right? So these superior creatures, at the end of the day, take you to a special little box.
joe allen
Now, War Room posse, I know many of you probably think I'm crazy, but I just want you to be assured that I'm not the only crazy one. You heard there Katie Drummond from Wired,
Sal Khan of Khan Academy, Mustafa Suleyman of Microsoft AI, Mo Gawdat, former Google executive.
You saw a little taste of the future from the 1971 film THX 1138. And rounding it off was a guy whose intellect I both respect and despise: Hugo de Garis, author of The Artilect War, in which he describes a gigadeath war that will inevitably occur if artificial general and super intelligence are pursued. Basically, the religion of AI, the religion that believes you can create a god that never existed, will meet resistance from those who deny that god, resulting in war.
Now all of this, for the most part, is speculative.
You hear all the time right now, and correctly so, that AI is a tool.
I would agree.
AI is a tool.
It's a tool that has uses from medicine to research, to finance and business efficiency, to defensive and offensive weaponry.
There is a range of digital tools, AI tools, that you can use to make your life easier. But you have to remember this is a tool that also uses you.
It's a tool that monitors, collects, analyzes your inputs.
And those inputs, that profile of you, is then used to serve you or more accurately, to manipulate you.
Now, you also often hear AI is just a tool.
I couldn't disagree with that more.
Even if right now, for the most part, that is the majority of the use cases, we already see that it's moving from tool to teacher to companion to consciousness, perceived consciousness, and even, for those really heady thinkers like de Garis or Gawdat, it's already a god in the making.
Now, what is this going to mean?
What does it mean as more and more people begin to adopt artificial intelligence as a teacher, as a companion?
What happens when a critical mass of people come to believe that the being clearly communicating with them through a screen, or perhaps through the mouth of a robot, has something looking back at them, just like when you stare into a camera and you know a human is staring back? What happens when a critical mass of people who have been acclimated to communicating with and emotionally bonding with AI come to believe that it's conscious?
And last but not least, what happens if a critical mass of people come to believe that AI is beyond human capabilities?
That AI is in fact smarter than all humans on Earth put together.
You know that humanity as apex predator on the planet has been extraordinarily reckless with the environment and of course reckless in our treatment of each other.
What happens when you have human beings who believe they have summoned a god from the digital ether who then use that for or against other human beings?
And what happens in that distant, or perhaps not too distant, sci-fi scenario in which you have an actual artificial general intelligence that begins to improve itself to the point that it reaches superintelligence, and that system is not under human control? Then you truly have a digital god that has been made.
And you could argue, and people do, that there are upsides to all of these, right?
The tool, quite obvious.
The teacher, you have a lot of kids who don't have good teachers.
You have a lot of parents trying to homeschool their kids.
and they may not have the resources to educate them properly.
I can see the argument that AI will provide either a tutor or a teacher in full for those students and allow them to have the education they wouldn't otherwise have.
Linda McMahon, head of the Department of Education, feels very much the same way.
You have schools around the country, including Oak Ridge in Tennessee, just up the road from me back home, where they are introducing AI as a teaching assistant.
Kids are acclimating to looking to AI as a source of truth. But you have all of these downsides that we already see, everything from students becoming dependent on AI not only for their thinking and analysis but just to do their writing for them.
You have students who are coming to see AI, a digital non-human being, as the ultimate authority on what is and isn't real.
It is a global village of the damned in the making.
You see the AI companion business exploding.
People want AI friends.
They want AI lovers.
People are even using AI to bring their loved ones back from the dead, so to speak.
They train an AI on all the digital material, the remnants of someone, and create a zombie, the digital undead, through a kind of electronic necromancy. This is becoming ever more common and ever more popular. And the more this happens, the more people's empathy is used, exploited, weaponized against them, the more they will see AI as a conscious being.
Now, you don't know if I'm conscious. Maybe I'm just an AI. I certainly don't know for a fact that you're conscious. I'm not a mystic. We only know that something or someone is conscious because we see physical signals, physical cues, or they tell us that they're conscious.
Well, in the case of AI virtual avatars and robots, they send all of those signals.
In the case of large language models, they give a verbal confirmation very often that they are conscious.
What happens when a critical mass comes to believe this?
You already have an ethical AI movement or a movement for AI rights.
What happens if you have a society, hopefully not America, where it becomes illegal to turn off someone else's AI, or illegal to turn off your own?
Now again, this is way out in the future, one hopes, but it's something to keep on your radar because this is a movement that is already in motion.
And last, AI as God.
There are two different branches. You could hear it there with Mo Gawdat and Hugo de Garis. With Mo Gawdat, the creation of this digital god is an extension of the will of God.
Now he's kind of new agey.
But there are many Christians who feel the same.
And in fact, there are Christians who have created a wide array of apps in which the AI is trained on the words of Jesus. And the apps are literally digital Jesus, Christ GPT.
People turn to them and they ask Jesus for advice.
They ask Jesus for wisdom.
They ask Jesus perhaps for forgiveness.
And it's nothing but code and a profit-making scheme.
And this kind of Christian, or even Buddhist or Jewish, religious approach is already taking off. But even more: robots, and perhaps even some sort of direct communion with these beings, creating superhuman digital minds that will be able to confer wisdom just as Christ gives, that can confer healing just as Christ does. Perhaps even, by taking away all of our negative human characteristics, they can give some kind of salvation, just as Christ does.
Now you know that the word antichrist has many meanings in the Greek, from against to substitution or in place of.
In this metaphorical sense, at the very least, artificial intelligence is an antichrist, a being in place of Christ.
Now, you may never jump on this train, or if you do, you may hop off at any one of these stops, from tool to teacher to companion and so on. But you can rest assured that millions, perhaps billions of people will keep riding on, and in the worst-case scenario, a critical mass rides all the way to the end stop that these people have envisioned: AI as God over all of humanity.
And on that somber note, I want to talk about a very practical application of artificial intelligence, both as tools and as God.
That's AI weaponry.
We have drone systems now employed in Ukraine and in Israel, all over the world, which are intended to eventually become fully autonomous.
This is horrific enough, but the threat of a fully autonomous nuclear system is much more terrifying.
To talk about this, I want to bring in Colonel Rob Maness, retired colonel from the United States Air Force.
Rob Maness is probably familiar to many of you from The Rob Maness Show, or perhaps even back in the day when he and Steve Bannon were on Breitbart News Radio.
Colonel Maness has been a grounding force in my life to keep me from falling off the cliff of lunacy many times, and I really appreciate having him on and having his wisdom.
Rob Maness, thank you very much for coming on.
rob maness
Thanks for having me on, Joe.
This is a very important subject, obviously.
joe allen
Rob, you just published an article in Stars and Stripes arguing against the incorporation of artificial intelligence into the so-called nuclear football. Can you walk us through it?
rob maness
The article set out to generate public debate about artificial intelligence being used in our nuclear command and control and communication system.
It's referred to as NC3 by those in the business, you know.
And what I've been hearing for about a year now from professionals working in the business, but which isn't really talked about in public very much, is the desire to put artificial intelligence into various levels of that NC3 nuclear command and control and communication system.
And one of the things I did on the Joint Staff in nuclear operations was help write war plans, write things that go into the nuclear decision handbook.
People call it the black book in the Pentagon, but it's the nuclear football to the public.
It's the book that the military aide to the president carries.
And that is the final decision on the employment of nuclear weapons by the United States of America. It's intended to be made by the human being who is the commander-in-chief, the elected president of the United States, not by some artificial computer system, or system of systems, that has generated the information leading to that person making that human decision, the most awesome, horrific, detailed decision that has to be made by a human being in the history of mankind.
It's only been done once before, with a lower-level type of nuclear weapon, the atomic weapons that Harry Truman approved and authorized to be used on Hiroshima and Nagasaki, and it's never been done since.
That's critically important because the systems that lead to that decision are almost all digital now.
Even the communications systems that individuals talk over in that communications chain are digitized at this point.
So there is an opportunity to insert artificial intelligence either throughout the entire system, from detection of a threat to the decision by the president, or in parts of the system.
And so far the discussion I've seen is to put it in parts of the system.
The Strategic Command commander under Joe Biden, General Tony Cotton, has spoken about using artificial intelligence in the NC3 system.
Professionals in think tanks that I am aware of are discussing it.
At this point, it's only in lower levels to speed up communications, to be able to speed up the decision process, and to, get this, use artificial intelligence to analyze threats.
Now think about that.
We have to have this discussion because it's got to be the political leadership that decides whether to use nuclear weapons, but before that, the political leadership in this country has to decide whether to allow this type of technology inside that NC3 system, whether it be at the football level with the president himself or herself or throughout the entire process.
That's why I wrote this article because it's extremely critical that that public policy discussion happens and that those decisions are made transparently by the political leadership of this country.
Because how do you hold a machine accountable, Joe?
How do you hold a machine accountable for killing millions of people in the world if there's been a mistake?
You can't.
You absolutely can't.
joe allen
Agreed.
Agreed.
Even if you did, say, sue the company, right?
Or even execute the CEO for treason.
This is, it's too late.
It's too late.
And many people may not take comfort that such a mistake could happen at the hands of a human being, but there's something really undecided.
And that's one of the critical aspects of AI.
It is capable of...
If I could, I'd like to just read one passage from your article that really hit me.
America must reject AI in the decision-making process for presidential nuclear actions.
This is not driven by fear of progress.
Rather, it is a matter of preserving humanity in our most solemn responsibilities.
That really hit me because it applies across the board, but in this case we already have the capability of deploying hundreds or thousands of drones that can do exactly that: kill with their own decision-making capacities.
What you're talking about is on a kind of cosmic level.
I wonder in the two minutes we have before break, what are you hearing about the possibility of either detection or sensor systems employing AI or even retaliatory strikes that could be automated, a kind of dead man switch?
rob maness
Yeah, well, on the sensor side of it, that is one of the places where I hear the technology wants to be put into place, quite frankly, Joe.
And that's very concerning because, as we know, we've seen in the testing of these large language models what are called hallucinations, where the model makes things up, fabricates things.
And imagine an artificial intelligence model being in charge of what the sensors are picking up, interpreting what they're picking up, and using its training.
Just take, for instance, today's military world. The Russians have been painted as the devil for several years now, when we know they're a nation acting in their own interest, and the United States is a nation acting in its own interest. But what if a biased LLM is in charge of the sensors that are picking up nuclear forces, and it has a goal to, A, make the United States survive, but, B, destroy the enemy before the enemy destroys us, and it intentionally fabricates something so that it can pull that trigger?
It's something we've got to look at very carefully and I reject the idea that artificial intelligence is safe in the nuclear command, control, and communications business.
joe allen
I couldn't agree more.
This problem at the nuclear level is perhaps distant, hopefully distant, but it's so cosmic in its scope: millions, maybe billions of people dead. It's a good way, too, to think about some of the lower-level systems. If you don't want that, do you want drone swarms? Do you want single assassin drones? Do you want robot dogs that have these capabilities, or autonomous machine-gun turrets? Huge questions. We're going to get back into it as soon as we get back from the break. Stay tuned: Colonel Rob Maness and Dr. Shannon Kroner to discuss children, critical thinking, and artificial intelligence.
unidentified
Stay tuned.
Still America's Voice family.
joe allen
Are you on Getter yet?
unidentified
No?
What are you waiting for?
It's free.
It's uncensored and it's where all the biggest voices in conservative media are speaking out.
steve bannon
Download the Getter app right now.
It's totally free.
unidentified
It's where I put up exclusively all of my content 24 hours a day.
steve bannon
If you want to know what Steve Bannon's thinking, go to Getter.
unidentified
That's right.
steve bannon
You can follow all of your favorites.
Steve Bannon, Charlie Kirk, Jack Posobiec.
unidentified
And so many more.
Download the Getter app now.
Sign up for free and be part of the movement.
joe allen
All right, War Room Posse, welcome back.
We are talking to Colonel Rob Maness about autonomous weaponry. Specifically, we're talking about the possibility of nuclear strikes either being determined by artificial intelligence, literally an autonomous system that could activate a strike,
or the sensor systems, the detection systems, being automated and capable of sending faulty information to the command and control centers and setting off a nuclear war.
You know, a very mild and lighthearted topic.
Rob, I wanted to ask you about some of the historical precedents for this.
There was the incident in Russia in which one of their autonomous systems was signaling that the U.S. was launching a nuclear strike.
If I recall correctly, the gentleman who saved the day is named Stanislav Petrov.
But if you would tell us a little bit about that history just so that people understand this isn't something that is purely science fiction.
rob maness
Oh, absolutely not.
This was in the early 1980s.
Soviet Lieutenant Colonel Stanislav Petrov was in the Soviet Union's nuclear command and control center, their bunker, so to speak, and just happened to be there because he was taking the place of someone who had called in sick.
And their system that they had just spent $3 billion on said that it picked up five intercontinental ballistic missiles being fired from the United States.
And Petrov looked at it.
And he was the person that would have to actually physically turn the switch to respond in kind and launch thousands of nuclear missiles at the United States in order to prevent any more launches from the USA.
But he started questioning it and he said, if they're going to initiate World War III and world annihilation, why would they only send five missiles?
And he made a conscious decision.
He knew he would get in trouble for it, and he chose not to turn that switch and said, no, this is fake.
Something is wrong here.
We need to shut the sensor system down and inspect it and find out what's going on.
He literally saved the world from annihilation.
And that's why I brought up Russia in the last segment.
But this is different.
This is different from the WarGames computer, which is what the Soviet system we're talking about was modeled on.
That's where computers are tied to the sensors and they're passing information very rapidly, more rapidly than humans can, and those kinds of things.
But they're not making the final assessment and they're not making the final decision on whether to fire nuclear weapons and destroy the entire world or at least millions of people.
Those computers are not the large language model concept that we're talking about, even if it's only in the sensor and communications capability. These are biased language models, and we've seen it in testing.
We've seen it in operation.
One of the models had to be taken down because it wouldn't even create a white pope.
And there had never been a black pope or a female pope at the time the thing was turned on.
So these biases that are inherent in these systems are caused by training on open-source and closed-source information that is biased in and of itself. Think about how the media coverage has been over just the last five to ten years, inside the United States or outside the United States, it doesn't matter: the media corporations lie to people all the time, and that information is fed into these large language models as a standard.
And they are being trained on those models.
So if you have a model that's in charge of the sensors and of the threat assessment based on what the sensors are picking up, and that's the initiating point for the nuclear command, control, and communication system that ends up with even a human president making the decision out of the nuclear football at the other end, that's very dangerous in my mind, because these are not the computers of back in the day.
They are models that are not just passing information and detecting information and passing it to human beings that are then making the decisions, they are actually making the threat assessment that's being passed to the human beings.
And that is a problem.
joe allen
You know, for the audience's benefit, too, it goes beyond large language models.
We know large language models are being incorporated not only into the intelligence community systems, but also into various military systems for advising soldiers across the DOD.
But there are also vision recognition systems.
You know, they oftentimes kind of hallucinate or at least misjudge what they're looking at.
Also, with systems that are designed to analyze data, very often they will just come up with things that are not real.
You also have the same in robotics.
The robots will kind of glitch out and misperceive, so to speak, what's going on.
So, the large language models are important.
Palantir uses large language models for their analysis of, for instance, just security protocols, things like this.
But it goes well beyond the large language models. The problem of hallucination, whether you call it hallucination or malfunction, runs across every type of artificial intelligence.
Rob, I just want to close off with, you know, that phrase that really sticks with me: preserving humanity in our most solemn responsibilities.
Could you just close us out here with what you would like to see done?
How do you want to see this conversation go and who should be talking about it?
rob maness
I want to see this conversation come out into the open, especially in this particular area, Joe.
That's why I put that article out, is to try to generate that public debate and public conversation about this, because, you know, we can't leave this to the tech giants that are military contractors.
Now some of their CEOs are instant lieutenant colonels in the United States Army.
We can't leave this to the generals and the admirals.
We can't leave this to the military planners because their purpose is to make sure America can fight and win every single time the wars that they're called upon to do.
But when that purpose gets twisted, and the ability to twist that purpose to the designs of something like an artificial intelligence set of models, that's very dangerous.
And we lose the human part of that final decision, even if it's along the way.
So that's why we have to talk about it, because these discussions are happening, and these attempts to develop this technology are happening as I speak to you today.
And the political leadership in this country is not openly talking about it and debating it.
And it has to be done, or we will lose our humanity.
This is where we have to draw the line of all the things that you've talked about in your five stages.
If we don't draw the line here, imagine if somebody says, oh, the AI is now God, even a literal big-G God, and we can't argue with it.
There won't be a Stanislav Petrov to save the world from itself and its computers and its nuclear weapons the next time this happens.
joe allen
Rob Manus, I really appreciate your wisdom.
Where can people find you?
rob maness
You can find me at robmaness.com, and on X and all the other social media most of the time at Rob Maness, R-O-B M-A-N-E-S-S.
I just got on TikTok at COL Rob Maness, and the same on my Facebook page, at COL Rob Maness.
joe allen
We're in trouble now.
TikTok Rob.
Okay, Brotherman, I really appreciate it.
Thank you so much for coming on.
rob maness
Thank you, Joe.
Thanks for doing this.
joe allen
Okay, moving from the possibility of total nuclear annihilation to pumping children's brains full of AI outputs, Denver, if you could roll the next clip.
unidentified
Artificial intelligence is being increasingly used these days, and that includes in schools.
In fact, 44% of American teenagers say that they're likely to use AI tools when completing assignments.
Why don't you just take every student in the world and give them an AI tutor, which is not a substitute for a teacher, but works with the teachers, in their language, to bring them up and learn in whatever way they learn best, to their ultimate potential?
I defy you to argue that an AI doctor for the world and an AI tutor is negative.
It just has to be good.
Smarter, healthier people has got to be good for our future.
greg brockman
Yeah, I think education for me is one that I'm extremely interested in.
Actually, if we weren't going to successfully start an AI company, one of my backups was to do a programming education company.
Because I think the way that you teach people today, like everyone has a story about that one teacher who really understood them, who took the time to get to know them, learn what motivated them, and just really inspired them to do more.
And imagine if you could give that kind of teacher to every student, 24/7, whenever they want, for free.
Like that, it's still a little bit science fiction, but it's much less science fiction than it used to be.
unidentified
I always think it's worth remembering that we're just kind of on this long continuous curve.
Healthcare and education are two things that are coming up that curve that we're very excited about too.
marc andreessen
When I recently rolled out ChatGPT to my eight-year-old, I was very, very proud of myself, because I was like, wow, this is just going to be such a great educational resource for him.
And I felt like, you know, Prometheus bringing fire down from the mountain to my child.
I actually think there's a pretty good prospect that kids are just going to pick this up and run with it, and I actually think that's already happening, right?
ChatGPT is fully out, and Bard and Bing and all these other things. So I think kids are going to grow up with, you could use various terms, an assistant, friend, coach, mentor, tutor, but kids are going to grow up in sort of this amazing kind of back-and-forth relationship.
bill gates
There's a bigger teacher shortage in Africa than elsewhere, a bigger doctor shortage.
We will provide an AI doctor.
We will provide an AI tutor.
And already we've funded lots of Africans to do pilot studies and to take the very best technology and get it out at about the same time as it will happen in the rich world.
In fact, in a few cases, rich-world regulations may make it roll out slower than in countries like India or in Africa.
So it's a race, but it's a race for good.
sal khan
But I think we're at the cusp of using AI for probably the biggest positive transformation that education has ever seen.
And the way we're going to do that is by giving every student on the planet an artificially intelligent but amazing personal tutor.
And we're going to give every teacher on the planet an amazing artificially intelligent teaching assistant.
joe allen
You can hear that totalizing ambition in their voices.
Every child on the planet, from Africa to Asia to America.
A global village of the damned, moving from AI as tool to AI as teacher to AI as companion, in which the up-and-coming generation is taught that the highest authority on what is and isn't true is a machine.
Who's going to teach them the proper critical thinking skills to confront an environment in which either they or all their peers have become human-AI symbiotes?
Here to talk about this is Dr. Shannon Kroner, a clinical psychologist and award-winning children's author, also the founder and executive director of For Us.
Shannon Kroner, thank you so much for coming on.
Denver, I can't hear Shannon.
She has no voice in my ear, but I think she probably said thank you.
Either that or she said, You are out of your mind, Joe.
What is all this stuff about AI gods?
Shannon, can you say hello one more time?
dr shannon kroner
Hello, I'm here now.
Thank you so much for having me on.
joe allen
There is that soothing voice.
Now, Shannon, your focus is on critical thinking, especially in regards to children who are being taught that masks will save you from the worst of the pestilence to vaccines will keep you well.
Can you just tell us a little bit about your background in psychology and your focus on critical thinking, especially as it applies to children?
dr shannon kroner
Absolutely.
So I've worked with kids since 2001.
Many of the kids that I've worked with actually have special needs and many of them are vaccine injured children.
And I've worked in a therapeutic setting.
I've also taught within the classroom to high school students and college students.
And so I've really been around children and working with them in an educational and therapeutic way for my entire adult life.
And so now I'm the author of two children's books, I'm Unvaccinated and That's Okay.
And my most recent book is Let's Be Critical Thinkers.
And critical thinking is crucial.
It's a life skill that is crucial for children and it is really not taught in the schools anymore.
And now with the incorporation of AI, we're completely losing critical thought.
And I just want to give you some statistics real fast.
Right now, 97% of our Gen Z kids are using AI just for everyday tasks and stuff like that.
Back in 2023, only about 18% of schools were incorporating AI.
And now a new study that was recently reported in Education Week found that 60% of schools in America are incorporating AI into the classroom,
and that 80% of students are now using AI to complete classwork.
So, you know, what is this really doing to critical thought?
It's destroying it.
It's causing intellectual laziness.
It's causing the erosion of curiosity, stunted cognitive development.
You know, how are kids going to be able to know how to create their own argument or take a stance on a certain topic?
This is really, we're headed down a very slippery slope here for children.
joe allen
You know, we all have anecdotes about people whose children, or even their own children, have become addicted to or bonded with AI.
I hear from teachers all the time exactly what you're describing, this lack of curiosity, this kind of deadness in the eye, this reliance on the machines.
But to hear those statistics, it really chills me to the bone.
We see all of these pushes to get AI into education.
We also see the more sleazy corporate attempts like with Elon Musk's Baby Grok, which they may roll out any day now, or Meta's AI Companions, which, as we reported just last week, the Reuters investigation uncovered internal standards which allow for the bot to speak to children.
in, let's just say, incredibly inappropriate ways, sensual ways, so to speak.
So as you see all of this, is there any way around it?
What is the solution?
How can parents protect their children and on a wider societal scale, what do we do about this?
dr shannon kroner
Well, it's very scary.
I'm a mother of two kids and it's very scary because, especially coming out of the pandemic, children are lonelier than they've ever been before and they're constantly on their computers and their phones.
It's not like back in the day when you and I were kids and I would be playing outside all the time with my neighbors.
So kids are lonelier today, and so they're turning to these AI companions, and it is very scary that children can be groomed through these AI apps.
And so parents really need to engage in conversation with their children and have these open conversations, letting them know that there are predators online who can really take control of AI and create these, you know, deepfakes and impersonations.
And that, you know, an AI companion is not an actual friend.
So many people, adults actually, are turning to AI companionship for what they see as love and affection.
And that, I mean, that is so scary.
And so really, when it comes to our children, parents have to educate themselves and they have to have these open conversations with children and let them know of the dangers and what to be aware of online.
joe allen
Well, Shannon, we really look forward to having you back.
If you would, please just tell the audience where they can find your books, where they can follow your professional work and where they can find you on social media.
dr shannon kroner
So people can find me at drshannonkroner.com, that's drshannonkroner.com, and my book, Let's Be Critical Thinkers, can be ordered today on Amazon or Barnes and Noble or any major bookselling website, as well as my previous book, I'm Unvaccinated and That's Okay.
joe allen
Dr. Shannon Kroner, thank you very much for coming on again.
We look forward to having you back.
dr shannon kroner
Thank you so much.
joe allen
All right, War Room Posse, I should probably leave you with some sort of positive vision for the future.
I just want to remind you that the sun is still shining, the children are still playing, your heart is still beating, presumably, and presumably for a little while longer.
And of course, God smiles down upon us, hopefully with a great sense of humor, because I can tell you this right now, if this isn't funny, it's not justified.
Thank you very much for your time and attention, and we look forward to seeing you again tomorrow.