All Episodes
Nov. 4, 2025 - Bannon's War Room
47:55
WarRoom Battleground EP 883: Critical Thinking in the Age of AI
Participants
Main voices
connor leahy
12:28
dr shannon kroner
11:56
joe allen
19:00
Appearances
Clips
ashley strohmier
00:14
jake tapper
00:19
john f kennedy
00:09
marc andreessen
00:13
max tegmark
00:30
sam altman
00:45
steve bannon
00:41
tristan harris
00:12

steve bannon
This is the primal scream of a dying regime.
Pray for our enemies because we're going medieval on these people.
They've not got a free shot, all these networks lying about the people.
The people have had a belly full of it.
I know you don't like hearing that.
I know you've tried to do everything in the world to stop that, but you're not going to stop it.
It's going to happen.
jake tapper
And where do people like that go to share the big lie?
MAGA media.
I wish in my soul, I wish that any of these people had a conscience.
steve bannon
Ask yourself, what is my task and what is my purpose?
If that answer is to save my country, this country will be safe.
unidentified
Here's your host, Stephen K. Bannon.
ashley strohmier
800 of the world's most prominent figures spanning the political spectrum are calling for a ban on developing super intelligent AI.
New polling shows that 64% of Americans agree super intelligence shouldn't be developed until it's proven safe.
max tegmark
Yeah, we're going to build super intelligence and we have no idea how to control it and it's going to be so cool.
Please invest in us.
marc andreessen
The single biggest fight is going to be over what are the values of the AIs.
That fight, I think, is going to be a million times bigger and more intense and more important than the social media censorship fight.
jake tapper
You know more about AI, which has me more worried.
unidentified
A lot of people, everybody's worried about it.
We're also hopeful about it.
marc andreessen
Yeah.
max tegmark
But it changes so fast.
tristan harris
Here now with AI, we have evidence now that we didn't have two years ago when we last spoke of what they call AI uncontrollability.
So this is the stuff that they used to say existed only in sci-fi movies.
unidentified
We can create infinite universes.
sam altman
This is like the fuel that we need.
unidentified
There's never been a technical project of this complexity and this scale.
Ever.
Two paths, two futures.
Teaching Sam to think or being left in the dust.
jake tapper
Wow.
unidentified
AI is really gonna, you know, it's gonna, it's gonna scale to the moon.
john f kennedy
Have you ever looked at the moon and wondered if it was real?
I have.
And tonight, ladies and gentlemen, I have to tell you, the moon is fake.
unidentified
What will it take to collapse the distance between idea and invention?
This technology saved my voice.
To build machines that build with us and wire the earth with thought.
marc andreessen
You know, this whole thing is going to reshape the economy.
sam altman
It'll take a few years, but it's going to reshape the economy.
jake tapper
And, you know, they are almost there now.
unidentified
You know, these systems are different from ordinary software.
jake tapper
You don't write every line of code, right?
unidentified
Building ordinary software, it's like building a skyscraper or something.
jake tapper
You make the blueprint, you make everything to design.
unidentified
This is, ironically, a bit more like biological.
sam altman
It's organic or something.
You're growing these models.
That can't be real.
connor leahy
It's a screen.
The moon's a screen.
Everything we've been looking at, it's fake.
sam altman
People love the new Sora.
And I also think it is important to give society a taste of what's coming on this co-evolution point.
So like very soon, the world is going to have to contend with incredible video models that can deepfake anyone.
We're attempting a classical Goetic conjuration in the manner of the Lemegeton.
Thanks to our investors for trusting us with the seed round.
This one is for you.
By the names that gird the world, Adonai, Elohim, Tetragrammaton.
That will mostly be great.
There will be some adjustment that society has to go through.
And just like with ChatGPT, we were like, the world kind of needs to understand where this is.
Very soon, we're going to be in a world where like this is going to be everywhere.
max tegmark
There should be a prohibition against building this stuff, at least until there's a broad scientific consensus that this can be controlled and safe.
And also until Americans actually want it.
And we released a poll today also showing that actually less than 5% of all Americans want to race to superintelligence.
joe allen
Good evening.
I am Joe Allen, and this is WarRoom Battleground.
The race to artificial superintelligence is on.
Frontier companies such as OpenAI, Google, xAI, and Anthropic are hurtling towards what they believe will be artificial general intelligence, a system that is able to do any cognitive task a human can do, and then, presumably, artificial superintelligence, a system smarter than every human being collectively on earth, otherwise known as a digital deity.
In the meantime, the AIs we have right now are turning people's brains into mush and turning their hearts into vessels for cold computer code.
We've seen many, many instances of AI psychosis and a few tragic instances of children who turned to AI for solace and for advice, and the AIs instructed them on how to kill themselves.
One of these is Adam Raine, who turned to ChatGPT, and GPT explained to him how to tie a noose and urged him not to go to his parents when he felt this suicidal ideation.
Now, we've wondered how many people out there are thinking like this.
How many people are using AI for this sort of thing?
OpenAI just released the numbers that they have internally.
Wired reports that, according to the OpenAI numbers, 0.07% of their 800 million users are experiencing something like AI psychosis.
Now, that might not sound like much, but that ultimately ends up being 560,000 people that we know of just using GPT who are going nuts.
They also reported that 0.15% are experiencing suicidal thoughts and turning to the AIs for advice.
That is 1.2 million people.
One imagines that the numbers are probably larger, especially when you think about the additional models, Anthropic, Meta, Grok, and one imagines that they will continue to increase.
At the same time, you have all of these companies pushing for every person on earth to adopt it, to turn to the AIs for truth, such as xAI, which just released Grokipedia, a competitor to Wikipedia, seemingly tailored to basically please a conservative audience, a kind of tailor-made reality just for you.
Also, you have Amazon announcing 30,000 layoffs by the end of the year, presumably to replace the workers either with automation or with Indians.
And according to a leaked document, they foresee some 600,000 such firings in the near future.
So besides going nuts, the AI companies seem to plan to put everyone out of work, the greater replacement.
And that's not to mention the existential risks of gradual disempowerment or perhaps sudden annihilation.
To discuss that, we will have Connor Leahy on shortly, but I just want to turn to the most immediate concern, at least in my mind, and that is children.
These companies are pushing to incorporate AI into every classroom to turn every child possible into a human AI symbiote to lift up AI as the highest authority on what is and isn't real.
In such an environment, children, as always, are going to need critical thinking.
And here to discuss that is Dr. Shannon Kroner, award-winning children's book author whose book, Let's Be Critical Thinkers, was just released October 28th.
It is fantastic, and I think that every child, if not an AI, should absolutely have a copy of Let's Be Critical Thinkers.
Shannon Kroner, thank you so much for joining us.
dr shannon kroner
Thank you so much for having me on today.
joe allen
Shannon, can you give the audience?
dr shannon kroner
I'm sorry, go on.
joe allen
Go ahead.
Please.
dr shannon kroner
I was just going to say that everything you were saying, it's absolutely accurate.
And that so many children today are, you know, turning to AI and it's very scary.
And earlier, you showed a clip about Sora.
That's another thing.
I think Sora is going to really damage the lives of many people by, you know, these deep fakes that it's going to create.
There are so many children today that are just turning to AI instead of using any kind of critical thinking skills.
There was a study that I was just reading that in May 2025, 69% of high school students were using ChatGPT at least weekly to help them with their homework and school assignments.
And then a study just from this past, a survey from this past summer showed that 93% of students have used AI at least once or twice to complete an entire assignment.
joe allen
Yeah, anecdotally, I've been going to different colleges around the country and that's all I hear.
I hear from really good students that their peers are using it not just for help, but to complete their assignments for them, to do all their thinking for them.
And it's pretty disturbing.
As far as the Sora thing goes, you know, if the audience remembers the clip of John F. Kennedy and the guy looking up at the fake moon and all that, well, I was in San Francisco when Sora was released and some guys were playing with it.
They said, well, what would you like to see, Joe?
And I said, I don't want to have anything to do with it.
But then I thought, well, maybe this would be funny.
And in a couple of seconds, just say, you know, John F. Kennedy, say the moon's not real.
And the machine created all the context, all the video out of that.
It just makes what used to be the province of imagination and creativity as easy as clicking a button.
So turning to your book, your book has just been released.
It has already sold.
I don't want to put the numbers out there unless you would like to, but it is already selling very, very well.
So give the audience some sense of what Let's Be Critical Thinkers is about, who it's targeted to, and what parents can get out of it.
dr shannon kroner
Yeah, it actually just came out, you know, today, and it immediately sold out, you know, several thousand copies.
But it will be back in stock within a day or two.
So people can go to Amazon and purchase that.
But, you know, the thing is that children really need to use their brain, use their mind.
You mentioned creativity.
AI is actually going to stunt creativity.
It's going to create intellectual laziness.
And what my book really encourages is critical thinking.
Let's Be Critical Thinkers teaches children how to critically think about the world around them, what questions to ask, why they should not just believe whatever it is that they are told by the media or by the government, that they need to be inquisitive, ask questions, challenge the narrative.
I think that that's really important, especially for a strong society in the future.
If we just simply allow kids to rely on AI, they are going to have zero critical thought.
The computer will be doing the thinking for them.
They will have stunted cognitive development.
And so, you know, I really want to encourage children to read more books.
We cannot just eliminate books and/or critical thinking skills.
And so, you know, what this book does is it has taken all the different policies that we had to deal with throughout the pandemic.
So, masks, lockdowns, social distancing, and the mandated vaccines.
And the main character is a curious little girl.
Her name is Darlene Data.
She wants to become an investigative journalist when she grows up.
She explains what that means, and she teaches the reader how to become a critical thinker, how to ask the right questions, what propaganda is, how to spot propaganda, the importance of informed consent, why we should never have censorship.
And I think that these are all really important things for children of today to know and to learn, because critical thought is not really taught in schools anymore, especially now with the introduction of AI.
If a child is, you know, given an assignment to do a report, to research something, all they have to do is go to ChatGPT, you know, or Grok or any one of these AI sources, type in the question or the research topic they need to study or write about, and it's all just given to them.
They're not taught any research skills.
And so all the information is just given to them.
They can actually, if they are, let's say they're in sixth grade, seventh grade, whatever it is, they can actually ask ChatGPT to write the paper that they need from the perspective of a seventh grader.
And so it will write in the language of a seventh grader.
They can ask ChatGPT to cite their sources.
And ChatGPT will do that.
So now children are not even learning how to cite sources.
And so really, we are going to be seeing very quickly a dumbing down of today's children if we don't step in and really teach them critical thought.
joe allen
You know, the sub-theme of your book, The Pandemic, The Great Germ Panic of 2020 and the subsequent COVID-19 cult, that was an enormous disaster for education.
You had all these kids who didn't go to school, many of them who became little mini screen monkeys.
They were given ed tech and laptops and told this is what school is all about.
And on the heels of that now, you have the push to give them all AI.
I'm not going to go out on a limb and say it's all part of a master plan.
In fact, it's so distributed, it doesn't seem like any human has made the plan, but maybe someone a little bit further up the food chain.
On that note, though, a lot of parents and teachers have told me that they're really concerned that schools will basically crank out an entire generation of incompetent graduates and ultimately render diplomas valueless.
There really won't be any kind of prestige attached to them because everyone will assume they cheated.
I think that there will be a lot of students of their own volition and a lot of parents who urge their children to not go with that stream, to go against the current, to be critical thinkers, to be independent thinkers.
And I am fairly convinced that there will be enough of them to get us through.
Am I too optimistic, Shannon?
dr shannon kroner
Well, I hope that you're correct.
You know, I don't really know, because the thing is that what we saw throughout the pandemic was just a lot of people who obeyed.
They were, you know, a bunch of sheep who followed what they were told.
They wore masks and they used zero critical thought.
They wore a mask to enter into a restaurant and then sat down at the table and could suddenly take their mask off because if they were eating food or drinking a drink, germs didn't affect them.
There's no critical thought there.
And so I hope that your optimism is warranted, and I hope that you're correct, but I'm a little worried, because all these children were locked down and out of school and stuck online for, you know, this Zoom education.
Their scores are showing the decline in education; their reading scores and math scores are at an all-time low.
The recent information on fourth graders is that 40% of them are reading below average.
Eighth graders, it's really sad for eighth graders right now.
They're scoring below basic level of reading.
Math scores are lower than ever before.
And, you know, we, I don't know how we catch these children up.
They literally missed, you know, almost two years of education in school.
Not only that, but their mental health was affected.
Their peer relationships were affected.
All of these things, you know, play a part.
And we're seeing more children who are depressed and have anxiety more than ever before.
And so all of this kind of plays a role in today's society and their education and their critical thought and how they function in the world.
What the pandemic did to children was very damaging.
joe allen
Yeah, more specifically, what the authorities brought down on children, the lockdown to AI symbiote pipeline.
Again, Klaus Schwab in his infinite wisdom certainly saw what was going on and in fact encouraged it in his book, The Great Reset, calling the pandemic a narrow window of opportunity to digitize those people who would otherwise not be comfortable with screens in their faces all the time and mass surveillance.
I wonder, Shannon, so the bleak picture of the current state of education, the low test scores, the low competency evaluations, I think people hear it and they get really depressed.
I know I don't feel better having heard it.
I wonder where are parents and where are schools succeeding?
Do homeschooled children do better?
Do private schools, charter schools, Christian schools, do they do better?
Are there public schools in different regions that do better than others?
Is there a way to give the audience a little levity because they know I'm not going to give it to them?
dr shannon kroner
Well, you know, I'm in California, where we have some of the strictest vaccine mandate laws in the nation.
And so a child who's just missing one single vaccine is not allowed to actually go to school, public school, private school.
The only option for education is homeschool.
And so, you know, I'm lucky enough to have the ability to be able to homeschool, you know, the financial ability to be able to homeschool my children.
But many people cannot do that.
And so a lot of people have actually had to leave California in order to get their children an education that is in school.
However, I will say, like, I absolutely love homeschool because I have a lot more control over what my children are learning.
They are not being, you know, fed this woke ideology that so many of the public schools, especially here in California, are teaching our children.
And so I'm somebody who, you know, from just my personal perspective and my personal experience, I absolutely love homeschool.
I believe that Christian schools are also a really great option for children.
They seem to be a little bit more strict with the education that they are teaching their children.
I would say if a parent could send their child to a private school, you know, however, private schools are extremely expensive.
And so, you know, sometimes there are private schools that literally cost as much as college tuition.
And so, you know, I would say that if you could send your child to a private school or a Christian school, that's much better than a public school where, you know, in the public school, it's kind of like anything goes.
And that's pretty scary, because when a child is in a public school where anything goes and they're taught that there's, you know, more than two genders and all the other stuff, there's not really any room for them to challenge the narrative.
And what my book, Let's Be Critical Thinkers, does, it actually teaches children, you know, to challenge the narrative.
I think that it's really important to have room for debate.
And when a child in a public school is being taught that there's more than two genders, and they say no, there's, you know, female and male, that kid is going to get in trouble.
And that's really dangerous for society.
And so what my book really does is empower children and teach them how to critically think, ask the right questions, and challenge the narrative in a very respectful way.
joe allen
You know, on that note, last question, and it's a bit of a challenging one, I think.
This really shows how history is a bit cyclical at times.
So now you have conservatives at the vanguard of critical thinking.
Not that conservatives haven't always been critical thinkers, but by and large, conservatives have tried to maintain a status quo, hence even the term.
In the 60s, 70s, you saw the real push for critical thinking, kind of critical theory, a culture of critique, the leftist sort of vanguard pushing against the, at the time, Christian, European, and American hegemony.
Now we're in a very, very different situation.
So my question is, as children are being taught critical thinking, how do you balance that with respect for tradition?
How do you teach them to be critical thinkers without them going off the rails and becoming, you know, rabid leftists who hate you and the country that you brought them into?
dr shannon kroner
Well, I think a lot of that actually, you know, starts within the home.
I think that parents need to really talk to their children more than ever before.
That is something that I do with my children.
I, you know, especially when we went through the pandemic, I was pointing out all the different things like propaganda, messaging, and asking their opinions.
I think it's really important for parents to have conversations and engage with their children, ask them questions.
Debate is healthy.
And, you know, and that's actually something that we really saw, you know, with Charlie Kirk, for instance.
He really welcomed debate.
And unfortunately, you know, he was killed for that.
But, you know, we need to really bring debate back.
And again, not in any kind of a negative way, not in an argumentative way, but I think really healthy debate, sharing differences of opinion and going back and forth and having that discourse, is so important in a well-functioning society.
And opinions should be allowed.
What we saw during the pandemic with the censorship, anybody who said anything, myself included, you know, I lost my entire social media during the pandemic because I was sharing information about the COVID shot.
And, you know, I was highly censored.
I don't think that any kind of research, medical opinions should ever be censored.
And we're seeing that right now with Secretary Kennedy, how the Democrats are really kind of trying to shut him down or get him kicked out, because he is sharing an opinion that they don't like, yet he is actually providing research for it.
And so I think it's important.
Yeah.
joe allen
Shannon, where can people find the book?
Where would you direct them to purchase this?
Any parents or grandparents that want their kids to be critical thinkers?
dr shannon kroner
The book is sold on every major bookselling website, Barnes & Noble, Amazon, and it's out now.
joe allen
And what's your social media, Shannon?
dr shannon kroner
My X is Dr., D-R, Shannon Kroner, and my Instagram is Dr., D-R, Shan Kron.
unidentified
All right.
joe allen
Well, thank you very much.
We really appreciate it.
I hope parents will turn their children into critical thinkers.
Now, we need to think about gold.
Is the continued divide between Trump and the Federal Reserve putting us behind the curve again?
Consider diversifying with gold through Birch Gold Group.
And Birch Gold makes it incredibly easy for you to diversify your savings into gold.
If you have an IRA or an old 401k, you can convert that into a tax-sheltered IRA in physical gold or just buy some gold and keep it in your safe or under your bed.
Keep it under your pillow.
Dream of gold.
But first, get educated.
Birch Gold will send you a free info kit on gold.
Just text Bannon, B-A-N-N-O-N, to the number 989-898.
Again, text Bannon to 989-898.
Consider diversifying a portion of your savings into gold.
That way, if the Fed can't stay ahead of the curve for the country, at least you can stay ahead for yourself.
We'll be back in just a moment with Connor Leahy to discuss AI Doom in America's home.
unidentified
Hello America's Voice family.
joe allen
Are you on Getter yet?
unidentified
No.
What are you waiting for?
steve bannon
It's free.
unidentified
It's uncensored, and it's where all the biggest voices in conservative media are speaking out.
steve bannon
Download the Getter app right now.
It's totally free.
Where I put up exclusively all of my content 24 hours a day.
You want to know what Steve Bannon's thinking?
Go to Getter.
unidentified
That's right.
You can follow all of your favorites.
Steve Bannon, Charlie Kirk, Jack Posobiec, and so many more.
Download the Getter app now, sign up for free, and be part of the new band.
joe allen
All right, War Room Posse, welcome back.
As you well know, I am no fan of artificial intelligence or most any technology unless I have to use it to make a living.
And even then, I got to say, this isn't exactly a comfortable situation staring into a camera and speaking to ghost-like wraiths somewhere out in America.
But artificial intelligence in particular is something that I have basically no use for.
As a writer, I think if you use AI to assist in your writing, you are no longer a writer.
You are basically a vessel for algorithms.
And if you don't list GPT as a co-author, then you're also a plagiarist.
I know it's different for other professions, doctors, soldiers, financiers, but from my perspective, AI is less than useless.
It is damaging, first and foremost, the psychological and social damage of having a bunch of human AI symbiotes, brain dead, guided by algorithms as if they were ants following pheromone trails.
And of course, the economic threat of being replaced.
Replacing writers with chatbots, replacing teachers with virtual avatars, replacing soldiers with drones.
All of this does not bode well.
And even then, that doesn't really account for the most extreme warnings that we hear about where this all could go.
If you have first artificial general intelligence, as smart as any human being at anything, then you already have the economic greater replacement.
But should that general intelligence begin to self-improve and become a super intelligence, some sort of godlike entity that is smarter than every human being on earth, by its nature, you would not be able to control it.
By its nature, you wouldn't even be able to comprehend what it's doing.
Someone who has tracked this for a long time and was at the forefront of warning about the most extreme existential risks of artificial intelligence is Connor Leahy, the CEO at Conjecture and advisor to control AI.
Connor has written a piece that is online right now, the Compendium.
You can find it at thecompendium.ai, and it was released almost a year ago to the day.
I think that his arguments have been vindicated, and I hope that his projections have not been, but we shall see.
Connor, thank you very much for coming on.
It's a pleasure to talk to you.
connor leahy
Thank you so much for having me.
joe allen
So, Connor, the Future of Life Institute just released a statement last week on superintelligence, calling for a ban on development towards artificial superintelligence with a couple of caveats that I'm not a fan of.
But I'm curious, what is your read on that?
What kind of impact will that have?
And I mean, you support a ban on superintelligence.
Is this going to be effective?
Will it make an impact?
connor leahy
So, the thing that I think is so important about statements like this, and this is one of the strongest-worded statements of this kind, is that there really are a lot of people on this that you may not necessarily suspect.
A lot of tech luminaries, you know, you got Steve Wozniak, co-founder of Apple.
You got, you know, Bannon himself on here as well.
You got a lot of top AI professors, including Nobel Prize winners, like just lots of people across the spectrum.
And because it's a public letter, no one can deny that this is real.
There is a thing that propagandists try to do a lot, where they take an issue like this and they pretend it's not real.
No one really believes that.
And this is obviously, you know, you can't claim that anymore if you have a letter like this.
We have the smartest people in the world, you know, from across the world, saying right here: this is actually dangerous.
This should actually be banned.
So this is very important when you do politics in general: it has to be a topic you're allowed to talk about, in a sense.
And it's getting harder and harder for the people who are trying to dismiss or hide or propagandize away these kinds of risks to deny that there is an actual thing here.
It's harder to hide from our politicians, from the general public what's actually going on here.
Like as you said, I've been worried about these issues for a long, long time.
And now it's really great to see that more and more of the general public and media is taking these risks seriously, is discussing these issues, because it does really affect all of us.
So will this in itself by itself lead to a ban?
Probably not.
I think there's a lot of hard work to be done.
But this is in many ways kind of like the warning shot, a flare, that we should get going.
joe allen
One thing that really struck me about your piece, the compendium or essay, manifesto, as it were, is that one of the solutions, perhaps the primary solution you put forward, is to cultivate a sense of civic duty, that if people really care about their societies and their families, they wouldn't want such a thing as artificial superintelligence to come into existence.
Is that a fair read?
connor leahy
Yes, I think this is very, very important.
So we have run polls across multiple countries, bipartisan across the world.
There is an unbelievable, like historically almost unprecedented level of support for this idea of regulating dangerous superintelligent AI, because it's a very simple argument.
It's a very, very simple argument.
If you make something that is smarter than all humans, you don't know how to control it, how exactly does that turn out well for humans?
Like, you know, I'm open to the argument, but like, I have not heard anyone make a good case here.
And like, there's this deep thing where we have all these like, you know, tech companies and the people behind them building these extremely powerful technologies.
And actually, "building" is a bit misleading.
It's very important to understand that AIs are quite different from other software.
They're not really written with like lines of code.
That's how normal software is made.
They're more like grown.
You have these big supercomputers that kind of like take in like massive amounts of data and they crunch it.
And they produce, they grow this program called a neural network.
And this is how all modern AI works.
And the thing is, we don't really understand how these things work.
Not really.
Like we don't really understand what's going on inside of them.
And they constantly do all kinds of things that we don't really understand.
So there's kind of like two possible worlds we live in, right?
There's one world where these people do control these super powerful things that keep getting more powerful, keep getting integrated more into our lives, our economy, et cetera, or the world in which they don't control them.
And I'm not sure which one is worse.
I think both of these are, you know, very dangerous worlds to be in and not worlds that people want to be in.
And people have made their voices clear in polls across the world is that this is not what people want.
And I truly believe that people have a right and a stake to their lives, their safety, the lives and the safety of their friends, their family, their nation.
This is what democracy is built upon.
We don't let random people build dangerous things that threaten our lives.
That's illegal.
And I think the same thing should be applied here.
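Leahy's "grown, not written" description corresponds to gradient-descent training. A toy version, a minimal illustrative sketch assuming no real model, library, or company's system: a single-weight model learns the rule y = 2x purely from example pairs, with no line of code ever stating that rule.

```python
# Minimal illustration of "growing" a program from data rather than
# writing its logic by hand: a one-weight model learns y = 2x purely
# from (input, target) examples via gradient descent.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # examples of the hidden rule

w = 0.0  # the "program" starts out knowing nothing
for step in range(200):
    for x, y in data:
        pred = w * x                 # current behavior
        grad = 2 * (pred - y) * x    # gradient of squared error w.r.t. w
        w -= 0.01 * grad             # nudge the weight toward lower error

print(round(w, 3))  # ≈ 2.0: the rule emerged from data; nobody coded it
```

Real systems grow billions of such weights the same way, which is why, as Leahy says, nobody can point to the line of code responsible for a given behavior.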
joe allen
Before we return to that idea, that this should be stopped and maybe could be stopped, and what the possible paths are to get there: you open up the Compendium, and again, this was published a year ago, almost to the day, talking about the state of the art of AI.
And at that time, people were by and large in the dark.
They didn't understand the complexities, the non-deterministic nature of neural networks, and the black box phenomenon.
I think the understanding is quite a bit better now, a year later, broadly speaking.
But the technology continues to develop.
You're always shooting at a moving target.
So if you could, what do you see?
Having had a year now to see the development of the technology, how different is it today?
How different of a world is it with GPT-5 and Grok 4 than it was on Halloween in 2024?
connor leahy
Yeah, it's a great question.
It feels like so long ago, which kind of is part of the problem, right?
Most technologies, you know, it might take a couple of years to see a new generation of huge breakthroughs.
What we're seeing with AI is really like every three months, every three weeks, there's a massive breakthrough.
And this is exactly what we've seen over the last year as well.
And my prediction is what we're going to see next year as well.
A year or so ago, we had pretty good chatbots and things like that.
But one thing that really didn't work so well, for example, was agents.
So these are autonomous systems that can write code, go search for information, and solve tasks on their own.
And these have gotten radically better over the last year.
They're not perfect by any means.
But for example, a year ago, I never used AIs to help me with coding because I'm a pretty good coder and they weren't really helpful.
Now I use them for everything.
Like I can just tell my AI, go into my code base and like figure out how to do this and like fix that.
And then it'll just like go look by itself.
It will pull up stuff to read.
It will fix various things, test a couple things, and then give me a report on what it did.
This wasn't possible, or was barely possible, a year ago.
Now it's very possible.
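[Editor's note: the coding-agent workflow described here, look around the code base, make a change, run the tests, report back, is at its core a simple loop. A minimal sketch follows; `call_model` is a stand-in that walks a fixed plan so the code runs standalone, where a real agent would query an LLM API at that step.]

```python
# The agent loop: the model proposes an action, the harness executes
# it, and the result is fed back in until the model reports done.

def call_model(history):
    # Stand-in for an LLM call: returns the next (action, argument)
    # based on what has happened so far (here, a fixed plan).
    plan = [("read", "config.py"),
            ("edit", "config.py"),
            ("test", ""),
            ("report", "Fixed the setting and tests pass.")]
    return plan[len(history)]

def run_tool(action, arg, files):
    # Stand-in tools operating on an in-memory "code base".
    if action == "read":
        return files.get(arg, "<missing>")
    if action == "edit":
        files[arg] = "TIMEOUT = 30"   # pretend fix
        return "edited " + arg
    if action == "test":
        return "all tests passed"
    return arg                        # "report" passes the summary through

def agent(files):
    history = []                      # observations fed back to the model
    while True:
        action, arg = call_model(history)
        result = run_tool(action, arg, files)
        history.append((action, result))
        if action == "report":
            return result             # final summary for the user

print(agent({"config.py": "TIMEOUT = 3"}))
```

The real systems differ mainly in scale: the model is a large neural network, the tools are a shell and an editor, and the loop may run for hundreds of steps unsupervised, which is exactly why their behavior is hard to bound.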
We're also seeing massive advancement in stuff such as world modeling.
So this is basically virtual worlds.
So we can have AIs generate full 3D worlds that you can like walk around in that are close to photorealistic.
This wasn't a thing a year ago.
And yeah, it's only getting better from here.
So full Star Trek holodeck-type stuff is becoming more and more feasible, the way it looks.
That one was a bit of a surprise to me.
The other stuff, like agents and so on, is kind of what I predicted.
We are in an exponential.
Things are getting exponentially faster.
Progress is not only keeping up, it is getting faster.
And so every year, even more progress is happening.
We're getting even closer to these truly autonomous, truly intelligent, or even super intelligent systems.
And I continue to not see us slowing down here.
I think there are many problems that still need to be solved.
AIs still struggle a bit, for example, with memory.
But I think with enough money and enough engineering time, those will be solved in due course.
joe allen
I'm curious, maybe you can give a brother some advice.
It's very difficult to shoot at this moving target, right, as the technology keeps changing.
And around that, there's all this noise.
You have the deniers on one side, the dismissers, the doubters, those that say that this is all basically just a toy, an overpriced toy in a bubble that's just going to pop and everything's going to go away.
We'll go back to, I guess, social media and smartphones.
On the other side, you have this expectation that is pretty overtly religious, that what they're building is digital God.
This God will be benevolent.
It will cure all disease and perhaps allow us to be immortal.
So between those two poles, you have this rapidly developing technology.
How do you communicate the intensity and the urgency of the problems associated with this rapid development while avoiding some of the extreme hype on the one side and getting past the doubters on the other?
connor leahy
I think this is a genuinely tricky communications challenge.
It's not just a tricky communications challenge because it's hard to explain.
My experience has been that a lot of people are very reasonable and you can explain these things quite simply to people.
As I said, the basic argument, that we shouldn't even attempt to build things that are smarter than us, is pretty plausible.
Even if we don't have superintelligence yet, I think we shouldn't even attempt to go there.
It should be illegal to even try to build a super intelligence, never mind succeed.
So the way I usually think about this is that what the actual thing we want is we want to make it illegal to attempt to do this.
We want to make it, we want to restrict precursors.
Because if we wait until we see the first superintelligence, that's already way too late.
And I think this is a very common sense thing that most people can understand.
It's like, yeah, that actually seems like something we shouldn't do.
That seems pretty straightforward.
It's really interesting that you bring up the religious aspect here because I do actually think this is a very important one that sometimes gets underappreciated.
I know a lot of people who work at these companies. I've gone to their parties in San Francisco before.
And for many of these people, by no means all, but many, including many of the people in charge of this technology, it is a religion.
It's transhumanism.
The reason they want to build superintelligence and the reason they want to do it as fast as possible is because they want to do it kind of before anyone notices what they're doing, because they want to live forever.
They want to become cyborgs.
They want to do whatever, right?
I've heard some really just awful things said at these parties about how these people think about other humans and what should or shouldn't be done with them.
And I think there's a real aspect here that's like quite important to understand that there is an ideological aspect to this as well.
And they don't want to be noticed.
They don't want people to realize what they're doing.
They want to delay things as much as possible.
It's not even that they want to win the argument.
It's that they want to delay everybody.
They want to confuse everybody.
They want to distract from the very simple thing we should be able to look at: hey, these people are doing things that are already harming people today, it is only getting worse, and they don't have control over it.
And why should they have the right to even attempt to do something like this?
Like if they succeed by their own lights, what they put in their own marketing copy, like why are we letting people even try to do this?
joe allen
Well, to close out, what do you envision as a legitimate path to just the simple ask, no superintelligence, no drive towards creating a digital god?
Legally speaking, how do you see it going forward?
National legislation, an international body enforcing it, treaties, agreements?
How do you see it going?
Especially, you always hear, if we don't do it, China will.
How do you answer that?
And what do you see as a legitimate path to banning superintelligence?
connor leahy
I think it's very important to see here that China also has no interest in going extinct.
This is not to the benefit of the Chinese Communist Party or the Chinese people, the same way that going extinct is not beneficial to the American people or the American government.
That doesn't mean that.
joe allen
Just real quick on that.
Sorry to interrupt, but just real quick on that.
Maybe not.
Maybe Xi Jinping and his various ministers don't, but presumably neither do Sam Altman or Elon Musk, so on and so forth.
So should we assume that China wouldn't push forward just as American companies are pushing forward?
connor leahy
I don't think we should assume that at all, actually.
I think this, in many senses, should be seen as a Cold War situation.
I think this is a very, very hard problem.
There is actual competition happening and denying that would be ridiculous.
What I'm saying here is that there are ways forward, and that what needs to happen is the same thing we did in the Cold War with the USSR: it's hard, but you build regimes, you find international ways of regulating, mutually enforceable agreements.
The way I like to think about it is that at some point, somewhere, we have to have some kind of mutually verifiable agreement not to build superintelligence.
It should not be pure trust.
That's not how things work.
Trust but verify.
I think this is a solvable technical problem for what it's worth.
The same way that, for example, we can detect nuclear detonations and also nuclear enrichment facilities extremely effectively, including in countries that are being non-cooperative.
I think very similar things can be done for superintelligence.
You need massive data centers that need massive amounts of energy and very specific hardware.
There are ways to control and detect such operations.
There are ways to make this happen.
But I want to be very clear, this is hard.
This is hard.
There needs to be some kind of way for us to deal with this.
I don't think it makes sense for just one country to act on its own.
It's a thing that we have to do at a large scale.
But I also think it's definitely not the case today. Like, do you feel like the USA is currently in charge of AI?
I think the companies are in charge, which I think is a very different thing from saying the U.S. is in charge.
joe allen
Yeah.
So a multilateral approach before unilateral, you would say.
You wouldn't necessarily recommend the U.S. government ban U.S. companies as opposed to kind of pushing more towards something more international.
connor leahy
The thing I would recommend to the U.S. government is I do think the U.S. government should have more control over what happens within its borders.
I think at the moment, the U.S. government has very little. I'm not an expert on this.
The U.S. government may have secret projects I'm not aware of, but my understanding is that it has a relatively light touch on these companies, and that these companies are mostly able to act with impunity.
They're able to build their data centers in foreign countries.
They're able to ship their data, including to hostile countries.
I have heard from insiders in these companies that a lot of the AI training data and stuff is stored on servers in countries that are not friends of the United States.
So in a sense, I think it would be great if the United States government had very good understanding, very good transparency, and very good oversight over what exactly is happening here.
I would like if the United States population could have a vote on what are we going to allow these companies to do.
Again, I believe in democracy.
I think the people should be able to decide.
I think our elected representatives should have a say in how much risk the American public is exposed to from companies, including U.S. companies.
But yes, ultimately, we need a multilateral agreement at some point.
At some point, somewhere, we need to find a way where not just U.S. and China, but also other countries across the world, middle powers across the world, can come to an agreement that we should not do this and it should be enforced.
We should find a way to mutually check and enforce upon each other.
I think there's a lot of common-sense domestic policy that can be done first, such as, as I say, good transparency and oversight: what are these companies doing within your borders?
Where are they putting the data outside of your borders?
Are you okay with them moving that data outside of your borders?
All of this, I think, is something our nation can do right now.
joe allen
Well, Connor, I could talk to you all day about this, and we definitely want to have you back.
Tell the audience where they can follow your work, where they can find the compendium, so on and so forth.
And until then, hopefully keep them busy with some homework.
connor leahy
Find me on X at NPCollapse, and you can find my company at conjecture.dev and the compendium at thecompendium.ai.
Thank you so much.
joe allen
Connor, I really appreciate it, man.
Thank you very much.
And if we fail to stop superintelligence and the robots come to get you, you definitely want to have some food on hand.
So go to mypatriotsupply.com/Bannon.
Buy the three-month emergency food kit and get a free four-week kit.
Originally $944, but you will get $247 off that price.
MyPatriotSupply.com/Bannon.
And Birch Gold can still help you roll an existing IRA or 401k into gold.
You are still eligible for a rebate in free metals of up to $10,000.
So make right now the time to buy gold and take advantage of a rebate of up to $10,000 when you buy: text Bannon to 989-898.
That's Bannon 989-898.
Claim your eligibility and get your free info kit.
Again, text Bannon to 989-898.
Thank you so much, Warroom Posse.