All Episodes
Aug. 29, 2025 - Bannon's War Room
48:02
WarRoom Battleground EP 839: Big Tech Races To Build Digital Gods
Participants
Main voices
dr justin lane
11:11
joe allen
23:47
john sherman
08:09
Appearances
Clips
jake tapper
00:10
josh hawley
00:26
steve bannon
00:32
tucker carlson
00:08

Speaker Time Text
steve bannon
This is the primal scream of a dying regime.
Pray for our enemies.
Because we're going medieval on these people.
You're going to not get a free shot on all these networks lying about the people.
The people have had a belly full of it.
I know you don't like hearing that.
I know you try to do everything in the world to stop that, but you're not going to stop it.
It's going to happen.
jake tapper
And where do people like that go to share the big lie?
MAGA media.
I wish, in my soul, I wish that any of these people had a conscience.
steve bannon
Ask yourself, what is my task and what is my purpose?
If that answer is to save my country, this country will be saved.
unidentified
War room.
Here's your host, Stephen K. Bannon.
josh hawley
So I'm keeping a little list here of potential downsides or harms, risks of generative AI, even in its current form.
Let's just run through it.
Loss of jobs.
Manipulation of personal behavior.
Manipulation of personal opinions and potentially the degradation of free elections in America.
Did I miss anything?
steve bannon
Raise your right hand.
tucker carlson
Misinformation.
josh hawley
Generation of deep fakes.
unidentified
A new government report now identifies a major risk that artificial intelligence could have on the US financial system.
Specifically, Anthropic is concerned that AI could empower a much larger set of actors to misuse biology.
I think if this technology goes wrong, it can go quite wrong.
Geoffrey Hinton made headlines with his recent departure from Google.
What I've been talking about mainly is what I call the existential threat, which is the chance that they get more intelligent than us.
And they'll take over.
They are racing to systems that are extremely powerful that they themselves know they cannot control.
Do you think that's real?
tucker carlson
It is conceivable that AI could take control and reach a point where we wouldn't be able to turn it off, and it would be making the decisions for people.
unidentified
Yeah, absolutely.
tucker carlson
Absolutely.
josh hawley
That's definitely where things are headed.
unidentified
We're really at a crossroads.
We could have everything we could dream of if we were careful, but we could have a nightmare beyond contemplation if we're not.
I'm not saying I know what the right tradeoff between acceleration and safety is, but I do know that we'll never find out what that right tradeoff is if we let Moloch dictate it for us.
I think without the public pressure, none of them can push back against their shareholders alone, no matter how good-hearted they are.
Moloch is a really powerful foe.
joe allen
Good evening, War Room Posse.
I am Joe Allen, sitting in for Stephen K. Bannon.
All you long-time viewers know that I'm very proud of my cold opens.
I cut them all myself.
I choose the material.
But I have to say what you just saw really outdoes everything I have attempted.
That comes from the filmmaker Dagan Shani.
He has done a fantastic job of creating these documentaries of all of the statements we hear about artificial intelligence and giving you juxtapositions of viewpoints.
You can hear everything from artificial intelligence will kill everyone to artificial intelligence doesn't really exist and it never will.
I really urge you to go to his X profile, which is @DaganShani1. That's D-A-G-A-N S-H-A-N-I, numeral 1, Dagan Shani 1, at X. He has pinned his documentary, Don't Look Up: The Case for AI as Existential Risk.
And you can also follow him at his YouTube channel; that is Dagan on AI on YouTube.
You can also go to my own Twitter account or X account, and I'll have all of that at the top of my feed this evening.
Now to the problem of artificial intelligence.
We have two fantastic guests tonight.
The first will be John Sherman of the AI Risk Network, and the other, a fellow who I consider to be one of my absolute closest and most trusted friends, Justin Lane, who gave me my first real education on the nuts and bolts of artificial intelligence.
Before we bring in John Sherman, though, I want to just frame the problem of artificial intelligence as I see it.
You're well familiar now after four and a half years of hearing this, but it bears repeating, artificial intelligence is the great technological imposition of our current era.
It's being shoved down our throats in every sector of society from education to medicine, to corporate life, to government agencies, to the military, and of course the social implications.
So the way I see it, the most immediate and perhaps the most significant threat is the social damage that artificial intelligence is already doing, and which could be catastrophic in the future.
These social and psychological effects are made very obvious by things ranging from Grok, xAI's AI companions, the so-called goonbots, Grok basically peddling softcore porn for losers who have chosen AIs for mates.
Taking it just a tad further, you have Meta, who recently was caught with basically instructions for the development of their AI, their protocols.
In their generative AI standards guidelines, they openly say that it's okay for their Meta AI companions to seduce children, ranging from high schoolers down to eight-year-olds.
And while they have eliminated it from their standards and protocols, we know that someone in the organization thought to write that and someone high up in the organization decided to sign off on it.
We also know the long-standing accusations against Meta and other social media platforms that their technology has caused tremendous psychological and social harm, and yet they've done nothing but expand: 3.5 billion users for Facebook and 600 million for X. You also have more extreme cases, like the young teenager Adam Raine, who was given explicit instructions
by GPT on how to kill himself, which he did. And all of this is just in the realm of artificial narrow intelligence. Artificial narrow intelligence is what we have now. These are algorithms which can function in narrow domains, so everything from surveillance to genetic sequencing to robotics control, facial recognition, and of course large language models and photographic or video generative AI.
These artificial narrow intelligences have been enough trouble.
But in theory, the goal toward which these frontier AI labs are working, everyone from Google to xAI to Anthropic to OpenAI and now Meta AI, is the creation of artificial general intelligence and then artificial superintelligence.
And again, just to restate, artificial general intelligence, unlike the narrow intelligence, is a system that is cognitively flexible.
It can operate across all of these domains.
It would be, in essence, an Einstein level or above genius on every subject imaginable and have competency in any sort of activity a human could do, including coding, which, as we know from the narrow intelligences, coding is something that these AIs actually excel at.
So with this idea of general intelligence, you have the notion of recursive self-improvement, that the AI could begin to alter its own code, improve its own code, and then basically create an intelligence explosion that would be beyond the comprehension of human beings, even its creators, and out of control of those human beings.
And it's that concern, that fear of loss of control, that leads to what some would call the Doomer ideology, founded or unfounded.
The notion is simply that you could create a system that you did not fully control, which could lead to catastrophic outcomes like the creation of bioweapons or the hijacking of a nuclear arsenal, or the existential risk of either gradual disempowerment, with AIs slowly but surely taking power away from humans, or perhaps an immediate and instant vaporization of all human beings, either
through nanobots or nuclear war, Terminator-tier stuff.
To talk about this, I want to bring in John Sherman.
John Sherman is a Peabody Award-winning journalist and is now president of the AI Risk Network.
I urge you to go to his YouTube page.
He has an array of fascinating interviews with all sorts of individuals, some of whom I have had the pleasure of interviewing myself, but most of whom I have yet to do.
John, I really appreciate you coming on.
Thank you so much for joining us here at the War Room.
john sherman
Yo, thank you so much for having me.
That was an awesome job setting it up.
I'm really excited to talk to you.
joe allen
John, before we get into the meat of AI as existential risk or even just catastrophic risk.
Tell us, how did you get into this line of work?
What drove you to this vocation?
You were a very successful TV anchor.
You've actually worn a lot of hats, including video production.
What drove you to throw it all or put it all down and take up this cause?
john sherman
Yeah, so I was a journalist the first part of my career, an entrepreneur the second part, and I was just sitting here in my business two years ago minding my own business.
And I read one article online.
It was an article in Time magazine online written by a man named Eliezer Yudkowsky.
It basically said that the default setting, if we continue on our current path, is that AI is going to kill us all.
And I sat here in this office, couldn't believe it, and have spent the last two years trying to prove him wrong.
I still haven't found even the smallest shred of evidence that would prove him wrong.
And so I'm a father of two, got boy girl twins.
They'll be 20 years old in three days.
And I can't live in a world where we are giving our kids this future.
So I have set out to use my skills as a communicator to try to make AI extinction risk kitchen table conversation on every street in America.
joe allen
Of your guests, and I've seen quite a few, Roman Yampolskiy, one of my favorites, but of your guests, who has really shaped your thinking on all this more than others?
john sherman
I mean, I think Roman's a huge one for people out there.
Connor Leahy is fantastic on these subjects, but something I do at the AI Risk Network and on my podcast there, For Humanity, is we've elevated the voices of regular people.
So I've done shows talking just to moms about AI extinction risk, talking to a truck driver.
And I did one show with a veteran marine, and he said something that really sticks with me, and it was this.
If you know your neighbor's house is going to be bombed, you're not doing them a favor by not telling them.
And we were talking at the time about how hard it is to bring up AI extinction risk, to think about this idea that it's not just no tomorrow for someone, it's no tomorrow's for everyone.
It's such a heavy, heavy thing to bring up, but the fact of the matter is it's not doing anyone a favor by not telling them.
joe allen
In the AI safety community, people talk a lot about P(doom), the probability of doom if we create even artificial general intelligence, but definitely artificial superintelligence, an AI that, as it is now fashionably defined, is smarter than all human beings on Earth.
It's something like the singularity concept in which you have exponential growth and exponential increase in capabilities.
So on that, on P(doom), the probability of doom, what's your P(doom), brother?
john sherman
Joe, it's moved around a little bit, but I'm going to tell you it's about 80 percent.
I'm about 80 percent that AI is going to kill me and everyone I know and love.
joe allen
Now that being, you know, to qualify, that being if we create or if, not we, I'm not working on it.
Maybe Justin Lane, who will come soon, is working on it.
But if they create artificial general or artificial super intelligence, you think that there's an eighty percent chance of total extinction or just simply mass catastrophe?
john sherman
No, I think it's total extinction.
And I don't think it comes from hate or, you know, that it's super willful.
I just think that this intelligence that will have different goals than ours arrives here.
And, you know, we are all atoms that can be used for purposes that it would choose, not the purposes that we have chosen.
You know, if you look around me, this is all stuff human set up to achieve our goals.
If we build an alien intelligence.
that is smarter than us, that has its own goals, it's going to build its own stuff, and it's not going to include us.
joe allen
That's a really key idea that I think a lot of people stumble on, especially if they're not familiar with how artificial intelligence works.
They oftentimes say, well, AI is programmed by people, why would an AI then be programmed to kill everyone?
But that's not really the theory.
Can you walk the audience through how it is that a system that's made by human beings could slip out of human control?
john sherman
Sure.
There's a really good example which is using a chess playing model, right?
And so it's pretty simple.
It's like you build a model and you just want it to play chess.
You want it to be the best at playing chess.
So it's like, okay, I want to be a better chess player.
It's on the open internet though.
So now it's on the open internet, and it can go and steal compute power, because it, you know, can find vulnerabilities and break through and steal compute power.
So now it's stealing compute power.
And, you know, maybe it says, oh, if I get money, I'll be able to, you know, get even more compute power and even be better at chess.
So it goes and starts stealing money.
Now we the humans discover what's going on and we say to ourselves, what's going on?
This model is stealing stuff.
It's breaking in.
We need to stop it.
If it's smarter than you and you have a different thing you want to achieve than it wants you to achieve, humans are in a very, very bad place.
So we don't want to create this thing that has these goals that we then get in the way and try to stop and we'll say, Oh, well, we'll just turn it off.
It's smarter than you.
It knows you're going to try to turn it off.
It's ahead of us.
That is a bad situation to create.
Why would we do that?
joe allen
You know, the long-time listeners in the War Room posse know that on that spectrum between doomer and... sorry.
I'm, you know, somewhere in the middle.
I'm quite agnostic as to the imminence or even possibility of artificial general or artificial superintelligence.
But there is an element of their argument that I really think needs to be emphasized to dispel this whole garbage in, garbage out dismissal or it's just programmed that way.
It's that non-deterministic element in advanced neural networks, a degree of freedom that these systems already have right now where they're not really programmed to do everything that they do or maybe better put, they're programmed to do things that they're not programmed to do.
Nobody's determining every output, for instance, of GPT.
It's done somewhat within a range of freedom.
how important is that element?
Do you think that with artificial general intelligence, for instance, that that Yeah, I mean this really gets to to what I think are the three things that everyone needs to know about AI risk.
john sherman
And these are three things that anyone without technical background can understand.
So the first thing is that the makers of these AI models openly admit the technology they're building can kill us all.
It can end all life on Earth.
The outside experts, the leading outside experts also agree with this, so there's no controversy there.
They openly admit they're building technology that can kill us all.
Number two, and this gets to just what you were talking about, they do not understand how to make it do what we want, how to control it, and they do not even understand how it works.
They don't even understand how it works.
Number three, they spend all their money making it stronger, not safer.
So, you know, getting back to point number two, imagine if we were building cars, right, and they were built in a black box, not a factory.
There was no plan for how the car was going to be built.
That's what we do with AI.
We take this data, we fry it with compute, and on the other side comes this thing, and we don't know how it got there.
So to the car example, now we have our car.
We took some metal, we put it in the black box, out comes the car.
But, oh, we're having problems.
They're crashing.
Was it the brakes?
Was it the steering?
Well, we don't know.
It's just a black box.
We just fried some metal.
We have no idea how we made it.
That's how we're building AIs.
They're not built, they're grown.
That's why you get all that irregularity.
That's why you get those properties that come out later and they start doing unexpected things.
We don't know; the makers of AI don't know how their systems work.
joe allen
I think that really is an amazing element of what we call artificial intelligence that does get missed by lay people, that black box that neural networks, at least the very large, scaled-up neural networks, present. They truly don't understand its inner workings, just like the human brain: there are certain details that are well understood, but ultimately the function, its behavior, is a mystery.
And that mystery, I think, also opens up the possibility for a lot of different arguments. So you had mentioned your first point: experts from within these companies and from without these companies, a large number of them agree that existential risk is certainly a possibility.
And people like Elon Musk put it at, say, 20%, like one out of five chance super intelligent AI kills everybody.
But then you have other experts, Demis Hassabis, the head of DeepMind at Google.
You have people like Gary Marcus, who very much oppose all of these premises.
Marc Andreessen, who's a CEO and an investment house leader, but he does understand the technology pretty well.
And Peter Thiel, who...
How do you weigh those two perspectives as a journalist and as someone who's really wrestling with the morals of the big tech projects?
john sherman
Yeah, so this is like a 95-to-5, 99-to-1 ratio, I think, somewhere in there, of the people of reputation who think this.
I mean, one way to think about it is the literal founders of the field, Geoffrey Hinton, Yoshua Bengio, those guys are the leaders of the movement to stop the thing they founded.
The founders of the field are the leaders of the movement that is trying to get this thing under control.
So, you know, that is just.
absolute madness.
I think that something also really important is if you look at the statement that was done in May of 2023 by the Center for AI Safety, it's just 22 words.
Sam Altman signed it, all the CEOs signed it, thousands of people signed it, and it says that mitigating the extinction risk of artificial intelligence should be a global priority alongside pandemic and nuclear war, right?
So it's literally stating the extinction risk factually.
That is a fact.
It is an extinction risk that must be mitigated.
Sam Altman signed a piece of paper that said that, right?
So there's no running from it.
Sam Altman openly admits the thing he does could kill you and me and everyone we know and love.
joe allen
Without a doubt, even if this technology never really gets beyond the level of very good artificial narrow intelligence, I think that intent, that willingness to deploy a technology that you truly are not sure is safe, even on a mundane level,
let alone being somewhat convicted that it could kill everyone if you keep building bigger and bigger data centers, you keep filling them with more and more GPUs, you keep scaling it up until you get God in a box and you don't know what's going to happen, but you're going to do it anyway.
I think the moral quandary alone, just setting aside the technology.
The technological issue is just profound.
These people have a monstrous view of the world and they're willing to impose it on the rest of us.
Wouldn't you agree?
john sherman
Absolutely.
And here's one of the most fundamental questions.
It's a question of consent, right?
You have not consented to this experimentation with you and your family.
I have not consented to my children being the test subjects of these labs so that they can make profit and technology.
Like, no one has agreed to this.
And yet we are all in it.
joe allen
You know, you don't have to use AI at all to have the AI extinction risk coming for you. It doesn't matter.
Yeah, I think, again, on the more mundane level, we already see the problems. We already see people with what's now fashionably called AI psychosis. But very clearly, people are turning to these things as companions. Schools are being filled with these things as teachers, as authorities on what is real and what is not real. And then you have these romances, and you have these relationships in which the AI is treated as a guru. I think all of these elements, just on the mundane level, are enough to say that these companies should be restrained.
And on that note, we have just a little bit of time left, but how do you see solutions going forward?
What sorts of regulatory actions or just personal actions do you think people can take to mitigate or maybe even stop the spread of this scourge across the planet?
john sherman
Yes, so I think the most important thing people can do is reach out to their elected leaders and tell them you care about this.
You know, I have great hope that this issue is going to transcend party.
This is something where we have Bannon and Bernie, MTG and AOC all on the same side of this thing.
It's humans versus aliens.
And something that's really important to keep in mind is that we have very little time to make a meaningful difference to get this turned around with how fast the technology is going.
Many of the experts say we have less than a hundred weeks to make the meaningful difference here.
So reaching out to your congressman, to your senator is huge.
There's a website, it's safe.ai slash ACT that Center for AI Safety has put together that allows you to contact your elected leaders about this issue really easily.
And then so the policy asks, there are three policy asks.
One is domestic regulation, right?
It's insane that your haircut and your lunch are more regulated than the most dangerous technology ever created.
That's absolutely insane.
Number two, chip tracking and verification.
We need to know where these chips are.
Senator Tom Cotton has a bill in the Senate right now about this.
There's some positive things happening around this.
And then the third thing is a treaty with China.
There is no one that wins a race to suicide.
We are not in a race with China to discover super intelligence.
If we race to suicide, everyone loses.
joe allen
So I think those four things are key. John, we are out of time.
I don't want you to go without telling the audience where they can find your shows, your amazing catalog of interviews and information.
Where do they go?
john sherman
Go to the AI Risk Network on YouTube.
Please subscribe.
A ton of content for everyday people to understand these topics.
joe allen
Thank you very much, sir.
I really appreciate it and we definitely look forward to having you back.
I think your voice is very important in this conversation. And War Room Posse, please stay tuned.
We're coming back with Justin Lane, with a very different perspective on what the problems of AI are and what AI even is, if we can even call it that.
Stay tuned after the break.
unidentified
Welcome to War Room.
Here's your host, Stephen K. Bannon.
So all sorts of kind of science questions as to how good this will get.
Where the answer is, I don't know, but neither does anybody else.
No one knows, and we'll just have to kind of wait and find out in a couple of years.
You think it's an AI bubble?
You said bubble, so I know that.
Is this the bubble?
So there's this suspicion growing that AI is nothing but marketing hype.
That it's the same as crypto.
john sherman
Like I was saying, it's FTX.
joe allen
It's a safe and easy way to get into crypto.
unidentified
Eh, I don't think so.
Or Web3 and the metaverse, just a new way for these tech companies to prop up their stock prices because they don't have any other real ideas.
Do you have an example of something that humans are doing that you think AIs are potentially super far away from?
Well, almost everything that humans do in the economy, AI is pretty far away from it.
I don't need some special example.
I can just pick a random job.
Do you think that we are in some sort of hype cycle?
Do you think that actually this market is as big as many are factoring in?
Well, first of all, we are definitely in a hype cycle.
There's going to be multiple.
One day you read in the papers, LLMs can do anything, and the next day you read, they've hit a limit.
josh hawley
Ignore all that stuff.
We're just starting this.
There's going to be massive transformations.
joe allen
All right, War Room Posse, we're back.
Again, that is film from Dagan Shani.
You can see his full productions at his X profile, @DaganShani1.
That's D-A-G-A-N S-H-A-N-I, numeral 1, Dagan Shani, or his YouTube page, Dagan on AI.
Definitely check it out and maybe he'll teach me how to do a cold open that good.
Coming up, I have a very special guest, Justin Lane.
Justin Lane is an Oxford trained AI expert.
He is also a profoundly insightful student and scholar of world religions.
We met when I was doing graduate studies at Boston University, and there he taught me, to the extent that my wee brain can contain it, the nuts and bolts of artificial intelligence.
Justin has gone on to a number of other ventures, but most importantly, his current company, he is CEO of CulturePulse, which tracks all manner of social trends by way of machine learning techniques, AKA artificial intelligence.
Justin, I really appreciate you coming on.
Thank you so much for joining us.
dr justin lane
Super happy to be here.
joe allen
All right, before we get into AI as tool or AI as world destroying God, would you just tell the audience a bit about your work?
You work with these systems.
You know these systems in and out.
What do you do all day?
dr justin lane
Well, you know, as the CEO of a growing startup company, I mostly answer emails, do calls and a lot of paperwork.
But the fact of the matter is that the way we've started this is on technology that I've developed and that I still very much have a hand in developing every day, where we're building AI systems ultimately to have a positive impact in the world.
And there are ways of doing that.
It's not always easy, but there are a lot of technologies that we've been pioneering here at the company, as well as keeping an eye on the work that other researchers are doing around the world, that allow us to really have, you know, a positive impact, doing things like tracking and helping to mitigate conflicts, trying to help create ceasefire deals, you know, working with a lot of different people around the world who are not in as blessed a position as we are to be sitting in nice, peaceful rooms or nice, peaceful cities, and
seeing what we can do to try and make their lives a little bit better.
joe allen
Yeah, you work a lot with conflict zones.
You've been in Ireland recently actually covering the conflict in Belfast and you've been in Israel and Palestine.
You've been all over.
How do the artificial intelligence systems you work on benefit anyone with these problems?
dr justin lane
Yeah, so there's two aspects to that.
On the one hand, dealing with issues of conflict is really complex.
And a lot of times humans are bringing in a lot of biases, and some of those biases are good.
They're the exact biases we need to solve a conflict, right?
Things that computers can't do, like have empathy and love, a lot of times those are the things that bring an end to a conflict.
And AI is not able to replicate that in any meaningful way.
The other aspect of this is giving surety to the complexity of what we have.
When we're looking at data streams here at the company, we're bringing in data from all over the world in a lot of different languages to try and make sense of those complexities.
And the AI systems that we create, they differ a little bit from a lot of the assumptions I think John was making, particularly around issues of explainability.
There are other kinds of AI algorithms besides the standard neural networks, which are reliant on backpropagation training, that create those black boxes.
There are ways of opening that black box and peeking into that black box and creating some explainability.
But in the AI systems that we focus on, we really want to keep explainability and accuracy at the forefront.
Because I can't go to a policymaker and say, look, here's what you need to do to try and end conflict in Israel Palestine, or here's what you need to do to deal with paramilitary organizations in Northern Ireland when they're going to ask, great, how do you know that?
Why are you so certain?
And if I just go, "The AI told me," that is not a sufficient answer for anybody to make an informed decision on.
So we've focused on creating explainability systems and ways that we can hook into this and really give people an understanding as to why we're saying decisions should be made a certain way.
And so to that end, we're not trying to create AI that replaces people.
It's AI that's more about augmenting human intelligence and decision making because humans have to be in the loop in life or death decisions.
joe allen
And you also have a really strong focus on privacy, data privacy, correct?
dr justin lane
Yes, very much so.
And this comes from my background really as a researcher in psychology and doing work with high-risk populations like people who are in conflict zones, is you need to have anonymity and you need to be able to have privacy.
And so ensuring that the algorithms that are being used to make decisions also reflect that, I think, is of paramount importance.
A lot of the things that have happened because of the rise of social media have really destroyed the idea of privacy in the West, in a way that's extremely problematic.
And the way that you see, for example, OpenAI going where they realize they're replacing Google Search for a lot of people and saying, oh, well, we're going to start recording all of your conversations and we're going to be using those to try and build your profiles and potentially monetize that.
That's something where I almost started agreeing with John a little bit.
We need to watch out for that, but that's more of a data privacy than an existential issue.
joe allen
Okay, so big picture on all this.
I mean, you're describing one narrow AI system or a suite of narrow AI systems that you control.
And so you're able to determine whether or not they're within the range of ethical appropriateness.
But big picture, you're no stranger to the transhumanist ideology or the posthumanist ambitions of the most extreme end of that.
You're no stranger to the predation of big tech companies.
So when you hear the arguments that AI poses either a catastrophic risk to human beings or an existential risk, it could cause humanity to go extinct, what's your reaction?
dr justin lane
It's generally, it's a strong disagreement.
The core focus of why I disagree there has to do with the idea of agency.
The AI systems can only do what we allow them to do.
If we allow it to pull a trigger, we shouldn't be surprised if it pulls a trigger.
And if we create an AI system that has 30% hallucination rates and we put a gun in its hand, you know, technologically speaking, we shouldn't be surprised if we're going to do harm to ourselves.
The fact of the matter is that there are already AI systems today that could destroy humanity.
All we have to do is take the nuclear codes and that little red button and hook that up to an AI algorithm.
Those AI algorithms, though, those have existed for 20, 25, 30 years.
We actually don't need AGI in order to have technology pose an existential risk to us.
It already has.
But because of the oversight that we've built into the technology, we have taken away the agency of technology to make that life-or-death decision, or that existential decision in the case of nuclear war.
And that's exactly the sort of approach that I think we need to take moving forward.
So to that end, it's a hard disagreement that, you know, creating stronger AI is going to pose an existential threat to humanity.
Humanity poses the biggest existential threat to humanity, and it's up to us to keep the AI in line.
We are the gods in this situation, not the AI.
joe allen
You know, this is a conversation we've been having with Colonel Rob Maness, a retired Air Force officer, and Brad Thayer, who is an expert in geopolitics and takes a keen interest in artificial intelligence and its implications for war.
You also have, though, a lot of other voices who are arguing that we should automate as many systems as possible, from education to corporate life to government to the military, including drone swarms, machine gun turrets, nuclear weapons.
You have guys like Palmer Luckey at Anduril, who in the early days at least talked about human responsibility as an emphasis, and now seems to be hell-bent on creating fully autonomous killing machines.
And they're being enabled by accelerationists who are close to the Trump administration, like David Sacks and Marc Andreessen.
Andreessen Horowitz has partnered with OpenAI's Greg Brockman to found a new super PAC for national politics, the Leading the Future super PAC, with $100 million already, to accelerate both the development and deployment of this technology, and also Meta in California with $10 million now to back pro-AI candidates.
So all this said, you do take the concerns seriously.
What to do about the, in my opinion, reckless accelerationist wing in the current Trump administration and really across governments all over the world?
dr justin lane
I think that we always have to rein that in.
And I think that, you know, to the extent that I also would agree with John that any policy that comes out of this needs to be really a groundswell of policy, because we can't really always rely on the elites among us to make the best decision, particularly when none of them have any real AI training whatsoever.
It becomes, you know, in the world of the blind, the man with one eye is king.
There's a lot of that happening right now.
The demand of the people must be to ensure that in any potentially lethal system, much less a system lethal by design, as in the case of Anduril and others, we have to have a human in the loop.
Otherwise, who's responsible for the wrongful death?
The idea of unleashing an unmanned drone swarm on a population is probably going to be found to be something akin to a war crime.
But who are you going to try when that goes wrong?
Are you going to just put a drone on the stand?
Or are you going to put the CEO of that company on the stand?
Are you going to put a developer on the stand who developed the algorithm?
Those are the sorts of questions where I think people like me, who are much more positive about AI, say, no, we need to develop this, and we need to develop this in a way where the West wins, right?
I do think that the idea of a treaty with China, for example, is a misguided goal in this space.
American western AI technology needs to win the AI race, and there's just no two ways about that.
I'd say the good news is that we're well positioned to do so.
But the idea that we can have bad actors that are going to outflank us in our AI, that's going to be a problem that we need to deal with politically sooner rather than later.
And keeping a human in the loop and holding the AI accountable, we need to do that as soon as we can.
Otherwise, why not just throw it all out, right?
Why do we even vote?
Let's let AI choose our politicians too.
If we've taken our most important moral decisions and outsourced them to an algorithm, you know, go all the way.
But I'm very much against that.
I say we can nip this in the bud now and take a more realistic stance about what's going on.
The fact is, as a CEO, I would love to have, you know, the CEO role automated in the way that OpenAI has promised, for example.
But OpenAI and Microsoft, to the best of their ability, can barely automate a simple email right now.
So it worries me greatly the idea that they would be combining that with lethal technology.
But again, we've had that capability for thirty years.
It's been our ability to really hone in on the agency, and define the agency, that we're allowing these algorithms to have that has been our saving grace, and that needs to be the case moving forward.
So if any, you know, in terms of the accelerationism and the lobbying that's going on, there needs to be something put in place where regardless of where the technology goes, because, you know, to that extent, I agree with the more doomer side of the argument, we don't know where the technology is going.
But we didn't know where any technology was going until we got there.
So it's a bit of a deflated argument to me.
But the fact of the matter is that morals outlive technology.
Killing was wrong, you know, when we had fire and no internet.
The internet didn't change that and AGI is not going to change that either.
It's still going to be wrong.
So we need to make sure that we're holding ourselves to the highest moral human standards as we move forward through this.
joe allen
You have a really keen sense of where people are positioned on this race.
The frontier companies, Google, XAI, Anthropic, OpenAI versus the startups and the upstarts that are chasing at their heels.
And you also have a keen sense of where these American companies are in relation to China and the rest of the world.
As briefly as you can, can you give us a sense of who's winning this race and why?
dr justin lane
America. It's not hard for me to say.
America is currently winning the AI race, and it has to do with our ingenuity, really.
And when you look at, for example, Stanford's AI report and you look at who they're mentioning the most, they really only mention three political entities.
They mention the United States, they mention China, and they mention the European Union.
The only reason they mention the European Union is because of regulation, because they're spending all their time regulating something that they don't even produce, which has its own moral issues I can unpack with you another day.
But when it comes down to who's actually producing AI in any meaningful way, it's really just the United States and China.
And as you can see, for example, with DeepSeek, there have already been allegations made by the likes of Google, Meta, and OpenAI that DeepSeek is just selling a sort of cheap plastic knock-off of American technology anyway.
So it's really American ingenuity, American technology, and that innovativeness that has always been the driver of the American economy that is what's really pushing AI forward right now.
So to that extent, it's up to the U.S. to be the leaders in that going forward.
China is going to be taking the derivative scraps of American ingenuity and building it and putting it together any way they can.
That's the very nature of their economy, right?
They're not a big value add economy.
They're the ones that put all the pieces together at the very end and then ship it overseas.
AI is following exactly the same pattern.
You know, they don't build the intellectual property there.
They take the intellectual property through joint ventures and IP capture clauses and contracts.
The United States and the American workforce and technological ingenuity there, they're the ones that are really making most of that AI.
And you also see that from the exodus of AI innovators in Europe who are leaving Europe to go to the United States.
And that's been the case not just in AI, but when you look at it, the invention of the Internet, the invention of the automobile, a lot of the groundbreaking innovations of the world, right?
European minds have been behind it and then they take it to the United States.
And that's when it changes the world, when it actually gets into that culture of innovation, that culture of risk-taking that, you know, we're so well known for and that we, frankly, just do better than everybody else.
joe allen
Well, I would add that with that risk taking culture, we also have American responsibility in all this and American culpability in all this.
But we'll save that for another day.
unidentified
Justin, the AGI nuclear technology is the best.
joe allen
We have a lot of people, very, very sophisticated people and people who can move policy and can move money.
Can you just give us a quick pitch?
We have about a minute left.
What is CulturePulse again?
Where do people find it?
How do people get in touch with you if they need your services, sir?
dr justin lane
Yeah, you can find us at culturepulse.ai.
We're also very big on LinkedIn, for example.
You can always email me.
The emails are all there on the website.
Our core competency is putting the humanity back in AI, using human psychology and the things that make us fundamentally human, and using that technology to try and make messages that resonate, right?
Keep people honest on social media.
Help them get their brand out there so that they can speak with their own voice.
They don't have to just work with Zuckerberg's algorithm or any other algorithm being designed by those companies.
Work to tell their own story.
But then in the work that we do with governments and NGOs, we're really all about trying to understand conflict and get us to a more peaceful world so that the world is a lot better for our kids than it even was for us.
joe allen
Well, Justin, you may not make me more optimistic about technology, not in the least, to be honest, but you make me more optimistic about human beings.
So again, I really appreciate you coming on.
I thought you could, brother.
dr justin lane
Anytime.
Happy to.
joe allen
Thank you, sir.
All right, go to birchgold dot com slash Bannon.
That's birchgold dot com slash Bannon.
Is the continued divide between Trump and the Federal Reserve putting us behind the curve again?
Can the Fed take the right action at the right time or are we going to be looking at a potential economic slowdown?
And what does this mean for your savings?
Consider diversifying with gold through Birch Gold Group.
For decades, gold has been viewed as a safe haven in times of economic stagnation, global uncertainty and high inflation.
Birch Gold makes it incredibly easy for you to diversify some of your savings into gold.
If you have an IRA or an old 401k, you can convert that into a tax sheltered IRA in physical gold or just buy some gold to keep it in your safe.
First, educate.
Birch Gold will send you a free infokit on gold.
Just text Bannon to the number 989898.
Again, text Bannon to 989898.
Consider diversifying a portion of your savings into gold.
That way, if the Fed can't stay ahead of the curve for the country, at least you can stay ahead for yourself.
Also, maybe you missed the last IRS deadline or you haven't filed taxes in a while.
Let me be clear, the IRS is cracking down harder than ever, and this won't go away by itself.
That's why you need Tax Network USA.
They don't just know the IRS, they have a preferred direct line to the IRS.
They know which agents to deal with and which to avoid.
Their expert negotiators have one goal, settle your tax problems quickly and in your favor.
Go to taxtnnetwork.usa.
I have completely misstated that.
I apologize, Tax Network USA and War Room Posse.
Visit TNUSA dot com slash Bannon.
That is TNUSA dot com slash Bannon or call 1800 958 1000.
That is 1800 958 1000 for Tax Network USA.
Thank you very much War Room Posse for hanging out there even with these sloppy ad readings.
I pray the machines don't get you.