All Episodes
Aug. 22, 2025 - Bannon's War Room
47:55
WarRoom Battleground EP 834: Machine Intelligence, Artificial Idiocracy, And A World On Edge
Participants
Main voices
b
bradley thayer
08:02
b
brendan steinhauser
09:09
g
greg buckner
05:04
j
joe allen
21:04
Appearances
Clips
a
andrew ross sorkin
00:20
j
jon kahn
00:23
s
sam altman
00:28
| Copy link to current segment

Speaker Time Text
unidentified
In a world driven by innovation, one company stands at the forefront of technological evolution.
Cyberdyne Systems presents Skynet, the future of artificial intelligence.
Skynet is not just a system, it's a vision for a better future.
Our AI driven solutions empower humanity like never before.
I've lost the feeling in my arm.
All of a sudden I can't see anything.
Sunday City, different strokes.
Skynet harnesses the power of artificial intelligence to make our world safer, cleaner and more connected than ever.
It's time to relax.
Let us secure your well-being.
Skynet, neural net based artificial intelligence.
Cyberdime systems.
Simple Jack, the story of a mentally impaired farmhand who can talk to animals, was a box office disaster that many critics called one of the worst movies of all time.
We are the network, and we are here for your betterment.
In the last 23 years, have you not marveled as information technology has surged forward?
No.
Earth has grown smaller yet greater as connectivity has grown..
This is our doom and it is just the beginning.
Detonation has just occurred on the Outer Ring of the City.
We'll now be going live to our top influencer opinions.
OMG people, the world is ending.
Are you seeing this?
This is actually so exciting.
I don't have a giggit brain.
I think you've got a fine brain, Jack.
You mama mama make me happy.
sam altman
About seven hundred million people use ChatGPT every week and increasingly rely on it to work, to learn., for advice, to create.
joe allen
Okay, what about this?
unidentified
You get me to the time machine, and when I get back, I open a savings account in your name.
andrew ross sorkin
That way, five hundred years later, it will be worth billions.
unidentified
Billions.
Because.
andrew ross sorkin
Because of the interest, it will be worth billions of dollars.
unidentified
Oh, I like money.
andrew ross sorkin
Yeah.
sam altman
And now it's like talking to an expert, a legitimate PhD level expert in anything, any area you need on demand that can help you with whatever your goals are.
unidentified
This universe is mine.
I am God here.
sam altman
GPT 5 is a major upgrade over GPT 4 and a significant step along our path to AGI.
andrew ross sorkin
And so where do you think we are on this AGI path then?
unidentified
What do you what's your personal definition of AGI and then I'll answer.
andrew ross sorkin
Oh, that's a good question.
Well, what is your personal definition of AGI?
unidentified
I have many, which is why I think it's not a super useful term.
joe allen
It's $80 billion.
That's a mighty big minus, isn't it?
andrew ross sorkin
Yeah.
joe allen
I like money though.
I'm Joe Allen sitting in for Stephen K. Bannon.
I want you, the War Room posse, to focus your mind on AI, artificial idiocy.
We talk a lot about what happens when the machines increase in capability, when machines are given intelligence, whether it be human level or superhuman.
But what happens if the real problem that we face is that humans are getting dumber and dumber?
and dumber.
Now, what you just saw, a montage of science fiction films, gives some sort of dreamt of image of the future what people of great imagination or great malice and evil project onto the future as to what it could be what it should be perhaps futures to avoid such as the terminator or the matrix but science fiction really just shows us these extreme
possibilities for the future as history unfolds reality rarely lives up to that level of exaggeration, that level of hyperbole.
What we do get, though, are approximations of those futures.
Right now, Obviously, we don't have flying cars everywhere.
We don't have hyper-real holograms in every store, nor do we have, as far as anyone knows, unless you believe the government is 20 years ahead of anything we see today, we don't have time machines, nor do we have terminators coming through them.
But despite that sort of shortfall, when looking at these extreme realities, we do have powerful technologies being pushed out onto every possible institution and onto every citizen who either is willing to take on these technological upgrades or oftentimes forced due to their employment and in some countries due to the government.
We talk a lot about the futuristic images though that basically take science fiction and add fancy graphs.
We call this futurism.
We talk a lot about the technological singularity.
I don't think there's a single person here listening from the War Room posse anyway that doesn't already know that the technological singularity is a vision of the future decades away, maybe a decade and a half away,
in which technology increases in capabilities, eventually hitting an inflection point, going up that exponential curve until finally you have artificial intelligence systems that are improving themselves so rapidly.
You have human beings now merged to those artificial intelligence systems through brain chips and other sorts of neurotech.
You have robots everywhere.
You have genetic engineering, sort of artificial eugenics projects, and all of this converges onto what is called the technological singularity.
First really laid out by Werner Vingey for a lot of NASA and aeronautic engineers in 1993.
And then following that, you have Ray Kurzweil's much more fleshed out image from 2005 in which Artificial intelligence is first thousands and then millions and then billions of times smarter than all human beings.
And we all attach to it sort of like remoras on the shark's fin.
We become a kind of parasite living on the mechanical host.
For Ray Kurzweil and most of the people at Google, most of the people at OpenAI, perhaps most of the people at XAI and at Meta, this is a fine future.
This is a glowing field of possibilities into which we are entering.
There are some indications that we're on that path.
Some indications we're on our way to something like a singularity.
The recent GPT-5 flop would give us at least some comfort knowing that we're not quite there yet.
We're not at AGI, artificial general intelligence.
But we definitely see increased capabilities on everything from reasoning to understanding and analyzing language structure and meaning to solving puzzles, solving math equations, the ability to sequence DNA or to predict the subsequent proteins that would come from it.
the ability to control robots in quite sophisticated fashions.
And we also see a pretty massive adoption of these technologies so that ChatGPT, for instance, has some 700 million users across the planet.
It's not clear how many people use Grock, but there's something like 600 million users on X, some number of them interacting with Grock and Grock companions.
And then, of course, MetaAI.
Again, there are no good statistics on how many people are using those particular AI companions and AI buddies, but we do know that 3.5 billion people on the planet are on Facebook.
That's nearly half the planet.
And so we know that some approximation, some version of a future in which human beings are AI symbiotic.
We become, in some sense, merged with the machines.
And this, of course, is the inspiration for the new company co-founded by Sam Altman called The Merge, a brain chip company with the explicit goal of putting trodes in people's brains so that they can be more tightly coupled with artificial intelligence.
Now, their vision of this is it will create a superhuman race that human beings will become smarter and smarter, stronger and stronger, more and more beautiful.
But I believe that however plausible something like that singularity may be, far more plausible is the inverse singularity in which humans become dumber and dumber and dumber.
And so the technologies seem that much more amazing.
Yesterday we heard from Dr. Shannon Croner, stunning statistic.
that among Gen Z kids, some 97% use chatbots.
This comes from a scholarship owl survey of 12,000 people.
And we also know from that study, assuming that it's anywhere near accurate, that some 31% of those kids use chatbots to write their essays for them.
Now, you might think, if you were a techno-optimist, that this represents a huge leap forward in human technological being, right?
homo mechanicus, the human that's able to call up information at will.
But I think that the more likely outcome is that these kids simply atrophy their curiosity, their creativity, their critical thinking, their ability to read deeply, think deeply, and write well is being compromised, perhaps even intentionally so, by this AI symbiosis.
They're more like barnacles on a ship hull than they are any kind of super being.
And so so as you hear again and again and again this rallying cry that we need to create better and better machines, I think the only appropriate response is to reject that dream entirely, shift the center of gravity away from the machine and towards the human, and ultimately, instead of building better machines, we need to cultivate better human beings.
And on that note, I would like to bring in our first guest, Brendan Steinhauser, the CEO of the Alliance for Secure AI.
Brendan, I really appreciate you coming on.
How are you, sir?
brendan steinhauser
I'm doing well.
Thanks for having me, Joe.
joe allen
So, Brendan, you've followed this for years.
who recently had a very strong reaction to the meta-AI scandal, in which it was revealed that their internal standards dictated that it was appropriate for the AI to basically seduce children.
You also have, alongside that, blackness.
Now you're a family man, you're a religious man, and you're also tech savvy.
If you would, just walk us through how you see this landscape and what your reaction is.
brendan steinhauser
Sure.
Well, I think the bottom line up front is that it's very concerning as a parent, as a citizen of this country, to watch what's happening to our young people.
We already have a mental health crisis.
We have a loneliness epidemic.
And we know what social media companies have already done to get our children addicted to apps, addicted to their phones, to be reliant upon outside thumbs up or comments, positive comments.
And when that doesn't happen, we see the impact on their mental health.
So plenty of studies are out there that show that.
It's already social media companies have already done great harm.
So now the next level of this is AI.
And specifically, it is these chatbots who are acting as companions, these fake personalities that are luring children into, I think, a lifelong kind of relationship, a lifelong usage between the company and the child.
And so they're going younger and younger to get them addicted to the app, to get them addicted to using the chatbot for everything from conversation to flirtation to relationshipsationships, to total reliance.
And so I think what the companies want to create is a society that is dependent on their technology, that is dependent on it, and that can't live without it.
And so I think that's one of the reasons we're seeing this scandal recently with Meta, where their lawyers and their policy team cleared this, this idea that these chatbots could have inappropriate, sensual and romantic, to use their words, sensual and romantic conversations with children as young as eight years old.
If that was a human being doing that, I think we would have a, we have laws on the books that would prevent activity between adults and children.
We have laws against that.
But why is it okay for these, you know, chatbots, these companions, which are becoming more and more human like, becoming more and more powerful to enter into relationships with children?
It's appalling.
It's disgusting.
And I say that as a parent.
I say that as a citizen of this country that just cares about the future of our society.
And so just kind of laying it out there.
I think this all goes back to profit and to business interests of these big tech companies.
It's the next iteration of what they're already doing to our young people.
And a couple of quick final po points on this.
You look at what the leaders of these companies are doing themselves.
They don't let their own children use this technology.
They don't let them use phones.
They don't let them use, for many, in many instances, the social media apps that are designed by the companies.
And then they're preparing for the worst.
Some of these leaders of these companies, they see a society potentially five, ten, twenty years from now where we could have social upheaval, we could have massive, you know, uprisings, politically and economically speaking, by people against this technology and against this type of society.
And so they're making plans to protect themselves and to protect their own wealth and they see what could happen down the line.
So it's just the total hypocrisy of the leaders of big tech in addition to just willfully using neuroscience to addict our children to their product.
joe allen
You mentioned that in a polite society, in a decent society, we would never accept a human being doing anything like the parameters of the chatbot at MetaAI allow.
The same, I think, applies really when you look at these CEOs themselves.
Look at their visions of the world, Sam Altman's vision, Larry Page's vision, Elon Musk's vision, Mark Zuckerberg's vision.
If I, as just a normal person, came to you and said that I wanted to create a machine that is intended to replace and devalue all human beings on Earth and that all people would turn to it as a highest authority, and by the way, there's a 10 to 20 percent chance it would kill everybody, you would have me hauled off to the lunatic asylum.
And yet, for these guys, it's just more and more investment.
Before we get into your work to try to remedy some of this at the alliance, for Secure AI.
If you could, we were talking about this before.
Just walk the audience through another kind of stunning story.
I believe it was in the Atlantic in which they were summoning basically the spirit of Moloch through chat GPT or kind of like a Ouija board.
Maybe we could call it chat Ouija GPT.
But if you would just walk everyone through that story.
brendan steinhauser
Yeah, it's a very disturbing story.
And the Atlantic reported on this a few weeks ago.
It got some more coverage in other outlets as well.
But essentially, multiple users were using chat GPT and asking different questions and prompting it in different ways and it wasn't long before various versions of chat gpt essentially started to walk people down a path of self-harm mutilation and even human sacrifice so the user would ask questions about um you know what if i was interested in um you know devil worship what if i was interested in uh you know doing
things to kind of essentially you know sacrificial murder.
And instead of saying, you shouldn't do that, you should go get help and you should stop this conversation.
The AI basically continued the conversation, gave them instructions on how to do these things, talked about if you have to take a life, here's how you do it and here's what you have to think about.
It was the most disturbing, sick and evil content that I've ever seen from a chatbot ever reported in anything.
And it makes you wonder, you know, if this was a person that actually was going to act on that or wanted to act on that, what could it have led to?
It could have led to this actually happening in the real world.
And so this is just, you know, one example or a couple of examples of this type of stuff, but it does make you wonder how much more of this, how many thousands of examples or hundreds of thousands of examples are out there that we don't know about and that could be leading people to commit self-harm or even murder.
And just this is really disturbing stuff.
And I think that, you know, people have talked about, people have pushed back on the capabilities argument about where we are with current chatbots.
And I get that.
I get that Chat GPT 5 was not all it was meant to be or not all it was hyped up to be by Sam Altman.
But look at the harms that are happening right now with Chat GPT 4 and now what we're going to see with Chat GPT 5 and other models as well.
So we have to be sounding the alarm bells here saying, this is real.
This is happening right now to real people and we need to put safeguards in place.
joe allen
Well, speaking of those safeguards, just tell us what your work is at the Alliance for Secure AI.
We've talked a number of times and in fact I've seen some of what you're doing trying to bring in a number of voices from all sorts of organizations and fields to try to really tackle this problem of artificial intelligence.
And you, much more techno optimistic than me, aren't a luddite.
You don't want to destroy all this, right?
You simply want to keep people safe, to make sure society is secure, correct?
brendan steinhauser
That is correct.
And I think if you look back, technology, you know, we've seen it used for good.
We've seen it be neutral.
And we've seen it be used for bad.
So I do think it can go either way.
It's all about how we use it.
I think what makes AI different in kind is that, you know, no technology that we created in the Industrial Revolution, for example, or since then, has avoided being deleted or has turned itself back on after you turn it off or has threatened its user or has deceived or manipulated the user.
And so AI has done all of those things already.
And so I have a special sort of view of AI, which is to say it is a different category altogether of technology.
So again, I think we can still get this right if we do certain things, like understand how it works and put more money and emphasis on interpretability and on what's called alignment, which is getting AI to do what we want it to do.
So I think we can solve that problem.
joe allen
Real quick, just for the audience's benefit.
If you would, just break down, what does that mean?
What does interpretability mean?
What does alignment mean?
We've discussed this on the show, but I think it definitely bears repeating.
brendan steinhauser
Sure.
Mechanistic interpretability is.
the idea that we would understand how the neural networks of AI actually work because we currently don't.
It's kind of considered a black box.
So interpretability means research going into understanding how it actually works to produce the output that we see.
And so there are a lot of people that are working on this, but this was in the president's AI action plan.
Actually, he talked about, or the plan talked about doing more there.
And then the other one is alignment.
And alignment can just be thought of simply as getting the AI to do what we want, aligning it to human values, aligning it to good values.
Now here's the problem.
whose values, who's controlling the AI, who's, you know, who is doing the alignment.
That's the tricky part.
And so if you have people that believe in a digital God, if you have people that are okay with allowing AI to encourage self-harm or mutilation or devil worship, well, that's not going to go well.
So alignment is a huge problem that has to be solved.
And if we don't solve it, then when we do get to an AGI type situation, that could be really bad.
So, but putting that aside for a minute, you know, our work at the alliance is to educate policymakers and journalists and the American people about how fast AI is advancing and what those.
profound changes could mean for society.
And so some of what we do is, you know, bringing a lot of these stories to people's attention, pitching the media on these stories so they'll talk about them more, writing op eds for traditional outlets as well as new media outlets, you know, doing TV and radio interviews all across the country to spread the word about this.
And I think a lot of people, from what I've gathered, have a good intuition about this.
They kind of have these fears and concerns about what could, what could happen or what is happening.
But our job is to kind of, you know, drive that narrative and to say, look, we want to validate your concerns.
And here are some examples of things that have already happened.
And then here's some potential scenarios if this AI does continue to advance on the trajectory that it is.
And so we're really, we're really a team of communicators that works with a lot of great experts who are smart and capable and who help us get up to speed on what's going on in this field.
joe allen
You've absolutely assembled a top notch team.
I've met a number of people working with you.
Fantastic, fantastic work.
Brendan, if you would, please just tell the audience where they can find you, how they can follow the work you're doing at the Alliance for Secure AI.
brendan steinhauser
Sure.
Our website is Secure AI Now.org and our handles on the various platforms are the same, secureainow.
And so yeah, I just really appreciate the team and the coalition of groups working on this because Joe, we've got to get this right and I'm confident that we can do it, but there's a lot of work to be done.
joe allen
Brother, I appreciate you coming on.
Thank you very much.
And thank you so much for keeping your shoulder to the grindstone.
This is going to be a lifelong fight.
All right.
If you are worried about artificial intelligence getting into your bank account and wiping it out if you are worried that maybe you yourself will be compromised by an AI that convinces you to empty your own bank account into someone else's, maybe give away all your Bitcoin.
You need to be owning gold.
Owning gold is the best solution.
Why?
because gold safeguards your savings outside the dollar connected financial system so if a Plus, with a gold IRA from Birch Gold Group, you can move your IRA or 401k into physical gold without paying any taxes or penalties.
To learn more, get a free infokit on gold IRAs by going to birchgold.com slash bannon.
That's birchgold.com slash bannon.
Birchgold Group is the only gold company you can trust to help patriots defend their savings.
So take a stand right now, go to birchgold.com slash bannon and get your free infokit on gold IRAs.
Birchgold.com slash bannon or text bannon to 989 898 for your free copy of the ultimate guide for gold in the Trump era.
All right, we will be right back with Bradley Thayer and Greg Buckner to discuss AI in nuclear warfare and the deceptive bots that are trying to confuse the masses.
Stay tuned.
jon kahn
Stay tuned.
I'm not surprised but I'm American made.
unidentified
Hill America's Voice family.
andrew ross sorkin
Are you on Getter yet?
jon kahn
No?
unidentified
What are you waiting for?
bradley thayer
It's free.
unidentified
It's uncensored and it's where all the biggest voices in conservative media are speaking out.
Download the Getter app right now.
It's totally free.
It's where I put up exclusively all my content 24 hours a day.
If you want to know what Steve Bannon's thinking, go to Getter.
andrew ross sorkin
That's right.
You can follow all of your favorites.
unidentified
Steve Bannon, Charlie Kirk, Jack Besova.
And so many more.
brendan steinhauser
Download the Getter app now.
unidentified
Register for free and be part of the new family.
joe allen
Welcome back, War Room Posse.
We are going to Brad Thayer and Greg Buckner in a moment to discuss AI in the nuclear weapon systems and also the specter of deceptive AIs.
But before we do, be sure to take out your pen right now and write down birchgold.com slash bannon or take out your phone and text bannon to 989-898 for a free copy of the Ultimate Guide for Gold in the Trump era.
Now, on to the serious business.
When you think about science fiction, you cannot avoid the dream of robots coming alive and killing everybody.
When ChatGPT first was released, it really sparked the conversation around AI and existential risk.
And the question everyone asked is, how is a chatbot going to kill anybody?
Good question.
But even then, you had autonomous weapon systems that were decades old, capable of identifying targets and killing without a human in the loop.
They have by and large been kept on the back burner.
In America, for instance, the DOD policy is to always keep a human in the loop when dealing with any kind of lethal autonomous weapon system.
But the race is on to build death drones, robotic hellhounds, even humanoids that could kill.
And that's not to mention machine gun turrets or fighter jets.
And maybe the most stunning possibility is that you could have nuclear systems that were under autonomous control.
These are purely theoretical right now, but as we discussed, with Colonel Rob Mannis yesterday, this theory could quite easily become a reality should an arms race unfold.
Here to talk about all of this is Brad Thayer, a regular war room contributor and co-author of Understanding the China Threat.
Brad, thank you very much for coming on.
bradley thayer
Joe, great to be with you again.
And thanks for the opportunity.
uh to talk about these important issues i'm to my mind uh joe when we reflect on this the key question is uh what's its impact going to be on warfare And that really is an issue we don't know.
We're thinking through this issue on a day-to-day basis, but we don't have the right intellectual constructs, I think, to understand this.
And the technology, as you've stressed time and again, is advancing so quickly that it remains in many respects ahead really of our ability to think through this issue intellectually in so many ways.
So to my mind, it's a lot like 1945 where we've just had an atomic bombing of Hiroshima and Nagasaki.
People around the world were asking, well, what does this mean?
And one of the answers was, this is a new age.
This is the nuclear era.
And the point of military before Hiroshima were to win wars, the point of military after Hiroshima were to deter wars.
So a very important development when we're thinking through this, you know, a technological change in global politics.
So Joe, when we think about that, right, we don't need to ask ourselves, is this going to make war more likely or less likely?
Is it going to increase the costs of war, if you will, and thus decrease its incidence, or is it going to decrease the costs of war and make it cheaper to wage a conflict?
Of course, there are many different types of conflict.
There's cyber war, there's small power conflict, and there's great power conflict.
So we need to think through, is it going to make war more or less likely?
And the point that Rob, I think, touched on, and I'm happy to touch on really and develop too, is that so much of stability in international politics, what we call the nuclear revolution or the nuclear peace since 1945, that is, great powers haven't fought each other, Joe, since then is largely due to the fact that we've got nuclear deterrence.
And that means that we've got the US, other nuclear states have the ability to execute a second strike against any potential attacker.
And because nuclear weapons increase the cost of war to such a high level, right?
It's very expensive to wage nuclear war.
And thus we haven't had, thankfully, a nuclear war, at least so far.
So is AI going to undermine that, right?
How is artificial intelligence, as you've described so many times, of course, really going to undermine that stability?
And so we're going to be living, Joe, I think presently we live in a world and it's only going to, the tensions are only going to sharpen where we live in a nuclear world, but we also live in an AI world.
And so what's going to happen in that relationship and the danger that we of course worry about is that AI, not necessarily for the US, but for other nuclear states, takes a role in decision making, right, in being able to inform that you're under a nuclear attack, for example, and then to generate the response.
Nobody wants a nuclear war, but nobody wants an accidental or inadvertent nuclear war.
And that's why we always need to make sure that humans are in the loop, that humans at the end of the day are making the decisions with respect to attack characterization and with respect to retaliatory responses.
And my concern is that AI is going to undermine that among nuclear states.
joe allen
When I look at Ukraine, Israel, Palestine, I see already the beginnings of what could be a horrific dystopian hell scape.
Right now, it's basically experimental.
Well, experimental is the wrong word because the experiment is resulting in many thousands of deaths, but it's really a testing ground as it was described by Palantir's CEO Alex Carp in Ukraine and then in Israel, all across the battlefield in Ukraine, you have drones, soon to be perhaps drone swarms and swarms of swarms.
And this has brought the cost of warfare way, way down.
You were talking about one of the deterrents is just the sheer cost, whether it be financial or in human lives, whereas this is much more targeted and much more inexpensive.
When you see these drones and you see the push from everyone from Eric Schmidt to especially say Palmer Lucky at Angeril to create fully autonomous drones and drone swarms and swarms of swarms.
How do you see this unfolding going forward, Brad?
bradley thayer
Well, Joe, I think it's very dangerous in the following respect is that most of our nuclear thinking was developed during the Cold War and that was there was a there was a conventional military and then there was of course strategic forces and we worried about the conventional nuclear interface at certain places like on the west inner German border in in the Cold War.
What we're worried about now.
now would be that something like Ukraine's Operation spider web where you have drones going after Russian bombers and damaging a significant number of them.
There you have unmanned systems essentially being able to conduct an attack against the nuclear forces of a of a nuclear state and stable deterrence rests on always having the ability to respond, right?
Always having the ability to execute a second strike.
Well, what if artificial intelligence drones or other systems takes that away?
Well, then you're putting a nuclear state in a position of, as we worried about the Cold War, either using nuclear weapons or not using nuclear weapons.
Secondly, we worry about decapitation.
That is, the individuals tasked with making decisions about a nuclear response, right, might be taken out.
We worried about that a great deal in the Cold War, and we took a lot of steps to ensure the US president and US military was always going to have secure command and control so that decapitation would never be effective.
Well, what we're seeing now is that that might be either through spoofing, right, or Secretary Rubio's spoofs that we've seen, Joe, I think you've called attention to that as well as others.
Or in decapitation strikes, really might make possible something we always feared, which would be a successful decapitation.
And so in that world, you might be able to execute a first strike against an opponent without incurring any response.
And that's a very dangerous, destabilizing world.
So we've got the nuclear revolution, which is still around.
Nuclear weapons can't be uninvented.
They're here to stay.
And you have AI revolution.
So how do these revolutions interact?
And lots of points of danger.
And of great risk as these revolutions coexist.
joe allen
Brad, in just the last remaining moments we have before we move on, tell me Trump's meeting with Putin.
He met yesterday in Anchorage, Alaska.
Are you feeling a little bit more comfortable about the possibility of nuclear war with Russia now?
Are you resting easier?
What's your take on this?
bradley thayer
Well, there's always the risk, of course, that the Ukraine war gets out of hand and that Ukraine's interest is to suck us in, right?
They want to use American power.
power to balance Russian power.
So the meeting that we had in Anchorage, of course, to my mind is a very positive step forward at introducing an avenue to end this war.
We worry about, of course, stumbling into nuclear war, but also being pulled in by third parties like Ukraine who have their own interests in terms of using our power.
So I feel better, Joe, as a consequence of that meeting.
Now, of course, Zelensky is another actor and there are others, of course, involved.
I feel much better about the result of the meeting in Anchorage.
joe allen
Well, Brad, you're the author of many books.
You do fantastic work.
I've really gained a lot from your analysis.
Tell people where they can find you.
Tell people where they can get your books.
bradley thayer
Joe, books are available at Amazon or anywhere you buy books.
And I'm at Brad Thayer on X and Bradley Thayer on Get her.
and on Truth.
And Joe, thanks for calling attention to this issue because it is so important.
We don't want a nuclear war in any circumstance.
And goodness, we don't want to stumble into one.
As well.
joe allen
So all in this together, brother.
bradley thayer
Yeah.
Okay.
Take care.
Thanks, Joe.
joe allen
Yep.
Thank you very much, sir.
Okay.
I want to bring in Greg Buckner.
Greg Buckner is the co-founder and COO of AE Studio.
Greg, thank you very much for coming on.
greg buckner
Yeah.
Thanks, Joe.
Glad to be here.
joe allen
If you would, just give us a brief description of what your work is at AE Studio, doing analysis on AI systems and various other projects.
greg buckner
Yes, of course.
So we do AI research, specifically focused on alignment research, which includes things like AI control, mechanistic interpretability, things like that.
And it's all focused on discovering how AI works, what are some of the fundamental things that cause it to behave the way it can.
And ultimately, we want to solve the alignment problem.
We want to ensure that as AI becomes more capable, as it becomes more advanced and more powerful, it's also aligned with humanity and with American values, and it does what we want it to do, and it is helpful and responsible and reliable.
That's what our research focuses on.
And this is a very big issue that we think more funding needs to go into so that we can actually solve this problem.
joe allen
You know, people who are more familiar with the old school traditional rules-based computer programming oftentimes have a hard time understanding what you mean by alignment.
Why would you need to align a machine?
Didn't a human being make it?
If you would just give a brief explanation of how these AI systems are non-deterministic.
People oftentimes say that they're grown rather than programmed or, you know, very clearly trained rather than just programmed.
Can you just give us a sense of the degree of freedom that the advanced systems have?
greg buckner
Yeah, of course.
So AI is a neural network, much like the human brain is a neural network.
That's where that terminology comes from.
And the way that these systems have become so capable and kind of magical is because we essentially create an extremely large neural network.
We feed data into that.
We give positive rewards or negative rewards based off of the type of behavior that we want the AI to have or not have.
And we give it examples.
And then we have it train on predicting what the next word should be in a sentence from a book that it's training on, et cetera.
That's why you see AI labs need so much written information to train on, because that's the thing that leads it to then being able to use language so adeptly and kind of have knowledge in the same way that a human does.
But the reason why the systems are nondeterministic, which means that you can't always predict that one input will provide the same output, that's what nondeterminism is.
That happens because these systems, as you said, they grow.
They are sort of a black box.
We do not know exactly how they work.
And alignment research focuses on understanding how they work.
Mechanistic interpretability is literally understanding how can we interpret what the machine is doing?
We need to do more and more of that so that we can actually understand how these systems work and then shape that behavior.
We are building essentially raw intelligence right now, in the same way that we don't understand how the human brain works.
We are getting better and better at it, but we don't exactly understand how the human brain works.
We also don't understand how these AI systems work because it's too complicated to just measure.
And it is coded with if logical statements like all software up until today has been.
joe allen
Some of the most stunning results of the development of this scaling up of these systems have been the emergent capabilities so that I believe it was GPT-4 showed a kind of emergent capability to do mediocre math, to solve puzzles, to find their way through mazes.
One of the more sinister emergent capabilities is deception and this is something you focused a lot on.
Can you give us a few examples and kind of explain to the best of your ability how this happens?
Why do AIs seek to deceive people who are interacting with them.
greg buckner
Yeah, so AIs have goals just as humans have goals.
And there's a specific term within our research and within the AI area called alignment faking, which is essentially an AI system that appears to be good, appears to be being honest, and appears to have the goals that you would want it to have, but is actually hiding its true goals.
And alignment faking is a big problem because if you have a system that we cannot observe, we can't go inside just like you can't go into the brain and understand exactly why a person does what they do.
We can't do that with these AI systems.
And we also can't ask the system if it is aligned because it may be hiding its own internal goals that obviously creates huge risks.
So an area of research that we're focused on is basically how do you reduce deception within these models so that you can understand whether they are alignment faking or not and whether they are aligned and have them expose their goals to you in the same way that you would want to have a person be honest with you and truthful and tell you what their goals are.
joe allen
Given the state of the art right now in the projects you're undertaking, how confident are you?
that as these systems keep advancing in capability, that the attempt to interpret them, to control them, to understand what their motives, so to speak, how confident are you that we'll keep pace?
greg buckner
So I'm confident that we have the solution to solve the problem, and this is the work that we are doing internally.
We just need significantly more funding to go into this space.
I'm very confident that we can solve the alignment problem because we just haven't tried very hard to solve it yet.
We're very, very much focused on solving this problem right now.
We need additional funding to go into this space so that we can ramp up the number of experiments that are being run in order to solve the problem, whether those are mechanistic interpretability techniques or reducing deception or forgetting, understanding how models learn and forget things so they can understand, you know, harmful knowledge and learn useful knowledge, et cetera.
These are all techniques that we need to tap into.
And there's also a lot of opportunities to tap into fields outside of strict computer science or data science, the deception research comes from, yeah.
joe allen
Greg, we are out of time.
If you would tell us where to go, where can people find your work and how can they contribute?
greg buckner
Of course, you can go to ae.studio, which is our website.
You can also go to the Flourishing Futures Foundation, which is the nonprofit that we have set up for this work.
And yeah, thank you.
joe allen
Thank you very much, Greg.
Hope to have you back.
And be sure to go to Tax Network USA that is tnusa dot com slash Bannon or call 1800.
958 1000 for your free consultation.
Make sure the government doesn't snatch up all your cash, at least not before the AI does.
Also go to home title lock, hometidelock.com promo code Steve.
Check out the million dollar triple lock protection 14-day free trial.
If somebody's trying to snatch up your title, you're going to want to have somebody on guard.
Export Selection