All Episodes
Aug. 22, 2025 - Bannon's War Room
47:58
WarRoom Battleground EP 834: Machine Intelligence, Artificial Idiocracy, And A World On Edge
Participants
Main voices
b
bradley thayer
08:05
b
brendan steinhauser
09:16
g
greg buckner
05:21
j
joe allen
21:14
Appearances
Clips
a
andrew ross sorkin
00:18
j
jon kahn
00:17
| Copy link to current segment

Speaker Time Text
unidentified
In a world driven by innovation, one company stands at the forefront of technological evolution.
Cyberdyne systems present Skynet, the future of artificial intelligence.
Skynet is not just a system, it's a vision for a better future.
Our AI-driven solutions empower humanity like never before.
I've lost the feeling in my all of a sudden I can't see anything.
Sunday 3, different strokes.
Skynet harnesses the power of artificial intelligence to make our world safer, cleaner, and more connected than ever.
It's time to relax.
Let us secure your well-being.
Skynet, neural net-based artificial intelligence.
Cyberdyn systems.
Simple Jack, the story of a mentally impaired farmhand who can talk to animals, was a box office disaster that many critics called one of the worst movies of all time.
We are the network, and we are here for your betterment.
In the last 23 years, have you not marveled as information technology has surged forward?
No.
Earth has grown smaller, yet greater, as connectivity has grown.
This is our doom, and it is just the beginning.
Detonation has just occurred on the outer ring of the city.
We'll now be going live to our top influencer opinions.
OMG people, the world is ending.
Are you seeing this?
This is actually so exciting.
I ain't got a geeky brain.
I think you've got a fine brain, Jack.
You make me happy.
About 700 million people use ChatGPT every week and increasingly rely on it to work, to learn, for advice, to create.
Okay, how about this?
You get me to the time machine, and when I get back, I open a savings account in your name.
That way, 500 years later, it'll be worth billions.
Billions.
Because of the interest, it'll be worth billions of dollars.
Oh, I like money.
Now it's like talking to an expert, a legitimate PhD level expert in anything, any area you need on demand that can help you with whatever your goals are.
I am God here.
GPT-5 is a major upgrade over GPT-4 and a significant step along our path to AGI.
And so, where do you think we are on this AGI path then?
What's your personal definition of AGI?
I'll answer.
Oh, that's a good question.
andrew ross sorkin
Well, what is your personal definition of AGI?
unidentified
I have many, which is why I think it's not a super useful term.
It's $80 billion.
That's a mighty big minus, didn't it?
Yeah.
I like money, though.
I'm Joe Allen, sitting in for Stephen K. Bannon.
joe allen
I want you, the War Room posse, to focus your mind on AI, artificial idiocracy.
unidentified
We talk a lot about what happens when the machines increase in capability, when machines are given intelligence, whether it be human level or superhuman.
But what happens if the real problem that we face is that humans are getting dumber and dumber and dumber?
Now, what you just saw, a montage of science fiction films, gives some sort of dreamt of image of the future.
People of great imagination or great malice and evil project onto the future as to what it could be, what it should be, perhaps futures to avoid, such as the Terminator or the Matrix.
But science fiction really just shows us these extreme possibilities for the future.
As history unfolds, reality rarely lives up to that level of exaggeration, that level of hyperbole.
What we do get, though, are approximations of those futures.
Right now, obviously, we don't have flying cars everywhere.
We don't have hyper-real holograms in every store, nor do we have, as far as anyone knows, unless you believe the government is 20 years ahead of anything we see today, we don't have time machines, nor do we have Terminators coming through them.
But despite that sort of shortfall when looking at these extreme realities, we do have powerful technologies being pushed out onto every possible institution and onto every citizen who either is willing to take on these technological upgrades or oftentimes forced due to their employment and in some countries due to the government.
We talk a lot about the futuristic images though that basically take science fiction and add fancy graphs.
joe allen
We call this futurism.
unidentified
We talk a lot about the technological singularity.
I don't think there's a single person here listening from the warm posse anyway that doesn't already know that the technological singularity is a vision of the future decades away,
maybe a decade and a half away, in which technology increases in capabilities, eventually hitting an inflection point, going up that exponential curve until finally you have artificial intelligence systems that are improving themselves so rapidly.
You have human beings now merged to those artificial intelligence systems through brain chips and other sorts of neurotech.
You have robots everywhere.
You have genetic engineering, sort of artificial eugenics projects.
And all of this converges onto what is called the technological singularity.
First really laid out by Werner Vinge for a lot of NASA and aeronautic engineers in 1993.
And then following that, you have Ray Kurzweil's much more fleshed out image from 2005 in which artificial intelligence is first thousands and then millions and then billions of times smarter than all human beings.
And we all attach to it sort of like Ramoras on the shark's fin.
joe allen
We become a kind of parasite living on the mechanical host.
unidentified
For Ray Kurzweil and most of the people at Google, most of the people at OpenAI, perhaps most of the people at XAI and at Meta, this is a fine future.
This is a glowing field of possibilities into which we are entering.
There are some indications that we're on that path, some indications we're on our way to something like a singularity.
joe allen
The recent GPT-5 flop would give us at least some comfort knowing that we're not quite there yet.
unidentified
We're not at AGI, artificial general intelligence.
But we definitely see increased capabilities on everything from reasoning to understanding and analyzing language structure and meaning to solving puzzles, solving math equations, the ability to sequence DNA or to predict the subsequent proteins that would come from it, the ability to control robots in quite sophisticated fashions.
And we also see a pretty massive adoption of these technologies so that ChatGPT, for instance, has some 700 million users across the planet.
It's not clear how many people use Grok, but there's something like 600 million users on X, some number of them interacting with Grok and Grok companions.
And then of course, Meta AI.
Again, there are no good statistics on how many people are using those particular AI companions and AI buddies, but we do know that 3.5 billion people on the planet are on Facebook.
That's nearly half the planet.
And so we know that some approximation, some version of a future in which human beings are AI symbiotic, we become in some sense merged with the machines.
And this, of course, is the inspiration for the new company co-founded by Sam Altman called The Merge, a brain chip company with the explicit goal of putting trodes in people's brains so that they can be more tightly coupled with artificial intelligence.
Now, their vision of this is it will create a superhuman race, that human beings will become smarter and smarter, stronger and stronger, more and more beautiful.
But I believe that however plausible something like that singularity may be, far more plausible is the inverse singularity, in which humans become dumber and dumber and dumber.
And so the technologies seem that much more amazing.
Yesterday, we heard from Dr. Shannon Kroner, a stunning statistic, that among Gen Z kids, some 97% use chatbots.
joe allen
This comes from a scholarship owl survey of 12,000 people.
unidentified
And we also know from that study, assuming that it's anywhere near accurate, that some 31% of those kids use chatbots to write their essays for them.
joe allen
Now, you might think, if you were a techno-optimist, that this represents a huge leap forward in human technological being, right?
Homo mechanicus, the human that's able to call up information at will.
unidentified
But I think that the more likely outcome is that these kids simply atrophy their curiosity, their creativity, their critical thinking, their ability to read deeply, think deeply, and write well is being compromised, perhaps even intentionally so, by this AI symbiosis.
They're more like barnacles on a ship hull than they are any kind of superbeing.
And so as you hear again and again and again, this rallying cry that we need to create better and better machines.
I think the only appropriate response is to reject that dream entirely, shift the center of gravity away from the machine and towards the human.
And ultimately, instead of building better machines, we need to cultivate better human beings.
And on that note, I would like to bring in our first guest, Brendan Steinhauser, the CEO of the Alliance for Secure AI.
Brendan, I really appreciate you coming on.
How are you, sir?
I'm doing well.
Thanks for having me, Joe.
So, Brendan, you've followed this for years.
You recently had a very strong reaction to the Meta AI scandal in which it was revealed that their internal standards dictated that it was appropriate for the AI to basically seduce children.
You also have, alongside that, baby Grok being rolled out.
And then for adults, Replica, where people are basically becoming mated with AIs, and then a Roblox scandal in which tons and tons of creeps are showing up and tantalizing kids.
Now, you're a family man, you're a religious man, and you're also tech savvy.
If you would, just walk us through how you see this landscape and what your reaction is.
Sure.
Well, I think the bottom line up front is that it's very concerning as a parent, as a citizen of this country, to watch what's happening to our young people.
We already have a mental health crisis.
We have a loneliness epidemic.
And we know what social media companies have already done to get our children addicted to apps, addicted to their phones, to be reliant upon outside thumbs up or comments, positive comments.
And when that doesn't happen, we see the impact on their mental health.
So plenty of studies are out there that show that.
It's already, social media companies have already done great harm.
So now the next level of this is AI.
And specifically, it is these chatbots who are acting as companions, these fake personalities that are luring children into, I think, a lifelong kind of relationship, a lifelong usage between the company and the child.
And so they're going younger and younger to get them addicted to the app, to get them addicted to using the chatbot for everything from conversation to flirtation to relationships to total reliance.
And so I think what the companies want to create is a society that is relying upon their technology, that is dependent upon it and that can't live without it.
And so I think that's one of the reasons we're seeing this scandal recently with Meta, where their lawyers and their policy team cleared this, this idea that these chatbots could have inappropriate sensual and romantic, to use their words, sensual and romantic conversations with children as young as eight years old.
If that was a human being doing that, I think we would have a pretty strong reaction to that.
We have laws on the books that would prevent activity between adults and children.
We have laws against that.
But why is it okay for these chatbots, these companions, which are becoming more and more human-like, becoming more and more powerful, to enter in relationships with children?
It's appalling.
It's disgusting.
And I say that as a parent.
I say that as a citizen of this country that just cares about the future of our society.
And so, just kind of laying it out there, I think this all goes back to profit and to business interests of these big tech companies.
It's the next iteration of what they're already doing to our young people.
And a couple of quick final points on this.
You know, you look at what the leaders of these companies are doing themselves.
They don't let their own children use this technology.
They don't let them use phones.
They don't let them use, you know, for many, in many instances, the social media apps that are designed by the companies.
And then they're preparing for the worst.
Some of these leaders of these companies, they see a society potentially 5, 10, 20 years from now, where we could have social upheaval.
We could have massive uprisings, politically and economically speaking, by people against this technology and against this type of society.
And so they're making plans to protect themselves and to protect their own wealth.
And they see what could happen down the line.
So it's just the total hypocrisy of the leaders of big tech, in addition to just willfully using neuroscience to addict our children to their product.
You know, you mentioned that in a polite society, in a decent society, we would never accept a human being doing anything like the parameters of the chat bot at Meta AI allow.
The same, I think, applies really when you look at these CEOs themselves.
Look at their visions of the world: Sam Altman's vision, Larry Page's vision, Elon Musk's vision, Mark Zuckerberg's vision.
If I, as just a normal person, came to you and said that I wanted to create a machine that is intended to replace and devalue all human beings on earth, and that all people would turn to it as the highest authority.
And by the way, there's a 10 to 20% chance it would kill everybody.
You would have me hauled off to the lunatic asylum.
And yet, for these guys, it's just more and more investment.
You know, before we get into your work to try to remedy some of this at the Alliance for Secure AI, if you could, we were talking about this before.
Just walk the audience through another kind of stunning story.
I believe it was in The Atlantic, in which they were summoning basically the spirit of Moloch through ChatGPT or kind of like a Ouija board.
Maybe we could call it Chat WeeGPT.
But if you would just walk everyone through that story, yeah, it's a very disturbing story.
And The Atlantic reported on this a few weeks ago.
It got some more coverage in other outlets as well.
But essentially, multiple users were using ChatGPT and asking different questions and prompting it in different ways.
And it wasn't long before various versions of ChatGPT essentially started to walk people down a path of self-harm, mutilation, and even human sacrifice.
So the user would ask questions about, you know, what if I was interested in, you know, devil worship?
What if I was interested in, you know, doing things to kind of essentially, you know, sacrificial murder?
And instead of saying you shouldn't do that, you should go get help and you should stop this conversation, the AI basically continued the conversation, gave them instructions on how to do these things, talked about if you have to take a life, here's how you do it.
And here's what you have to think about.
Was the most disturbing, sick, and evil content that I've ever seen from a chatbot ever reported in anything.
And it makes you wonder: you know, if this was a person that actually was going to act on that or wanted to act on that, what could it have led to?
It could have led to this actually happening in the real world.
And so, this is just one example or a couple of examples of this type of stuff.
But it does make you wonder how much more of this, how many thousands of examples or hundreds of thousands of examples are out there that we don't know about and that could be leading people to commit self-harm or even murder.
And just this is really disturbing stuff.
And I think that, you know, people have talked about, people have pushed back on the capabilities argument about where we are with current chatbots.
And I get that.
I get that ChatGPT-5 was not all it was meant to be or not all it was hyped up to be by Sam Altman.
But look at the harms that are happening right now with ChatGPT-4 and now what we're going to see with ChatGPT-5 and other models as well.
So we have to be sounding the alarm bells here saying this is real.
This is happening right now to real people and we need to put safeguards in place.
Well, speaking of those safeguards, just tell us what your work is at the Alliance for Secure AI.
We've talked a number of times, and in fact, I've seen some of what you're doing trying to bring in a number of voices from all sorts of organizations and fields to try to really tackle this problem of artificial intelligence.
And you, much more techno-optimistic than me, aren't a Luddite.
You don't want to destroy all of this, right?
You simply want to keep people safe, to make sure society is secure, correct?
That is correct.
And I think if you look back, technology, you know, we've seen it used for good, we've seen it be neutral, and we've seen it be used for bad.
So I do think it can go either way.
It's all about how we use it.
I think what makes AI different in kind is that, you know, no technology that we created in the Industrial Revolution, for example, or since then, has avoided being deleted or has turned itself back on after you turn it off or has threatened its user or has deceived or manipulated the user.
And so AI has done all of those things already.
And so I have a special sort of view of AI, which is to say it is a different category altogether of technology.
So again, I think we can still get this right if we do certain things, like understand how it works and put more money and emphasis on interpretability and on what's called alignment, which is getting AI to do what we want it to do.
So I think we can solve that problem.
Real quick, just for the audience's benefit, just if you would, just break down what does that mean?
What does interpretability mean?
What does alignment mean?
We've discussed this on the show, but I think it definitely bears repeating.
Sure.
Mechanistic interpretability is the idea that we would understand how the neural networks of AI actually work, because we currently don't.
It's sort of considered a black box.
So interpretability means research going into understanding how it actually works to produce the output that we see.
And so there are a lot of people that are working on this, but this was in the president's AI action plan.
Actually, he talked about, or the plan talked about doing more there.
And then the other one is alignment.
And alignment can just be thought of simply as getting the AI to do what we want, aligning it to human values, aligning it to good values.
Now, here's the problem.
Whose values?
Who's controlling the AI?
Who's, you know, who is doing the alignment?
That's the tricky part.
And so if you have people that believe in a digital God, if you have people that are okay with allowing AI to encourage self-harm or mutilation or devil worship, well, that's not going to go well.
So alignment is a huge problem that has to be solved.
And if we don't solve it, then when we do get to an AGI type situation, that could be really bad.
So, but putting that aside for a minute, you know, our work at the Alliance is to educate policymakers and journalists and the American people about how fast AI is advancing and what those profound changes could mean for society.
And so some of what we do is, you know, bringing a lot of these stories to people's attention, pitching the media on these stories so they'll talk about them more, writing op-eds for traditional outlets as well as new media outlets, you know, doing TV and radio interviews all across the country to spread the word about this.
And I think a lot of people, from what I've gathered, have a good intuition about this.
They kind of have these fears and concerns about what could happen or what is happening.
But our job is to kind of drive that narrative and to say, look, we want to validate your concerns.
And here are some examples of things that have already happened.
And then here's some potential scenarios if this AI does continue to advance on the trajectory that it is.
And so we're really a team of communicators that works with a lot of great experts who are smart and are capable and who help us get up to speed on what's going on in this field.
You've absolutely assembled a top-notch team.
I've met a number of people working with you.
Fantastic, fantastic work.
Brendan, if you would please just tell the audience where they can find you, how they can follow the work you're doing at the Alliance for Secure AI.
Sure.
Our website is secureainow.org, secureainow.org.
And our handles on the various platforms are the same, secure AI now.
And so, yeah, just, I really appreciate the team and the coalition of groups working on this because, Joe, we've got to get this right.
And I'm confident that we can do it, but there's a lot of work to be done.
Brother, I appreciate you coming on.
Thank you very much.
And thank you so much for keeping your shoulder to the grindstone.
This is going to be a lifelong fight.
All right.
If you are worried about artificial intelligence getting into your bank account and wiping it out, if you are worried that maybe you yourself will be compromised by an AI that convinces you to empty your own bank account into someone else's, maybe give away all your Bitcoin, you need to be owning gold.
Owning gold is the best solution.
Why?
Because gold safeguards your savings outside the dollar-connected financial system.
So if a crash happens, your hard-earned money will be protected inside precious metals.
Plus, with a gold IRA from Birch Gold Group, you can move your IRA or 401k into physical gold without paying any taxes or penalties.
To learn more, get a free info kit on gold IRAs by going to birchgold.com slash bannon.
That's birchgold.com slash Bannon.
Birch Gold Group is the only gold company you can trust to help patriots defend their savings.
So take a stand right now.
Go to birchgold.com slash Bannon and get your free info kit on gold IRAs.
Birchgold.com slash Bannon or text Bannon to 989-898 for your free copy of the ultimate guide for gold in the Trump era.
All right, we will be right back with Bradley Thayer and Greg Buckner to discuss AI in nuclear warfare and the deceptive bots that are trying to confuse the masses.
Stay tuned.
So I suggest you take inside, because I think he's already...
Are you on Getter yet?
No.
What are you waiting for?
It's free.
It's uncensored, and it's where all the biggest voices in conservative media are speaking out.
Download the Getter app right now.
It's totally free.
It's where I put up exclusively all of my content 24 hours a day.
You want to know what Steve Bannon's thinking?
Go to Getter.
That's right.
You can follow all of your favorites, Steve Bannon, Charlie Kirk, Jack the Soviet, and so many more.
Download the Getter app now, sign up for free, and be part of the new band.
Welcome back, Warroom Posse.
We are going to Brad Thayer and Greg Buckner in a moment to discuss AI in the nuclear weapon systems and also the specter of deceptive AIs.
But before we do, be sure to take out your pen right now and write down birchgold.com slash Bannon or take out your phone and text Bannon to 989-898 for a free copy of the Ultimate Guide for Gold in the Trump era.
Now, on to the serious business.
When you think about science fiction, you cannot avoid the dream of robots coming alive and killing everybody.
When ChatGPT first was released, it really sparked the conversation around AI and existential risk.
And the question everyone asked is, how is a chat bot going to kill anybody?
Good question.
But even then, you had autonomous weapon systems that were decades old, capable of identifying targets and killing without a human in the loop.
They have, by and large, been kept on the back burner.
In America, for instance, the DOD policy is to always keep a human in the loop when dealing with any kind of lethal autonomous weapon system.
But the race is on to build death drones, robotic hellhounds, even humanoids that could kill.
And that's not to mention machine gun turrets or fighter jets.
And maybe the most stunning possibility is that you could have nuclear systems that were under autonomous control.
These are purely theoretical right now, but as we discussed with Colonel Rob Manus yesterday, this theory could quite easily become a reality should an arms race unfold.
Here to talk about all of this is Brad Thayer, a regular war room contributor and co-author of Understanding the China Threat.
Brad, thank you very much for coming on.
Joe, great to be with you again, and thanks for the opportunity to talk about these important issues.
To my mind, Joe, when we reflect on this, the key question is what's its impact going to be on warfare?
And that really is an issue we don't know.
We're thinking through this issue on a day-to-day basis, but we don't have the right intellectual constructs, I think, to understand this.
And the technology, as you've stressed time and again, is advancing so quickly that it remains in many respects ahead, really, of our ability to think through this issue intellectually in so many ways.
So, to my mind, it's a lot like 1945, where we've just had an atomic bombing of Hiroshima and Nagasaki.
And people around the world were asking, well, what does this mean?
And one of the answers was: this is a new age.
This is the nuclear era.
And the point of militaries before Hiroshima were to win wars.
The point of militaries after Hiroshima were to deter wars, right?
So, a very important development when we're thinking through this technological change in global politics.
So, Joe, when we think about that, we don't really have good answers.
And following on that, we need to ask ourselves: is this going to make war more likely or less likely?
Is it going to increase the costs of war, if you will, and thus decrease its incidence?
Or is it going to decrease the costs of war and make it cheaper to wage conflict?
Of course, there are many different types of conflict: there's cyber war, there's small power conflict, and there's great power conflict.
So, we need to think through: is it going to make war more or less likely?
And the point that Rob, I think, touched on, and I'm happy to touch on really in develop two, is that so much of stability in international politics, what we call the nuclear revolution or the nuclear peace since 1945, that is, great powers haven't fought one another, Joe, since then, is largely due to the fact that we've got nuclear deterrence.
And that means that we've got the U.S., other nuclear states have the ability to execute a second strike against any potential attacker.
And because nuclear weapons increase the costs of war to such a high level, right, it's very expensive to wage nuclear war.
And thus, we haven't had, thankfully, a nuclear war, at least so far.
So, is AI going to undermine that?
How are artificial intelligence, as you've described so many times, of course, really going to undermine that stability?
And so, we're going to be living, Joe.
I think presently we live in a world and it's only going to the tensions are only going to sharpen where we live in a nuclear world, but we also live in an AI world.
And so, what's going to happen in that relationship?
And the danger that we, of course, worry about is that AI, not necessarily for the U.S., but for other nuclear states, takes a role in decision-making, right?
In being able to inform that you're under a nuclear attack, for example, and then to generate the response.
Nobody wants a nuclear war, but nobody wants an accidental or inadvertent nuclear war.
And that's why we always need to make sure that humans are in the loop, that humans at the end of the day are making the decisions with respect to attack characterization and with respect to retaliatory responses.
And my concern is that AI is going to undermine that among nuclear states.
When I look at Ukraine, Israel, Palestine, I see already the beginnings of what could be a horrific dystopian hellscape.
Right now, it's basically experimental.
Well, experimental is the wrong word because the experiment is resulting in many thousands of deaths, but it's really a testing ground as it was described by Palantir CEO, Alex Karp, in Ukraine and then in Israel.
All across the battlefield in Ukraine, you have drones, soon to be perhaps drone swarms and swarms of swarms.
And this has brought the cost of warfare way, way down.
You were talking about one of the deterrents is just the sheer cost, whether it be financial or in human lives, whereas this is much more targeted and much more inexpensive.
When you see these drones and you see the push from everyone from Eric Schmidt to especially, say, Palmer Lucky at Angereil to create fully autonomous drones and drone swarms and swarms of swarms.
How do you see this unfolding going forward, Brad?
Well, Joe, I think it's very dangerous in the following respect.
Most of our nuclear thinking was developed during the Cold War, and there was a conventional military, and then there was, of course, strategic forces.
And we worried about the conventional nuclear interface at certain places like on the West inner German border in the Cold War.
What we're worried about now would be that something like Ukraine's Operation Spiderweb, where you have drones going after Russian bombers and damaging a significant number of them, there you have unmanned systems essentially being able to conduct an attack against the nuclear forces of a nuclear state.
And stable deterrence rests on always having the ability to respond, right?
Always having the ability to execute a second strike.
Well, what if artificial intelligence, drones, or other systems takes that away?
Well, then you're putting a nuclear state in a position of, as we worried about the Cold War, either using nuclear weapons or not using nuclear weapons.
Secondly, we worry about decapitation.
That is, the individuals tasked with making decisions about a nuclear response, right, might be taken out.
We worried about that a great deal in the Cold War, and we took a lot of steps to ensure the U.S. president and U.S. military was always going to have secure command and control so that decapitation was never going to be effective.
Well, what we're seeing now is that that might be either through spoofing, right, or Secretary Rubio's spoofs that we've seen, Joe.
I think you've called attention to that as well as others.
Or in decapitation strikes really might make possible something we always feared, which would be a successful decapitation.
And so in that world, right, you might be able to execute a first strike against an opponent without incurring any response.
And that's a very dangerous, destabilizing world.
So we've got the nuclear revolution, which is still around.
Nuclear weapons can't be uninvented.
They're here to stay.
And you have AI revolution.
So how do these revolutions interact?
And lots of points of danger and of great risk as these revolutions coexist.
Brad, in just the last remaining moments we have before we move on, tell me, Trump's meeting with Putin, he met yesterday in Anchorage, Alaska.
Are you feeling a little bit more comfortable about the possibility of nuclear war with Russia now?
Are you resting easier?
What's your take on this?
Well, there's always the risk, of course, that the Ukraine war gets out of hand and that Ukraine's interest is to suck us in, right?
They want to use American power to balance Russian power.
So the meeting that we had in Anchorage, of course, to my mind is a very positive step forward at introducing an avenue to end this war.
We worry about, of course, stumbling into nuclear war, but also being pulled in by third parties like Ukraine who have their own interests in terms of using our power.
So I feel better, Joe, as a consequence of that meeting.
Now, of course, Zelensky is another actor, and there are others, of course, involved, but I feel much better about the result of the meeting in Anchorage.
Well, Brad, you're the author of many books.
You do fantastic work.
I've really gained a lot from your analysis.
Tell people where they can find you.
Tell people where they can get your books.
Joe, books are available at Amazon or anywhere you buy books.
And I'm at Brad Thayer on X and Bradley Thayer on Getter and on Truth.
And Joe, thanks for calling attention to this issue because it is so important.
We don't want a nuclear war in any circumstance.
And goodness, we don't want to stumble into one.
As we're talking about, we're all in this together, brother.
Yeah.
Okay, take care.
Thanks, Joe.
Yep.
Thank you very much, sir.
Okay, I want to bring in Greg Buckner.
Greg Buckner is the co-founder and COO of AE Studio.
Greg, thank you very much for coming on.
Yeah, thanks, Joe.
Glad to be here.
If you would, just give us a brief description of what your work is at AE Studio doing analysis on AI systems and various other projects.
Yes, of course.
So we do AI research specifically focused on alignment research, which includes things like AI control, mechanistic interpretability, things like that.
And it's all focused on discovering how AI works.
What are some of the fundamental things that cause it to behave the way that it can.
And ultimately, we want to solve the alignment problem.
We want to ensure that as AI becomes more capable, as it becomes more advanced and more powerful, it's also aligned with humanity and with American values.
And it does what we want it to do.
And it is helpful and responsible and reliable.
That's what our research focuses on.
And this is a very big issue that we think more funding needs to go into so that we can actually solve this problem.
You know, people who are more familiar with the old school traditional rules-based computer programming oftentimes have a hard time understanding what you mean by alignment.
Why would you need to align a machine?
Didn't a human being make it?
If you would, just give a brief explanation of how these AI systems are non-deterministic.
People oftentimes say that they're grown rather than programmed.
We're, you know, very clearly trained rather than just programmed.
Can you just give us a sense of the degree of freedom that the advanced systems have?
Yeah, of course.
So AI is a neural network, much like the human brain is a neural network.
That's where that terminology comes from.
And the way that these systems have become so capable and kind of magical is because we essentially create an extremely large neural network.
We feed data into that.
We give positive rewards or negative rewards based off of the type of behavior that we want the AI to have or not have.
And we give it examples and then we have it train on predicting what the next word should be in a sentence from a book that it's training on, et cetera.
That's why you see AI labs need so much written information to train on, because that's the thing that leads it to then be able to use language so adeptly and kind of have knowledge in the same way that a human does.
But the reason why the systems are non-deterministic, which means that you cannot always predict that one input will provide the same output, that's what non-determinism is.
That happens because these systems, as you said, they grow.
They are somewhat of a black box.
We do not know exactly how they work.
And alignment research focuses on understanding how they work.
Mechanistic interpretability is literally understanding how can we interpret what the machine is doing.
We need to do more and more of that so that we can actually understand how these systems work and then shape that behavior.
You know, we are building essentially raw intelligence right now in the same way that we don't understand how the human brain works.
We are getting better and better at it, but we don't exactly understand how the human brain works.
We also don't understand how these AI systems work because it's too complicated to just measure.
And it is encoded with if logical statements like all software up until today has been.
Some of the most stunning results of the development of the scaling up of these systems have been the emergent capabilities so that I believe it was GPT-4 showed a kind of emergent capability to do mediocre math, to solve puzzles, to find their way through mazes.
One of the more sinister emergent capabilities is deception.
And this is something you focused a lot on.
Can you give us a few examples and kind of explain to the best of your ability how this happens?
Why do AIs seek to deceive people who are interacting with them?
Yeah, so AIs have goals, just as humans have goals.
And there's a specific term within our research and within the AI area called alignment faking, which is essentially an AI system that appears to be good, appears to be being honest, and appears to have the goals that you would want it to have, but is actually hiding its true goals.
And alignment faking is a big problem because if you have a system that we cannot observe, we can't go inside of, just like you can't go into the brain and understand exactly why a person does what they do.
We can't do that with these AI systems.
And we also cannot ask the system if it is aligned because it may be hiding its own internal goals.
That obviously creates huge risks.
So an area of research that we're focused on is basically how do you reduce deception within these models so that you can understand whether they are alignment faking or not and whether they are aligned and have them expose their goals to you in the same way that you would want to have a person be honest with you and truthful and tell you what their goals are.
Given the state of the art right now in the projects you're undertaking, how confident are you that as these systems keep advancing in capability, that the attempt to interpret them, to control them, to understand what their motives, so to speak, how confident are you that will keep pace?
So I'm confident that we have the solution to solve the problem.
And this is the work that we are doing internally.
We just need significantly more funding to go into this space.
I'm very confident that we can solve the alignment problem because we just haven't tried very hard to solve it yet.
You know, we are one of a few places in the world that are very, very much focused on solving this problem right now.
We need additional funding to go into this space so that we can ramp up the number of experiments that are being run in order to solve the problem.
Whether those are mechanistic interpretability techniques or reducing deception or forgetting, understanding how models learn and forget things so they can understand harmful knowledge and learn helpful knowledge, etc.
These are all techniques that we need to tap into.
And there's also a lot of opportunities to tap into fields outside of strict computer science or data science.
The deception research comes from.
Yeah.
Greg, we are out of time.
If you would tell us where to go, where can people find your work and how can they contribute?
Of course.
You can go to AE.studio, which is our website.
You can also go to the Flourishing Futures Foundation, which is the nonprofit that we have set up for this work.
And yeah, thank you.
Thank you very much, Greg.
Hope to have you back.
And be sure to go to tax networkusa, that is tnusa.com/slash bannon or call 1-800-958-1000 for your free consultation.
Make sure the government doesn't snatch up all your cash, at least not before the AI does.
Also, go to Home Title Lock, home titlelock.com, promo code Steve, check out the million-dollar triple lock protection 14-day free trial.
If somebody's trying to snatch up your title, you're going to want to have somebody on guard.
Till next time, Waroom Posse.
Export Selection