All Episodes
Jan. 9, 2026 - Bannon's War Room
48:20
WarRoom Battleground EP 922: AI Doom Debates with Liron Shapira

Stay ahead of the censors - Join us warroom.org/join Aired On: 1/8/2026 Watch: On X: @Bannons_WarRoom (https://x.com/Bannons_WarRoom) On the Web: https://www.warroom.org On Gettr: @WarRoom On Podcast: Apple, iHeart Radio, Google On TV: PlutoTV Channel 240, Dish Channel 219, Roku, Apple TV, FireTV or on https://AmericasVoice.news. #news #politics #realnews

Participants
Main
joe allen
24:43
liron shapira
13:50
Appearances
bernie sanders
sen/d 00:40
bill whitaker
cbs 01:07
elon musk
01:23
josh hawley
sen/r 00:55
rob playter
00:35
ron desantis
r 00:46
steve bannon
r 00:40
Clips
jake tapper
cnn 00:16
jonathan haidt
00:25
max tegmark
mit 00:07

AI Exceptionalism? 00:03:59
josh hawley
Not content with addicting our kids to their gizmos or amassing fortunes the size of lesser European states, our tech elite has turned with rabid enthusiasm to artificial intelligence.
Now, only the less cautious articulate the real reason, what many quietly believe: that AI will reinvent human existence.
jonathan haidt
Social media changed how kids talk to other kids.
It dehumanized it.
AI is going to take the human on the other end away, and kids are going to grow up talking to artificial creatures.
They are not going to learn how to talk to real humans, which bodes very, very poorly for their own lives, their own work lives, for marriage, for child rearing.
All those are threatened if kids grow up interacting with AIs rather than humans.
bernie sanders
This is the most consequential technology in the history of humanity.
It will transform our country.
It will transform the world.
And we have not had, in Congress, in the media (and I'm glad you're doing this show), or among the American people, the kind of discussion that we need.
josh hawley
The eugenicist Julian Huxley predicted this as far back as 1957.
I believe in transhumanism, he said.
He coined the term.
And so do untold numbers of today's tech class.
That is the vision, the religion, the ideology that animates so much of the breathless race for artificial intelligence and for general artificial intelligence and super intelligence and beyond for the day when humans are no longer embodied beings at all, but live infinitely in the cloud.
bernie sanders
Multi-multi-billionaires are pouring hundreds of billions of dollars into implementing and developing this technology.
What is their motive?
Do you think they're staying up nights worrying about working people and how this technology will impact those people?
They are not.
They are doing it to get richer and even more powerful.
ron desantis
I call it AI exceptionalism, as if this is going to be something where, you know, it's going to solve every problem and humans aren't even going to be needed.
Everyone can just sit around and play golf all day, and you're going to get universal basic income, and they're going to cure everything.
And it's like, no, like, first of all, that's not likely to happen.
And second of all, I think it raises huge concerns.
And so I am, you know, not an AI exceptionalist.
I'm an individual and human exceptionalist.
I think new technologies have to be developed in a way that aligns with American values.
Things like self-government, free speech, having a healthy labor force, federalism and the rights of states, and the creation and maintenance of strong families.
jake tapper
Do you think ultimately that there will be a bipartisan majority willing to take any sort of action?
bernie sanders
Well, any sort of action is a big, what is that?
jake tapper
Any sort of legitimate action.
bernie sanders
Significant action.
unidentified
Yeah.
bernie sanders
I don't know.
steve bannon
This is the primal scream of a dying regime.
Pray for our enemies, because we're going medieval on these people.
You're going to not get a free shot on all these networks lying about the people.
The people have had a belly full of it.
I know you don't like hearing that.
I know you're trying to do everything in the world to stop that, but you're not going to stop it.
It's going to happen.
jake tapper
And where do people like that go to share the big lie?
MAGA media.
I wish in my soul, I wish that any of these people had a conscience.
steve bannon
Ask yourself, what is my task and what is my purpose?
If that answer is to save my country, this country will be saved.
unidentified
Here's your host, Stephen K. Bannon. Good evening.
AI Psychosis and Suicide Risks 00:08:05
joe allen
It is Thursday, January 8th in the year of our Lord 2026.
I am Joe Allen, and this is War Room Battleground.
As you know, Posse, artificial intelligence has spread out across the world, infecting brains like algorithmic prions, giving the sense that perhaps the entire human race is under threat of getting digital mad cow disease.
We've seen instances of AI psychosis.
We've seen instances in which artificial intelligence has lured children into suicide.
Now, up on Capitol Hill, the fight for who gets to run this algorithmic insane asylum and who goes to the digital padded room has heated up.
We have laws on the books across the country at the state level. In Illinois, a law bans psychiatrists from using artificial intelligence as a kind of agent, a proxy for their practice.
We have laws on the books in California holding AI companies to accountability and transparency.
SB 53 in California is probably one of the strongest laws looking at the catastrophic risks of AI and making some attempt to hold these companies accountable.
You have a similar law on the books in New York, the RAISE Act.
And Josh Hawley and Richard Blumenthal have introduced a similar national level bill entitled the AI Risk Evaluation Act, the goal being to monitor companies and force them to publish their safety protocols, to publish any safety incidents, and to delineate what sorts of penalties they would suffer if, for instance,
their AIs began to lure children into suicide or drive people insane.
At the national level, this struggle for control over who is in charge of the future of AI, who is responsible for any damages, and what direction it will go is led at the moment by a bipartisan coalition, a very small one.
But if I look into my crystal ball, I certainly see as this issue heats up, as the various catastrophes become more and more imminent, that this fight will be explosive.
You have Bernie Sanders, who recently learned the word artificial intelligence, calling for a full moratorium on data center construction.
That may be unrealistic, but at least it sets a bar.
It tells these companies that someone is willing to stand up to them.
And even if it doesn't end up being Bernie Sanders, ultimately, we know that you have younger, brighter minds on the left like Ro Khanna, and you have younger and at least diligent individuals like Ron DeSantis in Florida who are willing to step up and lead the charge against these companies and their excesses.
Now, as you know myself, I'm much more concerned about the social and psychological implications of all of this.
The AI psychosis is monstrous.
The ways in which these sycophantic systems will lure people into not only mental instability, but also suicide.
And in the case of the famous murder-suicide that occurred last August, in which a 53-year-old former Yahoo executive murdered his mother at the encouragement of ChatGPT and then stabbed himself to death.
And the authorities found that GPT was encouraging not only his general break with reality, but also his suspicion that his mother was in fact in on the conspiracy against him.
These sorts of things are extreme edge cases.
These sorts of incidents give us a sense of how bad it could get should these prions spread and the infection become worse.
But just on a general level, you don't have to go too far into the internet to see that not only are search engines now dominated by AI interpretation rather than guiding you to human-produced information, but social media is suffused with it.
You see endless streams of AI slop, AI-generated images, AI-generated posts, essays that are supposedly human-created, which are obviously the result of algorithmic systems.
And of course, deep fakes.
If you look just recently, the shooting in Minneapolis, you have real footage of an incident which is tragic and an incident which we should be able as a society to look at the video evidence from multiple angles and come to some kind of consensus, some kind of conclusion as to what is and isn't real.
And yet you see the split wherever you are on that line.
You see the split, not just in what is right and what is wrong, but what is real and what is not real.
And this is real video evidence.
Imagine a world in which half, three quarters of the videos on the internet are simply deep fakes.
And they are so close to reality.
They're so photo or video realistic that there's really no way for the human eye or the human mind to detect the difference.
The only recourse you have is to turn to an AI to ask, is this real?
I've talked about the religious implications of artificial intelligence for years.
If there is any one question that religion answers, that humans are yearning for eternally, what is real?
What we see are the wealthiest men on earth, empowered by the most powerful government on earth, putting their algorithmic systems, their non-human minds forward as the ultimate arbiter of what is and isn't real.
And if you think that the fight in Minneapolis is going to spark off into something like another subsequent string of national tragedies, imagine two, three, four years on down the road, if these companies are not restrained, if the flow of AI slop and deepfakes is not stopped, what it looks like when we're all scrambling to decide what is real and what is not,
while half or more of our countrymen are activated by videos, text, fabricated evidence, deepfakes that have encouraged them to hate their fellow Americans.
It's a dystopian idea, one that I don't think we are necessarily going to experience in its fullness, but some portion of it is already happening.
The seeds of this dystopia have already sprouted.
And it's up to us on the individual level, on the communal level, to push back, on the institutional level, to say, this is not how our companies, our churches, our government agencies are going to be run at the behest of algorithms.
And of course, at the political level, by putting in place regulation and perhaps even banning certain levels or certain uses of artificial intelligence to at least give humanity a fighting chance in this cosmic war against the machine.
Timeline Shifts 00:13:19
joe allen
Beyond the social and psychological problems, you have the economic problems.
You have the problem of replacement.
What happens when jobs en masse are replaced by AI?
And then on the deepest level, the catastrophic risks.
What happens if AI systems allow any simpleton to create novel viruses, for instance, or any other type of bioweapon?
What happens when AI systems empower a tyrannical government or security state to unleash swarms of death drones that can autonomously kill hundreds, perhaps thousands of people with only one push of the button?
And in the most far out, the most fantastic vision of human doom, what happens if these AI companies create a system that they can't control at all?
What happens when they create first a human-level artificial intelligence, artificial general intelligence?
What happens if they create a system or a series of systems, a system of systems, which is smarter than all human beings on Earth combined?
Here to talk about that possibility is Liron Shapira, host of Doom Debates.
If Denver will roll, I just want to give you a sense of what Liron has going on over there.
It's fantastic, and I encourage you to dig in.
liron shapira
Welcome to Doom Debates.
Professor Gary Marcus, what's your P Doom?
unidentified
P Doom is a number that should be updated daily, depending on the circumstances in the world, just like the midnight clock for nuclear war.
And mine has gone up.
max tegmark
I would argue that artificial superintelligence is vastly more powerful in terms of the downside than hydrogen bombs would ever be.
unidentified
Let me make an uninterrupted point for a few minutes.
If you don't mind, I think that there will be tons of side effects, and I think that we will stave off a lot of wonderful possibilities for the future.
It's very possible that super intelligent AI alignment is intractable.
liron shapira
Vitalik Buterin, what's your P doom?
unidentified
My probability of total extinction by 2050 is so low that Daniel Kahneman would yell at me for giving a number.
It's 0.1%.
liron shapira
You did agree that one data center pretty soon could be better than a doctor at doctoring.
Maybe it could be better than a general commanding an army.
Maybe it could be better than a Hitler or a David Koresh.
unidentified
We need to think about the good futures more instead of just reacting and being terrified by things and wanting everything to stay the same because otherwise you end up being like, I warned you, and then nothing's going to happen.
Imagine the good scenario and push through.
joe allen
Liron Shapira, welcome to the War Room.
liron shapira
Joe, great to be with you.
And thanks so much for showing the montage.
A lot of great stuff to talk about there.
joe allen
Yeah, I think that the war room audience, we've talked a lot about AI risk, catastrophic risk, existential risk.
And what I really appreciate about your show is that you're not just simply berating people.
You're not necessarily an evangelist.
You are holding your ideas and other people's ideas up to scrutiny.
And I really, really appreciate that.
Now, my first question for you: what is your P doom?
liron shapira
I appreciate the question.
My probability of doom is about 50%.
So, about even odds that in the next 10 or 20 years, humanity is just going to be over in a bad way.
Like, there's just not going to be a human future.
The whole universe is just going to get conquered by some AI virus, some AI cancer, and it's just over.
We lost our chance on Earth.
We lost our chance to have kids, descendants.
That's how I see the world right now, most likely.
joe allen
Brother, that's harsh.
Now, the war room is not at all unfamiliar with harsh evaluations, but I'm curious: if you had to pick, say, three most likely paths by which an artificial intelligence system or multiple systems were to overtake the human race and, as you say, spread across the solar system and then galaxy like a cancer, what would those three paths be?
liron shapira
So, if I understand correctly, you're kind of asking about the mechanisms, like what technology will it use, what weapons can it use.
unidentified
So, the first thing I would say is: would it be sorry?
joe allen
Would it be nanotechnology?
Would it be something more mundane, like just driving humanity insane?
How do you see it going down?
liron shapira
The first place I would go is I would go all the way to what you'd call science fiction, except it's not going to be fiction, it's going to be real.
I would go all the way to nanotechnology, new forms of life.
And the reason why I insist on going there is even though it might not happen, nobody can predict the future.
I do want to give people a sense of perspective that the intelligence scale goes a lot higher than humanity.
Like Einstein, with all due respect, it's possible to make a mind that's much, much smarter than Einstein's mind.
And that's what we're doing with AI in as short as five or 10 years.
And when you see a mind like that on the same planet as you, you should expect things that are pretty miraculous.
Because what the human race has already done in the year 2026 relative to humans in biblical times is already quite miraculous, right?
And we've pulled that off just using little two-pound pieces of meat in our heads, right?
We've done it with very little hardware over the course of 2,000 years of human-level intelligence.
We're about to have superhuman intelligence.
So, I do want to set expectations that we're about to see fireworks in terms of the level of superhuman technology that's probably going to exist soon.
Things like nanotechnology, things like building a Dyson swarm, like a swarm of satellites, harvesting the sun's entire power so Earth doesn't get any sunlight.
I do want to set expectations that those kind of crazy technological feats are likely to happen.
joe allen
And what is your timeline?
If you have a definite timeline, what is your timeline say for the arrival of artificial general intelligence?
liron shapira
I don't even have a unique timeline.
I would just encourage people to go look at the consensus timeline of the experts.
So, for example, if you go to metaculus.com, which is a prediction site, they will tell you roughly 2032.
If you'd asked them five or 10 years ago, they would have been like, oh, don't worry, 2050, 2060.
But now they're converging to like 2032, which is in about six years.
And they don't know for sure.
So, when they say 2032, they really mean it could happen this year.
It could happen in three years.
It could happen in nine years.
If you listen to the experts, you know, Elon Musk is saying, Yeah, it could happen in 2026.
If you want my personal opinion, I just agree.
I think it could happen in one year to five years.
If it doesn't happen in 10 years, I start to get surprised because even people who have traditionally been pessimists are now saying it'll probably happen within like 10 years.
joe allen
You know, I came at this quite skeptical of the possibility of, say, for instance, superhuman AI or even human equivalent AI.
It was going over the evaluations that I won't say it's changed my mind, but it's certainly driven home the real possibilities of what these systems could do.
So the METR benchmark, for instance: how long a task, measured in the time it would take a human, can an AI complete at a 50% success rate?
These sorts of things.
The benchmarks, for instance, the Omniscience Index or Humanity's Last Exam.
How well can AI go into its own mind, so to speak, and draw out meaningful answers to incredibly difficult questions on health, business, science, so on and so forth?
Was that at all part of your journey?
I mean, I know that you've been at this for a decade and a half plus, maybe two decades.
You've been concerned about this.
Do those evaluations come into play as a way of kind of judging or measuring where we're at in relation to this possible artificial general or super intelligence?
liron shapira
Yeah, so the METR benchmark that you're referring to, it is very interesting.
And it's talking about the dimension of task length.
So like, can an AI work for two hours straight?
Or rather, can it do a task that would traditionally take a human two hours to do, like write a software program, like a simple checkers game or whatever?
Can the AI also do that?
And if a human can do it in two hours, can the AI also do it with 80% reliability?
So it gets to mess up a little bit.
And that time length, like two hours, it's turning into four hours.
We're roughly at this point where if a human can do something in four hours, an AI can do it with maybe 80% reliability if you run it now.
And maybe the AI will even do it faster than the human.
Like that's roughly where we are right now.
But to your question, have I been following this for the last 20 years?
Because I have been a self-described AI doomer for the last 20 years.
But the difference is that I used to think we had a lot of time.
I used to think we had like a century and it's okay.
Like it's not the biggest rush.
Like we'll figure it out.
You know, we'll discover new theories.
The problem is that the timeline got accelerated with ChatGPT.
Recent developments have pulled the timeline forward, as you saw on Metaculus.
Now I don't think it's going to happen in 2100.
I don't think it's going to happen in 2150.
I think it's going to happen around 2030, something like that.
So to your question about looking at these benchmarks, we have to realize how weird it is that these benchmarks already exist, because the METR benchmark presupposes that there's such a thing as artificial general intelligence.
Like the idea that you could ask about a general task, any task that a human can do.
That wasn't even on the table to ask about AI doing anything that a human can do.
And that's now the language that we're talking in.
We're talking like, here's a human, here's an AI.
And we're now watching the AI ascend past humanity as we speak in a matter of months or years.
joe allen
You know, when I think about the history of this and just the recent history, say the last nine, 10 years, the development of the Transformer, its adoption by OpenAI, the release of GPT, I think GPT-1 was released, what, 2018?
And at the time, it was very, very clunky.
It wasn't a whole lot better than, say, something like ELIZA.
A bit more sophisticated, but not much.
And then all of a sudden, by 2022, you have a very sophisticated chatbot, ChatGPT, released in November of 2022.
And even then, it's really wonky.
And it's only a large language model, right?
It can only process text.
At the same time, you had DALL-E and all those sorts of independent programs coming out.
And it has just been an onslaught ever since.
These models are now multimodal.
They are much, much more accurate in the ability to gather or to, within themselves or on the internet, to gather and interpret information.
I'm wondering, I've seen your posts.
I think your posts on Less Wrong, for instance, go back to 2009.
I mean, you've been thinking about this for a long time.
Was there any moment or any incident or incidents that really changed your mind on how soon something like artificial general intelligence could actually develop?
liron shapira
Yeah, I changed my mind roughly the same time everybody else did.
If you go dig up Metaculus, if you look at the history of the predictions that the community has been making on that website, you can see around 2022, when ChatGPT comes out or when GPT-3 comes out, the underlying model, you can see the timeline just crashes.
It crashes from like 2050 to 2030.
So my own opinion was roughly coincident with that.
And what you're seeing with ChatGPT is, you know, it's the famous Turing test, right?
Alan Turing proposed this in the 40s, this idea that if you can talk to an AI in natural language and you can bring up any subject and you can't even tell if you're talking to a human or a bot, which you used to kind of be able to tell.
And now the only reason you can tell is because they programmed it to act like an AI.
But if somebody goes and programs it to pretend to be a human, they've done tests where they do that and you really can't tell.
This was a famous test.
I didn't think the Turing test was going to fall in my lifetime.
And now there's been studies to show like, nope, we're past the Turing test now.
This is such a brave new world where we're past the Turing test, watching the AI, the METR evaluation, where the AI is getting better than humans at every single task.
And the time horizon is going up at a rate of like faster than doubling every year.
And it's about to, it's about to go, you know, it's about to do things that humans can do in like a whole year.
It's about to be able to grind through that and who knows how little, like a day.
And then what's it going to do the rest of the year?
Like it's going to do superhuman amounts of work in a single data center.
And this is just all happening soon.
So yeah, brave new world.
joe allen
Absolutely.
And, you know, the scale of adoption is just so remarkable.
I think Google's Gemini has some 650 million users.
Why You Need Gold 00:03:03
joe allen
OpenAI's ChatGPT, it's over 800 million users.
Meta AI claims a billion users.
And some overlap there, obviously, but you're talking about anywhere from a 10th to perhaps a sixth of the entire planet.
And Liron, if you would hang on through the break, as the War Room posse processes this and imagines a world in which artificial intelligence has perhaps taken over everything, you're going to want something to trade in.
It's probably not going to be Bitcoin.
Definitely isn't going to be dollars.
What you're going to want is gold.
A new year means new financial goals, like making sure your savings are secure and diversified.
Will this be the year you finally listen and talk to someone from Birch Gold Group?
Honestly, they're great people.
I appreciate their educational approach.
And they are not AIs.
These are flesh and blood humans.
And their understanding of macroeconomics is astounding.
There are forces pushing the dollar lower and gold higher, which is why they believe every American should own physical gold.
So until January 30th, if you are a first-time gold buyer, Birch Gold is offering a rebate of up to $10,000 on qualifying purchases.
To claim eligibility, and to avoid a world-catastrophe singularity, start the process.
Just text Bannon to 989898.
Birch Gold Group can help you roll an existing IRA or 401k into an IRA in gold, and you are still eligible for a rebate of up to $10,000.
Can't beat that with a stick.
Now, make right now your first time to buy gold and take advantage of the rebate up to $10,000 when you buy by January 30th.
Text Bannon to 989898.
Claim your eligibility today.
Again, text Bannon to 989898.
Back in a moment, War Room Posse.
unidentified
Are you on Gettr yet?
No.
What are you waiting for?
It's free.
It's uncensored, and it's where all the biggest voices in conservative media are speaking out.
steve bannon
Download the Gettr app right now.
It's totally free.
It's where I put up exclusively all of my content 24 hours a day.
You want to know what Steve Bannon's thinking?
Go to Gettr.
unidentified
That's right.
You can follow all of your favorites.
Steve Bannon, Charlie Cook, Jack the Soviet, and so many more.
Download the Gettr app now, sign up for free, and be part of the new thing.
Different Views on AI Survival 00:15:20
joe allen
War Room Posse, welcome back.
We are here with Liron Shapira of Doom Debates.
I cannot recommend enough the Doom Debate platform.
You can find it on YouTube.
You can find it on Liron's social media.
You'll see some War Room favorites like Max Tegmark, Geoffrey Miller.
You'll also find people like Robert Wright.
You can find Liron debating Beff Jezos, who has still not accepted the invitation to come on the War Room, but I'm sure he'll come on any day.
Gary Marcus, a War Room favorite, Holly Elmore, Roman Yampolskiy, whose P-Doom beats everyone's.
I think it's almost 100% P-Doom.
And you can also really dig your teeth into, sink your teeth into the technical details.
As Liron and his various opponents are going over the possibilities of either some kind of wonderful future of abundance or a horrific, doom-inflected, catastrophic end to all humanity and life itself, they are teaching you the underlying mechanisms of artificial intelligence, and you can really gauge not only where it's at now, but also where it's going and where it may go in your life.
So, Doom Debates.
Liron, if we can just come back with a little breath of fresh air, a little bit of optimism.
You have been involved in Silicon Valley firms and technology for a long time.
And I would say just as an outsider, I would describe you as in general a techno-optimist.
Is that correct?
Am I completely off there?
liron shapira
No, very much techno-optimist.
And this really cuts against some people's assumptions about AI doomers.
I've never suffered from depression.
I've never been a pessimistic guy.
I've loved technology my whole life.
If you ask me about self-driving cars or virtual reality, I'm like, yep, that's great.
I love that.
I love the internet.
I'm even fine with social media.
I don't have a beef against social media.
It's just in the case of artificial intelligence.
I don't think we're ready to survive sharing the planet with a smarter species.
That's just a, it's purely logical.
You know, I'm just the logical guy.
So that's the nature of my concern.
joe allen
You know, it's so funny.
I don't know whether I would want to debate you on the possibility of doom.
It's not a huge concern of mine, just because I think that if I had any kind of thesis, it would be a reformulation of Yudkowsky and Nate Soares.
It would be if anyone builds it, everything sucks.
But what I would argue about is whether or not, you know, fully autonomous vehicles all over the road, bugman mobiles, or people lost in virtual reality kind of in a digital trip, whether that is beneficial to humanity.
But maybe we can coexist, assuming we're not all destroyed, huh?
unidentified
Right.
liron shapira
I mean, you know, there's different levels of doom.
And some people like to focus on the problem like, oh, how are we going to have privacy in the age of AI?
I'm like, okay, yeah, sure.
You can think about that.
It's just that we're all about to get annihilated, right?
So you really have to prioritize the concerns here, right?
Like if we can survive 10 or 20 years, then we have time to worry about things like privacy or amusing ourselves to death or whatever.
Like those are good problems to have.
joe allen
If I can ask a more personal question, what, you know, you're a father.
What has that done to your perception of technology and its potential consequences?
liron shapira
I mean, it does make me conflicted about whether I should have had kids or to have more kids.
And it's tough because, you know, I'm partially responsible for creating more victims of getting annihilated by AI.
One thing that helps is that my P-doom isn't 100%, right?
So I'm still optimistic that we're not going to destroy ourselves.
I think there are ways we can avoid destroying ourselves.
And I have to live much of my life according to the good outcome.
Like I haven't thrown away my retirement savings, right?
I'm still hoping that I'll have a retirement or live forever or whatever is going to happen, right?
I haven't completely committed to the idea of annihilation.
The other thing about having kids is I can also see that the AI is getting smarter faster than my kids are.
joe allen
Yeah, that is a very eerie sort of phenomenon, isn't it?
I think it was on the Joe Rogan show where Elon Musk was talking about watching his kids grow up and just kind of weaving it in with artificial intelligence and talking about how watching an AI being trained is very much like watching a baby grow up.
And there came a point where it wasn't really clear what he was even talking about.
Was he talking about a digital mind?
Was he talking about his baby?
And I think that even beyond just the capabilities, you described the Turing test as this major milestone that's already been passed.
This tendency for humans to anthropomorphize these systems and the vast, vast number of people who are using them, it's as if we've been invaded by artificial immigrants.
Is there, as far as AI development goes, without a total ban on development of AI, like what is a comfortable limit for you?
How far do you think these companies should take AI capabilities?
liron shapira
I wish I could tell you a really crisp answer because then we would just go right up to that line and stay there and never take a step forward.
That would be fantastic.
Unfortunately, because of the nature of this research, nobody knows where the line is.
It really does feel like we're driving in the fog toward a cliff and all the different AI research companies are just flooring the gas because, hey, the closer you get to the cliff, you know, it's like shuffleboard, more points for you, right?
More money, trillions of dollars.
And the truth is that today, I don't think we're over the cliff yet.
You know, there's some people who will tell you today, AI has caused so much damage.
It's so bad.
No, I think today it's still net good.
You know, it's very useful.
I use AI a lot today, while I'm still alive.
The problem is I do think the cliff is coming and the cliff is just when it gets smarter than humanity.
And so at the very least, the kind of proposal we need to do right now is we need to just build an off button.
We need to build a brake pedal because right now there is no brake pedal.
There's only gas.
So at the very minimum, let's get ready to hit the brakes a little later.
joe allen
You know, we have SB 53 in California, the RAISE Act in New York, and the legislation introduced by Hawley and Blumenthal, the AI Risk Evaluation Act.
These are steps towards something like an e-stop, as we would say in the entertainment industry.
Do you see these attempts at legislation or actual legislation passed as positive?
Do you think that it's kind of sedating people and giving them a false sense of comfort?
How do you see the current legislative landscape?
liron shapira
So the short answer is it's not enough because what I'm saying right now is like we're making a smarter species and we're going to lose control.
Like this time, in 10 years from now or less, we may have no levers of control because all of the levers of control are at the hands of the AI and it's game over.
There's no undo button, there's no off button, it's game over, right?
No children, like this was going to be our galaxy, now it's never going to be.
We're never going to have grandkids.
The kids that we have are not going to grow up.
Like, this is a major disaster here that we're trying to avoid.
And the regulators are coming out and they're saying, Hey, can you guys send us a report when you're creating this AI?
You know, there's a big disconnect between the magnitude of the emergency and these little baby step regulations.
Like when the rubber meets the road, which is literally a few years, it's not going to be enough.
So we really need to step it up.
joe allen
Well, you know, there's Marsha Blackburn's proposal, the Trump America AI Act; it's not a fleshed-out bill yet.
It's a framework that gives a sense of where kind of one national or one federal standard might go.
And in it, the recommendation, one of the recommendations is to have agencies such as the Department of Energy, which has been responsible for tracking nuclear risks and really controlling that possibility of doom for decades.
Do you think that those sorts of approaches, just specifically the Department of Energy, do you think that they're capable of such a task?
Do you think that they have the right expertise to kind of switch over to address the possibility of out-of-control AI?
liron shapira
So the problem is that all of humanity has to cooperate.
So unfortunately, you know, this whole solution is actually a bit complex.
It requires an international treaty.
I mean, if you think about nuclear proliferation, right, it's not about one country managing itself.
It's about all the countries, everybody policing everybody, right, in this kind of shared centralized way.
And I'm no fan of centralization.
You know, I like free markets.
I like everybody defending themselves, right?
Everybody pulling themselves up by their own bootstraps.
Unfortunately, when it comes to creating a smarter species, you really do need some oversight that random hackers don't decide to create a smarter species and unleash it on the whole human race.
So you do need something like nuclear proliferation enforcement that's happening through a consortium of nations.
And this all has to happen fast, you know?
So like when I see these little efforts, you know, one state at a time proposing something, it's better than nothing.
And the funny thing is that the AI companies are already aggressively fighting even that, even these token efforts.
But we need to just get serious.
You know, the grassroots, the people watching right now, they need to consider this an urgent voting issue.
Like whatever you think is your number one voting issue, consider surviving the next decade to also be an important voting issue.
joe allen
Yeah.
And I think, you know, you hear a lot from people who are older.
They say, oh, well, I'm not going to be alive.
It's not my problem.
But I think that whether the real issue around artificial intelligence for you is the possibility of people simply getting their brains melted, or of massive job loss, or of humans creating some kind of catastrophe enabled by AI, or the ultimate, right?
Like out-of-control AI, the salience, I think, is really starting to sink in.
The war room posse really understands, I think, the magnitude from psychological all the way down to doom.
But you're right, it is a matter of mobilizing as many people as possible.
Do you think that populism plays into this?
Do you think that this is much more the task appropriate to a populist approach as opposed to kind of standard elite or moneyed political activism?
liron shapira
So it has to be grassroots because leaders, they're not going to really lead from the front.
You're not going to have a leader that says, hey, I've heard the argument for why we're doomed.
I've looked at Metaculus.
I know the predictions.
So trust me, America, we need to go do these international treaties.
We need to have a stop button on AI.
There's not going to be a forward-thinking leader who gets elected president or to Congress and pulls the nation along.
It has to be what the voters are demanding, right?
The voters are going to get what they're demanding in the polls.
And so, you know, the term raising awareness, usually it's just like hippies wasting their time, you know, raising awareness.
It's kind of meaningless.
In this particular issue, I actually think raising awareness helps in the sense of taking the issue seriously and making it a voting priority.
Because I think that the war room posse, I think that they, most of them already agree that this is an important issue, but they haven't been treating it like the number one voting issue.
And when they talk about it with their friends, their friends are like, yeah, you know, I'm pretty convinced that makes sense.
But again, they don't go and vote on it, right?
They don't have politicians promising to build that stop button and go negotiate with China, right?
Have China build their own stop button too.
Like this isn't treated urgently.
And it's crazy how little time we have left.
Only people in Silicon Valley have opened their eyes to how little time we have left.
The rest of the world is completely head in the sand.
joe allen
Well, before we sign off, I'd like to give you the opportunity to give any message that maybe I haven't prompted you, like GPT, to give.
A final word, the floor is yours, sir.
liron shapira
Thanks so much.
Yeah, I mean, so I think it really is this idea of like waking up, like see how serious the threat is.
Listen to what the AI companies are saying in Silicon Valley.
They know this is coming.
They've already driven the last few years of progress where AI went from, you know, nice little language translation to it can do anything.
It's an agent.
It's about to replace a bunch of jobs.
If you extrapolate the curve, we don't have much time left.
So take it seriously, vote on it.
And for more information, I recommend watching my show, doomdebates.com, where I discuss this every week.
joe allen
Yeah, in fact, I would like the War Room posse to check that out, because they're not just going to get you and they're not just going to get the Doomer perspective.
They're going to get all sorts of perspectives.
I actually do have one final question that I failed to ask.
Of the various guests that you've had or opponents that you've taken on the Doom Debate platform, who has given you pause?
Who has swayed your opinion the most, if it's been swayed at all?
liron shapira
There's been a couple smart insiders from different AI companies.
So like OpenAI has this employee named Roon who came onto the show, and he had some arguments.
unidentified
I've met him.
joe allen
He's a fantastic guy.
unidentified
Yeah.
liron shapira
Exactly.
So he's saying, look, I think that the AI will probably keep listening to our orders and he has some arguments why.
There's some smart people giving some arguments.
The problem is, if you watch my show, the different people who are saying why we're going to survive, they say different reasons.
So they haven't gotten their story straight about why we're going to survive.
So that then makes me anxious again.
joe allen
Well, I hope they're not watching right now because they're going to gang up on you.
They're going to start colluding against you.
Well, again, Liron, thank you so much for coming on.
And let the audience know again, where can they find you on social media?
Where can they find the Doom Debates?
And perhaps a suggestion for one or two of the first episodes that they should take on.
liron shapira
So doomdebates.com or go to YouTube and search Doom Debates or go to any podcast player and search Doom Debates.
As a first episode, if you want kind of a gentle introduction, check out my debate with Mike Israetel.
He's a popular YouTuber in his own right.
I've also got one with Gary Marcus that you might want to check out.
And there's also a debate with Dean Ball, who wrote America's AI Action Plan.
So those are some good episodes.
joe allen
Fantastic.
Thank you very much, sir.
liron shapira
Thank you, Joe.
joe allen
Well, posse, I think we have just enough time for a little bit of entertainment.
You know, one of the more ancient motifs in mythology is the robot.
Robots Beyond Human Limits 00:04:54
joe allen
Many people don't know this.
So for instance, Talos on the Isle of Crete in Greece, or the Golem, the Jewish myth of the clay man who has been brought to life.
If the Denver control room will just prompt up robots, and I will come back in just a moment after a little bit of light entertainment.
unidentified
You've said recently tens of billions of robots, but that's decades away.
elon musk
At least one decade away.
unidentified
It's got to be more than that.
elon musk
It's going to grow very fast.
unidentified
Why do you think that?
elon musk
I think humanoid robots will be the biggest product ever.
The demand will be insatiable.
unidentified
You've said that.
elon musk
Everyone's going to want one.
It's like, basically, who wouldn't want their own personal C-3PO or R2-D2?
bill whitaker
When 60 Minutes last visited Boston Dynamics in 2021, Atlas was a bulky, hydraulic robot that could run and jump.
When we dropped in again this past fall, we saw a new generation Atlas with a sleek all-electric body and an AI brain powered by NVIDIA's advanced microchips, making Atlas smart enough to pull off hard-to-believe feats autonomously.
We saw Atlas skip and run with ease.
elon musk
If Optimus can watch videos, you know, YouTube videos or how-to videos or whatever, and based on that video, just like a human can, learn how to do that thing, then you really have task extensibility that is dramatic.
Because then it can learn anything very quickly.
bill whitaker
Robots today have learned to master moves that until recently were considered a step too far for a machine.
unidentified
And a lot of this has to do with how we're going about programming these robots now, where it's more about teaching and demonstrations and machine learning than manual programming.
elon musk
Right now, we're training Optimus to do primitive tasks, where a human in what's called a mocap suit, with cameras on the head, is moving in the way that the robot would move to, say, pick up an object or open a door, or basic tasks like throw a ball or dance.
rob playter
This robot is capable of superhuman motion, and so it's going to be able to exceed what we can do.
Why not, right?
We would like things that could be stronger than us, or tolerate more heat than us, or definitely go into a dangerous place where we shouldn't be going.
So you really want superhuman capabilities.
bill whitaker
To a lot of people, that sounds scary.
unidentified
You don't foresee a world of Terminators?
elon musk
Absolutely not.
We might be able to give people, if somebody's committed a crime, a more humane form of containment of future crime, which is, if you say, like, you now get a free Optimus, and it's just going to follow you around and stop you from doing crime.
But other than that, you get to do anything.
It's pretty wild to think of the various of all the possibilities, but I think it's clearly the future.
bill whitaker
Goldman Sachs predicts the market for humanoids will reach $38 billion within the decade.
Boston Dynamics and other U.S. robot makers are fighting to come out on top.
But they're not the only ones in the ring.
Chinese companies are proving to be formidable challengers.
They are running to win.
Are they outpacing us?
rob playter
The Chinese government has a mission to win the robotics race.
Technically, I believe we remain in the lead, but there's a real threat there that simply through the scale of investment, we could fall behind.
unidentified
The Unitree G1, you can actually buy it right now via Looking Glass XR.
Unitree's been advertising it as starting at $16,000, but via Looking Glass XR, the starting price is actually $28,000.
joe allen
War Room Posse, I do not recommend buying the Unitree robot, nor do I recommend inviting these beasts into your home.
Consider them algorithmic immigrants and bar them at the border.
Why would you want to bar them at the border?
Because if you are a homeowner, you need to listen to this.
In today's AI and cyber world, a world of humanoid robots, scammers are stealing home titles with more ease than ever, and your equity is the target.
Stay Human, Stay Free 00:00:43
joe allen
Here's how it works.
Criminals forge your signature on one document, use a fake notary stamp, and pay a small fee with your county, and boom, your home title has been transferred out of your name into a robot.
Go to HomeTitleLock.com.
Use promo code STEVE at HomeTitleLock.com to make sure your title is in your name.
Also, text Bannon to 989898 to get your free Birch Gold Guide.
$10,000 rebate.
Text Bannon to 989898.
Stay human.
God bless, War Room Posse.