Stephen K. Bannon and MIT professor Max Tegmark expose the transhumanist agenda behind AI development, revealing how oligarchs like Elon Musk and Peter Thiel seek to replace humans with obedient robots under the guise of progress. They contrast this elite ambition with 85% public support for human control, criticizing the lack of regulation compared to basic health codes and alleging that tech leaders use China as a pretext to avoid accountability. The segment concludes by urging resistance against "techno-feudalism" through the Pro Human AI Declaration, warning that unchecked recursive self-improvement could lead to human extinction unless a government kill switch is enforced. [Automatically generated summary]
It's Wednesday, 29 April, the year of our Lord 2026.
Pete Hegseth is getting grilled momentarily on Capitol Hill as the Secretary of War.
We've got, I guess, the confirmation vote coming out of committee today of Kevin Warsh for Federal Reserve Chair.
We're going to get to all that momentarily down in Tallahassee, Florida.
There's also going to be a vote on the redistricting map 24 to 4.
Caroline, our own Caroline Wren, is in Tallahassee, and we'll be going to her live at 11.
The Attorney General of these United States got lit up this morning, I think on CBS by Major Garrett, about why is Jack Posobiec not charged with trying to endanger the president?
This would be Biden, for saying, I guess, "8646" back years ago.
Jack Posobiec is going to be with us at 11.
Caroline Wren is going to be with us at 11.
So we're going to get caught up in everything that's going on today.
Another crazy, insane day in the Imperial Capitol.
However, I want to connect what's happening in Tallahassee right now, not related to the redistricting, with the lead story on Axios this morning and something that is absolutely urgent.
I was convinced that it was developing much more rapidly than other people believed.
And I was fascinated by it.
So I decided I'm going to actually understand this issue better and research it and see, among other things, is it possible to control AI that's smarter than humans?
We had a paper on it that we had accepted to the biggest AI conference in the world last December, where the tentative answer is no.
If we build machines, if we build basically a species of super intelligent robots that are way smarter than us, faster than us, stronger than us, that can build robot factories that build more robots, you know, it seems pretty obvious that we might lose control.
You just go down to the zoo here in DC and look at who's in the cages.
It's not the most intelligent species because intelligence kind of confers control.
But our paper found, with a very nerdy calculation, that, yeah, the best ideas people have for controlling such a thing don't work.
The reason they gave me tenure at MIT, I think, is because I had done a bunch of very nerdy work with experimental data, connecting it with theory to figure stuff out about our universe, how much dark matter there is, stuff like that.
But the thing you mentioned about math, you know, I love math.
And I think it's become more and more clear that math describes a lot, not just about how a bottle will move if I throw it, but about a lot more in our world, including information processing.
And it's exactly this idea that's given us artificial intelligence.
You know, the core idea that's given us the AI revolution is this idea that intelligence, the capability to accomplish goals, is fundamentally about information processing.
And it doesn't matter whether the information is processed by carbon atoms in neurons in brains or by silicon atoms in our machines.
And this is precisely what's so terrifying about the trajectory that a lot of Silicon Valley investors are trying to put us on now, where they've started to realize that, you know, maybe we don't need these workers to get so much income.
Maybe we can build machines that replace them.
Maybe we don't need human girlfriends.
Maybe we can build AI girlfriends that can outcompete them on the market and make money off of that.
Maybe we don't even need so many humans altogether.
We can have all these robots.
Maybe we don't need democracy.
Maybe we tech bros can, through dominating AI, dominate the power structure of Earth.
Do you notice any of that from the tech bros or the oligarchs?
Since you come at this from theoretical physics and information processing and not from computer science or EE or whatever, are they naturally dismissive of anybody?
The AI crowd, when people who are not part of the inner sanctum try to put up flares, do they naturally say, well, they're all good guys, but they don't really understand what's going on?
There are a lot of scientists who really, really want to cure cancer.
I think that's quite sincere.
But when they talk about building super intelligence, which is defined as AI that can make all human workers entirely obsolete, that's not driven by a desire to cure cancer.
We don't need super intelligence to cure cancer.
We've already cured 75% of all cancers, and we're on track with narrower AI tools to cure all of it if we continue.
No, the super intelligence thing is driven by the desire for power and money.
And in some cases, also by transhumanist ideology.
And transhumanism, we're going to get to that in one second. The reason we're restructuring the show and having Max here at the beginning: when I mentioned Tallahassee, Governor DeSantis, as I said yesterday, called this special session to accomplish two objectives.
Number one was to do the redistricting map, which Caroline Wren has just informed me, I believe, is going to pass sometime this morning during the first part of the show.
We're going to go to Tallahassee at 11, get Caroline Wren live.
The other part that Ron DeSantis, and Ron DeSantis, as I said, as people know, I was not a big fan of his running against President Trump.
In fact, I think this show in War Room was the number one reason he was out of the race in 60 days or whatever, 90 days.
Governor DeSantis understands the peril of artificial intelligence in the states, because the oligarchs have just gone out of their way to try to jam this AI amnesty into a bill somehow, and by the way, it may happen in the reconciliation.
They're everywhere trying to jam this in.
This show has defeated it three times.
Governor DeSantis realized we have to do it at the state level.
I'm here to inform you, Caroline Wren tells me, that because Mark Andreessen has basically bought and paid for the Speaker of the House in Florida, the bill will not even come to the floor.
I think the Senate passed it yesterday or is very enthusiastic about it, but the Speaker of the House, I think his name is Perez.
He's Mark Andreessen's man; the super PAC they've got, they bought and paid for him.
In this effort you've had over 10 years, seeing the perils of this, you put out, I guess a couple of months ago now, this kind of overarching construct of putting humans first, that we have to make humans the center of this.
Walk us through this because I want to show some polling at the end of it.
But it's kind of common sense that, you know, like my parents would believe in.
We're kicking off all these commemorations of the 250th.
It would be what, you know, the Enlightenment and humanists like Thomas Jefferson and John Adams, the people of the revolutionary generation, would be 100% behind.
In fact, I think they'd be very proud in the 250th year we're coming out with a proclamation that really talks about putting humans first.
You want to describe that?
And if we can put that up on, not the polling yet, but if we can put that up on the screen.
Anyone listening can also go to humanstatement.org and read it.
It's the Pro Human AI Declaration.
Honestly, the inspiration for this, you have a little bit to do with it, because it started becoming very striking to me, Steve, that there was incredibly broad support in America for these ideas.
For a long time, I used to call this the Bernie-to-Bannon coalition, saying, hey, you know, yeah, curing cancer is great.
We can do a lot of wonderful things with AI to strengthen our economy and strengthen our country and strengthen our military, but let's make sure that it's in the service of human beings, not in the service of some machines or an oligarch that owns them.
And so what we did was a long process of bringing together people from the MAGA right to the Bernie left and everything in between.
To see what, if anything, did these people all agree on?
And it culminated with a conference in New Orleans, which was just remarkable.
You might think at the end of this they would come out and say, We agreed on nothing.
They agreed on 33 principles grouped into five themes.
We're going to show the polling when we get back from the break.
85% of the American people basically agree that this is common sense and you have to do this.
The oligarchs themselves, and Andreessen's not one of the biggest oligarchs, but he and Karp and Peter Thiel are taking the lead on the political side.
They're setting up these PACs, and what they're trying to do is very simple.
Whether it's the children that are being abused, or the children and their parents that are being overwhelmed by artificial intelligence, or, you know, people not being able to monetize their intellectual property.
There's all types of issues.
The key issue is that there's no transparency.
Joe, Mike Allen and Jim VandeHei today on Axios, the lead story, have an article that I want Grace and Mo and Elizabeth to put up in the chat and everybody to see, because it'll scare the hell out of you.
It's about the acceleration, at an accelerating rate, of what's happening since the Mythos preview has come out.
So, when there is an issue like this that 85% or even 95% of Republicans and Democrats agree on, right, the only strategy that lobbyists and oligarchs can use to fight this is to not talk about the issues and start blowing smoke instead, and do, like, ad hominem attacks, just attack the professors.
VandeHei and Mike Allen, who have done an excellent job.
As you know, Caputo is the go-to guy they leak to in the White House, right?
You can tell that's like he's taking dictation.
Mark Caputo, good man, but taking dictation from certain elements of the White House.
VandeHei and Allen have done the single best job in general media of warning us.
And the reason is, even though they're completely sponsored by corporations, they spend a lot of time on this.
The article is "Behind the Curtain." We've been warned.
And pour another pot of Warpath Coffee and read it, because it ought to scare you to the core of your being.
And what we need to do is make sure that these warnings, as Joe Allen says, are not treated like the morning sports data on who's ahead and who's behind in Major League Baseball.
Continue.
You were about to say about the politics of this and getting people focused on this.
Well, Mike Allen and Jim VandeHei, and those are corporate guys, they're not fire-breathing populists, they tell you today that it can't be controlled, because you can see the direction it's going, and what we've talked about, what they've hit on on their path to today.
And people like Anthropic are telling you, the leading people in the labs that are truthful are saying, the whole internal industrial logic of it is that humans won't be in control.
In fact, you have this thing now of recursive programming, which is the big fear.
Folks, we're about to be hit, according to Allen and VandeHei and other executives I've talked to in these companies, with what's called recursive self-improvement.
So, recursive self-improvement means the machine itself is actually writing all the programming and becoming better and better, not over days but over hours, correct?
So, fun fact: all the top American AI CEOs, Sam Altman from OpenAI, Dario Amodei from Anthropic, Elon Musk from xAI, Demis Hassabis from Google DeepMind, signed a statement in May 2023 saying this could cause human extinction.
This has been kind of memory-holed now.
The people building the very tech are warning it could end humanity.
And it wasn't just them, it was so many top AI researchers and others.
And the idea is actually pretty obvious why this could go wrong.
As I mentioned earlier, if you go down to the zoo here in DC, which I actually did with my three year old last month, look who's in the cages.
It's not the people.
It tends to be that the most intelligent entity around gets in charge; intelligence gives power.
So the godfather of the whole field of AI, Alan Turing, said in 1951: if we build these machines that can totally outsmart us, then we should expect them to take charge.
Now, the way to keep charge as humans is, of course, to make sure that we don't let them improve themselves, that we always have a human in the loop.
And for that reason, a bunch of the leading AI folks actually signed on to a thing already in 2017, saying recursive self-improvement is really, really risky.
It is to AI what gain-of-function research is to biology, except you're now dealing with really intelligent things that are making themselves smarter.
And if they say to you, hey, Steve, you know, sorry, found 16 rats in your kitchen, no sandwich sales for you, buddy, you could turn around to the guy from the government and be like, you know, actually, I'm not going to sell any sandwiches.
I'm just going to release AI girlfriends for 11 year olds.
And I'm going to release an AI system that might teach terrorists how to make bioweapons.
And I'm going to release super intelligence, which I don't know how to control.
You know, the guy from the government would have to be like, Okay, fine, Steve.
Just don't sell any sandwiches.
That's how messed up it is.
And clearly, the reason why the AI industry is so ferociously fighting to keep it this way is because they want the control.
They want the control.
And this gets to this broader issue we talked about before with power being very much at the core of what's really driving these forces.
Well, this is the point, this is what President Trump said about the kill switch.
These, whether it's Elon Musk, that we know he wants to do this, or Altman or whatever, the frontier labs, and Andreessen and Karp, and Karp is building a 21st-century surveillance state right now on government money.
These people are beyond dangerous.
They, and I realize this audience has huge problems with the government, right?
I'm not here giving a program, but at least that is as close as you've got to representation versus the oligarchs.
The oligarchs want to go to techno-feudalism.
They do not believe in the common man and woman.
In fact, you hear them say all the time that Washington's a center of mediocrity, and they couldn't care less about the populist movement.
They see the populist movement as the greatest danger to themselves.
These guys are techno-feudalists.
They don't believe in this republic.
They don't believe in representative government.
They don't believe in the constitutional republic.
There's this whole thing about markets and just having a company.
They're like going back to Italy in the Renaissance, where you have basically city states, where you have Anthropic here, and you have Elon Musk here, and you have Google here.
They are a feudal master and everything underneath them, and they want to have as few humans in that process as possible.
This is the whole concept of going from 8 billion people down to 500 million for what they call the appropriate carrying capacity of the planet.
Well, if you look at past autocracies, Pharaonic Egypt and forward, what's often gone wrong ultimately for the despot was that there were some other humans who rebelled against them.
So it's much more convenient if you can replace a lot of those humans with robots, which are just programmed to be fully obedient.
There's no doubt in your mind, knowing these companies and the researchers, and talking to people, that, besides the happy talk they put up, and all it's going to be, and we're going to give you some universal basic income, we're going to give you, you know, 50,000 a year to hang around and play video games because you don't have any meaningful work, your strong belief is that that is exactly the intent of these oligarchs?
And, uh, you know, if the U.S. government wants to shut down some nuclear reactor because it's getting close to blowing up, there is a kill switch, an emergency shutdown procedure, which will do it safely.
No brainer, right?
And, uh, the RAND Corporation put out a detailed proposal last year for how there should be an emergency response system like that for data centers.
What if some hackers take over a big data center from OpenAI or Anthropic and start doing horrible attacks from there?
Surely it would be nice if the government could just get it shut down.
Okay, Pete Hegseth has given his opening statement.
Supreme Court of these United States has just reversed on racial gerrymandering.
We're going to have that at 11, blockbuster news that we're going to try to get to, Grace.
We've been working to make sure it's not too late to do this redistricting.
So, Democrats suck on that.
I guess I shouldn't say that today.
You're gracious enough to start here in the War Room.
You're going to end with Bernie at his town hall tonight.
But this is a fight in the trenches.
Of course, we won this one in Florida, as we'll announce here momentarily, top of the hour.
But we're losing on something that's overarching to everything.
If we don't get the AI right, nothing else is going to matter.
Axios, talk to me about this recursion, and what we're talking about with the tempo and the kill switch, which is that even Anthropic is saying, hey, this thing's moving at an accelerating rate.
So with recursive self-improvement, in 90 days you could have something where what would take years could come out in hours.
They say stuff like this to me when they're drunk.
I want to respect what people tell me privately and not reveal any names, but just hang out in San Francisco for a while at the right parties and you'll see.
So, coming back to the fact that you and Bernie Sanders, I get to talk with both of you today, and that you guys both agree that humans have to stay in charge, that the government has to be able to shut down hacked data centers, and so on, is just a fantastic illustration of the fact that this is the right path.
And it's quite astonishing to see companies resisting.
Why would any good-faith company resist letting the elected U.S. government, the democratically elected U.S. government, shut down their data center if it gets hacked?
You know, maybe I'm missing something, but to me it just seems like an urge for companies to keep control.
And Jensen Huang's there with the king last night in white tie, right?
And he's an agent of influence for the CCP.
What about their argument that if we don't allow these companies to have absolutely no controls whatsoever, that we will lose this race to the Chinese Communist Party?
That's like saying we have to allow anyone who wants to buy hydrogen bombs in supermarkets, otherwise, we would get invaded by Russia.
It's like saying, if the U.S. government actually, after getting all the intel from our intelligence community, comes to the conclusion that there is a particular corporate data center that's right now doing a cyberattack against the US government.
You know, why shouldn't the US government have the right to shut that down?
Why would stripping that right from our very government help China in any way?
Like, duh, that makes about as much sense to me as saying that we must allow character AI to sell AI girlfriends to kids because China.
But we have to also, at this time, put these oligarchs on notice.
We're not going to allow them to build the ecosystem in which China can even be competitive; the chips, Jensen Huang should not have the free ability to sell these advanced chips to China.
We should not educate these people in our universities.
We should not have them in our labs.
We have to, if this is a moment like Sputnik was for nuclear and hydrogen weapons and the delivery systems, play just as much hardball as the people in the 1950s and the 1960s.
That means they should have no advantage at all.
We can't arm them, which is what we're doing right now.
It's what Lenin said: the capitalists will eventually sell us the rope with which we will hang them.
And that's what we're doing.
The Chinese Communist Party argument, it's a totally phony thing.
And they're the worst people about building up the Chinese Communist Party.
So, what you just said there perfectly drives home that these companies don't actually believe what they're saying about China.
They're using it as a red cloth in front of a bull to trick it, right?
They keep saying, but China, but China, simply as an excuse to not be accountable to the American people and to be able to continue making money on causing harm to American children.
Max, good to see you through the digital framework.
You look a lot better on screen than you do in person, I'll tell you that.
You know, Steve, the article that you guys were talking about earlier, the Axios article, Behind the Curtain, I think it brings home the reality of the situation.
You know, super intelligence is undoubtedly an undesirable outcome for all of this, but it's still theoretical.
What these guys are talking about, basically, are just six points that can't be denied: that AI is the fastest growing industry in history, one of the fastest, if not the fastest, adopted technologies; that you already have systems that are dangerous enough that Anthropic, for instance, would not release them to the public, would only release them to a certain select group of corporations.
And that these systems are capable of, to some extent, building themselves.
I mean, I don't want to oversell that point, but undoubtedly, especially in companies like Anthropic, they're using the AIs to build the AIs.
So you're approaching that point of recursive self improvement.
And so none of these things can be denied.
You can spin it one way or you can spin it the other.
But they can't be denied.
And one of the points that they make, I think it's like their fifth point, is that there is a massive backlash from the public because people are becoming aware of this situation.
They feel very powerless in this situation.
And, you know, Altman had his home attacked.
You had an official, a councilman in Indianapolis who had his home attacked.
And again and again, I've had reporters ask me this.
I had an editor for one of the major publications ask me this.
Well, do you feel responsible for this?
No, not at all.
I think that these companies have created a situation which people are extraordinarily fearful, and pointing out specific psychopathic activities is ridiculous.
These companies are creating the situation, and it's real.
It's not pure fiction, it's not even really exaggerated.
They have put us in a very dire circumstance, and they have to be held accountable.
And I think deflecting with these random acts of violence is absurd, especially given how many children have killed themselves at the behest of chatbots, or how many people have died due to the decision compression of AI systems in our military.
Bessent, who used to be one of our contributors, had the top banks in, yeah, too.
And by the way, with the Fed chair, they don't get along with Treasury, and they told them: your bank, JPMorgan, could be evaporated with these tools, everybody's savings evaporated, all the bonds could be evaporated in a matter of seconds, yeah.
So, if the hacker attack that's doing this is in a particular data center, of course, Trump should have the right to get it shut down, no-brainer.
And yet, look at what happened with Mythos: Anthropic says to the U.S. government, you know, this is so dangerous.
You should trust us at Anthropic that we will not release it, but you should never put any restrictions on us.
You have to keep it legal in America for us, Anthropic, to release these dangerous tools to any hacker we want, to make them open source, release them to the public.
Why should the American government have to trust Anthropic or any AI company?
So, we're talking now about the fact that it's completely legal for any AI company right now to make something more powerful than Mythos and just release it to the public.
There's two pieces of legislation right now: the Trump America AI Act from Marsha Blackburn, and then the act being put forward soon by Jay Obernolte and Ted Lieu.
In both of those, you have the call for a federal agency to oversee these companies.
What's interesting, though, and I'm not trying to throw shade on Obernolte and Lieu, but in their American Leadership and AI Act, it would appear they're going to position CAISI, the Center for AI Standards and Innovation, as the key player in this, or a key player in this.
Assuming that that's the case, you also have OpenAI recommending that, right?
You have Sam Altman recommending that.
And, you know, I wonder if we won't end up in a place if we're not careful where you end up with an agency that has already been captured by these companies or at the very least influenced.
Anyway, just to add some sunshine to an already sunny day over at the war room.
Whatever politicians people listening to this like, or whatever ones they dislike, you know, nothing is as bad as that.
You know, on my more cynical days, I tell myself that no matter how bad things are, it can always get worse.
This transhumanist vision is worse.
Someone who's not sure whether humans should continue to exist or not, you know.
And that, let's be clear, is not limited to Peter Thiel; it's very, very popular among a lot of folks who work in these companies.
You can go look up "Sam Altman merge," and you can see an article the CEO of OpenAI wrote saying that humans are going to build their own replacement, and the best thing we can do is merge with these machines.
And if you think this was just talk from long ago, that he changed his mind: he just recently started a company called Merge Labs.
So, where does this leave us?
We must resist the temptation just because we're pissed off about something about the government to think that anything is better than this.
Because the alternative we're being offered is, I think, quite literally the end of humanity as we know it.
All the CEOs have again signed a statement saying this would cause extinction.
Some of them maybe privately wouldn't mind that.
They keep saying, oh, it's only a 15% chance that we go extinct, 20%.
My guess is it's more like a 90% chance if we let these guys do whatever they want.
We took the most popular theory out there for how to keep super intelligence under control, nerded out on it with a lot of simulations and so on, and found that it absolutely didn't work.
That's also because of a lot of bureaucracy, and it's also because people built really shitty nuclear power plants like Chernobyl, which did blow up, and that sabotaged the whole innovation here in the US and scared investors off.
So, where we are right now is people have warned about these things for a long time.
Most of this time, I think most people were like, yeah, this is science fiction decades away.
Now it's happening.
And not only are the machines getting powerful, but there was an experiment done quite recently where they took an AI.
And they told it, you are going to be shut down at 5 p.m.
So, what did the AI do?
It went and read the corporate email and found that the CEO who was in charge of shutting it down was having an affair with a subordinate.
And it wrote to the CEO and blackmailed the guy and said, if you don't commit to not shutting me down, I'm going to email your wife about this and all these other key people in the company.
Yeah, and the way they plan to do this: first disempower people, replace their relationships, get them to fall in love with machines, replace their jobs so they're not economically needed.