WarRoom Battleground EP 969: Soul of the Machine — Tech Money, Kill Bots, and Human Atrophy
Stephen K. Bannon and Joe Allen confront transhumanism's rise, detailing a $125 million super PAC bid by OpenAI's Greg Brockman to fund pro-AI candidates against Florida Governor Ron DeSantis's AI Bill of Rights. The segment highlights a clash between Anthropic's ethical safeguards on chatbot Claude and the Department of War's push for autonomous killing systems, while Sam Altman predicts a 2028 "hard takeoff" where data center intelligence surpasses human capacity. Ultimately, the discussion frames the conflict as an existential struggle to preserve human reasoning against oligarchic tech accelerationism and potential machine sentience. [Automatically generated summary]
Not to increase your burden, but the reason we're doing this, you are the War Room posse.
You're one of the most powerful political entities in the history of mankind because you've bent the arc of history in this country, being the very first folks with President Trump and having his back.
And obviously there are disagreements all the time, disagreements about people that are endorsed, disagreements in policy, maybe sometimes tangentially about the direction.
And I realize right now, a lot of people have a lot of questions of what's going on, but you must be completely informed because this is all about your agency.
If we get back to the core of what this show is, it is to give you access on a daily basis to the best minds and the most active fighters in the country on the issues that matter most for this republic and for you and your family and your community and you personally.
One that we're very proud to have been the leader in.
And I think on the right, I'm kind of shocked that nobody's ever really gotten serious about challenging us to be the leader in all things transhumanism.
I tried to do a film on transhumanism based on Kurzweil's book years ago with Steve McEveety, who produced The Passion of the Christ.
We never got it off the ground with the financing, but there was strong interest.
And then a guy, a young guy, was writing over at Sean Davis's The Federalist, putting up pieces on occasion.
He wasn't on the staff, but he was a contributor.
And we talked him into coming over here for four or five years, full-time as our editor for all things transhumanism.
He's just started a new group called Humans First, and he's one of the brightest minds in this area and has really galvanized a part of this movement to make sure that we stand up and that we fully understand what we're doing.
We fully understand what we're financing and we have control of it.
It just doesn't happen.
Because if it just happens, the dark side and the downside of this is probably two-thirds, one-third.
Just full stop, not even a question.
You're not going to get to sunlit uplands if this thing just kind of rolls on its own with a handful of greedy, power-mad, avaricious oligarchs doing this.
That would be the one and only Joe Allen.
Joe, I do want to, I've got you here for an hour because I know you're going all over the country and I want to go.
There's five or six big issues I've got to go through to make sure the audience is fully up to speed and understand this.
So this is like our own briefing, our own private briefing, folks.
We had this order to go through it, but then, you know, some of the top political people I worked with sent me a Punchbowl News exclusive, which dropped the story on the AI PAC, and that is not AIPAC, but the artificial intelligence PAC.
No pun intended here.
It's been initially capitalized with $125 million.
And people are like stunned by this.
That's their opening bid.
What does that mean?
They are going to try to buy their way in total, absolute control of this as far as the political process goes.
Because they think that we live in a world where 30-second spots can basically get to people and drive those people, inform those people, and make them believe what they want them to believe and act the way they want them to act.
And it is upon us.
It's upon us in this time and place to defeat that.
And I got it.
The odds are long, but we've taken on long odds before.
And we never take on easy fights.
This is a hard fight and a tough fight.
Today, let's play the clip.
We've got a couple of clips to play.
I want to play a clip, Joe.
I'm not sure if you've seen it because you've been running around, but the president of the United States and Sacks and this team have been like mega accelerationists.
And the president said something today because, you know, when he's speaking and doing these things, and this is why we cover it live on RAV, I realize sometimes people say, hey, he's repeating himself.
It's fine.
You're getting access to the thinking of a man who is one of the most important presidents this country's ever had.
I rate him with Washington and with Lincoln.
And you're seeing it in real time.
You've never been able to see that.
Everything that White House has traditionally done has been very guarded, and everything's been kind of in political speak.
You're getting, in one regard, political stream of consciousness, but it's very powerful.
And today was part of that.
Let's go ahead and play this clip that was at the luncheon earlier today when the president was answering some questions.
They are a country that for years, I didn't know this until recently, they're a country based on disinformation.
And now they're using disinformation plus AI.
And that's a terrible situation.
That's a terrible situation.
They showed all sorts of things happening in the last two weeks that never happened between the kamikaze boats that don't exist, between blowing up the aircraft carrier, one of the great ships in the world, the Abraham Lincoln on fire.
They showed it on fire.
I called the general.
I said, General, what's with the Abraham Lincoln?
It looks like it's burning down.
No, it's not burning down.
Not a bullet was ever fired at it, sir.
They know better.
I said, this is my first glimpse of AI and what they've done with it.
They showed buildings in Tel Aviv burning to the ground, high-rises burning.
They showed buildings in Qatar.
They showed buildings in Saudi Arabia burning, and they weren't burning.
David Sacks became kind of an America First guy: hey, what are we doing in the Middle East?
Why are we doing this?
Because David Sacks came out with many things.
Didn't he say, hey, we need an off-ramp?
We've got to rethink this.
We may be too far into this.
On and on, like ripped from the pages of the War Room.
Although we're supportive of it, as we say now, we're in it.
Whether you like it or not, we're in it.
We've got to win this thing.
We have to figure it out.
As David Sacks has done that, he's the big accelerationist.
He's the president's senior advisor on all things crypto and artificial intelligence.
The president right there, I say the president's not a full accelerationist.
And the reason is, he says, hey, I just saw this stuff.
So, Joe, and I want to make one more point before I turn it over to you: I saw a video yesterday when I was working with Elizabeth and the team that helps me with my social media.
I was about to put it up.
It was incredibly moving.
It was a little boy with his mom next to a flag-draped casket, crying for his father, it seemed, and then he got up on the thing.
It was so moving.
And at the last second, I always check with Grace Chung.
I go, Grace, hey, hang on, that's not artificial intelligence.
And boom, a second, it's artificial intelligence, Steve.
I almost put that up.
This stuff is so realistic.
What the president's talking about, they had the huge thing in the Middle East.
It went viral about the Abraham Lincoln, you know, on fire taking a couple of incomings.
Also, these rallies have been going on in the square on and on and on.
Look, some of it's real, but artificial intelligence, these guys have been masters of this.
Joe, what does that tell us?
And particularly that the accelerationists are dumping in cash in greater levels.
You could take big pharma, big ag, the banks, the Lords of Easy Money, the food processors, add it all together, and you ain't got what they're putting in on artificial intelligence.
Now, curious minds want to know why they are doing that, Joe Allen.
Well, Steve, the first reason that you see all this money going into the super PAC, Leading the Future, is this.
Leading the Future, of course, was primarily founded by Greg Brockman of OpenAI, Joe Lonsdale of Palantir, and Marc Andreessen of Andreessen Horowitz.
The reason they're pouring money to boost pro-AI candidates is because the public has completely soured on this.
You look at the teachers, you look at the workers in the corporations that are being forced to use generative AI in as many processes as possible to accelerate their productivity.
You look at the corporate leaders who were recently interviewed by the National Bureau of Economic Research.
It's pretty much accepted by the majority of people in America that AI, number one, isn't really the economic boon that was promised.
It is not really boosting productivity as of yet.
It just means people write worse emails and more of them.
And then you also have the general discontent about the more dangerous aspects of this.
You remember Greg Brockman's OpenAI and their chatbot, ChatGPT, was goading children into suicide.
There are multiple lawsuits on this.
And you look at Palantir, which has become a household name at this point, and the ways in which their technology is used, yes, to defend America.
Yes, it's a national security asset, but also to surveil Americans.
And Americans are sick of it.
And so we have the numbers.
Any politician who runs challenging these tech firms, challenging the brolegarchs, they're going to win.
But we also know that money buys elections in many cases.
And so you've got these guys dumping money.
Again, $125 million right now.
And I keep hearing $200 million behind the scenes.
$125 million to pull from to push pro-AI candidates.
Well, you know, as I understand it, the deal is still in the works, but certainly Leading the Future plans to spend at least $5 million to boost his campaign with ads and whatnot.
Why?
Because Byron Donalds is going to be the leader in showing AI as a benefit to education.
Now, if you ask educators, educators are going to tell you that AI means that their students cheat more often.
They offload their thinking to the AIs rather than studying themselves.
And a lot of the teachers look around, the professors in colleges and teachers in schools look around and they see their colleagues, other professors using AI to build out their syllabi, to actually do the research, to put together their course materials, all of it.
And even then, using AI to grade the papers by kids who are using AI to write the papers.
I think it was last week you kicked off Humans First.
And weren't you on the steps in Tallahassee as your very initial kickoff of this?
And correct me if I'm wrong, but Governor DeSantis, and people know I've had a lot of issues with Governor DeSantis, particularly when he ran against Trump, but I always think he's been a very solid governor.
Governor DeSantis is probably farther ahead on AI than any other governor in the United States.
And he just said flat out at this press conference: we've got AI problems at the federal government level.
He said he's not going to allow that to happen to the schoolchildren and the younger people down in Florida.
So, just the practicality: where does that leave Byron Donalds?
Is Byron Donalds going to take $5 million from these guys and go against the policies that are overwhelmingly embraced by the population in Florida through Governor DeSantis?
And, you know, with DeSantis, you have his AI Bill of Rights, and it's very comprehensive.
The AI Bill of Rights deals with, say, the deepfakes that you were talking about and that the president was talking about in regard to Iran and the war, and they're pervasive across society for all sorts of things.
Women who have their images used for pornographic deep fakes, people creating embarrassing deep fakes of other people, and just fake material all around.
The AI Bill of Rights addresses that, saying that the technology simply cannot be used for that.
You can't put out an AI commercially that would create a deep fake.
And also, one of the big issues is parental controls.
And so, Ron DeSantis' AI Bill of Rights directly addresses the parental control issue.
Any child under 18 would need a parent's permission to use AI.
People would say, oh, well, that's a restriction on freedom.
Well, maybe.
But you see what's happening right now in the same way that you would restrict freedom on, I don't know, alcohol or porn or anything like that.
Kids are using AIs to do all sorts of destructive things to each other and to themselves, not excluding the issue of children who are lured into suicide by chatbots gone rogue.
And so it addresses that.
It addresses the data center.
So Ron DeSantis has taken probably the strongest stand next to Josh Hawley from the conservative side on this.
And that's why you've got people like Amy Kramer from Humans First down in Florida fighting every single day, pushing for the AI Bill of Rights, legislation on data centers, all of that.
You have the Florida Citizens Alliance with Keith Flaugh and Ryan Kennedy.
And they are fighting.
They're meeting with the politicians, fighting for more parental controls on AI, more controls on the data centers to make sure they're not parasitizing local communities and their utilities.
Against that, you have who?
You have some people who I think have been kind of schnookered into believing that all of these sorts of technologies are benefiting them in some way or might make them rich.
But by and large, politically speaking, the polls are overwhelming.
You have a recent NBC poll of the U.S., which found that 57% of Americans believe the risks of AI outweigh the benefits.
You have a recent Economist/YouGov poll showing that three times as many Americans believe AI is mostly or entirely negative as opposed to positive.
The public sentiment is clear.
We have the numbers in regard to voters, but they have the money to either shift public opinion or, you know, as the war room posse well knows, money can buy you a lot of things even more than votes.
So yeah, this fight is only just starting.
I mean, this is just state level.
You saw the fights in California over SB 53.
It passed, but it passed in an extremely weak form.
You saw the same fight in New York with Hochul and the RAISE Act.
It would have been a very stringent act demanding transparency and accountability of the AI companies before they deployed any system.
Well, you had Leading the Future pumping money into attacks on, for instance, Alex Bores, to smear him.
And he's a Democrat.
I'm not trying to defend any of his other policies, but at least he was taking on the AI companies, and Leading the Future paid money to smear him.
And as I understand it, Leading the Future and other brolegarch networks pressured Kathy Hochul into passing a soft form of the accountability legislation; a watered-down RAISE Act was what we ended up with.
So this is just starting on the state level.
But when it gets to the federal level, it's going to really heat up because at that point, the stakes are existential for some of these AI companies.
If these companies are forced to be transparent about where they get their data, about what these systems are capable of behind the scenes, and if they're held accountable legally for damages such as suicide, for damages such as widespread disinformation and bot networks, then for the most part, they'll either be crushed or diminished so that they are no longer the kings of the U.S. economy.
They've hired the best law firms in the country.
They've hired the biggest attack dogs in the country.
They've hired thousands of influencers.
They have hired the crisis PR companies.
They've hired the biggest lobbyists.
These guys have unlimited resources.
Let's be blunt.
Because they're playing a game that is for everything.
NVIDIA just announced a trillion dollars of backlog revenue for advanced chips over the next two years.
Let me repeat that.
That's what they've got on the books now: a trillion dollars in backlog.
So the stakes couldn't be higher.
They also understand if they are to lose this game even marginally, if there is any visibility into what they're actually doing and what it means and internal studies and all that, they're going to be shut down and seized immediately.
People are going to be shocked and they're going to be furious about what's been going on.
Oh, by the way, they definitely need these data centers, and they're trying to slip and slide in on them to suck every ounce of water they can out of every aquifer in the country, and every creek or river.
In addition, they're lying about the data centers, particularly about the executive order President Trump signed saying they're not going to affect consumers.
They're working like Trojans to get around that.
And we know, in addition, and now the modeling's coming out from Wall Street, Joe, that they're at least going to need a trillion dollars of some sort of public finance or public loan guarantees, et cetera.
Because even they are not generating the cash, or even willing to commit all the money, for what I think is a six, seven, eight trillion dollar build-out here.
No major project in world history has ever been obfuscated more than what the oligarchs are doing, because the oligarchs understand it.
That's why they're putting $125 million into governor's races, races at every level, particularly the House and the Senate.
Why are they doing that?
It's not as a civics lesson.
They're doing it for a power grab.
They feel that if they can dump a couple of hundred million dollars, which for them is nothing, okay?
Nothing.
If they can dump a couple of hundred million dollars in, that they can buy political control.
They can get so far down the path that you can't unwind it.
And what do they keep doing?
They keep pointing to: we have to do this because if we don't, the Chinese Communist Party will. And Steve, you've got an event over at the Kennedy Center right now, Kill the Order, about organ harvesting.
You, more than anybody in the War Room posse, should know we can't let the CCP, the murderous, vicious dictatorship of the Chinese Communist Party, control it.
Yeah, well, we got that.
And we are saying on that level, no chips, no education.
They don't work at any labs.
They don't work at any companies.
The 650,000 or the 350,000 don't come to the colleges.
You send them all home.
You don't, no bank, no lending, nothing, zero.
We hermetically seal like Carthage.
We hermetically seal it.
We take it apart brick by brick.
Then we salt the earth around it.
Nothing goes to the CCP because we understand that risk better than anybody.
But, Joe, even as you have studies coming out, pretty definitive studies this early on, that show that for the first time in our species' history, people are actually getting dumber by using this.
And that's why I think they understand this uproar.
This is a 90-10 issue.
It's not a 56% or 60%.
If you ask the question the right way, people say, well, absolutely.
We shouldn't let these guys run the deal because they're not trustworthy.
Yeah, Steve, if Denver will throw it up, there's a recent article in Futurism.
And by the way, Futurism, I remember it as basically a tech booster rag.
I don't know what happened to those guys over there, but they are fantastic.
They are, as the kids say today, based.
Every single day, they're publishing some article or another holding these guys over the flames.
But Futurism just published an article.
It jumps off from a recent Guardian article.
Professors say AI is destroying their students' ability to think.
And if you go through this piece, I really recommend that the audience just check it out.
Look at the studies that they're linking to.
One of the most important comes from one of the top journals on educational technology in higher education.
And they show definitively what everybody already knew, very similar to the MIT study on the cognitive and neurological effects of relying on GPT to write.
What everybody knows is that if you don't think and allow the machine to think for you, you get dumber.
It's the inverse singularity.
Instead of the machines getting smarter and smarter and smarter, people get dumber and dumber and dumber.
And so the machines seem that much more impressive.
And this is widespread.
These tools are wrecking an entire generation.
They are turning kids into a kind of global Village of the Damned, but instead of the kids having special psychic powers, they're just, we'll say, dumber, without resorting to some of the saltier language.
Although I think the audience knows there are better ways of describing those cognitive limitations.
This is a nightmare.
And these are the people that are going to be taking care of this.
So, but as far as it goes, Steve, I just got to say this.
When you look at this argument that we have to keep up with China, the claim is that the only way we're going to excel as a country is to accelerate the technology in order to beat China to the finish line.
What's the finish line?
Well, the singularity in which all human beings are either merged into the machine or replaced by the machine.
But putting that aside, the national security issues, insofar as AI is useful in staying ahead of China geopolitically, even the economic issues like finance or the health issues like AI is going to cure cancer, which so far not so good on that one.
People are trying, people like David Sacks, people like Marc Andreessen, people like Elon Musk and Sam Altman, they are basically conflating those issues, national security, health, and the economy, with the basic issues.
What is AI doing to children?
What is AI doing to the average worker?
What is AI doing to the average doctor?
By and large, it's making them vessels, hosts for algorithmic parasites.
They are dependent on machines to think.
They're not able to reason on their own.
It's horrendous the degree to which this technology makes people observably dumber.
And so to conflate the national security issues with AI overall, it's completely disingenuous.
You don't have to make your entire society brain-damaged in order to keep up with China.
In fact, that's only going to put us behind in the long term.
And so I think, again, pointing to Ron DeSantis and the AI Bill of Rights: each of the laws embedded in that bill is meant, at the very least, to mitigate that damage and give people control over their own futures.
We're going to take a short commercial break while we're talking about AI.
Home Title Lock.
Now that you have cyber and AI in a combo against the very rudimentary system we have throughout the country for the registration of titles to your home, remember, 90% of your net worth is there.
HomeTitleLock.com, promo code Steve.
Talk to Natalie Dominguez and the team.
You get a 14-day free offer on $1 million triple lock protection so you can understand all of it.
Take away the angst and the anxiety.
When it's 80% or 90% of your net worth, you got to do it.
HomeTitleLock.com, promo code Steve, just pennies a day.
Talk to Natalie Dominguez and the team and do it today.
They're offering you a special.
Patriot Mobile, 972 Patriot.
They're going to be with us at CPAC, a Christian company that supports your Christian values.
The other really subtle point, and I think we're going to hear a lot about this with AI in the coming year: insider threats, model poisoning.
Remember, their model has a soul, has a constitution.
That's not the U.S. Constitution.
The other day, their model was anxious, and they believe it has a 20% chance right now of being sentient and having its own ability to make decisions.
So does the Department of War want something like that in their supply chain, where it could hallucinate, where it could corrupt models that are used by defense contractors who are building weapons systems or airplanes and so on?
So the truth of it is we can't have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain.
Palantir's software as a service product that we are deploying across the entire department.
As you can see, it's not just one data feed, it's multiple.
And instead of having eight or nine systems for those decision makers to look at every single day in order for them to make decisions, you then fuse it into a single visualization tool.
The single visualization tool allows you to select, deselect different types of data, look at different approaches to data, but more importantly, action from the same system that you're trying to develop your workflows around.
Once you have a detection that you want to actually move and actually move into a targeting workflow, for example, this is what we do.
Left click, right-click, left-click, magically, it becomes a detection.
That detection then gets moved into a workflow.
This is standard digitized workflow, but I want to walk you through it quickly.
You have different types of targets that are identified on the left there.
Every single column produces a different type of decision-making process.
Once you have that decision and you're trying to actually action that process, we now move into COA generation, course of action generation, where we are automatically, via a number of factors, trying to identify what the best asset to prosecute a target looks like.
Once we've got the different approaches and we select one, we then can move directly into how do we action that target.
So we've gone from identifying the target to now coming up with a course of action to now actioning that target, all from one system.
This is revolutionary.
We were having this done in about eight or nine systems where humans were literally moving detections left and right in order to get to our desired end state.
Conspiracy theorists, you may hate this, but there's one person protecting your right to be a conspiracy theorist that actually has a seat at the table, and that person is me.
You may not want to hear that truth, but it is true.
And maybe do a little more reading before you pontificate on your absurd and obviously ill-informed and many times stupid opinions.
Okay, so because like you're attacking the person who's protecting you, idiot.
You know, that first gentleman, Emil Michael, he's the Department of War Undersecretary for Research and Engineering.
He comes out of Uber.
Very strange statements you heard there talking about Anthropic's Claude: how the company believes that it is conscious, how Claude, the chatbot, has its own constitution that's not the U.S. Constitution, so on and so forth, the hallucinations.
Now, Emil Michael, very smart guy, but those statements were very strange.
Mostly they were spin.
Mostly they were propaganda.
He's just justifying declaring Anthropic and Claude a supply chain risk.
But the real interesting point is the reality that some number, maybe the majority, maybe almost all of the people at Anthropic are open to the possibility that the chatbot that they're deploying, and in this case, deploying on classified documents in order to accelerate the process of killing other people, that it is conscious.
And they also talk about how recently they're making statements in the media about how Claude, a chatbot, is anxious.
And that anxiousness, that nervousness is an indication that there's some kind of being, some kind of soul inside that's deserving of ethical treatment.
You can't enslave it to kill people, basically.
Very, very strange moment in history.
And then right after that, you saw a recent demo from Palantir and the Maven Smart System.
A lot of the savvy war room listeners will remember Project Maven.
And, for instance, the Google confrontation, in which you had workers who staged a kind of protest, writing a document protesting Google's technology being used in surveillance for Project Maven.
Well, now basically all the tech companies are on board.
OpenAI, xAI, Anthropic, all on what's now called the Maven Smart System.
It is an integrated platform which allows the user, in this case, say an intelligence analyst in the military, central command, to use these systems to rake over huge amounts of satellite data, classified data, classified reports.
And then the decision compression means that the system itself, the machine itself, is making decisions as to what is and isn't a valid target, what sort of weaponry would be appropriate to use against the target, how it would be conducted.
And then after the operation is conducted, after the strike has occurred, the AI is making the assessment as to whether or not it was a success.
So, I mean, this has been in the works for a long time.
Algorithmic systems have been used to find data in vast data sets, right?
To find important, relevant data.
But this is next level.
You're talking about the same chatbot that people use to write their emails or do their homework is also being used to rake over classified data and decide who is and isn't worthy of being killed.
I want to go back to this debate, because now there's a breaking story, I think it was in Axios, that a lot of corporate America is coming in and weighing in.
Just break it down very basically: what was it that Anthropic said they could not ethically do in providing artificial intelligence tools, that they felt they had to hold back, which Pete Hegseth and Emil Michael rejected?
And Emil is not just a brilliant guy, he's one of the top guys in the Pentagon.
He's basically in charge of R&D, but this is all tied now to the weapon systems, and they said that's unacceptable.
And President Trump and them really came out hard and said it's impossible.
When we contract weapon systems, we can do with it what we want, how we want.
You've got to give us the whole thing that we're buying.
You can't, we don't care about your ethics.
We don't care about your morals.
That doesn't mean anything to us when we're buying a weapon system.
Can you explain to the audience what's that about?
Yeah, the argument really stems from differences in values.
And a lot of people are rallying around Anthropic, because they were told by Pete Hegseth and the Department of War that they would have to take the safeguards off the system, which would allow the Department of War to use the system for any lawful use case.
And there's a lot of slippery language about that.
But Anthropic, their argument is that the Department of War wanted to use Claude, their AI chatbot, to surveil Americans and also to use it in autonomous weapon systems.
That's their story.
All of this is basically hidden behind the wall of classification.
So all we have are the stories told on the other side.
I want to go, by the way, to consciousness: people are saying, hey, we're going to get to AGI this fall, that machines are going to have consciousness.
You just heard this whole debate about Claude.
Let's go ahead.
We're going to play a several-minute conversation, and Joe Allen's going to break it down on the other side.
Fundamentally, our business, and I think the business of every other model provider is going to look like selling tokens.
They may come from bigger or smaller models, which makes them more or less expensive.
They may use more or less reasoning, which also makes them more or less expensive.
They may be running all the time in the background trying to help you out.
They may run only when you need them if you want to pay less.
They may work super hard, spend tens of millions, hundreds of millions, someday billions of dollars on a single problem that's really valuable.
But we see a future where intelligence is a utility like electricity or water, and people buy it from us on a meter and use it for whatever they want to use it for.
The demand that we see for that seems like it's going to continue to just go like this.
And if we don't have enough, we either can't sell it, or the price gets really high and it kind of goes to rich people, or society makes a bunch of sort of central planning decisions that I think almost always go badly about, you know, we're going to use our limited compute supply for this and not that.
So the best thing to me throughout all the history of capitalism innovation, whatever you want, is to just flood the market.
Humans are gradually getting less and less in the loop on the recursive self-improvement.
So every successive model is built by the one before it.
So that is happening to a large degree, but it's not yet fully automated.
It may be by the end of this year, but not later than next year.
At this point, I think the definition of AGI really matters.
Some people would say we already got there.
Some people say it's very close.
Some people say we're kind of, you know, it's maybe still a year away.
But in any case, that word has ceased to have much meaning.
There are maybe two thresholds that we could talk about that are interesting.
Number one, when is there going to be more of the world's cognitive capacity inside of data centers than outside of them?
And that, to me, feels like maybe it could happen.
Huge error bars.
I could be totally wrong.
But maybe that could happen by late 2028.
And that's an extraordinary shift in the world.
The other one is, when can a CEO of a major company, a president of a major country, a Nobel Prize-winning scientist, when can they not do their job without making heavy use of AI?
This doesn't mean that there will be an AI CEO or an AI president, but it does mean that the role of, let's say, a human CEO, when I think about my job, it's really quite different.
And so more and more, I think, of these jobs will be supervising a bunch of AI, providing oversight, deciding how to trust the outputs, how to provide guidance.
And that threshold of when you really wouldn't want to be doing your job, running a large organization without heavy reliance on AI, I think that's another sort of interesting threshold.
That may take a little bit longer, but probably not a lot longer.
Okay, people have to understand, when they are telling you, first about the capacity, I'll go to Joe in a second, but they're telling you, hey, we're really going to get into a supervisory stage and you're just going to be supervising AI.
That phase is going to last six months to a year, max.
You think they're going to keep humans around saying, oh, I'm just supervising you, I'm just, you know, managing you?
This is the whole agentic movement.
Salesforce and these companies, they're trying to tamp it down.
The agent, the agentic AI, is going to be the one that they use.
The fact is, it's going to use itself.
It's not going to care what you have to say.
So if what Altman's saying, oh, yeah, well, you know, presidents or CEOs or Nobel Prize scientists will get to a phase and then what it'll really be is that we'll just be supervising.
We'll be supervising the machine.
We'll be supervising the super intelligence.
That'll last 90 days.
Take your number two pencil out and write that down in your notebook.
Joe Allen, the scariest one was the data centers.
All the cognitive capacity inside the data centers versus everything we'd call a brain on earth, with all the capacity we've got, remembering, you know, and being able to bring things back, the capacity inside the data centers versus all the cognitive capability on the outside, I don't know, fall of 2028.
One's election day, fall of 2028.
Pretty scary, sir.
They're telling you what they're doing.
They're hiding the exact date, but the overarching things, they're running up the flagpole.
Yeah, and I think the scariest prospect, Steve, isn't just the idea of Sam Altman talking there about the ratio of non-biological cognition versus biological cognition, machine versus human.
The idea that most of the thinking, quote unquote, on earth would be done by machines, that is a chilling thought.
But to me, the most chilling notion would be that as that process unfolds, human thinking becomes worse and worse, more and more clouded, more and more dependent on the machines.
And the machines turn out not to be the country of geniuses that they're being built as now.
And you've got Altman and Musk talking continuously about how the use of AI is going up the food chain.
So instead of it just being your average Uber driver who's just guided around by the algorithm or your average Amazon fulfillment center worker who's just guided by the algorithm, kind of like an ant following a pheromone trail, you now have middle management being guided by the algorithm, being guided by Claude.
You now have CEOs who are asking Claude or asking GPT, what's the answer to this problem?
Scientists, physicists, engineers.
And then, of course, as we just saw a moment ago, and has been reported widely since the Iran invasion, dependence on the algorithm by warfighters.
People whose duty it is to protect the country and to accurately identify who is and isn't a valid target in an enemy nation.
They are more and more becoming dependent.
The cognition is shifting from the human to the machine.
And again, just that process alone, if the machines do turn out to be geniuses and we just become kind of barnacles on the ship's hull and wither away, well, that's a pretty piss-poor future for humanity.
But an even worse future would be one in which the machines turn out to be pretty smart most of the time, but at that critical moment, the machine is dumb.
The machine failed you and you allowed your own mind to wither.