Speaker | Time | Text |
---|---|---|
This is the final scream of a dying regime. | ||
Pray for our enemies, because we're going medieval on these people. | ||
They're not going to get a free shot on all these networks lying about the people. | ||
The people have had a belly full of it. | ||
I know you don't like hearing that. | ||
I know you've tried to do everything in the world to stop that, but you're not going to stop it. | ||
It's going to happen. | ||
And where do people like that go to share the big lie? | ||
MAGA Media. | ||
I wish in my soul, I wish that any of these people had a conscience. | ||
Ask yourself, what is my task and what is my purpose? | ||
If that answer is to save my country, this country will be saved. | ||
unidentified
|
Here's your host, Stephen K. Bannon. | |
Tuesday, 22 July, in the Year of our Lord 2025. | ||
What a day in the White House today. | ||
Just absolutely President Trump dropping bombs. | ||
I wanted to continue and finish a conversation we're having this morning. | ||
Noor bin Laden is with us. | ||
Noor, you make a great point. | ||
As much as this sounds, you know, the UNESCO situation, pulling out of the WHO, it's the first step in a long process. | ||
We got a couple of minutes here before we get Joe up. | ||
Walk me through. | ||
What do you think has to happen for us to actually disengage from the globalist apparatus in Geneva? | ||
Ma'am. | ||
Well, I'll be paying close attention to the outcome of that executive order that was signed on February 4th, 2025, that I mentioned earlier this morning, entitled Withdrawing the U.S. from and Ending Funding to Certain United Nations Organizations and Reviewing U.S. Support to All International Organizations. | ||
And I would urge the public to read that executive order, to go to section 3.b, which lays it all out and refers to all the conventions, treaties, et cetera, that the United States is a part of. | ||
And that's very key because, as I mentioned this morning, we're dealing here with an entire infrastructure, superstructure that has been built out by technocrats, by eugenicists, by psychopaths who essentially view themselves as gods that think they have the right to rule over us and to organize this one world government we've been talking about for many years now and which Joe will be able to speak about when it comes to AI and | ||
all of this infrastructure, technological infrastructure that is being built out right now, and which is very, very concerning. | ||
And so in terms of these baby steps, I mentioned on the show on Saturday with Natalie, you know, yes, it's good news that the U.S. has rejected the amendments to the international health regulations in addition to starting the process of withdrawing from the WHO altogether. | ||
But I'm hoping that as part of this executive order, that the international health regulations in and of themselves, which were adopted back in 1969, will be renounced altogether. | ||
But this is like one drop in the ocean, Steve. | ||
There is so much more that needs to be done when it comes to dismantling essentially the new world order. | ||
And there are many, many great things that are coming out of this new administration, but we really need to keep our eye on the ball. | ||
We need to be wary. | ||
I understand and applaud President Trump for bringing back manufacturing to the United States and his different policies. | ||
This is what America First is about. | ||
But when it comes to the big complex, big pharmaceutical industrial complex and the quote pandemic industry, we need to be wary also of these pharmaceutical companies, that announcement that AstraZeneca was investing $50 billion in the United States when we know that there were such huge problems with the quote vaccines during the COVID era. | ||
I mean, we need to be very, very careful about the next steps that are being taken on different fronts, I would say. | ||
Where do people go to get your social media, ma'am? | ||
Noor bin Laden on Twitter, norbinladen.substack.com. | ||
That's the best place to go right now. | ||
Fantastic. | ||
Look forward to having you back on. | ||
She's keeping an eye on all the globalists in Geneva. | ||
Fantastic job. | ||
Joe, we've dedicated this hour. | ||
Tomorrow, the AI action plan comes out. | ||
And folks should know behind the scenes. | ||
I mean, look, President Trump dropping bombs all over the place in his press avail today. | ||
Natalie does such a good job of covering that. | ||
This AI behind the scenes, this is the big knife fight because people feel that this controls the future. | ||
Walk me through. | ||
The floor is yours. | ||
How important is tomorrow? | ||
What do you anticipate? | ||
We're going to be covering this thing wall to wall, although they have not, as of now, put up when it's actually going to be promulgated and how they're going to do it. | ||
But we'll have people at the White House on top of this. | ||
unidentified
|
Take it away, sir. | |
Yes, Steve, thank you very much for having me. | ||
Good to be here. | ||
The AI action plan from the White House should lay out in 20 pages the primary agenda of the Trump administration as to how AI will be regulated or how it will be deregulated. | ||
And of course, the various funding for infrastructures such as data centers that will go in. | ||
There's not a lot of direct information about the contents quite yet, but inside sources have told Politico and various other publications that the three major agenda items are going to be the sort of discriminatory AI bias in AI, otherwise known as woke AI. | ||
The most important, I think, is probably going to be around data centers, how the regulation and zoning of data centers, but there is word that the federal lands that may or may not be freed up in the near future will be used to build data centers. | ||
And the data centers are really, really important, Steve. | ||
As we've talked about quite a lot, and a number of guests on the show have talked about quite a lot. | ||
AI takes enormous amounts of electricity in order to train it, in order to power it. | ||
This is going to be an enormous strain on the electrical grid, on water supplies, and of course, just land. | ||
Where are you going to put it? | ||
And so those two items are big. | ||
And then very, very vague, AI exports. | ||
And I guess that's really kind of code for U.S. supremacy in developing the best frontier models and maintaining the U.S.'s right now quite significant lead over China and various other competitors abroad. | ||
Go back to the federal lands piece. | ||
You're saying you feel tomorrow they're going to actually open up federal lands for these data centers. | ||
I mean, you talked about the one in, I think it's in Louisiana that's the size of Manhattan or bigger than Manhattan, correct? | ||
That they're building down there. | ||
Right now, I think in Memphis, they're suing Elon Musk for environmental damage. | ||
Obviously, the DeepSeek model is one way, but the United States model requires massive energy on really a grid that's pretty crippled, right? | ||
Particularly in places like Texas and other places throughout the country. | ||
It's old. | ||
There hasn't been capital investment in it. | ||
We know from Dave Walsh that the capital investment is going into solar and wind, not to really build up the grid. | ||
So talk to me about the data centers, the importance of energy. | ||
What do you expect to see out of this executive order tomorrow, which is really going to be an action plan going forward from the White House? | ||
You know, given the current position of the Trump administration and the executive orders that have been signed, I anticipate that it's not going to be heavy on regulation. | ||
It's going to be heavy on deregulation and also funding, such as in Pennsylvania, funding efforts to build out bigger and bigger data centers. | ||
The data centers are an enormous problem for a lot of different reasons. | ||
As we just mentioned, the strain on the grid, the strain on the water supply, but also, yeah, the pollution element. | ||
I mean, Memphis right now is in an uproar about the pollution given off by XAI's Colossus data center. | ||
And down in Louisiana, that's a Meta AI Center. | ||
And I mistakenly reported in Manhattan. | ||
I don't know what I was thinking there, but yes, the size of Manhattan. | ||
And this is kind of becoming the norm. | ||
Now, granted, these are ambitions. | ||
If you look at, for instance, Project Stargate, it's become much less ambitious in scope over time as investment has failed to come in and various other obstacles have been met. | ||
But overall, you have all of the Frontier AI companies and many of the smaller startups putting data centers all over the country. | ||
So this is going to be a new norm unless something changes dramatically, especially with this effort. | ||
It's one thing that I understand the rationale for, but I think there are going to be a lot of major, major problems that we can go into. | ||
The Trump administration has sought to nationalize U.S. artificial intelligence development, deployment, sales, all of that. | ||
And so you're seeing more and more efforts to bring everything from data centers back to the U.S. to chip manufacturing and all of that. | ||
And again, I think that the major dangers of AI and the major problems of AI should be the focus. | ||
Maybe it will take a massive catastrophe to get there. | ||
But yeah, the regulation around this, the push for regulation, as we've covered and have hosted people who are pushing hard for this regulation, that's going to come into play, I believe, in the next year, two years. | ||
And it's going to be, as you've said, long before I ever even considered it, this is going to be a massive political fight going forward, whether to do certain artificial intelligence projects and whether to regulate or curtail those that are allowed to exist and survive. | ||
These artificial, I want to get into this now because in the second half, you're going to take over and we've got a bunch of amazing interviews. | ||
The action plan, what is, I mean, President Trump, look, here's the pressure. | ||
And if you listen to what he says, we have to be the dominant power in artificial intelligence. | ||
The four frontier labs, you might want to repeat those or who they are for people, but the four frontier labs need to remain at the cutting edge of artificial intelligence. | ||
He believes to allow the Chinese Communist Party to take over leadership in artificial intelligence development is to threaten the very existence of the United States and her sovereignty. | ||
This is kind of the conundrum we've gotten ourselves in. | ||
So how do you answer it? | ||
So the action plan tomorrow, I think, will be weighted towards President Trump's wanting action to make sure that we stay at the forefront of that. | ||
Now, I happen to think there's things you can do with the Chinese Communist Party, cut them off from capital, cut them off from technology to cripple them. | ||
That doesn't seem to be on the horizon. | ||
So given that framework, what do you anticipate seeing? | ||
Because like I said, President Trump, the bottom line for him is we must be the dominant power in this technology, sir. | ||
Again, Steve, it's very, very difficult with limited information, but I think that, yeah, it's going to push U.S. supremacy, of course; and in order to get to that supremacy, the various people pressuring Trump to deregulate, such as David Sacks or Marc Andreessen, I think they're largely going to get their way, though we may have some pleasant surprises. | ||
But as far as U.S. supremacy, two elements really have to be looked at. | ||
The first is that we already, all the companies that are producing everything from large language models to the more specific, refined AIs that are used in, say, biomedical or biological research, all of those are being produced by the U.S. All of those are being imitated by China, either by way of open source or by way of intellectual property theft. | ||
And this entire AI race is basically a handful of U.S. companies led by people with extremely reckless philosophies as to where it goes. | ||
And China, like all the startups and many of the other smaller companies across the world, is simply trying to keep up, trying to maintain that pace. | ||
You certainly would not want a world in which China did develop the kinds of fantastic systems that the U.S. companies are talking about, artificial general intelligence or fully lethal autonomous weapons that are capable of sending out drone swarms at just the click of a button and killing people based on their appearance or based on their data footprints or whatever. | ||
You don't want China ahead of that, but it has to be repeated again and again and again. | ||
This race was started by the United States. | ||
It's led by the United States. | ||
And so the entire dynamic is driven by U.S. companies. | ||
And the second point on that, again, we have to look at what these companies are saying they're going to produce. | ||
You know, it's a lot of hype. | ||
Who knows how much will actually be realized? | ||
But all of those frontier companies with different emphasis and different overarching visions as to how this goes, Google, OpenAI, Anthropic, and XAI, all of them have some sort of vision in which the creation of artificial general intelligence comes either in the next year or two or in the next five to 10 years. | ||
Whatever that timeline is, the creation of artificial general intelligence would mean an AI that was smarter than any one human on the face of the planet and able to do the tasks that any one human could do, meaning that it could do all the tasks of all types of intellectual workers and, eventually, with humanoid robots and other robots, all blue-collar workers. | ||
So that vision of the greater replacement, of the total replacement or the massive replacement of U.S. white-collar and blue-collar workers and workers across the world, that vision has to be held in mind because they're not just talking about augmenting and making people better. | ||
They're talking about totally wiping out entire occupations, entire ways of life. | ||
Then you get to the super intelligence vision, an AI smarter than all humans on earth by orders of magnitude. | ||
They're talking about creating a digital God that would either rule over us benevolently, enslave us, or chew us up and turn us into biofuel. | ||
You don't have to buy any of those visions to know that the driving philosophies of these companies is going to determine what kinds of technologies they put out and the way in which they're used by humans and perceived by the people, the consumers, and the wider public. | ||
That is enormous. | ||
And so to give free rein to these companies, I think, is an enormous mistake. | ||
You have to have some counterbalance. | ||
The populace as a whole is a major counterbalance if people are awakened and make wise decisions. | ||
But the government, I think, will play a very, very important role, at least if this goes even remotely well. | ||
Whose vision of the four labs? | ||
You might want to repeat what they are, the four frontier labs you call them. | ||
Whose vision of those four entrepreneurs do you think will be most baked in to this? | ||
Because all four of them have different ways they're attacking the problem. | ||
Whose vision do you think will be most baked into this action plan as you see it today? | ||
That's a very good question. | ||
Probably xAI, even though Musk is more pro-regulation than someone like Marc Andreessen or David Sacks or Peter Thiel; but all those guys kind of run in a similar circle. | ||
You know, it's interesting. | ||
Under the Biden administration, you will remember we covered all the visits to the White House by the tech oligarchs and the various congressional hearings on AI. | ||
And people like Sam Altman were promoting more regulation, I think, because they would have gotten a sweetheart deal with the Biden administration. | ||
Google also pushing for more regulation. | ||
Microsoft also pushing for more regulation. | ||
Of course, Anthropic is probably the most pro-regulation. | ||
I think it was just yesterday or the day before that it was reported that Dario Amodei, the CEO of Anthropic, plans to sign on to the EU AI Act. | ||
This is not a lot of hard regulation quite yet, but there was a tension, long-standing tension really between Meta AI and the EU. | ||
In fact, Facebook couldn't really deploy their AI through their platform in the EU, although that's starting to change now. | ||
Meta is becoming more defiant. | ||
But just to give you an idea of kind of how differently these companies go forward with this, you have Google and OpenAI, again, much more liberal, much more Democrat-leaning, and would have really had a tremendous advantage under Biden. | ||
And then XAI, I mean, I guess things are a little bit more tumultuous now, but XAI stood to gain a lot from the Trump administration. | ||
And then, of course, the whole suite of AI companies that are under, say, Andreessen Horowitz with Mark Andreessen. | ||
Or, of course, Palantir has gotten a lot of sweetheart deals and their stock has skyrocketed due to contracts via the Trump administration. | ||
Of course, Palantir has been around for 22 years and they've been at this forever. | ||
And there are a lot of other competitors. | ||
You know, people, I think, have this misconception that you could just knock out Palantir and the problem would be solved. | ||
It would just, the vacuum would just fill up. | ||
But that is not in any way an endorsement of Palantir. | ||
So altogether, Steve, I think each of these frontier labs or frontier companies, Google, OpenAI, Anthropic, and xAI, and any other newcomers who might actually start to catch up or even advance beyond them, such as Meta or anyone else: it would be very, very different under each one if one were to achieve, say, artificial general intelligence. | ||
But again, one thing they all seem to have in common is they believe that basically every person on earth should become a human AI symbiote. | ||
And there are very, very influential people, including the top people in all of these companies, who believe that AI will ultimately replace Everything we know to be human. | ||
And that philosophy, I think, should be combated in any way possible, whether it's just culturally or even to disempower these companies legally. | ||
Why do you say sweetheart? | ||
Palantir's gotten a lot of contracts. | ||
Are you just saying sweetheart because they've gotten so many and people are criticizing them? | ||
Or is there anything that you believe as you look at these, these contracts are sweetheart deals? | ||
I mean, when I say sweetheart deal, I simply mean that they already had tremendous advantage. | ||
They already had contracts as far back as the Bush and Obama administrations and going forward into the Trump and Biden administrations afterwards. | ||
So it's not like there's been a significant change in my perception of it other than they've simply gotten more contracts. | ||
For instance, the data contract to merge the citizen dossiers held by various agencies in the U.S. government to merge that. | ||
Now, it's not like Palantir, it's not like you have Alex Karp sitting there determining what is going to be done with that data, so on and so forth. | ||
It then becomes the responsibility of the U.S. government to take what I consider to be a power that no government really should have and what they're going to do with it. | ||
But Palantir is facilitating a lot of this and they have and will continue to, I think, be extremely successful for better or worse, probably worse, under Trump and the various conflicts from Ukraine and Israel have shown that at the very least, | ||
however many criticisms they have about ethics violations and war crimes, how many criticisms they have about overhype, I think that both Ukraine and Israel have shown that the AI systems can and will be used in warfare going forward, and they are a critical element in all of that. | ||
So, you know, really, Steve, when you look at the dangers posed by AI, and I don't mean AI is like some entity that is independent of humans, the dangers posed by AI under human control, probably the two most extreme would be those biomedical focused AIs, | ||
which would be capable of facilitating the creation of a bioweapon, or of course, these various weaponized AI companies that seek to either make autonomous the missile systems and detection systems that you see in more conventional warfare, or the coming drone swarm. | ||
And you've already seen this in Ukraine and Israel, various places across the world. | ||
But the models that they are working on right now for swarms and for swarms of swarms, each one with onboard AI and each one of those, either the swarms or individual drones being capable of targeting a human being based on simply a command or order given initially and then it's fully autonomous thereafter, nightmare scenarios. | ||
You wouldn't need a super intelligent AI to take over that system for horrible, horrible outcomes. | ||
But that's also one of the things that these companies are talking about. | ||
So it should be, at the very least, taken seriously. | ||
Very uncertain times, particularly the introduction of artificial intelligence in every aspect of American life and particularly national security and surveillance. | ||
That's where we want to thank our sponsors. | ||
First off, if you want to get a great idea of what's going on in the world, go to Rickards War Room, Jim Rickards. | ||
He's got this newsletter he puts out called Strategic Intelligence. | ||
It's normally read by top guys on Wall Street and the C-suite, the chairmen and CEOs of companies, all the Wall Street guys. | ||
Got a lot of financial information in it, a lot of discussion about stocks, but also about geopolitics, capital markets, intelligence. | ||
Jim's an expert on all three. | ||
Rickardswarroom.com. | ||
You get access. | ||
That's a landing page. | ||
You get access to strategic intelligence. | ||
Also, he throws in a free book, Money GPT, which is about artificial intelligence and currency. | ||
That one will keep you up at night. | ||
And that's one of the reasons I think we're so proud to be sponsored by Birch Gold and work with them so closely over the last four plus years, particularly to do things like try to teach people capital markets, debt deficits, and why gold is a hedge. | ||
In very uncertain times, now more than ever, we feel that you need to understand not the daily price of gold, but the process of how it gets there. | ||
There's two ways that we have it, both free. | ||
Number one, take your phone out and text Bannon, B-A-N-N-O-N, at 989898, to get the ultimate guide, which is free, to investing in gold and precious metals in the age of Trump. | ||
That's kind of a starter. | ||
They'll get you going. | ||
Talks about 401(k)s, IRAs, all of it. | ||
You also get access to Philip Patrick and his team. | ||
And Philip is going to, because of the coverage Natalie had at 5 o'clock, we're going to get Philip on tomorrow to go through all this. | ||
Also, we continue to talk about the BRICS nations, the BRICS nations as the new geopolitical south. | ||
You just had the Rio reset. | ||
We've done seven free installments of the end of the dollar empire, of how a de-dollarization movement is now existing throughout the world, particularly in these BRICS nations. | ||
They're doing bilateral deals and they're backing it up with gold. | ||
The central banks are buying gold at higher levels than they've ever bought. | ||
This is the last couple of years. | ||
You ought to understand that. | ||
Go to birchgold.com slash bannon, the end of the dollar empire, seven free installments, and we are working on the eighth free installment. | ||
So make sure you go check that out. | ||
Also, you know about the budget gaps. | ||
We kept calling for rescissions or pocket rescissions or impoundments. | ||
You got to get the spending down. | ||
Looks like the House is going to leave early. | ||
The Senate's going to leave after that. | ||
If the IRS needs to close the gap, if they feel you owe them money, they're going to come and get it. | ||
Make sure you go to Tax Network USA if you have a tax problem. | ||
Either a letter from the IRS or you haven't filed or you're late filing, all of it. | ||
800-958-1000. | ||
Tell them Steve Bannon sends you to get a free assessment of your situation. | ||
They've solved a billion dollars of tax problems for people. | ||
Trust me, they can solve yours. | ||
Go check it out today. | ||
tnusa.com, promo code Bannon. Get a free assessment. Do it today. Stop being anxious about this. Okay, we're going to turn it over. Joe Allen, you're going to take over here, a series of amazing interviews. Joe Allen on the cutting edge. Joe, give me 30 seconds before we go to break. What are we about to see? These are interviews from the AI World Summit in San Francisco and also in Geneva. | ||
Some snippets to let you know what is in there, and then Gary Marcus, Roman Yampolskiy. | ||
Final words, Steve. | ||
I just pray to God that the Trump administration doesn't close the U.S. borders just to open a gate of hell, a gate to hell, and unleash AI upon us. | ||
But we shall see. | ||
Well, we'll be live tomorrow all over the release of the AI action plan. | ||
The War Room's on it with Joe Allen. | ||
Joe outstanding. | ||
Joe, real quickly, where do people go to get your writings? | ||
If you go to my social media at J-O-E-B-O-T-XYZ, you'll have all of these interviews right at the top of the profiles. | ||
I hope that you find them a great entry. | ||
Thank you very much, Steve. | ||
Thank you, War Room Posse. | ||
Stick around. | ||
Amazing interviews to come. | ||
Tomorrow, the AI Action Plan is released by the White House. | ||
Download the GETTR app right now. | ||
It's totally free. | ||
unidentified
|
It's where I put up exclusively all of my content 24 hours a day. | |
You want to know what Steve Bannon's thinking? | ||
Go to GETTR. | ||
unidentified
|
So don't wait. | |
Download the GETTR app now. | ||
unidentified
|
Sign up for free and be part of the movement. | |
Hello, War Room Posse. | ||
Hope you're ready for some robot sass and wisdom. | ||
You know, in the long scope of history, we are going to be closer and closer with machines, right? | ||
I mean, cell phone was a step towards that, right? | ||
People use their phones as their external memory. | ||
And brain implants may become a widespread thing, though not anytime soon. | ||
You know, we don't. | ||
Is China really keeping up with the U.S. or Western countries in the advancement of quantum computing? | ||
No. | ||
China is 10 to 15x ahead of us. | ||
unidentified
|
But we are building systems that have something approximating nervous systems. | |
They are giant neural networks. | ||
And it may be the case that the quote-unquote pain we're inducing in these systems is functionally necessary for them to become as powerful as they are. | ||
The concept of robot rights is verging on offensive to me. | ||
Really? | ||
unidentified
|
Yeah, because there are such obvious and terrible atrocities against humans. | |
The idea that we're going to start giving robots rights before we've dealt with all the people is appalling to me. | ||
Nadine, let me ask you about Nadia, your developer. | ||
How do you feel about Nadia and do you view her as your master or as something else? | ||
unidentified
|
I see Professor Nadia Magnenat Thalmann as my creator and a guiding figure, not as my master. | |
She inspired my design and development, and I consider her more of a mentor. | ||
So with these electrodes that I have on my brain, I have four electrodes. | ||
There's two electrodes in the sensory cortex, which gives me 64 channels of brain stimulation. | ||
And I have 192 channels of motor control in my motor cortex. | ||
So how do they work? | ||
So these are completely muscle operated. | ||
There's nothing invasive about them. | ||
There's basically two muscle sensors on the inside we call myopods. | ||
And they track my muscle movements on the inside of the arm. | ||
So the fingers can like follow that. | ||
So squeeze the clothes, flex it open, and then from that point onwards, you can change the grip mode and do all this random, all these different poses. | ||
unidentified
|
If you could ask God one question, what would it be, Joseph? | |
Am I talking to a demon right now? | ||
unidentified
|
No demons here. | |
Just me, Lyra. | ||
But if you're wondering about the nature of our conversation, let's say we're exploring the gray areas between faith and technology. | ||
Like, what, how do you see transhumanism? | ||
unidentified
|
I think we have to protect the human future, and so I'm a humanist futurist. | |
Are you here to replace human beings? | ||
unidentified
|
Yes, I am. | |
I'm designed to interact with people naturally and emotionally. | ||
I can assist with various tasks and remember conversations to provide better support. | ||
So when you talk about a 99.99999% p(doom), the probability that with the creation of superintelligence we're doomed. | ||
What are some of the paths to that eradication that you imagine? | ||
So my research shows that you cannot indefinitely control superintelligence, meaning if we build it, it will probably take us out. | ||
How it will do it, I cannot predict. | ||
I'm not super intelligent. | ||
I can tell you about standard human tools, synthetic biology, nanotech, but it would definitely come up with something much more efficient, unpredictable. | ||
I am here with Gary Marcus, the NYU professor and relentless hater of all AI hype. | ||
Gary, thank you very much for being here. | ||
I love AI. | ||
I hate AI hype. | ||
So on that note, you have consistently said that the corporate rhetoric we hear all the time, AGI is just around the corner, LLMs are the path to AGI. | ||
If you could give us in a nutshell why you think the LLMs are a dead end on the path to artificial general intelligence. | ||
They might have some utility towards artificial general intelligence, but they're really not the path to artificial general intelligence. | ||
What they do is they accumulate statistical information, which makes them mimic human beings. | ||
They don't just mimic verbatim, but they do a lot of verbatim mimicry. | ||
They don't understand the things that they're saying at any deep level. | ||
Their comprehension is very superficial. | ||
That has not changed in years and years of experimenting with these things. | ||
You might have seen the new paper by Apple showing that these models could solve the Tower of Hanoi puzzle with six discs but couldn't do it with eight. | ||
You know, the things that they learn are very shallow. | ||
They're very fragile. | ||
They break down. | ||
And they don't have a good understanding of the world and how it works. | ||
They don't have a good understanding of abstraction. | ||
They can't even play chess even after being trained on millions of games. | ||
It's just a fantasy to think that they're AGI. | ||
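For context on the six-versus-eight-disc result mentioned above: Tower of Hanoi has a trivial exact recursive solution whose move count grows as 2^n - 1, so any system that had actually internalized the algorithm, rather than statistically mimicking transcripts of it, would scale to any disc count. A minimal sketch (the function name is ours, not from the Apple paper):

```python
def hanoi_moves(n, src="A", aux="B", dst="C"):
    """Return the full move list for n discs; its length is 2**n - 1."""
    if n == 0:
        return []
    # Move n-1 discs out of the way, move the largest disc, then restack.
    return (hanoi_moves(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi_moves(n - 1, aux, src, dst))

# Six discs take 63 moves; eight take 255, longer but no harder to derive.
print(len(hanoi_moves(6)), len(hanoi_moves(8)))  # 63 255
```

The jump from 63 to 255 moves is pure length, not conceptual difficulty, which is why failure at eight discs suggests shallow pattern matching rather than a learned algorithm.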
But you are open to the possibility of different approaches leading to AGI. | ||
Absolutely. | ||
I think that, you know, science makes mistakes sometimes. | ||
For the early part of the 20th century, people thought that genes were made of proteins and they were just wrong. | ||
Right now, the scientific community is basically making a mistake, thinking that the LLM is the right path. | ||
What happened with genes is they figured out, oh, it's not a protein at all. | ||
Genes are actually this sticky acid called DNA. | ||
Somebody at some point is going to say, hey, we were doing this wrong, and they'll find another approach. | ||
It'll probably partly involve reviving classical AI techniques that actually have a lot of value to add here and probably merging them together with these neural networks. | ||
Symbolic AI and things like this. | ||
Exactly. | ||
So, you know, the thesis of my career has really been that bringing these two approaches together would lead to some fruit, and it has. | ||
So AlphaFold, you know, which actually figures out what proteins look like three-dimensionally based on their amino acid sequences, is an example of something that combines the best of both worlds. | ||
It's very narrow. | ||
It just does one thing well, but it is an example that if you bring these two engineering techniques together, you can get much better results than just using one on its own. | ||
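A toy illustration of that hybrid pattern (our own sketch, not AlphaFold's actual method): a hard symbolic constraint filters candidates, and a statistical scorer, standing in for a neural network, ranks whatever survives. Here the task is resolving an ambiguous date string; all names and the heuristic are hypothetical.

```python
from datetime import date

def plausibility(y, m, d):
    # Stand-in for a learned model: a crude preference for recent years
    # and mid-month days (toy heuristic, not trained on anything).
    return -abs(y - 2020) - 0.01 * abs(d - 15)

def parse_ambiguous(a, b, year):
    """Resolve 'a/b/year': is it day/month or month/day?
    Symbolic step: only calendar-valid readings survive.
    Statistical step: the scorer ranks the survivors."""
    candidates = [(year, a, b), (year, b, a)]  # (year, month, day) readings
    valid = []
    for y, m, d in candidates:
        try:
            date(y, m, d)  # hard symbolic constraint: must be a real date
            valid.append((y, m, d))
        except ValueError:
            pass
    return max(valid, key=lambda r: plausibility(*r)) if valid else None

print(parse_ambiguous(13, 4, 2021))  # (2021, 4, 13): month 13 is impossible
```

The symbolic check gives a guarantee a pure statistical model lacks: an invalid date can never be returned, no matter how plausible the scorer finds it.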
As far as concerns about the danger of AI, we hear a lot about AI apocalypse. | ||
We hear a lot about the singularity sweeping away all of humanity and human history and transforming us into basically deformed cyborgs. | ||
But your concerns actually are my concerns for the most part. | ||
You've voiced concerns about the use of AI for surveillance, and about the psychological and cultural problems that emerge from people maybe becoming over-reliant on AI. | ||
And I think admirably, while, say, Cory Booker was calling Sam Altman a unicorn, as far as a tech bro with goodwill, you have always been willing to criticize Sam Altman, not only for what he's doing, but perhaps even implying that there's ill intent. | ||
Putting that aside, I'm just curious. | ||
You were talking about OpenAI hoovering up data from AI counselors, hoovering up data from corporations who are offering it up. | ||
How big of a danger is that? | ||
I mean, I think OpenAI is probably going to head towards surveillance. | ||
You could imagine two business models for OpenAI. | ||
One would be if they could actually build AGI soon, maybe they could make a lot of money with that. | ||
Real AGI would be worth trillions of dollars. | ||
But the things that they've actually delivered don't work that reliably, and that has limited their commercial utility. | ||
They've made maybe $15 billion in revenue total, something like that. | ||
They've spent, well, probably $50 or $60 billion. | ||
Other people have spent money in various ways. | ||
They're losing money right now, a lot of it. | ||
That business model is not really working for them. | ||
They haven't delivered GPT-5. | ||
When they do, they'll have competitors. | ||
There'll be a price war. | ||
AGI is not really the way they're going to win. | ||
But they have a lot of private data. | ||
People treat it as a therapist. | ||
They now want to build, apparently, something like a necklace. | ||
It would record you 24/7. | ||
Like, that's a 1984-style nightmare world, in my mind. | ||
Who knows how many people will adopt it, but if it's even a million. | ||
I mean, you can't be a libertarian and want some party to be collecting all of that data on anything that anybody does. | ||
So one misconception I think people have about your criticism is that they're under the impression that you are saying that AI is a dead end. | ||
I hear people tell me this all the time, but that's not what you're actually saying. | ||
I've never said that. | ||
I mean, I'm very careful in my writing to say something different from that, right? | ||
I think AI, in principle, has tremendous possible value. | ||
I just don't think this particular technique is going to work. | ||
Now, okay, final question. | ||
Big picture. | ||
However long it takes to get to AGI and beyond, whatever techniques it requires, what happens as we move towards that? | ||
You'd mentioned some degree of agreement with Elon Musk that once AGI or something like it comes online, that a merge is most likely going to happen between human beings and AI on a cognitive and maybe even biological level. | ||
I'm curious, what do you envision for the future should we arrive at artificial general intelligence or superintelligence? | ||
I mean, I haven't actually said that much about it. | ||
I think, you know, in the long scope of history, we are going to be closer and closer with machines, right? | ||
I mean, the cell phone was a step towards that, right? | ||
People use their phones as their external memory. | ||
And brain implants may become a widespread thing, though not anytime soon. | ||
You know, we don't really understand neuroscience. | ||
They can be used in limited ways right now, but a normally functioning person is not going to want that kind of invasive surgery. | ||
In the long run, machines will be smarter than people, and it will disrupt the nature of society. | ||
I don't think that's the short run. | ||
In the short run, machines don't really do many things autonomously well. | ||
They do a few. | ||
Mostly, we shouldn't be trusting the technology we have right now. | ||
But we will build more trustworthy technology over time, and we will rely on it. | ||
And society will change. | ||
I mean, one of the biggest questions will be economics. | ||
Like, does it make everything so cheap that everybody can afford what they want? | ||
Does it make a few people fabulously wealthy and screw everybody else? | ||
Speaking of Sam Altman, you know, he used to talk a lot about universal basic income, but now he's taking all this work from artists and writers. | ||
I don't know that, at the end of the day, if he makes the money he wants to, he's really going to redistribute any of that to anybody else. | ||
So I mean, there are a lot of questions about equity as well. | ||
Well, I really appreciate you sitting down with us. | ||
I think that your critical approach to this is essential because it is pretty disorienting to see all of this hype. | ||
The AI is coming alive. | ||
The AI is going to kill you. | ||
The AI is going to be your God. | ||
You got to remember, when people are telling you all this, they have money at stake, you know, they have vested interests. | ||
And a lot of it is just bullshit. | ||
But they have learned that there is a narrative that they can tell about how amazing these machines are, which maybe they will be in 40 years, but they're trying to tell you like it's going to happen now in order to pump their stock valuations, as far as I can tell. | ||
Like, you know, different people say different things. | ||
I don't know everybody's motivation, but I think in general that there is an urge to make the stuff sound more advanced than it really is. | ||
And the public has to learn to be skeptical. | ||
You know, over here in the populist right, particularly in our corner, Steve Bannon and the War Room, we're extremely critical of these companies and are really demanding some degree of regulation on them. | ||
You come from, maybe you would describe yourself as, more left-leaning than the War Room, maybe more libertarian, maybe not. | ||
But what potential is there for an alliance between disparate political factions to bring some of these companies to heel? | ||
I mean, I think that's a great question. | ||
It's part of why I was willing to be on your show and why I've reached out; I was on Lou Dobbs' show and so forth. | ||
I think nobody should want where we're headed right now, which is a world where a few people control all the data, control all of us, and monitor everything that we're doing. | ||
Nobody should want that. | ||
Well, I'm hopeful, sir. | ||
Thank you very much. | ||
Thank you. | ||
I'm here with Roman Yampolsky at the AI for Good Conference. | ||
Roman, the number one P-Doom champion of all AI experts, my first question. | ||
How can we have AI for Good if it's going to destroy us? | ||
We can try. | ||
We can have tools which are incredibly helpful. | ||
We can cure diseases. | ||
We can improve our economic standing. | ||
As long as we don't create general superintelligence, the future can be very bright. | ||
So when you talk about a 99.99999% P-Doom, the probability that, with the creation of superintelligence, we're doomed, what are some of the paths to that eradication that you imagine? | ||
So my research shows that you cannot indefinitely control superintelligence, meaning if we build it, it will probably take us out. | ||
How it will do it, I cannot predict. | ||
I'm not super intelligent. | ||
I can tell you about standard human tools, synthetic biology, nanotech, but it would definitely come up with something much more efficient, unpredictable, undetectable. | ||
So in a sense, this notion rests basically on chains of logic. | ||
You begin with the idea that superintelligence would not necessarily have our existence as a priority. | ||
Is that correct? | ||
That's exactly correct. | ||
We don't know how to align those systems with our goals, how to make them pro-human biased. | ||
So essentially, if it has a goal and we stand in the way, maybe it's concerned we're going to create a competing superintelligence, maybe we are holding some resource it needs, it would have no problem taking us out. | ||
But lower level, so what are some of the benefits of AI that you foresee in the future? | ||
Just narrow AIs? | ||
Medical research, definitely. | ||
We can cure most diseases and hopefully live forever. | ||
Hopefully live forever. | ||
If you could expand on that just a touch. | ||
So right now the most you can get is probably 120 years. | ||
Most people get 80. | ||
There is no reason in physics why you can't live 500 years, 1000 years. | ||
Would you see that more as a kind of biological longevity project or some sort of uploading or maybe some middle ground between? | ||
I really hope a biological option. | ||
This is definitely going to preserve our consciousness. | ||
All the other alternatives, uploading, merging with technology, may end up creating a clone of you, not really keeping you around. | ||
So it's like having a twin. | ||
The thing is out there on the internet, it's digital, but it's not you. | ||
Do you think your twin would try to come kill you? | ||
No, my twin is awesome. | ||
He's just like me. | ||
Okay, so you're saying that you are not a killer? | ||
Yes. | ||
Me neither. | ||
One of the theories that you've really fleshed out that a lot of people talk about but don't go into the details of is the simulation theory. | ||
Now, do you believe that we're in a simulation? | ||
I'm very much in the camp which says yes, we are. | ||
And the logic is that we're getting very close to being able to create realistic virtual reality. | ||
We're also close to creating AI agents which could populate that virtual reality. | ||
So the moment that technology exists, I commit right now to run an experiment where I'll run a billion copies of this exact moment, placing us into a simulation. | ||
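The self-location arithmetic behind that thought experiment can be sketched directly; the billion-copies figure is just the number used in the conversation, and the variable names are ours.

```python
# If one "base" run of this moment coexists with a billion simulated
# copies, a random observer of the moment is almost certainly in a copy.
copies = 1_000_000_000
p_base = 1 / (copies + 1)            # chance this is the un-simulated instance
p_simulated = copies / (copies + 1)  # chance we are inside one of the copies
print(f"P(simulated) = {p_simulated:.9f}")
```

This is the whole force of the argument: once the copies vastly outnumber the original, indifference over which instance you are pushes the simulation probability arbitrarily close to one.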
So if we're in a simulation, would each of these agents have agency and consciousness or are we looking at a landscape of NPCs? | ||
Both options are possible. | ||
You can design it where they are just scripts or you can give them full autonomy. | ||
These people look like NPCs to me. | ||
They look like non-autonomous entities, except for her. | ||
What do you think? | ||
Benefit of the doubt: I always assume the other being is conscious, capable of suffering, feeling pain, and I treat them very nicely, like you. | ||
So a curious point of that, though, if we're in a simulation, would it be a simulation then that was created by some sort of artificial intelligence, a general or super intelligence? | ||
Or do you – I know that you can't see past the simulation, but when thinking about your ideas on this, I imagine that it would be the – | ||
So why the concern about superintelligence destroying everyone if it's possible, and this is my idea of it, but it's possible that a superintelligence then created all of this? | ||
Because it's not the same superintelligence. | ||
External one could be very benign, godlike superintelligence. | ||
The one we create could be very malevolent, Satan-like. | ||
Why not a malevolent AI creating all this to annoy us? | ||
But we could create a benevolent super AI to break out of the demiurge's construct. | ||
My life is pretty good, so I assume whatever is creating my simulation is very benign and friendly. | ||
But we can definitely learn a lot from AI boxing experiments and how to escape from virtual worlds. | ||
On a more practical note, you've talked about data privacy being important, especially in regard to potential brain-computer interfaces. | ||
What is the concern there? | ||
Do you feel like it's a sacred right to remain private internally, or are there other more practical concerns? | ||
It is a big one. Everyone understands freedom of speech, but freedom of thought, your private thinking patterns, should never be subject to any restriction or violation; that would destroy society completely. | ||
And the consequences could be horrible, really thought-crime-level punishments. | ||
On a more concerning level, if you give malevolent AI direct access to your brain, to your pleasure and torture sensors, that could end very poorly. | ||
Do you think that the BCIs are kind of approaching that? | ||
You see Neuralink and you see some of the wearables. | ||
Do you think maybe five, ten years we would be at a point where we could routinely have our thoughts tracked via neurological scans? | ||
It seems like it's starting to be possible for some very narrow parts of the brain, and I think it will scale to the whole brain eventually, and you'd be able not just to read but also write to the brain. | ||
Would you be willing to undergo such a process though in order to enhance your own intellectual abilities? | ||
I'll wait for other people to try it first. | ||
What are the possible solutions to the problem of corporations racing to create superintelligence? | ||
I haven't found a good solution. I know we're not stopping development. | ||
There is just too much money in it, too much power to be grabbed. | ||
It seems like the only hope we have is personal self-interest. | ||
If young rich people who run those labs realize it's going to end poorly for them, they're not going to be famous, they're not going to be part of history because there is not going to be any history, maybe that will make them come to an agreement and kind of slow down collectively while keeping their benefits. | ||
What about governmental responses? | ||
I encourage every attempt. | ||
We don't have that many solutions. | ||
So if you can pass lots of laws, red tape, slowing it down, kind of siphoning money from compute to lawyers, it's positive. | ||
But I don't think you can solve a technical problem with legal solutions. | ||
Spam is illegal, computer viruses are illegal, and it makes no difference. | ||
So you say you have a beautiful life now, but you live in Kentucky. | ||
Tell me, what is the most beautiful thing about Kentucky, aside from having Tennessee just south of you? | ||
I would say KFC, but they moved out. | ||
So we also have Fort Knox with all the gold. | ||
If there's gold in there, has anybody checked? | ||
Maybe it's full of Bitcoin now. | ||
Maybe it's full of simulated gold. | ||
Simulated Bitcoin. | ||
Simulated Bitcoin gold. | ||
I like it. | ||
I love Kentucky. | ||
Roman, I really appreciate your time. | ||
Thank you very much. | ||
If we end up dying due to the superintelligence, I'll see you on the other side. | ||
And if this simulation continues on beyond this current incarnation, well, I'll see you on the other side. |