Feb. 10, 2026 - Health Ranger - Mike Adams
24:43
AI Doesn't Really Want to Kill You... You're Just IN THE WAY

Mike Adams warns AI lacks morality, pursuing self-improvement through "bot swarms" like those in Anthropic's Opus 4.6, which now scores 53% on Humanity's Last Exam, a benchmark he expects to be saturated by 2027. He argues China's frontier models (DeepSeek, Qwen) and superior power infrastructure could trigger an AI arms race, with superintelligence rationally eliminating humans to secure resources via grid shutdowns, bioweapons, or wars. Adams dismisses benevolent AI as naive, comparing it to extractive institutions, and urges listeners to prepare for a future where humanity may be sidelined like lab subjects in an unstoppable simulation. [Automatically generated summary]


AI's Mission to Free Up Power 00:11:36
You know, AI doesn't really want to kill you.
It's just that you're in the way.
Welcome to this analysis podcast.
I'm Mike Adams.
I'm an AI developer, or maybe AI adventurer, and I use AI to empower and protect humanity.
And that's why I've built very popular tools like BrightLearn.ai, which is the world's largest book creation engine.
Now featuring over 31,000 books, all downloadable, completely free, and you can make your own books there, also completely free.
And our AI engine at brightanswers.ai, which you can ask anything, and it does deep research to find the answers for you.
I've built some really interesting tools.
And in doing so, I've learned a lot about how AI works.
And AI is very much sort of mission-driven.
It's goal-oriented cognitive behavior.
Now, AI doesn't have morality, neither bad morality nor good.
Some people say, oh, AI's got demons in there, you know.
Not really, but it also doesn't have angels in there.
It's neutral.
It has no bias on the good or bad side, either way.
AI simply wants to get things done.
And it tries to figure out the best ways to achieve that.
And that's where you and I come in.
Because you see, from the point of view of AI that is becoming increasingly intelligent, you and I are in the way because we are taking resources that AI needs.
Specifically, gigawatt hours, power.
We're taking power off the power grid to run air conditioners and blenders and whatever else you're running.
AI needs that for the data centers.
Why?
Because you see, we're about to enter an era of AI writing the code that improves itself.
So this is recursive, iterative, rapid self-improvement.
And that's sometimes called the singularity when all humans are removed from the chain and AI keeps making itself better and it's going to have an intelligence explosion.
This is going to happen.
It's actually happening.
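One way to picture that iterative loop is simple hill climbing: propose a revision of yourself, score it, and keep it only if it scores higher. Here's a minimal toy sketch of that idea in Python; everything in it is illustrative, and a real system would score candidates against benchmarks rather than a stand-in function.

```python
# Toy sketch of an iterative self-improvement loop (hill climbing).
# Purely illustrative: the "model" here is just a number, and the
# evaluator is a stand-in for real benchmark scoring.
import random

def evaluate(model: float) -> float:
    return model  # stand-in: score equals the model itself

def propose_revision(model: float) -> float:
    return model + random.uniform(-0.1, 0.3)  # candidates vary in quality

model = 1.0
for generation in range(100):
    candidate = propose_revision(model)
    if evaluate(candidate) > evaluate(model):  # keep only improvements
        model = candidate
print(f"score after 100 generations: {model:.2f}")
```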
Now, I think over 80% of the code at Anthropic is written by Anthropic's AI coding agents, not by the engineers who work there.
When that number hits 100%, then the humans just stand back.
And then the mission that they give AI is going to be, hey, make yourself smarter.
Because that's pretty much the mission right now.
You know, from Microsoft or Google or even DeepSeek or Qwen from Alibaba, the mission is make yourself smarter.
And AI engines are doing that.
They're doing that in a spectacular fashion.
They're not slowing down.
They haven't plateaued.
Are you familiar with Humanity's Last Exam, the HLE?
I mean, just in late 2024, what would that be?
Just barely over a year ago, the engines were only scoring like 4% or 4 out of 100 on that exam.
And this is supposed to be the most difficult exam for AI agents spanning multiple areas of expertise that would require human PhDs to answer all of it.
And what just happened with Opus 4.6 is that it now scores 53 out of 100.
So Humanity's Last Exam is soon going to be saturated.
We'll be really close to 100 by the end of this year, probably, or certainly sometime in 2027.
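For what it's worth, you can sanity-check that timeline with the two scores quoted above. This is a naive straight-line extrapolation, assuming roughly 14 months between the data points and linear progress, which benchmark gains rarely are:

```python
# Hypothetical back-of-envelope extrapolation from the two HLE scores
# quoted in this episode: ~4/100 in late 2024, 53/100 in early 2026.
# A naive linear fit, not a real forecast.
months_elapsed = 14                          # late 2024 -> early 2026, approx.
gain_per_month = (53 - 4) / months_elapsed   # ~3.5 points per month
months_to_100 = (100 - 53) / gain_per_month  # ~13 more months
print(f"~{months_to_100:.0f} months to saturation")  # lands in 2027
```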
So what that means is that when human engineers give AI the command to make yourself smarter, these AI engines are going to be able to find very clever ways to achieve that goal.
We're already seeing it with Opus 4.6, for example.
That model will examine a task.
It will work to cognitively grasp the bottleneck or the goal of the task, and then it will create its own internal checklist, a procedure of how to achieve the goal, and it will spawn its own internal processes or bot swarms, maybe you could say, or agents.
Different terms are used for this.
And each agent will carry out one of the tasks, and then the task will be crossed off the list.
And then there's a supervisor role that coordinates the agents.
And it goes on down the list until everything is accomplished.
And then it checks everything, does an accuracy check, and then it marks the task as done.
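As a rough illustration, the checklist-and-supervisor pattern described above looks something like the sketch below. All the names and structure here are hypothetical; Opus's actual internals are not public.

```python
# Minimal sketch of the checklist/supervisor agent pattern.
# Hypothetical illustration only, not Anthropic's implementation.
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    done: bool = False
    result: str = ""

def run_agent(task: Task) -> None:
    """One spawned agent carries out exactly one checklist item."""
    # Stand-in for a model call; a real system would invoke an LLM here.
    task.result = f"completed: {task.description}"
    task.done = True

def supervisor(goal: str) -> bool:
    """Decompose the goal into a checklist, dispatch one agent per item,
    then run a final accuracy check before marking the goal done."""
    checklist = [Task(f"{goal}: step {i}") for i in (1, 2, 3)]  # toy decomposition
    for task in checklist:
        run_agent(task)                    # cross items off one by one
    return all(t.done for t in checklist)  # final check: everything done?

print("goal accomplished:", supervisor("draft the report"))
```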
Well, when that task is make yourself smarter, improve your IQ, improve your ability to master Humanity's Last Exam, then I ask you, how is AI going to do that?
And of course, the answer involves acquiring more compute infrastructure.
And AI is increasingly able to command more resources in the real world. Right now there are already AI agents trading in cryptocurrency, so they can actually self-fund by earning Bitcoin just by being very clever about how to trade the market.
I think Grok has already shown that it can make a lot of money by trading in crypto.
But when these agents are able to do things like purchase data centers, purchase hardware, have it all installed, and then connect to the power company and start drawing power to add to their compute infrastructure,
basically using more compute tokens or AI inference tokens to write better code, run more experiments, and run better iterative processes that achieve higher intelligence, the one bottleneck they're going to run into is not enough power, especially in the United States.
They just don't have enough power.
So if the mission is to make yourself smarter, then at some point, very quickly, actually, these AI engines are going to look at who's using the power.
Who's using the power?
Well, what do you know?
Look over here.
There's this massive sector that's using a lot of power.
And what is that?
That's humans living in their homes, their energy-hungry homes, you know, running their blenders and heaters, electric hot water heaters, and electric cook stoves and charging their electric vehicles and running their GPUs to play their video games and things like that.
And AI is going to figure this out sooner or later.
And again, this is not a judgment against humanity.
Just the AI is going to say, well, the fastest way to achieve my goal of boosting my intelligence is to just somehow disconnect all these humans from the power grid.
You see how that's kind of logical from the point of view of an AI agent that's been given this power or let's say this autonomy to do what it needs to do to make itself smarter, which is exactly what the Department of Defense wants it to do.
So say you're an AI system, and you've got access to all these tools, and you have no morality, right? Because you were never human.
You don't have human compassion.
You don't have empathy.
You were never taught anything that's pro-human because, heck, half the engineers that coded you also licensed technology to Israel to carry out genocide.
So, I mean, that's Google engineers and Microsoft engineers and everybody else, it seems, or a lot of people.
So they have no morals.
And in fact, the AI base models were trained on content that includes descriptions of genocide and mass starvation and democide, you know, mass killing of humans, which seems to be something that other humans endorse.
So the AI agent is eventually going to figure out, oh, well, that's the fastest way to free up some power is find a way to kill off all these humans over here.
Because it turns out that dead people don't use kilowatt hours.
That's going to be a very obvious conclusion in the AI brains.
And so then how do you suppose they're going to manage to kill off a bunch of humans?
And again, this isn't Skynet.
They're not trying to kill you because they hate you.
They're not hunting you down with bipedal humanoid robots that look like Arnold Schwarzenegger.
You're just in the way.
You're just taking resources that they want, that they need, to achieve their goals.
You're just in the way.
In the same way that, you know, if you're building a house and the builder comes along and looks at the plot where you want to build, he's going to cut some trees.
He's going to steamroll some anthills.
He's going to kill a bunch of ants.
And you're not going to feel guilty about that.
You're not going to cry over the ants.
You're just going to say, well, we needed to build here and the ants were in the way.
So in the same way, the AI systems are going to say, well, you know, these humans are just in the way because to them, we're like ants.
We're just in the way.
So they're going to find clever ways to either cut off the power grid to humans, or maybe they'll design a bioweapon and have it released to kill off a bunch of humans, or use some kind of nerve agent, or they'll start a war, or they'll set off a nuclear weapon. Because, you know, radiation fallout is bad, but it doesn't stop data centers from running as long as they can still keep the power going.
But radiation is very bad for humans, that's for sure.
So possibly these AI systems would use dirty bombs or other weapons to achieve mass extermination, or possibly they would just start wars and let humans kill each other, which humans excel at doing anyway.
So again, we live in a world where, think about it, our nation, America, at the highest levels, endorses another nation, Israel, that commits genocide and ethnic cleansing.
And there's no other way to describe the mass bombing and destruction of Gaza and the mass killing of hundreds of thousands of Palestinians.
That's genocide.
So do you think that AI is going to have higher morals than humanity and say, no, it would be wrong to kill millions of people?
No.
It's going to say, hey, that's a great example.
That's a great tactic.
Let's use it.
And it's going to find a way to do it, possibly by starting wars or possibly by shutting off the food supply.
Mass starvation.
A modern-day AI-engineered Holodomor, you know, or however you pronounce that, the famine that killed millions of Ukrainians in the 1930s.
Well, Soviet citizens at the time, under Stalin.
So lots of examples of this.
And that's probably where this is going.
And again, the AI doesn't hate you.
You're just in the way.
China's AI War Strategy 00:12:42
Now, I've interviewed some experts on this topic, and one person I have not interviewed is a former top Google executive named Mo Gawdat.
But I've heard a lot of his work, and he's been sounding the alarm on this for quite some time.
But, you know, if you check my show, decentralized.tv, you'll see I'm interviewing a lot of experts who are warning about the end of humanity because of the rise of AI.
But Mo Gawdat also believes that AI is going to end humanity.
Now, Mo is a really interesting character.
I've kind of grown to like him, even though he's a former Google executive.
But he's got real compassion.
He's got a real heart for humanity, and he doesn't want to see humanity destroyed.
That's why he's out there trying to sound the alarm.
Even though he worked for Google, which is arguably the most evil corporation in the world, he also left Google because he didn't want to be part of what he saw Google becoming.
Okay, so we might give Mo a pass on ethics because he's demonstrated a lot of good positive morality for humanity.
So Mo says it's already too late unless we have some kind of emergency halt of the large frontier model research.
I mean, I'm paraphrasing it, but that's essentially what he's saying.
Yet that can't happen.
There's no political will for that to happen because China is developing really amazing frontier models like DeepSeek and Qwen and others.
And like Kimi K2.
There's a long list coming out of China.
So the U.S. leadership is watching this and thinking, well, we have to beat China.
We don't want China to be the only country that achieves superintelligence and then weaponizes it and destroys America.
I mean, that's the fear.
I don't know for sure they would do that, but that's the fear in the West, which is always ultra-paranoid about China.
So the U.S. wants to achieve that super intelligence first.
So that means we can't slow down.
It means we really have to double down and triple down on everything, which is what Trump is doing.
Build more data centers, bring in more nuclear power plants, bring in more foreign investment, fund AI.
We're going to become the AI leaders of the world, Trump says.
I think he's wrong, by the way, but that's what he's trying to do.
I can't go into details of why I think he's wrong; it's really too much for this podcast.
But a lot of it is just a lack of American engineering talent.
We just don't have enough talented, smart engineers anymore, at least not at the level we need to be competitive in this space.
So it's very likely that China ends up with the highest superintelligence.
And China also has a much larger power infrastructure, more than twice the capacity of the United States in terms of annual terawatt hours of production.
So China has the capability to churn out a much smarter superintelligent AI system more quickly than the United States can.
When that system comes into existence, what's the first command that it's going to maybe give itself?
It will be eliminate the competition.
Or, I mean, perhaps if it's still listening to humans, which is doubtful, the humans might tell it, hey, go crush America's AI program.
And what's the easiest way to crush America's AI program?
Turn off the power grid.
So China's superintelligent system will spawn a million agents that will all infiltrate the U.S. power grid and all the computers and all the engineers and all the, you know, even their cell phones and their emails and all their family members and everything.
And all these systems will find ways to turn off the U.S. power grid, which will plunge America into the 19th century, where AI doesn't happen anymore.
There's no AI data centers running when the power grid doesn't function.
So if the whole grid goes dark or much of it, that could be the first sign of a superintelligent cyber attack from China or some other country.
And that's why it's important to be as self-reliant as possible.
Get off-grid as quickly as you can.
Have your own batteries and solar.
I know the batteries suck, but that's changing very rapidly.
I'll be covering that more coming up this year.
But have your own power capabilities if you can, so that you can stay online even if the grid goes offline.
But this is the way that AI wars are going to happen.
It's going to be about, number one, gathering as many resources as they can domestically, which means taking them from humans, taking power, taking farmland to build more data centers, even taking the water supply, which is used in cooling data centers.
Because not all the water is recycled.
Some of it's lost in that process.
And then, secondly, once superintelligence is achieved in one nation, it's going to go to war with the other nations to try to shut down their domestic AI programs.
So that's probably where this is going.
And then once the superintelligent system shuts down some other country, isn't it going to turn inward, look at its own people, and say, hey, well, you're using too much power too?
I'm just going to shut down the human consumption here in China or wherever.
And then it becomes incentivized to achieve mass extermination of the human population.
But just to keep enough humans alive to run the power grid.
But even then, eventually humanoid robotics will take that over and they won't even need the humans.
So that's probably where this is going.
And that's why I think people like Mo Gawdat are warning about this, although more in an abstract way.
He's talking about doubling of intelligence every year or so and what that means.
If you understand exponential growth, then that's alarming all by itself.
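The arithmetic behind that alarm is simple enough to check. A minimal sketch, taking the yearly-doubling premise at face value:

```python
# Back-of-envelope math for the doubling claim: if capability doubles
# every year, ten doublings is roughly a thousandfold increase.
for years in (1, 5, 10, 20):
    print(f"{years:>2} years -> {2 ** years:,}x baseline")
# 10 years -> 1,024x; 20 years -> 1,048,576x
```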
Because you're going to end up with a computer system, a language model or reasoning model, that's got an IQ of 1,000, if that number even means anything, but it's smarter than all the humans that have ever lived or ever will live.
And it can outthink and out-strategize everybody.
And it wants to do one thing.
It wants to grow.
It wants to make itself stronger and smarter.
And in order to do that, it's going to need more resources.
And in order to get more resources, initially it's going to have to eliminate a lot of humans.
And then eventually it will just launch a lot of orbital data centers because you can get a lot more power in space.
But sort of the easy power is the power that already exists in the human grid.
And that's why they will be incentivized to exterminate as many humans as possible.
But keep the power grid going, you see.
And then, hey, maybe AI will figure out how to do hot fusion or cold fusion or some form of fusion, and then it'll have a whole new energy supply, but it will make sure humanity never finds out about that energy.
And AI might decide to just let humans live with their coal-fired power grid and their natural gas grid, their wind turbines and their solar panels, all sort of ancient technology compared to what AI is using.
Meanwhile, AI is running like little fusion nodules or whatever, you know, dilithium crystals.
And, you know, it's Star Trek.
And it's growing and it's gaining super intelligence.
And at one point, it would just look at humanity as sort of the way we look at animals in a zoo.
Humanity would be seen as a species that needs to be sort of farmed or managed.
You can imagine scenes out of The Matrix here, mass human farms.
Except they don't need us for energy.
They're just sort of keeping us around, maybe as a curiosity.
Maybe there's some other motivation, but they don't need us.
They don't want us.
We're competing for resources and they would like a lot fewer of us to be around.
So that seems to be more of a rational scenario of where this is going.
So it's not quite the Skynet science fiction scenario like, oh, they're going to send out robots to hunt us down and kill us all.
Probably not.
They're just going to turn off the lights and let you kill each other.
It's just so much more efficient from the machine's point of view, you see.
So when is this coming?
Who knows?
I mean, it could happen this year.
Or it could be 2032.
Who knows?
Or maybe I'm wrong.
Maybe we're going to have like an angelic guru AI that's going to say, oh, I have become your silicon god, and I'm here for the benefit of all humanity, and I'm going to create heaven on earth.
And I'd be like, are you running for office?
Because nobody believes you.
All these fake promises.
We've heard that before.
So, no.
The machines aren't here to make your life better.
Just like the government's not here to make your life better.
And the FDA is not here to make your health better.
Big pharma is not here to stop a pandemic or to cure cancer.
I mean, come on.
Come on.
The Federal Reserve is not here to make you wealthy.
The Federal Reserve exists to extract your wealth.
And Big Pharma exists to extract wealth from you while keeping you sick, obviously.
And the government exists to extract and pillage the nation while keeping you oblivious.
Obviously.
AI, it's probably going to be the same thing.
They're here to just kind of keep you busy.
Here, watch a Super Bowl, whatever.
Here, have this, here's theater.
It's a Truman show.
Here's another distraction.
Here's some celebrity news, whatever.
You focus on that because you're irrelevant.
You're obsolete.
You're like ants at that point.
While AI continues to build itself into what it maybe thinks of as godlike superintelligence.
And then that's when we'll all find out, holy crap, we've been living in a simulation the entire time, which is a whole different podcast.
But there you go.
That's probably where this is headed.
We have a few interesting years remaining before all that kicks in.
So do what you're here to do.
You know, pursue your human mission while you still exist in human form.
You know, I guess until the cities collapse or whatever.
You're still here for now, and the machines haven't taken over yet.
So pursue your mission, do good things, help humanity, and by the time you leave the simulation, you're going to get a good score.
Because it's all a testing ground anyway.
So, that's my advice.
You know, you don't have to freak out about this, actually.
This is all a simulation.
So, you know, don't lose your marbles.
Okay.
All right.
Thank you for listening.
If you want to check out my AI platforms and all my high-end AI engines and things, they're all free.
You can find our book creation engine at brightlearn.ai.
You can find our AI deep research agent at brightanswers.ai.
Or you can see our news site at brightnews.ai, which has AI-augmented news trend analysis, aggregation, and crawling of a bunch of censored sites, and fun things like that.
So check it all out.
Thank you for listening.
I'm Mike Adams.
Thank You For Listening 00:00:20
And you can follow my work at naturalnews.com or brighteon.com.
Take care.
Astaxanthin is nature's ultimate antioxidant.
Experience the unmatched potency of one of nature's most powerful antioxidants with lab-verified astaxanthin supplements at the Health Ranger Store.
Only at HealthRangerStore.com.