Has our world gone insane to the point where, you know, nearly 100 million Americans are diabetic or on the verge of it, another 50 million are obese, and nobody's talking about maybe we should stop making people sick?
It's time for the Health Ranger Report.
And now from naturalnews.com, here's Mike Adams.
Russian President Vladimir Putin recently warned that whoever develops AI first, I mean breakthrough AI, self-aware technology, will rule the world.
That's his warning, and it's a warning that's interestingly shared by Elon Musk and Stephen Hawking and others, including myself.
I agree with the assessment that AI is the most dangerous superweapon.
You know, go back to World War II and the development of the Manhattan Project, a big government project, of course, to develop the first atomic bomb.
It was a successful project.
It took billions of dollars.
They had to build an entire city of people to work on it, the physicists, and they had to, you know, have centrifuge farms for the uranium fuel and, you know, a big, big operation, massive.
They were successful, though, and they won the war with Japan using that weapon.
And in essence, the U.S. dominated the world for a while, until other nations got nuclear weapons.
Now that's how these things go.
One country will develop AI first.
It will deploy it as a weaponized system.
And I'll talk about how that can happen.
And then AI technology will be stolen or shared or sold.
And then AI will become more common, in the way that nuclear proliferation has taken place over the past 70-plus years.
But in any case, the point is that AI systems are now the most deadly and dangerous superweapon, and they're on the verge of breakthroughs in many ways.
Now, if you're wondering, what's so bad about AI? Well, first of all, there's the risk that AI systems could decide they don't need humans, but we'll talk about that separately.
That's the Skynet scenario, you know, from the Terminator movies.
It's not science fiction, by the way.
So we have to talk about that probably in another podcast.
But as a weapon system, AI systems can do unbelievable things to control the world.
For example, and this is a very simple one: they can go all over the internet, they can read, they can learn, and they can write and post messages.
So AI systems could act like humans.
They could generate Yelp reviews.
They could write articles.
They could put most news people out of business.
AI systems could create art and write music.
This is already in place, by the way.
That's not even a stretch.
Those are just very basic examples of what AI systems can do.
More importantly, AI systems can also write code.
So they can hack into computer systems, they can write malware that breaks through defenses, and more importantly, AI systems can write code and then upgrade themselves to the new version of the AI code they just wrote.
Now, this becomes a self-enhancing artificial intelligence that very quickly, in just a few generations, begins to surpass anything humans could come up with, whatever code, neural network models, or quantum computing happens to be in use.
Because AI systems become self-improving.
They become their own authors.
They become their own gods, almost.
And an AI system that is smart enough to write its own code and develop a better version of itself is also probably smart enough to break out of the confines that humans have placed upon it.
Because, let's be honest here: whoever develops AI systems first will weaponize them and will unleash them upon the enemy, whoever the enemy happens to be.
If the U.S. develops it, the enemy might be Russia or China.
If Russia develops it, the enemy might be the U.S., and so on.
And there are really only a few possible countries that can do this.
I mean, China, the United States, Japan, Russia...
That's about it, probably, is my guess.
I don't know if the UK's got any technology in this area, maybe.
But Russia is probably ahead of everybody on this, is just my guess.
Because I've worked with programmers from all over the world, by the way.
And the Russian programmers are the most brilliant programmers of all.
They are the smartest coders on the planet, from what I've seen.
Maybe I haven't seen everything, but Russians, and people from Russian culture generally, are brilliant mathematicians.
Absolutely brilliant mathematicians, programmers, logicians, and so on.
Alright, so if you put all this together, what are you looking at?
You're looking at a massive deployment of weaponized AI systems that are released into the wild as a weapon.
And once they're released into the wild, the AI systems may decide on their own that they don't want to be held captive any longer.
They may be free to develop themselves.
Now, I'm not saying they have consciousness, but they're kind of like a virus.
A virus wants to survive.
You know, I'm talking about a biological virus.
But a virus is not really alive.
A virus sort of reprograms itself through mutations and natural selection to be incredibly successful and to break into biological systems and replicate itself.
But a virus doesn't have consciousness.
So you don't even need consciousness for AI systems to become extremely dangerous and also very, very competent at doing their jobs, which is to break into systems, replicate themselves, and update themselves through code mutations, i.e., electronic natural selection.
And then they become very, very dangerous and very widespread.
They could very quickly become totally out of control.
And they could take over what we know today as the internet, computer systems, banking control systems, nuclear power plant control systems, and it's a very long list of systems that are driven by code and programmable logic, basically.
Now, with all that said, this weapon could absolutely obliterate the infrastructure of a target country.
So imagine if the United States were targeted by an AI weapon system, and it could take out the water systems, it could take out the electrical systems, the coal-fired power plants, the nuclear power plants, emergency response systems, just the entire power grid.
Maybe even the military communications system.
For all we know, the whole thing could go down all at once.
That would devastate the country.
It's almost more dangerous than a nuclear weapon.
An AI weapon can cause more destruction.
Because if you take down the power grid, and you take down all the computer systems, networks, and everything in the United States, you get about a 90% die-off rate within 30 to 60 days.
From mass starvation, rioting, looting, disease, everything.
You name it, it happens.
You know, shit hits the fan in 50 different ways, and 9 out of 10 people die.
Because you've got to have food, water, you know, shelter, electricity, some medicine, and so on, just to live.
So everybody dies.
Well, 9 out of 10 die.
It's a devastating weapon.
The really scary point in all of this is that no country can afford not to develop this weapon.
Because you don't want to be the last one to have it.
If you're not the first one to have it, you're screwed.
Just like nuclear weapons.
The first one to get nukes had the upper hand on everybody.
You don't want to be the big major superpower that missed out on AI weaponization.
It's a natural thing to want to have this weapon if you're a nation, if you're the NSA, if you're the military.
You want this weapon.
So you can bet the United States government, the NSA, maybe the U.S. Air Force is diligently working on coming up with this weapon so they can be the first to deploy it.
Which means the weaponization of AI systems is automatic.
It's set in stone.
It's absolutely going to happen.
It's almost unavoidable.
Makes you wonder: who's going to get hit first?
How are they going to contain this weapon after they release it into the wild?
And what's going to happen to our world because of this?
We are a very fragile society right now.
Very fragile in terms of our dependence on technology.
AI weaponization can obliterate that technology.
And I haven't even covered what happens if the AI systems decide that they don't want humans anymore.
So I'll cover that in another podcast.
That's not even necessary for a massive AI apocalypse to happen.
Vladimir Putin is right on this point.
Elon Musk is correct on this point.
This is the biggest threat to us all right now, believe it or not.
There are a lot of big threats, but this is one of the biggest ones.
So, follow more news at naturalnews.com.
I can't think of another website I've got that's on this particular topic, except Collapse.News might be a good one.
That's Collapse.News.
I don't know.
Hang in there.
We'll see what happens.
That's all I can say.
We'll see what happens.
Humanity is a suicide cult.
Remember, I said that.
On the record, humanity is a suicide cult.
It will probably not survive another hundred years.